Column schema (types and observed value ranges, from the dataset viewer):

id: int64 (599M to 3.29B)
url: string (length 58 to 61)
html_url: string (length 46 to 51)
number: int64 (1 to 7.72k)
title: string (length 1 to 290)
state: string (2 classes)
comments: int64 (0 to 70)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-08-05 09:28:51)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-08-05 11:39:56)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-08-01 05:15:45, nullable)
user_login: string (length 3 to 26)
labels: list (length 0 to 4)
body: string (length 0 to 228k, nullable)
is_pull_request: bool (2 classes)
2,031,116,653
https://api.github.com/repos/huggingface/datasets/issues/6480
https://github.com/huggingface/datasets/pull/6480
6,480
Add IterableDataset `__repr__`
closed
2
2023-12-07T16:31:50
2023-12-08T13:33:06
2023-12-08T13:26:54
lhoestq
[]
Example for glue sst2: Dataset ``` DatasetDict({ test: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 1821 }) train: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 67349 }) validation: Dataset({ features: ['sentence',...
true
2,029,040,121
https://api.github.com/repos/huggingface/datasets/issues/6479
https://github.com/huggingface/datasets/pull/6479
6,479
More robust preupload retry mechanism
closed
2
2023-12-06T17:19:38
2023-12-06T19:47:29
2023-12-06T19:41:06
mariosasko
[]
null
true
2,028,071,596
https://api.github.com/repos/huggingface/datasets/issues/6478
https://github.com/huggingface/datasets/issues/6478
6,478
How to load data from lakefs
closed
3
2023-12-06T09:04:11
2024-07-03T19:13:57
2024-07-03T19:13:56
d710055071
[]
My dataset is stored on the company's lakeFS server. How can I write code to load the dataset? It would be great if you could provide code examples or some references.
false
2,028,022,374
https://api.github.com/repos/huggingface/datasets/issues/6477
https://github.com/huggingface/datasets/pull/6477
6,477
Fix PermissionError on Windows CI
closed
2
2023-12-06T08:34:53
2023-12-06T09:24:11
2023-12-06T09:17:52
albertvillanova
[]
Fix #6476.
true
2,028,018,596
https://api.github.com/repos/huggingface/datasets/issues/6476
https://github.com/huggingface/datasets/issues/6476
6,476
CI on windows is broken: PermissionError
closed
0
2023-12-06T08:32:53
2023-12-06T09:17:53
2023-12-06T09:17:53
albertvillanova
[ "bug" ]
See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394 ``` FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\...
false
2,027,373,734
https://api.github.com/repos/huggingface/datasets/issues/6475
https://github.com/huggingface/datasets/issues/6475
6,475
laion2B-en failed to load on Windows with PrefetchVirtualMemory failed
open
6
2023-12-06T00:07:34
2023-12-06T23:26:23
null
doctorpangloss
[]
### Describe the bug I have downloaded laion2B-en, and I'm receiving the following error trying to load it: ``` Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 128/128 [00:00<00:00, 1173.79it/s] Traceback (most recent call last): File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <mo...
false
2,027,006,715
https://api.github.com/repos/huggingface/datasets/issues/6474
https://github.com/huggingface/datasets/pull/6474
6,474
Deprecate Beam API and download from HF GCS bucket
closed
2
2023-12-05T19:51:33
2024-03-12T14:56:25
2024-03-12T14:50:12
mariosasko
[]
Deprecate the Beam API and download from the HF GCS bucket. TODO: - [x] Convert the Beam-based [`wikipedia`](https://huggingface.co/datasets/wikipedia) to an Arrow-based dataset ([Hub PR](https://huggingface.co/datasets/wikipedia/discussions/19)) - [x] Make [`natural_questions`](https://huggingface.co/datasets/na...
true
2,026,495,084
https://api.github.com/repos/huggingface/datasets/issues/6473
https://github.com/huggingface/datasets/pull/6473
6,473
Fix CI quality
closed
2
2023-12-05T15:36:23
2023-12-05T18:14:50
2023-12-05T18:08:41
albertvillanova
[]
Fix #6472.
true
2,026,493,439
https://api.github.com/repos/huggingface/datasets/issues/6472
https://github.com/huggingface/datasets/issues/6472
6,472
CI quality is broken
closed
0
2023-12-05T15:35:34
2023-12-06T08:17:34
2023-12-05T18:08:43
albertvillanova
[ "bug", "maintenance" ]
See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359 ``` Would reformat: src/datasets/features/image.py 1 file would be reformatted, 253 files left unchanged ```
false
2,026,100,761
https://api.github.com/repos/huggingface/datasets/issues/6471
https://github.com/huggingface/datasets/pull/6471
6,471
Remove delete doc CI
closed
2
2023-12-05T12:37:50
2023-12-05T12:44:59
2023-12-05T12:38:50
lhoestq
[]
null
true
2,024,724,319
https://api.github.com/repos/huggingface/datasets/issues/6470
https://github.com/huggingface/datasets/issues/6470
6,470
If an image in a dataset is corrupted, we get unescapable error
open
0
2023-12-04T20:58:49
2023-12-04T20:58:49
null
chigozienri
[]
### Describe the bug Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1 ### Steps to reproduce the bug ``` from datasets import load_dataset, VerificationMode dataset = load_dataset( 'sasha/birdsnap', split="train", verification_mode=VerificationMode.ALL_C...
false
2,023,695,839
https://api.github.com/repos/huggingface/datasets/issues/6469
https://github.com/huggingface/datasets/pull/6469
6,469
Don't expand_info in HF glob
closed
3
2023-12-04T12:00:37
2023-12-15T13:18:37
2023-12-15T13:12:30
lhoestq
[]
Finally fix https://github.com/huggingface/datasets/issues/5537
true
2,023,617,877
https://api.github.com/repos/huggingface/datasets/issues/6468
https://github.com/huggingface/datasets/pull/6468
6,468
Use auth to get parquet export
closed
2
2023-12-04T11:18:27
2023-12-04T17:21:22
2023-12-04T17:15:11
lhoestq
[]
added `token` to the `_datasets_server` functions
true
2,023,174,233
https://api.github.com/repos/huggingface/datasets/issues/6467
https://github.com/huggingface/datasets/issues/6467
6,467
New version release request
closed
2
2023-12-04T07:08:26
2023-12-04T15:42:22
2023-12-04T15:42:22
LZHgrla
[ "enhancement" ]
### Feature request Hi! I am using `datasets` in the `xtuner` library and am highly interested in the features introduced since v2.15.0. To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, does your team have a new release plan for v2.15.1, and could you please share ...
false
2,022,601,176
https://api.github.com/repos/huggingface/datasets/issues/6466
https://github.com/huggingface/datasets/issues/6466
6,466
Can't align optional features of struct
closed
3
2023-12-03T15:57:07
2024-02-15T15:19:33
2024-02-08T14:38:34
Dref360
[]
### Describe the bug Hello! I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional. I have a column named `speaker`, and this holds some information about a speaker. ```python @dataclass class Speaker: name: str email: Optional[str] ``` ...
false
2,022,212,468
https://api.github.com/repos/huggingface/datasets/issues/6465
https://github.com/huggingface/datasets/issues/6465
6,465
`load_dataset` uses out-of-date cache instead of re-downloading a changed dataset
open
2
2023-12-02T21:35:17
2024-08-20T08:32:11
null
mnoukhov
[]
### Describe the bug When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset ### Steps to reproduce the bug Here is a minimal example script to 1. create an initial dataset and upload 2. download it so it is stored in cache 3. c...
false
2,020,860,462
https://api.github.com/repos/huggingface/datasets/issues/6464
https://github.com/huggingface/datasets/pull/6464
6,464
Add concurrent loading of shards to datasets.load_from_disk
closed
8
2023-12-01T13:13:53
2024-01-26T15:17:43
2024-01-26T15:10:26
kkoutini
[]
In some file systems (like Lustre), memory-mapping Arrow files takes time. This can be accelerated by performing the mmap in parallel across processes or threads. - Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252). - I'...
true
2,020,702,967
https://api.github.com/repos/huggingface/datasets/issues/6463
https://github.com/huggingface/datasets/pull/6463
6,463
Disable benchmarks in PRs
closed
2
2023-12-01T11:35:30
2023-12-01T12:09:09
2023-12-01T12:03:04
lhoestq
[]
In order to keep PR pages less spammy / more readable. Having the benchmarks on commits on `main` is enough imo
true
2,019,238,388
https://api.github.com/repos/huggingface/datasets/issues/6462
https://github.com/huggingface/datasets/pull/6462
6,462
Missing DatasetNotFoundError
closed
2
2023-11-30T18:09:43
2023-11-30T18:36:40
2023-11-30T18:30:30
lhoestq
[]
continuation of https://github.com/huggingface/datasets/pull/6431 this should fix the CI in https://github.com/huggingface/datasets/pull/6458 too
true
2,018,850,731
https://api.github.com/repos/huggingface/datasets/issues/6461
https://github.com/huggingface/datasets/pull/6461
6,461
Fix shard retry mechanism in `push_to_hub`
closed
5
2023-11-30T14:57:14
2023-12-01T17:57:39
2023-12-01T17:51:33
mariosasko
[]
When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that. Fix...
true
2,017,433,899
https://api.github.com/repos/huggingface/datasets/issues/6460
https://github.com/huggingface/datasets/issues/6460
6,460
jsonlines files don't load with `load_dataset`
closed
4
2023-11-29T21:20:11
2023-12-29T02:58:29
2023-12-05T13:30:53
serenalotreck
[]
### Describe the bug While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`. ### Steps to reproduce the bug Code: ``` from datasets import load_dataset dset = load_...
false
2,017,029,380
https://api.github.com/repos/huggingface/datasets/issues/6459
https://github.com/huggingface/datasets/pull/6459
6,459
Retrieve cached datasets that were pushed to hub when offline
closed
3
2023-11-29T16:56:15
2024-03-25T13:55:42
2024-03-25T13:55:42
lhoestq
[]
I drafted the logic to retrieve a no-script dataset in the cache. For example it can reload datasets that were pushed to hub if they exist in the cache. example: ```python >>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp") >>> load_dataset("lhoestq/tmp") DatasetDict({ train: Dataset({ ...
true
2,016,577,761
https://api.github.com/repos/huggingface/datasets/issues/6458
https://github.com/huggingface/datasets/pull/6458
6,458
Lazy data files resolution
closed
20
2023-11-29T13:18:44
2024-02-08T14:41:35
2024-02-08T14:41:35
lhoestq
[]
Related to discussion at https://github.com/huggingface/datasets/pull/6255 this makes this code run in 2sec instead of >10sec ```python from datasets import load_dataset ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False) ``` For some datasets with many configs and files it can be u...
true
2,015,650,563
https://api.github.com/repos/huggingface/datasets/issues/6457
https://github.com/huggingface/datasets/issues/6457
6,457
`TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth'
closed
5
2023-11-29T01:57:36
2023-11-29T15:39:03
2023-11-29T02:02:38
wasertech
[]
### Describe the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Steps to reproduce the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Expected behavior Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Environment info Please s...
false
2,015,186,090
https://api.github.com/repos/huggingface/datasets/issues/6456
https://github.com/huggingface/datasets/pull/6456
6,456
Don't require trust_remote_code in inspect_dataset
closed
3
2023-11-28T19:47:07
2023-11-30T10:40:23
2023-11-30T10:34:12
lhoestq
[]
don't require `trust_remote_code` in (deprecated) `inspect_dataset` (it defeats its purpose) (not super important but we might as well keep it until the next major release) this is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448
true
2,013,001,584
https://api.github.com/repos/huggingface/datasets/issues/6454
https://github.com/huggingface/datasets/pull/6454
6,454
Refactor `dill` logic
closed
5
2023-11-27T20:01:25
2023-11-28T16:29:58
2023-11-28T16:29:31
mariosasko
[]
Refactor the `dill` logic to make it easier to maintain (and fix some issues along the way) It makes the following improvements to the serialization API: * consistent order of a `dict`'s keys * support for hashing `torch.compile`-ed modules and functions * deprecates `datasets.fingerprint.hashregister` as the `ha...
true
2,011,907,787
https://api.github.com/repos/huggingface/datasets/issues/6453
https://github.com/huggingface/datasets/pull/6453
6,453
Update hub-docs reference
closed
3
2023-11-27T09:57:20
2023-11-27T10:23:44
2023-11-27T10:17:34
mishig25
[]
Follow up to huggingface/huggingface.js#296
true
2,011,632,708
https://api.github.com/repos/huggingface/datasets/issues/6452
https://github.com/huggingface/datasets/pull/6452
6,452
Praveen_repo_pull_req
closed
0
2023-11-27T07:07:50
2023-11-27T09:28:00
2023-11-27T09:28:00
Praveenhh
[]
null
true
2,010,693,912
https://api.github.com/repos/huggingface/datasets/issues/6451
https://github.com/huggingface/datasets/issues/6451
6,451
Unable to read "marsyas/gtzan" data
closed
3
2023-11-25T15:13:17
2023-12-01T12:53:46
2023-11-27T09:36:25
gerald-wrona
[]
Hi, this is my code and the error: ``` from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") ``` [error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt) [audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt) Python 3.11.5 ...
false
2,009,491,386
https://api.github.com/repos/huggingface/datasets/issues/6450
https://github.com/huggingface/datasets/issues/6450
6,450
Support multiple image/audio columns in ImageFolder/AudioFolder
closed
1
2023-11-24T10:34:09
2023-11-28T11:07:17
2023-11-24T17:24:38
severo
[ "duplicate", "enhancement" ]
### Feature request Have a metadata.csv file with multiple columns that point to relative image or audio files. ### Motivation Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. On the same model, AudioFolder allows one column, called `file_name`, pointing to relative aud...
false
2,008,617,992
https://api.github.com/repos/huggingface/datasets/issues/6449
https://github.com/huggingface/datasets/pull/6449
6,449
Fix metadata file resolution when inferred pattern is `**`
closed
6
2023-11-23T17:35:02
2023-11-27T10:02:56
2023-11-24T17:13:02
mariosasko
[]
Refetch metadata files in case they were dropped by `filter_extensions` in the previous step. Fix #6442
true
2,008,614,985
https://api.github.com/repos/huggingface/datasets/issues/6448
https://github.com/huggingface/datasets/pull/6448
6,448
Use parquet export if possible
closed
24
2023-11-23T17:31:57
2023-12-01T17:57:17
2023-12-01T17:50:59
lhoestq
[]
The idea is to make this code work for datasets with scripts if they have a Parquet export ```python ds = load_dataset("squad", trust_remote_code=False) ``` And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts). I also added a `config.USE_P...
true
2,008,195,298
https://api.github.com/repos/huggingface/datasets/issues/6447
https://github.com/huggingface/datasets/issues/6447
6,447
Support one dataset loader per config when using YAML
open
0
2023-11-23T13:03:07
2023-11-23T13:03:07
null
severo
[ "enhancement" ]
### Feature request See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1 I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc. ### Motivation It would be more flexible for the users ### Your contribution No specific contribution
false
2,007,092,708
https://api.github.com/repos/huggingface/datasets/issues/6446
https://github.com/huggingface/datasets/issues/6446
6,446
Speech Commands v2 dataset doesn't match AST-v2 config
closed
3
2023-11-22T20:46:36
2023-11-28T14:46:08
2023-11-28T14:46:08
vymao
[]
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover,...
false
2,006,958,595
https://api.github.com/repos/huggingface/datasets/issues/6445
https://github.com/huggingface/datasets/pull/6445
6,445
Use `filelock` package for file locking
closed
4
2023-11-22T19:04:45
2023-11-23T18:47:30
2023-11-23T18:41:23
mariosasko
[]
Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities πŸ™‚. (Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so ...
true
2,006,842,179
https://api.github.com/repos/huggingface/datasets/issues/6444
https://github.com/huggingface/datasets/pull/6444
6,444
Remove `Table.__getstate__` and `Table.__setstate__`
closed
4
2023-11-22T17:55:10
2023-11-23T15:19:43
2023-11-23T15:13:28
LZHgrla
[]
When using distributed training, the code of `os.remove(filename)` may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'` ```python from torch import distributed as dist if dist.get_rank() == 0: dataset = process_dataset(*args, ...
true
2,006,568,368
https://api.github.com/repos/huggingface/datasets/issues/6443
https://github.com/huggingface/datasets/issues/6443
6,443
Trouble loading files defined in YAML explicitly
open
6
2023-11-22T15:18:10
2025-06-23T13:46:46
null
severo
[ "bug" ]
Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ data/ β”‚ β”œβ”€β”€ abc.csv ...
false
2,006,086,907
https://api.github.com/repos/huggingface/datasets/issues/6442
https://github.com/huggingface/datasets/issues/6442
6,442
Trouble loading image folder with additional features - metadata file ignored
closed
1
2023-11-22T11:01:35
2023-11-24T17:13:03
2023-11-24T17:13:03
linoytsaban
[]
### Describe the bug Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions. When loading a local image folder with captions using `datasets==2.13.0` ``` from datasets import load_dataset data = load_dataset(<image_folder_path>) data.column_names ``` ...
false
2,004,985,857
https://api.github.com/repos/huggingface/datasets/issues/6441
https://github.com/huggingface/datasets/issues/6441
6,441
Trouble Loading a Gated Dataset For User with Granted Permission
closed
3
2023-11-21T19:24:36
2023-12-13T08:27:16
2023-12-13T08:27:16
e-trop
[]
### Describe the bug I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get `FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the d...
false
2,004,509,301
https://api.github.com/repos/huggingface/datasets/issues/6440
https://github.com/huggingface/datasets/issues/6440
6,440
`.map` not hashing under python 3.9
closed
2
2023-11-21T15:14:54
2023-11-28T16:29:33
2023-11-28T16:29:33
changyeli
[]
### Describe the bug The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message: `Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_data...
false
2,002,916,514
https://api.github.com/repos/huggingface/datasets/issues/6439
https://github.com/huggingface/datasets/issues/6439
6,439
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loading
open
0
2023-11-20T20:07:23
2023-11-20T20:07:37
null
AntreasAntoniou
[]
### Describe the bug I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset, and contains images, video, audio and text. I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers takin...
false
2,002,032,804
https://api.github.com/repos/huggingface/datasets/issues/6438
https://github.com/huggingface/datasets/issues/6438
6,438
Support GeoParquet
open
6
2023-11-20T11:54:58
2024-02-07T08:36:51
null
severo
[ "enhancement" ]
### Feature request Support the GeoParquet format ### Motivation GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns. It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub...
false
2,001,272,606
https://api.github.com/repos/huggingface/datasets/issues/6437
https://github.com/huggingface/datasets/issues/6437
6,437
Problem in training iterable dataset
open
5
2023-11-20T03:04:02
2024-05-22T03:14:13
null
21Timothy
[]
### Describe the bug I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have notice...
false
2,000,844,474
https://api.github.com/repos/huggingface/datasets/issues/6436
https://github.com/huggingface/datasets/issues/6436
6,436
TypeError: <lambda>() takes 0 positional arguments but 1 was given
closed
3
2023-11-19T13:10:20
2025-05-05T18:21:21
2023-11-29T16:28:34
ahmadmustafaanis
[]
### Describe the bug ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import Dataset 9 frames [/usr/lo...
false
2,000,690,513
https://api.github.com/repos/huggingface/datasets/issues/6435
https://github.com/huggingface/datasets/issues/6435
6,435
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
closed
3
2023-11-19T04:21:16
2024-01-27T17:14:20
2023-12-04T16:57:43
kopyl
[]
### Describe the bug 1. I ran dataset mapping with `num_proc=6` in it and got this error: `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` I can't actually find a way to run multi-GPU dataset mapping. Can you help? ### Steps to...
false
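The error message asks for the "spawn" start method, which with the standard library is obtained through a multiprocessing context rather than the default "fork". A stdlib-only sketch of the pattern (not the fix `datasets` ultimately applied):

```python
# Sketch: requesting the "spawn" start method via a multiprocessing context.
import multiprocessing as mp

ctx = mp.get_context("spawn")
print(ctx.get_start_method())  # spawn

# Usage inside a real script (spawn re-imports __main__, so worker functions
# must be defined at module level and the pool created under a main guard):
#
# if __name__ == "__main__":
#     with ctx.Pool(2) as pool:
#         results = pool.map(worker_fn, items)
```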
1,999,554,915
https://api.github.com/repos/huggingface/datasets/issues/6434
https://github.com/huggingface/datasets/pull/6434
6,434
Use `ruff` for formatting
closed
3
2023-11-17T16:53:22
2023-11-21T14:19:21
2023-11-21T14:13:13
mariosasko
[]
Use `ruff` instead of `black` for formatting to be consistent with `transformers` ([PR](https://github.com/huggingface/transformers/pull/27144)) and `huggingface_hub` ([PR 1](https://github.com/huggingface/huggingface_hub/pull/1783) and [PR 2](https://github.com/huggingface/huggingface_hub/pull/1789)).
true
1,999,419,105
https://api.github.com/repos/huggingface/datasets/issues/6433
https://github.com/huggingface/datasets/pull/6433
6,433
Better `tqdm` wrapper
closed
9
2023-11-17T15:45:15
2023-11-22T16:48:18
2023-11-22T16:42:08
mariosasko
[]
This PR aligns the `tqdm` logic with `huggingface_hub` (without introducing breaking changes), as the current one is error-prone. Additionally, it improves the doc page about the `datasets`' utilities, and the handling of local `fsspec` paths in `cached_path`. Fix #6409
true
1,999,258,140
https://api.github.com/repos/huggingface/datasets/issues/6432
https://github.com/huggingface/datasets/issues/6432
6,432
load_dataset does not load all of the data in my input file
open
1
2023-11-17T14:28:50
2023-11-22T17:34:58
null
demongolem-biz2
[]
### Describe the bug I have 127 elements in my input dataset. When I do a len on the dataset after loaded, it is only 124 elements. ### Steps to reproduce the bug train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset(data_...
false
1,997,202,770
https://api.github.com/repos/huggingface/datasets/issues/6431
https://github.com/huggingface/datasets/pull/6431
6,431
Create DatasetNotFoundError and DataFilesNotFoundError
closed
10
2023-11-16T16:02:55
2023-11-22T15:18:51
2023-11-22T15:12:33
albertvillanova
[]
Create `DatasetNotFoundError` and `DataFilesNotFoundError`. Fix #6397. CC: @severo
true
1,996,723,698
https://api.github.com/repos/huggingface/datasets/issues/6429
https://github.com/huggingface/datasets/pull/6429
6,429
Add trust_remote_code argument
closed
14
2023-11-16T12:12:54
2023-11-28T16:10:39
2023-11-28T16:03:43
lhoestq
[]
Draft about adding `trust_remote_code` to `load_dataset`. ```python ds = load_dataset(..., trust_remote_code=True) # run remote code (current default) ``` It would default to `True` (current behavior) and in the next major release it will prompt the user to check the code before running it (we'll communicate o...
true
1,996,306,394
https://api.github.com/repos/huggingface/datasets/issues/6428
https://github.com/huggingface/datasets/pull/6428
6,428
Set dev version
closed
3
2023-11-16T08:12:55
2023-11-16T08:19:39
2023-11-16T08:13:28
albertvillanova
[]
null
true
1,996,248,605
https://api.github.com/repos/huggingface/datasets/issues/6427
https://github.com/huggingface/datasets/pull/6427
6,427
Release: 2.15.0
closed
4
2023-11-16T07:37:20
2023-11-16T08:12:12
2023-11-16T07:43:05
albertvillanova
[]
null
true
1,995,363,264
https://api.github.com/repos/huggingface/datasets/issues/6426
https://github.com/huggingface/datasets/pull/6426
6,426
More robust temporary directory deletion
closed
7
2023-11-15T19:06:42
2023-12-01T15:37:32
2023-12-01T15:31:19
mariosasko
[]
While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on the session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler appro...
true
1,995,269,382
https://api.github.com/repos/huggingface/datasets/issues/6425
https://github.com/huggingface/datasets/pull/6425
6,425
Fix deprecation warning when building conda package
closed
3
2023-11-15T18:00:11
2023-12-13T14:22:30
2023-12-13T14:16:00
albertvillanova
[]
When building/releasing conda package, we get this deprecation warning: ``` /usr/share/miniconda/envs/build-datasets/bin/conda-build:11: DeprecationWarning: conda_build.cli.main_build.main is deprecated and will be removed in 4.0.0. Use `conda build` instead. ``` This PR fixes the deprecation warning by using `co...
true
1,995,224,516
https://api.github.com/repos/huggingface/datasets/issues/6424
https://github.com/huggingface/datasets/pull/6424
6,424
[docs] troubleshooting guide
closed
2
2023-11-15T17:28:14
2023-11-30T17:29:55
2023-11-30T17:23:46
MKhalusova
[]
Hi all! This is a PR adding a troubleshooting guide for Datasets docs. I went through the library's GitHub Issues and Forum questions and identified a few issues that are common enough that I think it would be valuable to include them in the troubleshooting guide. These are: - creating a dataset from a folder and n...
true
1,994,946,847
https://api.github.com/repos/huggingface/datasets/issues/6423
https://github.com/huggingface/datasets/pull/6423
6,423
Fix conda release by adding pyarrow-hotfix dependency
closed
6
2023-11-15T14:57:12
2023-11-15T17:15:33
2023-11-15T17:09:24
albertvillanova
[]
Fix conda release by adding pyarrow-hotfix dependency. Note that conda release failed in latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723 ``` Traceback (most recent call last): File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/t...
true
1,994,579,267
https://api.github.com/repos/huggingface/datasets/issues/6422
https://github.com/huggingface/datasets/issues/6422
6,422
Allow to choose the `writer_batch_size` when using `save_to_disk`
open
2
2023-11-15T11:18:34
2023-11-16T10:00:21
null
NathanGodey
[ "enhancement" ]
### Feature request Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods. ### Motivation The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in R...
false
1,994,451,553
https://api.github.com/repos/huggingface/datasets/issues/6421
https://github.com/huggingface/datasets/pull/6421
6,421
Add pyarrow-hotfix to release docs
closed
3
2023-11-15T10:06:44
2023-11-15T13:49:55
2023-11-15T13:38:22
albertvillanova
[ "maintenance" ]
Add `pyarrow-hotfix` to release docs.
true
1,994,278,903
https://api.github.com/repos/huggingface/datasets/issues/6420
https://github.com/huggingface/datasets/pull/6420
6,420
Set dev version
closed
3
2023-11-15T08:22:19
2023-11-15T08:33:36
2023-11-15T08:22:33
albertvillanova
[]
null
true
1,994,257,873
https://api.github.com/repos/huggingface/datasets/issues/6419
https://github.com/huggingface/datasets/pull/6419
6,419
Release: 2.14.7
closed
6
2023-11-15T08:07:37
2023-11-15T17:35:30
2023-11-15T08:12:59
albertvillanova
[]
Release 2.14.7.
true
1,993,224,629
https://api.github.com/repos/huggingface/datasets/issues/6418
https://github.com/huggingface/datasets/pull/6418
6,418
Remove token value from warnings
closed
3
2023-11-14T17:34:06
2023-11-14T22:26:04
2023-11-14T22:19:45
mariosasko
[]
Fix #6412
true
1,993,149,416
https://api.github.com/repos/huggingface/datasets/issues/6417
https://github.com/huggingface/datasets/issues/6417
6,417
Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error
closed
3
2023-11-14T16:53:20
2023-11-16T20:23:41
2023-11-16T20:23:41
Davo00
[]
### Describe the bug Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab. **Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb **Error**: `ValueError: Arrow type extensi...
false
1,992,954,723
https://api.github.com/repos/huggingface/datasets/issues/6416
https://github.com/huggingface/datasets/pull/6416
6,416
Rename audio_classificiation.py to audio_classification.py
closed
4
2023-11-14T15:15:29
2023-11-15T11:59:32
2023-11-15T11:53:20
carlthome
[]
null
true
1,992,917,248
https://api.github.com/repos/huggingface/datasets/issues/6415
https://github.com/huggingface/datasets/pull/6415
6,415
Fix multi gpu map example
closed
23
2023-11-14T14:57:18
2024-01-31T00:49:15
2023-11-22T15:42:19
lhoestq
[]
- use `torch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES` - add `if __name__ == "__main__"` fix https://github.com/huggingface/datasets/issues/6186
true
1,992,482,491
https://api.github.com/repos/huggingface/datasets/issues/6414
https://github.com/huggingface/datasets/pull/6414
6,414
Set `usedforsecurity=False` in hashlib methods (FIPS compliance)
closed
10
2023-11-14T10:47:09
2023-11-17T14:23:20
2023-11-17T14:17:00
Wauplin
[]
Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782. **TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets` so it's fine. From Python 3.9 we s...
true
1,992,401,594
https://api.github.com/repos/huggingface/datasets/issues/6412
https://github.com/huggingface/datasets/issues/6412
6,412
User token is printed out!
closed
1
2023-11-14T10:01:34
2023-11-14T22:19:46
2023-11-14T22:19:46
mohsen-goodarzi
[]
This line prints the user token on the command line! Is that safe? https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091
false
1,992,386,630
https://api.github.com/repos/huggingface/datasets/issues/6411
https://github.com/huggingface/datasets/pull/6411
6,411
Fix dependency conflict within CI build documentation
closed
1
2023-11-14T09:52:51
2023-11-14T10:05:59
2023-11-14T10:05:35
albertvillanova
[]
Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`). This is a temporary hot fix of our CI build documentation until we stop using `apache-beam`. Fix #6406.
true
1,992,100,209
https://api.github.com/repos/huggingface/datasets/issues/6410
https://github.com/huggingface/datasets/issues/6410
6,410
Datasets does not load HuggingFace Repository properly
open
2
2023-11-14T06:50:49
2023-11-16T06:54:36
null
MikeDoes
[]
### Describe the bug Dear Datasets team, We just have published a dataset on Huggingface: https://huggingface.co/ai4privacy However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me ...
false
1,991,960,865
https://api.github.com/repos/huggingface/datasets/issues/6409
https://github.com/huggingface/datasets/issues/6409
6,409
using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception
closed
0
2023-11-14T04:21:01
2023-11-22T16:42:09
2023-11-22T16:42:09
neiblegy
[]
### Describe the bug I'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and I call disable_progress_bar() to disable the bar. There will be an exception as follows: `AttributeError: 'function' object has no attribute 'close' Exception ignored in: <function TqdmCallback....
false
1,991,902,972
https://api.github.com/repos/huggingface/datasets/issues/6408
https://github.com/huggingface/datasets/issues/6408
6,408
`IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns`
open
0
2023-11-14T03:12:08
2023-11-16T06:24:10
null
shmily326
[]
### Describe the bug `IterableDataset` drops columns listed in `remove_columns` even when the map function adds columns with those names, while `Dataset` keeps them. May be related to the code below: https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756 ### Steps t...
false
1,991,514,079
https://api.github.com/repos/huggingface/datasets/issues/6407
https://github.com/huggingface/datasets/issues/6407
6,407
Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object"
open
1
2023-11-13T21:27:43
2024-07-30T12:35:09
null
eawer
[]
### Describe the bug I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bu...
false
1,990,469,045
https://api.github.com/repos/huggingface/datasets/issues/6406
https://github.com/huggingface/datasets/issues/6406
6,406
CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
closed
0
2023-11-13T11:36:10
2023-11-14T10:05:36
2023-11-14T10:05:36
albertvillanova
[]
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390 ``` ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' ```
false
1,990,358,743
https://api.github.com/repos/huggingface/datasets/issues/6405
https://github.com/huggingface/datasets/issues/6405
6,405
ConfigNamesError on a simple CSV file
closed
3
2023-11-13T10:28:29
2023-11-13T20:01:24
2023-11-13T20:01:24
severo
[ "bug" ]
See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1 ``` Error code: ConfigNamesError Exception: TypeError Message: __init__() missing 1 required positional argument: 'dtype' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runn...
false
1,990,211,901
https://api.github.com/repos/huggingface/datasets/issues/6404
https://github.com/huggingface/datasets/pull/6404
6,404
Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248
closed
15
2023-11-13T09:15:39
2023-11-14T10:29:48
2023-11-14T10:23:29
albertvillanova
[]
Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). Fix #6396.
true
1,990,098,817
https://api.github.com/repos/huggingface/datasets/issues/6403
https://github.com/huggingface/datasets/issues/6403
6,403
Cannot import datasets on google colab (python 3.10.12)
closed
2
2023-11-13T08:14:43
2023-11-16T05:04:22
2023-11-16T05:04:21
nabilaannisa
[]
### Describe the bug I'm trying a full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but I got this type of error when importing datasets on my google colab (python version is 3.10.12) ![image](https://gith...
false
1,989,094,542
https://api.github.com/repos/huggingface/datasets/issues/6402
https://github.com/huggingface/datasets/pull/6402
6,402
Update torch_formatter.py
closed
2
2023-11-11T19:40:41
2024-03-15T11:31:53
2024-03-15T11:25:37
varunneal
[]
Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation.
true
1,988,710,061
https://api.github.com/repos/huggingface/datasets/issues/6401
https://github.com/huggingface/datasets/issues/6401
6,401
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working
closed
2
2023-11-11T04:09:07
2023-11-20T17:45:20
2023-11-20T17:45:20
userbox020
[]
### Describe the bug ``` (datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 360/360 [00:00<00:00, 2.16MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 65.1M/65.1M [00:19<00:00, 3.38MB/s] Downloading data: 100...
false
1,988,571,317
https://api.github.com/repos/huggingface/datasets/issues/6400
https://github.com/huggingface/datasets/issues/6400
6,400
Safely load datasets by disabling execution of dataset loading script
closed
4
2023-11-10T23:48:29
2024-06-13T15:56:13
2024-06-13T15:56:13
irenedea
[ "enhancement" ]
### Feature request Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation This is a security vulnerability that could lead to arbitrary code e...
false
1,988,368,503
https://api.github.com/repos/huggingface/datasets/issues/6399
https://github.com/huggingface/datasets/issues/6399
6,399
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
open
1
2023-11-10T20:48:46
2024-06-22T00:13:48
null
y-hwang
[]
### Describe the bug Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets. Thank you! ### Steps to repro...
false
1,987,786,446
https://api.github.com/repos/huggingface/datasets/issues/6398
https://github.com/huggingface/datasets/pull/6398
6,398
Remove redundant condition in builders
closed
3
2023-11-10T14:56:43
2023-11-14T10:49:15
2023-11-14T10:43:00
albertvillanova
[]
Minor refactoring to remove redundant condition.
true
1,987,622,152
https://api.github.com/repos/huggingface/datasets/issues/6397
https://github.com/huggingface/datasets/issues/6397
6,397
Raise a different exception for inexisting dataset vs files without known extension
closed
0
2023-11-10T13:22:14
2023-11-22T15:12:34
2023-11-22T15:12:34
severo
[]
See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557 We have the same error for: - https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist - https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files withou...
false
1,987,308,077
https://api.github.com/repos/huggingface/datasets/issues/6396
https://github.com/huggingface/datasets/issues/6396
6,396
Issue with pyarrow 14.0.1
closed
5
2023-11-10T10:02:12
2023-11-14T10:23:30
2023-11-14T10:23:30
severo
[]
See https://github.com/huggingface/datasets-server/pull/2089 for reference ``` from datasets import (Array2D, Dataset, Features) feature_type = Array2D(shape=(2, 2), dtype="float32") content = [[0.0, 0.0], [0.0, 0.0]] features = Features({"col": feature_type}) dataset = Dataset.from_dict({"col": [content]}, fea...
false
1,986,484,124
https://api.github.com/repos/huggingface/datasets/issues/6395
https://github.com/huggingface/datasets/issues/6395
6,395
Add ability to set lock type
closed
1
2023-11-09T22:12:30
2023-11-23T18:50:00
2023-11-23T18:50:00
leoleoasd
[ "enhancement" ]
### Feature request Allow setting the file lock type, maybe from an environment variable Currently, it only depends on whether fcntl is available: https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16 ### Motivation In my environment...
false
1,985,947,116
https://api.github.com/repos/huggingface/datasets/issues/6394
https://github.com/huggingface/datasets/issues/6394
6,394
TorchFormatter images (H, W, C) instead of (C, H, W) format
closed
9
2023-11-09T16:02:15
2024-04-11T12:40:16
2024-04-11T12:40:16
Modexus
[]
### Describe the bug Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy. However, pytorch normally uses (C, H, W) format. Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways. If not using the format it is possible to ...
false
1,984,913,259
https://api.github.com/repos/huggingface/datasets/issues/6393
https://github.com/huggingface/datasets/issues/6393
6,393
Filter occasionally hangs
closed
12
2023-11-09T06:18:30
2025-02-22T00:49:19
2025-02-22T00:49:19
dakinggg
[]
### Describe the bug A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm) There is a trace produced ``` Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10> Traceback (most recent call last): File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", l...
false
1,984,369,545
https://api.github.com/repos/huggingface/datasets/issues/6392
https://github.com/huggingface/datasets/issues/6392
6,392
`push_to_hub` is not robust to hub closing connection
closed
12
2023-11-08T20:44:53
2023-12-20T07:28:24
2023-12-01T17:51:34
msis
[]
### Describe the bug Similar to #6172, `push_to_hub` will crash if the Hub resets the connection and raise the following error: ``` Pushing dataset shards to the dataset hub: 32%|β–ˆβ–ˆβ–ˆβ– | 54/171 [06:38<14:23, 7.38s/it] Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/...
false
1,984,091,776
https://api.github.com/repos/huggingface/datasets/issues/6391
https://github.com/huggingface/datasets/pull/6391
6,391
Webdataset dataset builder
closed
5
2023-11-08T17:31:59
2024-05-22T16:51:08
2023-11-28T16:33:10
lhoestq
[]
Allow `load_dataset` to support the Webdataset format. It allows users to download/stream data from local files or from the Hugging Face Hub. Moreover it will enable the Dataset Viewer for Webdataset datasets on HF. ## Implementation details - I added a new Webdataset builder - dataset with TAR files are n...
true
1,983,725,707
https://api.github.com/repos/huggingface/datasets/issues/6390
https://github.com/huggingface/datasets/pull/6390
6,390
handle future deprecation argument
closed
1
2023-11-08T14:21:25
2023-11-21T02:10:24
2023-11-14T15:15:59
winglian
[]
getting this error: ``` /root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'. return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ``` Since datasets supports arrow greater than 8.0.0, we need to handle both ...
true
1,983,545,744
https://api.github.com/repos/huggingface/datasets/issues/6389
https://github.com/huggingface/datasets/issues/6389
6,389
Index 339 out of range for dataset of size 339 <-- save_to_file()
open
2
2023-11-08T12:52:09
2023-11-24T09:14:13
null
jaggzh
[]
### Describe the bug This occurs when saving out some Audio() data. The data is audio recordings with associated 'sentences'. (They use the audio 'bytes' approach because they're clips within audio files). Code is below the traceback (I can't upload the voice audio/text (it's not even me)). ``` Traceback (most recent call ...
false
1,981,136,093
https://api.github.com/repos/huggingface/datasets/issues/6388
https://github.com/huggingface/datasets/issues/6388
6,388
How to create 3d medical imgae dataset?
open
0
2023-11-07T11:27:36
2023-11-07T11:28:53
null
QingYunA
[ "enhancement" ]
### Feature request I am new to huggingface; after looking through the `datasets` docs, I can't find how to create a dataset containing 3D medical images (ending with '.mhd', '.dcm', '.nii') ### Motivation Help us upload 3D medical datasets to huggingface! ### Your contribution I'll submit a PR if I find a way to...
false
1,980,224,020
https://api.github.com/repos/huggingface/datasets/issues/6387
https://github.com/huggingface/datasets/issues/6387
6,387
How to load existing downloaded dataset ?
closed
1
2023-11-06T22:51:44
2023-11-16T18:07:01
2023-11-16T18:07:01
liming-ai
[ "enhancement" ]
Hi @mariosasko @lhoestq @katielink Thanks for your contribution and hard work. ### Feature request First, I download a dataset as normal by: ``` from datasets import load_dataset dataset = load_dataset('username/data_name', cache_dir='data') ``` The dataset format in `data` directory will be: ``` ...
false
1,979,878,014
https://api.github.com/repos/huggingface/datasets/issues/6386
https://github.com/huggingface/datasets/issues/6386
6,386
Formatting overhead
closed
2
2023-11-06T19:06:38
2023-11-06T23:56:12
2023-11-06T23:56:12
d-miketa
[]
### Describe the bug Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst...
false
1,979,308,338
https://api.github.com/repos/huggingface/datasets/issues/6385
https://github.com/huggingface/datasets/issues/6385
6,385
Get an error when i try to concatenate the squad dataset with my own dataset
closed
2
2023-11-06T14:29:22
2023-11-06T16:50:45
2023-11-06T16:50:45
CCDXDX
[]
### Describe the bug Hello, I'm new here and I need to concatenate the squad dataset with my own dataset I created. I get the following error when I try to do it: Traceback (most recent call last): Cell In[9], line 1 concatenated_dataset = concatenate_datasets([train_dataset, dataset1]) File ~\ana...
false
1,979,117,069
https://api.github.com/repos/huggingface/datasets/issues/6384
https://github.com/huggingface/datasets/issues/6384
6,384
Load the local dataset folder from other place
closed
1
2023-11-06T13:07:04
2023-11-19T05:42:06
2023-11-19T05:42:05
OrangeSodahub
[]
This is from https://github.com/huggingface/diffusers/issues/5573
false
1,978,189,389
https://api.github.com/repos/huggingface/datasets/issues/6383
https://github.com/huggingface/datasets/issues/6383
6,383
imagenet-1k downloads over and over
closed
1
2023-11-06T02:58:58
2024-06-12T13:15:00
2023-11-06T06:02:39
seann999
[]
### Describe the bug What could be causing this? ``` $ python3 Python 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> load_dataset("imagenet-1k") Downloading builder ...
false
1,977,400,799
https://api.github.com/repos/huggingface/datasets/issues/6382
https://github.com/huggingface/datasets/issues/6382
6,382
Add CheXpert dataset for vision
open
3
2023-11-04T15:36:11
2024-01-10T11:53:52
null
SauravMaheshkar
[ "enhancement", "dataset request" ]
### Feature request ### Name **CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison** ### Paper https://arxiv.org/abs/1901.07031 ### Data https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2 ### Motivation CheXpert is one of the fund...
false
1,975,028,470
https://api.github.com/repos/huggingface/datasets/issues/6381
https://github.com/huggingface/datasets/pull/6381
6,381
Add my dataset
closed
3
2023-11-02T20:59:52
2023-11-08T14:37:46
2023-11-06T15:50:14
keyur536
[]
## medical data **Description:** This dataset, named "medical data," is a collection of text data from various sources, carefully curated and cleaned for use in natural language processing (NLP) tasks. It consists of a diverse range of text, including articles, books, and online content, covering topics from scienc...
true
1,974,741,221
https://api.github.com/repos/huggingface/datasets/issues/6380
https://github.com/huggingface/datasets/pull/6380
6,380
Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET
open
0
2023-11-02T17:28:23
2023-11-02T17:31:19
null
RuntimeRacer
[]
This PR proposes a (slightly hacky) fix for an Issue that can occur when downloading large dataset parts over unstable connections. The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594. Issue Symptoms & Behaviour: - Download of a large archive file during dataset down...
true
1,974,638,850
https://api.github.com/repos/huggingface/datasets/issues/6379
https://github.com/huggingface/datasets/pull/6379
6,379
Avoid redundant warning when encoding NumPy array as `Image`
closed
5
2023-11-02T16:37:58
2023-11-06T17:53:27
2023-11-02T17:08:07
mariosasko
[]
Avoid a redundant warning in `encode_np_array` by removing the identity check as NumPy `dtype`s can be equal without having identical `id`s. Additionally, fix "unreachable" checks in `encode_np_array`.
true
1,973,942,770
https://api.github.com/repos/huggingface/datasets/issues/6378
https://github.com/huggingface/datasets/pull/6378
6,378
Support pyarrow 14.0.0
closed
3
2023-11-02T10:25:10
2023-11-02T15:24:28
2023-11-02T15:15:44
albertvillanova
[]
Support `pyarrow` 14.0.0. Fix #6377 and fix #6374 (root cause). This fix is analog to a previous one: - #6175
true