Column schema for the records below (each record lists these fields in this order, one value per line):

id               int64         599M to 3.29B
url              string        length 58 to 61
html_url         string        length 46 to 51
number           int64         1 to 7.72k
title            string        length 1 to 290
state            string        2 values
comments         int64         0 to 70
created_at       timestamp[s]  2020-04-14 10:18:02 to 2025-08-05 09:28:51
updated_at       timestamp[s]  2020-04-27 16:04:17 to 2025-08-05 11:39:56
closed_at        timestamp[s]  2020-04-14 12:01:40 to 2025-08-01 05:15:45 (nullable)
user_login       string        length 3 to 26
labels           list          length 0 to 4
body             string        length 0 to 228k (nullable)
is_pull_request  bool          2 classes
1,448,211,251
https://api.github.com/repos/huggingface/datasets/issues/5238
https://github.com/huggingface/datasets/pull/5238
5,238
Make `Version` hashable
closed
1
2022-11-14T14:52:55
2022-11-14T15:30:02
2022-11-14T15:27:35
mariosasko
[]
Add `__hash__` to the `Version` class to make it hashable (and remove the unneeded methods), as `Version("0.0.0")` is the default value of `BuilderConfig.version` and default values of dataclass fields need to be hashable in Python 3.11. Fix https://github.com/huggingface/datasets/issues/5230
true
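A minimal sketch (not the actual `datasets` code) of the Python 3.11 behavior this PR works around: a class that defines `__eq__` without `__hash__` is unhashable and can no longer be used as a plain dataclass default, while adding `__hash__` makes it acceptable again. The class names below are illustrative stand-ins.

```python
from dataclasses import dataclass


class VersionLike:
    """Illustrative stand-in for datasets.utils.version.Version."""

    def __init__(self, version_str: str):
        self.version_str = version_str

    def __eq__(self, other):
        return self.version_str == other.version_str

    # Defining __eq__ without __hash__ would set __hash__ to None, making
    # instances unhashable; Python 3.11 then rejects them as dataclass
    # defaults with "ValueError: mutable default <class ...> for field
    # version is not allowed: use default_factory". Adding __hash__ keeps
    # the class usable as a plain default value.
    def __hash__(self):
        return hash(self.version_str)


@dataclass
class BuilderConfigLike:
    version: VersionLike = VersionLike("0.0.0")


print(BuilderConfigLike().version == VersionLike("0.0.0"))  # True
```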
1,448,202,491
https://api.github.com/repos/huggingface/datasets/issues/5237
https://github.com/huggingface/datasets/pull/5237
5,237
Encode path only for old versions of hfh
closed
1
2022-11-14T14:46:57
2022-11-14T17:38:18
2022-11-14T17:35:59
lhoestq
[]
The next version of `huggingface-hub`, 0.11, already encodes the `path`, and we don't want to encode it twice
true
1,448,190,801
https://api.github.com/repos/huggingface/datasets/issues/5236
https://github.com/huggingface/datasets/pull/5236
5,236
Handle ArrowNotImplementedError caused by try_type being Image or Audio in cast
closed
2
2022-11-14T14:38:59
2022-11-14T16:04:29
2022-11-14T16:01:48
mariosasko
[]
Handle the `ArrowNotImplementedError` thrown when `try_type` is `Image` or `Audio` and the input array cannot be converted to their storage formats. Reproducer: ```python from datasets import Dataset from PIL import Image import requests ds = Dataset.from_dict({"image": [Image.open(requests.get("https://uploa...
true
1,448,052,660
https://api.github.com/repos/huggingface/datasets/issues/5235
https://github.com/huggingface/datasets/pull/5235
5,235
Pin `typer` version in tests to <0.5 to fix Windows CI
closed
0
2022-11-14T13:17:02
2022-11-14T15:43:01
2022-11-14T13:41:12
polinaeterna
[]
Otherwise `click` fails on Windows: ``` Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code exec(code, run_glob...
true
1,447,999,062
https://api.github.com/repos/huggingface/datasets/issues/5234
https://github.com/huggingface/datasets/pull/5234
5,234
fix: dataset path should be absolute
closed
3
2022-11-14T12:47:40
2022-12-07T23:49:22
2022-12-07T23:46:34
vigsterkr
[]
cache_file_name depends on the dataset's path. A simple example of how this could cause a problem: ``` import os import datasets def add_prefix(example): example["text"] = "Review: " + example["text"] return example ds = datasets.load_from_disk("a/relative/path") os.chdir("/tmp") ds_1 = ds.map(add_...
true
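The snippet above is truncated; below is a hedged workaround sketch (not the fix this PR makes inside the library): resolving the path to an absolute one before loading means a later `os.chdir()` can no longer change what the map cache file name resolves to.

```python
import os

import datasets

# Resolve the relative path up front so later working-directory changes
# cannot affect how cache_file_name is resolved (illustrative workaround).
dataset_path = os.path.abspath("a/relative/path")
ds = datasets.load_from_disk(dataset_path)

os.chdir("/tmp")  # no longer changes where ds.map() looks for or writes its cache

def add_prefix(example):
    example["text"] = "Review: " + example["text"]
    return example

ds_1 = ds.map(add_prefix)
```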
1,447,906,868
https://api.github.com/repos/huggingface/datasets/issues/5233
https://github.com/huggingface/datasets/pull/5233
5,233
Fix shards in IterableDataset.from_generator
closed
1
2022-11-14T11:42:09
2022-11-14T14:16:03
2022-11-14T14:13:22
lhoestq
[]
Allow defining a sharded iterable dataset
true
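A hedged sketch of the kind of usage this change enables, with illustrative file names: passing a list through `gen_kwargs` lets the generator-backed `IterableDataset` be split into shards, one per list element.

```python
from datasets import IterableDataset

def generate_examples(shards):
    # Each shard is processed independently, so the dataset can be split
    # across dataloader workers or distributed ranks.
    for shard_path in shards:
        with open(shard_path, encoding="utf-8") as f:
            for line in f:
                yield {"text": line.rstrip("\n")}

shard_paths = [f"data/shard_{i}.txt" for i in range(4)]  # illustrative paths
ds = IterableDataset.from_generator(generate_examples, gen_kwargs={"shards": shard_paths})
print(ds.n_shards)  # expected: 4
```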
1,446,294,165
https://api.github.com/repos/huggingface/datasets/issues/5232
https://github.com/huggingface/datasets/issues/5232
5,232
Incompatible dill versions in datasets 2.6.1
closed
2
2022-11-12T06:46:23
2022-11-14T08:24:43
2022-11-14T08:07:59
vinaykakade
[]
### Describe the bug datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6 used by the multiprocess dependency in datasets 2.6.1. This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but has not yet been released. Please release a new version of the...
false
1,445,883,267
https://api.github.com/repos/huggingface/datasets/issues/5231
https://github.com/huggingface/datasets/issues/5231
5,231
Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly
closed
1
2022-11-11T18:54:36
2022-11-11T20:42:29
2022-11-11T18:59:50
plamb-viso
[]
I have a Dataset with two Features defined as follows: ``` 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'bbox': Array2D(dtype="int64", shape=(512, 4)), ``` On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of ...
false
1,445,507,580
https://api.github.com/repos/huggingface/datasets/issues/5230
https://github.com/huggingface/datasets/issues/5230
5,230
dataclasses error when importing the library in python 3.11
closed
5
2022-11-11T13:53:49
2023-05-25T04:37:05
2022-11-14T15:27:37
yonikremer
[]
### Describe the bug When I import datasets using python 3.11 the dataclasses standard library raises the following error: `ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory` When I tried to import the library using the following jupyter note...
false
1,445,121,028
https://api.github.com/repos/huggingface/datasets/issues/5229
https://github.com/huggingface/datasets/issues/5229
5,229
Type error when calling `map` over dataset containing 0-d tensors
closed
2
2022-11-11T08:27:28
2023-01-13T16:00:53
2023-01-13T16:00:53
phipsgabler
[]
### Describe the bug 0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset. ### Steps to reproduce the bug ``` ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_fo...
false
1,444,763,105
https://api.github.com/repos/huggingface/datasets/issues/5228
https://github.com/huggingface/datasets/issues/5228
5,228
Loading a dataset from the hub fails if you happen to have a folder of the same name
open
3
2022-11-11T00:51:54
2023-05-03T23:23:04
null
dakinggg
[]
### Describe the bug I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and ...
false
1,444,620,094
https://api.github.com/repos/huggingface/datasets/issues/5227
https://github.com/huggingface/datasets/issues/5227
5,227
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
closed
2
2022-11-10T21:57:06
2023-10-07T05:04:41
2022-11-10T22:05:43
ScottM-wizard
[]
### Describe the bug From these lines: from datasets import list_datasets, load_dataset dataset = load_dataset("wikisql","binary") I get the error message: datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files And yet 'wikisql' is reported to exist via the list_datas...
false
1,444,385,148
https://api.github.com/repos/huggingface/datasets/issues/5226
https://github.com/huggingface/datasets/issues/5226
5,226
Q: Memory release when removing the column?
closed
3
2022-11-10T18:35:27
2022-11-29T15:10:10
2022-11-29T15:10:10
bayartsogt-ya
[]
### Describe the bug How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks? ```python from datasets import load_dataset common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True) # check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670...
false
1,444,305,183
https://api.github.com/repos/huggingface/datasets/issues/5225
https://github.com/huggingface/datasets/issues/5225
5,225
Add video feature
open
7
2022-11-10T17:36:11
2022-12-02T15:13:15
null
nateraw
[ "enhancement", "help wanted", "vision" ]
### Feature request Add a `Video` feature to the library so folks can include videos in their datasets. ### Motivation Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos: 1. Videos, unlike images, can end up being extremely large files 2. Often times ...
false
1,443,640,867
https://api.github.com/repos/huggingface/datasets/issues/5224
https://github.com/huggingface/datasets/issues/5224
5,224
Seems to freeze when loading audio dataset with wav files from local folder
closed
4
2022-11-10T10:29:31
2023-04-25T09:54:05
2022-11-22T11:24:19
uriii3
[]
### Describe the bug I'm following the instructions in https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata to load a dataset from a local folder. I have everything in one folder: a train folder containing the audio files and the CSV. When I try to load the dataset and run from term...
false
1,442,610,658
https://api.github.com/repos/huggingface/datasets/issues/5223
https://github.com/huggingface/datasets/pull/5223
5,223
Add SQL guide
closed
4
2022-11-09T19:10:27
2022-11-15T17:40:25
2022-11-15T17:40:21
stevhliu
[]
This PR adapts @nateraw's awesome SQL notebook as a guide for the docs!
true
1,442,412,507
https://api.github.com/repos/huggingface/datasets/issues/5222
https://github.com/huggingface/datasets/issues/5222
5,222
HuggingFace website is incorrectly reporting that my datasets are pickled
closed
4
2022-11-09T16:41:16
2022-11-09T18:10:46
2022-11-09T18:06:57
ProGamerGov
[]
### Describe the bug HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images. Hopefully this is the right location to report this bug. ### Steps to reproduce the bug Inspect my dataset repository here: https://huggingface.co/datasets...
false
1,442,309,094
https://api.github.com/repos/huggingface/datasets/issues/5221
https://github.com/huggingface/datasets/issues/5221
5,221
Cannot push
closed
2
2022-11-09T15:32:05
2022-11-10T18:11:21
2022-11-10T18:11:11
bayartsogt-ya
[]
### Describe the bug I am facing this issue when I try to push a tar.gz file of around 11G to the Hub. ``` (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ du -sh * 4.0K README.md 13G data 516K test.jsonl 18M train.jsonl 4.0K ulaanbal_v0.py 11G ulaanbal_v0.tar.gz 452K validation.jsonl...
false
1,441,664,377
https://api.github.com/repos/huggingface/datasets/issues/5220
https://github.com/huggingface/datasets/issues/5220
5,220
Implicit type conversion of lists in to_pandas
closed
2
2022-11-09T08:40:18
2022-11-10T16:12:26
2022-11-10T16:12:26
sanderland
[]
### Describe the bug ``` ds = Dataset.from_list([{'a':[1,2,3]}]) ds.to_pandas().a.values[0] ``` Results in `array([1, 2, 3])` -- a rather unexpected conversion of types which made downstream tools expecting lists not happy. ### Steps to reproduce the bug See snippet ### Expected behavior Keep the original typ...
false
1,441,255,910
https://api.github.com/repos/huggingface/datasets/issues/5219
https://github.com/huggingface/datasets/issues/5219
5,219
Delta Tables usage using Datasets Library
open
4
2022-11-09T02:43:56
2023-03-02T19:29:12
null
reichenbch
[ "enhancement" ]
### Feature request Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well. ### Motivation We know datasets library can absorb csv, json, parquet, etc. file formats but it would be great if Datasets library...
false
1,441,254,194
https://api.github.com/repos/huggingface/datasets/issues/5218
https://github.com/huggingface/datasets/issues/5218
5,218
Delta Tables usage using Datasets Library
closed
0
2022-11-09T02:42:18
2022-11-09T02:42:36
2022-11-09T02:42:36
rcv-koo
[ "enhancement" ]
### Feature request Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well. ### Motivation We know datasets library can absorb csv, json, parquet, etc. file formats but it would be great if Datasets library...
false
1,441,252,740
https://api.github.com/repos/huggingface/datasets/issues/5217
https://github.com/huggingface/datasets/pull/5217
5,217
Reword E2E training and inference tips in the vision guides
closed
1
2022-11-09T02:40:01
2022-11-10T01:38:09
2022-11-10T01:36:09
sayakpaul
[]
Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730
true
1,441,041,947
https://api.github.com/repos/huggingface/datasets/issues/5216
https://github.com/huggingface/datasets/issues/5216
5,216
save_elasticsearch_index
open
1
2022-11-08T23:06:52
2022-11-09T13:16:45
null
amobash2
[]
Hi, I am new to Dataset and elasticsearch. I was wondering whether there is an equivalent of save_faiss_index for saving an elasticsearch index locally for later use, to remove the need to re-index a dataset?
false
1,440,334,978
https://api.github.com/repos/huggingface/datasets/issues/5214
https://github.com/huggingface/datasets/pull/5214
5,214
Update github pr docs actions
closed
1
2022-11-08T14:43:37
2022-11-08T15:39:58
2022-11-08T15:39:57
mishig25
[]
null
true
1,440,037,534
https://api.github.com/repos/huggingface/datasets/issues/5213
https://github.com/huggingface/datasets/pull/5213
5,213
Add support for different configs with `push_to_hub`
closed
8
2022-11-08T11:45:47
2022-12-02T16:48:23
2022-12-02T16:44:07
polinaeterna
[ "enhancement" ]
will solve #5151 @lhoestq @albertvillanova @mariosasko This is still a super draft so please ignore code issues but I want to discuss some conceptually important things. I suggest a way to do `.push_to_hub("repo_id", "config_name")` with pushing parquet files to directories named as `config_name` (inside `data...
true
1,439,642,483
https://api.github.com/repos/huggingface/datasets/issues/5212
https://github.com/huggingface/datasets/pull/5212
5,212
Fix CI require_beam maximum compatible dill version
closed
1
2022-11-08T07:30:01
2022-11-15T06:32:27
2022-11-15T06:32:26
albertvillanova
[]
A previous commit to main branch introduced an additional requirement on maximum compatible `dill` version with `apache-beam` in our CI `require_beam`: - d7c942228b8dcf4de64b00a3053dce59b335f618 - ec222b220b79f10c8d7b015769f0999b15959feb This PR fixes the maximum compatible `dill` version with `apache-beam`, which...
true
1,438,544,617
https://api.github.com/repos/huggingface/datasets/issues/5211
https://github.com/huggingface/datasets/pull/5211
5,211
Update Overview.ipynb google colab
closed
3
2022-11-07T15:23:52
2022-11-29T15:59:48
2022-11-29T15:54:17
lhoestq
[]
- removed metrics stuff - added image example - added audio example (with ffmpeg instructions) - updated the "add a new dataset" section
true
1,438,492,507
https://api.github.com/repos/huggingface/datasets/issues/5210
https://github.com/huggingface/datasets/pull/5210
5,210
Tweak readme
closed
3
2022-11-07T14:51:23
2022-11-24T11:35:07
2022-11-24T11:26:16
lhoestq
[]
Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security
true
1,438,367,678
https://api.github.com/repos/huggingface/datasets/issues/5209
https://github.com/huggingface/datasets/issues/5209
5,209
Implement ability to define splits in metadata section of dataset card
closed
9
2022-11-07T13:27:16
2023-07-21T14:36:02
2023-07-21T14:36:01
merveenoyan
[ "enhancement" ]
### Feature request If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see a bunch of folders that have various CSV files. I'd like the dataset viewer to show these files instead of only one dataset like it currently does. (and also people to be able to load them as splits inste...
false
1,438,035,707
https://api.github.com/repos/huggingface/datasets/issues/5208
https://github.com/huggingface/datasets/pull/5208
5,208
Refactor CI hub fixtures to use monkeypatch instead of patch
closed
1
2022-11-07T09:25:05
2022-11-08T06:51:20
2022-11-08T06:49:17
albertvillanova
[]
Minor refactoring of CI to use `pytest` `monkeypatch` instead of `unittest` `patch`.
true
1,437,858,506
https://api.github.com/repos/huggingface/datasets/issues/5207
https://github.com/huggingface/datasets/issues/5207
5,207
Connection error of the HuggingFace's dataset Hub due to SSLError with proxy
open
14
2022-11-07T06:56:23
2025-03-08T09:04:10
null
leemgs
[]
### Describe the bug It's weird. I cannot connect normally to the HuggingFace dataset Hub due to an SSLError in my office. Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy), I'm getting the SSLError issue. What should I do to download the dataset stored in Hugg...
false
1,437,223,894
https://api.github.com/repos/huggingface/datasets/issues/5206
https://github.com/huggingface/datasets/issues/5206
5,206
Use logging instead of printing to console
closed
1
2022-11-05T23:48:02
2022-11-06T00:06:00
2022-11-06T00:05:59
bilelomrani1
[]
### Describe the bug Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingfa...
false
1,437,221,987
https://api.github.com/repos/huggingface/datasets/issues/5205
https://github.com/huggingface/datasets/pull/5205
5,205
Add missing `DownloadConfig.use_auth_token` value
closed
1
2022-11-05T23:36:36
2022-11-08T08:13:00
2022-11-07T16:20:24
alvarobartt
[]
This PR solves https://github.com/huggingface/datasets/issues/5204 Now the `token` is propagated so that `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets in the Hub.
true
1,437,221,259
https://api.github.com/repos/huggingface/datasets/issues/5204
https://github.com/huggingface/datasets/issues/5204
5,204
`push_to_hub` not propagating `token` through `DownloadConfig`
closed
3
2022-11-05T23:32:20
2022-11-08T10:12:09
2022-11-08T10:12:08
alvarobartt
[]
### Describe the bug When trying to upload a new 🤗 Dataset to the Hub via Python, and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it just works for the first time, assuming that the dataset didn't exist before. But when trying to run `Dataset.push_to_hub` again over the same dataset...
false
1,436,710,518
https://api.github.com/repos/huggingface/datasets/issues/5203
https://github.com/huggingface/datasets/pull/5203
5,203
Update canonical links to Hub links
closed
1
2022-11-04T22:50:50
2022-11-07T18:43:05
2022-11-07T18:40:19
stevhliu
[]
This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200.
true
1,435,886,090
https://api.github.com/repos/huggingface/datasets/issues/5202
https://github.com/huggingface/datasets/issues/5202
5,202
CI fails after bulk edit of canonical datasets
closed
1
2022-11-04T10:51:20
2023-02-16T09:11:10
2023-02-16T09:11:10
albertvillanova
[ "bug" ]
``` ______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', config_name = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, config...
false
1,435,881,554
https://api.github.com/repos/huggingface/datasets/issues/5201
https://github.com/huggingface/datasets/pull/5201
5,201
Do not sort splits in dataset info
closed
5
2022-11-04T10:47:21
2022-11-04T14:47:37
2022-11-04T14:45:09
polinaeterna
[]
I suggest not sorting splits by their names in dataset_info in the README so that they are displayed in the order specified in the loading script. Otherwise the `test` split is displayed first; see this repo: https://huggingface.co/datasets/paws What do you think? But I added sorting in tests to fix CI (for the same datase...
true
1,435,831,559
https://api.github.com/repos/huggingface/datasets/issues/5200
https://github.com/huggingface/datasets/issues/5200
5,200
Some links to canonical datasets in the docs are outdated
closed
1
2022-11-04T10:06:21
2022-11-07T18:40:20
2022-11-07T18:40:20
polinaeterna
[ "documentation" ]
As we don't have canonical datasets in the github repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and probably there are more of them. These links should be replaced by li...
false
1,434,818,836
https://api.github.com/repos/huggingface/datasets/issues/5199
https://github.com/huggingface/datasets/pull/5199
5,199
Deprecate dummy data generation command
closed
1
2022-11-03T15:05:54
2022-11-04T14:01:50
2022-11-04T13:59:47
mariosasko
[]
Deprecate the `dummy_data` CLI command.
true
1,434,699,165
https://api.github.com/repos/huggingface/datasets/issues/5198
https://github.com/huggingface/datasets/pull/5198
5,198
Add note about the name of a dataset script
closed
1
2022-11-03T13:51:32
2022-11-04T12:47:59
2022-11-04T12:46:01
polinaeterna
[]
Add a note that a dataset script should have the same name as its repo/dir, a bit related to this issue: https://github.com/huggingface/datasets/issues/5193. Also fixed two minor issues in the audio docs (broken links)
true
1,434,676,150
https://api.github.com/repos/huggingface/datasets/issues/5197
https://github.com/huggingface/datasets/pull/5197
5,197
[zstd] Use max window log size
open
2
2022-11-03T13:35:58
2022-11-03T13:45:19
null
reyoung
[]
ZstdDecompressor has a parameter `max_window_size` to limit max memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed by `zstd --ultra` flags. Change `max_window_size` to the zstd's max window size. NOTE, the `zstd.WINDOWLOG_MAX` is the log_2 value of the m...
true
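A minimal sketch of the setting described above, using the `zstandard` package (file names are illustrative): `zstd.WINDOWLOG_MAX` is the log2 of the largest supported window, so shifting by it yields the maximum `max_window_size`.

```python
import zstandard as zstd

# Use the maximum window size so files compressed with `zstd --ultra --long`
# can be decompressed instead of failing with a window-size error.
dctx = zstd.ZstdDecompressor(max_window_size=1 << zstd.WINDOWLOG_MAX)

with open("archive.zst", "rb") as src, open("archive.out", "wb") as dst:
    dctx.copy_stream(src, dst)
```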
1,434,401,646
https://api.github.com/repos/huggingface/datasets/issues/5196
https://github.com/huggingface/datasets/pull/5196
5,196
Use hfh hf_hub_url function
closed
9
2022-11-03T10:08:09
2022-12-06T11:38:17
2022-11-09T07:15:12
albertvillanova
[]
Small refactoring to use `hf_hub_url` function from `huggingface_hub`. This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`. This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood...
true
1,434,290,689
https://api.github.com/repos/huggingface/datasets/issues/5195
https://github.com/huggingface/datasets/pull/5195
5,195
[wip testing docs]
closed
1
2022-11-03T08:37:34
2023-04-04T15:10:37
2023-04-04T15:10:33
mishig25
[]
null
true
1,434,206,951
https://api.github.com/repos/huggingface/datasets/issues/5194
https://github.com/huggingface/datasets/pull/5194
5,194
Fix docs about dataset_info in YAML
closed
1
2022-11-03T07:10:23
2022-11-03T13:31:27
2022-11-03T13:29:21
albertvillanova
[]
This PR fixes some misalignment in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card: - #4926 Related to: - #5193
true
1,433,883,780
https://api.github.com/repos/huggingface/datasets/issues/5193
https://github.com/huggingface/datasets/issues/5193
5,193
"One or several metadata. were found, but not in the same directory or in a parent directory"
closed
5
2022-11-02T22:46:25
2022-11-03T13:39:16
2022-11-03T13:35:44
lambda-science
[]
### Describe the bug When loading my own dataset, on loading it I get an error. Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data And the error after loading with: ```python from datasets import load_dataset load_dataset("corentinm7/MyoQuant-SDH-Data") ``` ```python Downlo...
false
1,433,199,790
https://api.github.com/repos/huggingface/datasets/issues/5192
https://github.com/huggingface/datasets/pull/5192
5,192
Drop labels in Image and Audio folders if files are on different levels in directory or if there is only one label
closed
9
2022-11-02T14:01:41
2022-11-15T16:32:53
2022-11-15T16:31:07
polinaeterna
[ "bug" ]
Will close https://github.com/huggingface/datasets/issues/5153 Drop labels by default (`drop_labels=None`) when: * there are files on different levels of directory hierarchy by checking their path depth * all files are in the same directory (=only one label was inferred) First one fixes cases like this: ``` r...
true
1,433,191,658
https://api.github.com/repos/huggingface/datasets/issues/5191
https://github.com/huggingface/datasets/pull/5191
5,191
Make torch.Tensor and spacy models cacheable
closed
1
2022-11-02T13:56:18
2022-11-02T17:20:48
2022-11-02T17:18:42
mariosasko
[]
Override `Pickler.save` to implement deterministic reduction (lazily registered; inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) functions for `torch.Tensor` and spaCy models. Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/...
true
1,433,014,626
https://api.github.com/repos/huggingface/datasets/issues/5190
https://github.com/huggingface/datasets/issues/5190
5,190
`path` is `None` when downloading a custom audio dataset from the Hub
closed
1
2022-11-02T11:51:25
2022-11-02T12:55:02
2022-11-02T12:55:02
lewtun
[]
### Describe the bug I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the ...
false
1,432,769,143
https://api.github.com/repos/huggingface/datasets/issues/5189
https://github.com/huggingface/datasets/issues/5189
5,189
Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
open
33
2022-11-02T09:15:02
2022-12-06T12:13:17
null
merveenoyan
[ "enhancement" ]
### Feature request Sorry for cryptic name but I'd like to explain using code itself. When I want to load a specific dataset from a repository (for instance, this: https://huggingface.co/datasets/inria-soda/tabular-benchmark) ```python from datasets import load_dataset dataset = load_dataset("inria-soda/tabular-b...
false
1,432,477,139
https://api.github.com/repos/huggingface/datasets/issues/5188
https://github.com/huggingface/datasets/pull/5188
5,188
add: segmentation guide.
closed
5
2022-11-02T04:34:36
2022-11-04T18:25:57
2022-11-04T18:23:34
sayakpaul
[ "documentation" ]
Closes #5181 I have opened a PR on Hub (https://huggingface.co/datasets/huggingface/documentation-images/discussions/5) to include the images in our central Hub repository. Once the PR is merged I will edit the image links. I have also prepared a [Colab Notebook](https://colab.research.google.com/drive/1BMDCfOT...
true
1,432,375,375
https://api.github.com/repos/huggingface/datasets/issues/5187
https://github.com/huggingface/datasets/pull/5187
5,187
chore: add notebook links to img cls and obj det.
closed
9
2022-11-02T02:30:09
2022-11-03T01:52:24
2022-11-03T01:49:56
sayakpaul
[ "enhancement" ]
Closes https://github.com/huggingface/datasets/issues/5182
true
1,432,045,011
https://api.github.com/repos/huggingface/datasets/issues/5186
https://github.com/huggingface/datasets/issues/5186
5,186
Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
closed
3
2022-11-01T20:25:51
2022-11-15T18:24:39
2022-11-15T18:24:39
nateraw
[]
### Describe the bug When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message ```ValueError: Please pass `features` or at least one example when writing data``` when I don't have `sqlalchemy` installed. ### Steps to reproduce the bug Make a new sqlite db with `sqlite3` and `pandas` from...
false
1,432,021,611
https://api.github.com/repos/huggingface/datasets/issues/5185
https://github.com/huggingface/datasets/issues/5185
5,185
Allow passing a subset of output features to Dataset.map
open
0
2022-11-01T20:07:20
2022-11-01T20:07:34
null
sanderland
[ "enhancement" ]
### Feature request Currently, map does one of two things to the features (if I'm not mistaken): * when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise * when you pass a full specification of features, output features are set to this However, so...
false
1,431,418,066
https://api.github.com/repos/huggingface/datasets/issues/5183
https://github.com/huggingface/datasets/issues/5183
5,183
Loading an external dataset in a format similar to conll2003
closed
0
2022-11-01T13:18:29
2022-11-02T11:57:50
2022-11-02T11:57:50
Taghreed7878
[]
I'm trying to load a custom dataset into a Dataset object. It's similar to conll2003 but with only 2 columns (word, entity). I used the following script: features = datasets.Features( {"tokens": datasets.Sequence(datasets.Value("string")), "ner_tags": datasets.Sequence( datasets.featu...
false
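The script above is cut off; here is a hedged, self-contained sketch of one way to load such a two-column CoNLL-style file. The label names and the file path are illustrative, not taken from the issue.

```python
import datasets

# Illustrative label set and file path; adapt to the actual tags in the data.
features = datasets.Features(
    {
        "tokens": datasets.Sequence(datasets.Value("string")),
        "ner_tags": datasets.Sequence(datasets.ClassLabel(names=["O", "B-ENT", "I-ENT"])),
    }
)

def read_two_column_conll(path):
    tokens, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line separates sentences
                if tokens:
                    yield {"tokens": tokens, "ner_tags": tags}
                    tokens, tags = [], []
            else:
                token, tag = line.split()
                tokens.append(token)
                tags.append(tag)
        if tokens:
            yield {"tokens": tokens, "ner_tags": tags}

ds = datasets.Dataset.from_generator(
    read_two_column_conll, gen_kwargs={"path": "train.txt"}, features=features
)
```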
1,431,029,547
https://api.github.com/repos/huggingface/datasets/issues/5182
https://github.com/huggingface/datasets/issues/5182
5,182
Add notebook / other resource links to the task-specific data loading guides
closed
2
2022-11-01T07:57:26
2022-11-03T01:49:57
2022-11-03T01:49:57
sayakpaul
[ "enhancement" ]
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model? For example, here in [https://huggingface.co/docs/datasets/image_classification] we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classificatio...
false
1,431,027,102
https://api.github.com/repos/huggingface/datasets/issues/5181
https://github.com/huggingface/datasets/issues/5181
5,181
Add a guide for semantic segmentation
closed
2
2022-11-01T07:54:50
2022-11-04T18:23:36
2022-11-04T18:23:36
sayakpaul
[ "documentation" ]
Currently, we have these guides for object detection and image classification: * https://huggingface.co/docs/datasets/object_detection * https://huggingface.co/docs/datasets/image_classification I am proposing adding a similar guide for semantic segmentation. I am happy to contribute a PR for it. Cc: @os...
false
1,431,012,438
https://api.github.com/repos/huggingface/datasets/issues/5180
https://github.com/huggingface/datasets/issues/5180
5,180
An example or recommendations for creating large image datasets?
open
2
2022-11-01T07:38:38
2022-11-02T10:17:11
null
sayakpaul
[]
I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do? As a user, I was wondering if we have this support for...
false
1,430,826,100
https://api.github.com/repos/huggingface/datasets/issues/5179
https://github.com/huggingface/datasets/issues/5179
5,179
`map()` fails midway due to format incompatibility
closed
9
2022-11-01T03:57:59
2022-11-08T11:35:26
2022-11-08T11:35:26
sayakpaul
[ "bug" ]
### Describe the bug I am using the `emotion` dataset from Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset. ```py def get_test_accuracy(model): def fn(batch): inputs = {k:v.to(device...
false
1,430,800,810
https://api.github.com/repos/huggingface/datasets/issues/5178
https://github.com/huggingface/datasets/issues/5178
5,178
Unable to download the Chinese `wikipedia`, the dumpstatus.json not found!
closed
3
2022-11-01T03:17:55
2022-11-02T08:27:15
2022-11-02T08:24:29
beyondguo
[]
### Describe the bug I tried: `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` and `data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')` but both got: `FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpsta...
false
1,430,238,556
https://api.github.com/repos/huggingface/datasets/issues/5177
https://github.com/huggingface/datasets/pull/5177
5,177
Update create image dataset docs
closed
1
2022-10-31T17:45:56
2022-11-02T17:15:22
2022-11-02T17:13:02
stevhliu
[ "documentation" ]
Based on @osanseviero and community feedback, it wasn't super clear how to upload a dataset to the Hub after creating something like an image captioning dataset. This PR adds a brief section on how to upload the dataset with `push_to_hub`.
true
1,430,214,539
https://api.github.com/repos/huggingface/datasets/issues/5176
https://github.com/huggingface/datasets/issues/5176
5,176
prepare dataset for cloud storage doesn't work
closed
2
2022-10-31T17:28:57
2023-03-28T09:11:46
2023-03-28T09:11:45
araonblake
[]
### Describe the bug Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading and storing huggingface dataset to cloud storage. ``` from datasets ...
false
1,428,696,231
https://api.github.com/repos/huggingface/datasets/issues/5175
https://github.com/huggingface/datasets/issues/5175
5,175
Loading an external NER dataset
closed
0
2022-10-30T09:31:55
2022-11-01T13:15:49
2022-11-01T13:15:49
Taghreed7878
[]
I need to use huggingface datasets to load a custom dataset similar to conll2003 but with more entities, and each file contains only two columns: word and ner tag. I tried this code snippet that I found here as an answer to a similar issue: from datasets import Dataset INPUT_COLUMNS = "ID Text NER".split() ...
false
1,427,216,416
https://api.github.com/repos/huggingface/datasets/issues/5174
https://github.com/huggingface/datasets/pull/5174
5,174
Preserve None in list type cast in PyArrow 10
closed
1
2022-10-28T12:48:30
2022-10-28T13:15:33
2022-10-28T13:13:18
mariosasko
[]
The `ListArray` type in PyArrow 10.0.0 supports the `mask` parameter, which allows us to preserve Nones in nested lists in `cast` instead of replacing them with empty lists. Fix https://github.com/huggingface/datasets/issues/3676
true
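A hedged illustration of the PyArrow 10 capability this PR builds on (the values are made up for the example): passing a validity mask to `ListArray.from_arrays` keeps null list entries as nulls rather than turning them into empty lists.

```python
import pyarrow as pa

values = pa.array([1, 2, 3], type=pa.int64())
offsets = pa.array([0, 2, 2, 3], type=pa.int32())
# True marks a null entry; the second list stays null instead of becoming [].
mask = pa.array([False, True, False])

arr = pa.ListArray.from_arrays(offsets, values, mask=mask)
print(arr.to_pylist())  # expected: [[1, 2], None, [3]]
```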
1,425,880,441
https://api.github.com/repos/huggingface/datasets/issues/5173
https://github.com/huggingface/datasets/pull/5173
5,173
Raise ffmpeg warnings only once
closed
1
2022-10-27T15:58:33
2022-10-28T16:03:05
2022-10-28T16:00:51
polinaeterna
[]
Our warnings look nice now. The `librosa` warning that was raised at each decoding: ``` /usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead. warnings.warn("PySoundFile failed. Trying audioread instead.") ``` is suppressed with `filterwarnin...
true
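A minimal sketch of the general mechanism (not necessarily the exact filter added in this PR): Python's warnings filters can demote a per-example warning to once per process, or silence it entirely.

```python
import warnings

# Emit this specific UserWarning only once per process instead of on every
# decoded audio example; use "ignore" instead of "once" to silence it fully.
warnings.filterwarnings(
    "once",
    message="PySoundFile failed. Trying audioread instead.",
    category=UserWarning,
)
```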
1,425,523,114
https://api.github.com/repos/huggingface/datasets/issues/5172
https://github.com/huggingface/datasets/issues/5172
5,172
Inconsistency behavior between handling local file protocol and other FS protocols
open
0
2022-10-27T12:03:20
2024-05-08T19:31:13
null
leoleoasd
[]
### Describe the bug These lines are used during load_from_disk: ``` if is_remote_filesystem(fs): dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path) else: fs = fsspec.filesystem("file") dest_dataset_dict_path = dataset_dict_path ``` If a local FS is given, then it will use the URL as th...
false
1,425,355,111
https://api.github.com/repos/huggingface/datasets/issues/5171
https://github.com/huggingface/datasets/pull/5171
5,171
Add PB and TB in convert_file_size_to_int
closed
1
2022-10-27T09:50:31
2022-10-27T12:14:27
2022-10-27T12:12:30
lhoestq
[]
null
true
1,425,301,835
https://api.github.com/repos/huggingface/datasets/issues/5170
https://github.com/huggingface/datasets/issues/5170
5,170
[Caching] Deterministic hashing of torch tensors
closed
0
2022-10-27T09:15:15
2022-11-02T17:18:43
2022-11-02T17:18:43
lhoestq
[ "enhancement" ]
Currently this fails ```python import torch from datasets.fingerprint import Hasher t = torch.tensor([1.]) def func(x): return t + x hash1 = Hasher.hash(func) t = torch.tensor([1.]) hash2 = Hasher.hash(func) assert hash1 == hash2 ``` Also as noticed in https://discuss.huggingface.co/t/dataset-ca...
false
1,425,075,254
https://api.github.com/repos/huggingface/datasets/issues/5169
https://github.com/huggingface/datasets/pull/5169
5,169
Add "ipykernel" to list of `co_filename`s to remove
closed
12
2022-10-27T05:56:17
2022-11-02T15:46:00
2022-11-02T15:43:20
gpucce
[]
Should resolve #5157
true
1,424,368,572
https://api.github.com/repos/huggingface/datasets/issues/5168
https://github.com/huggingface/datasets/pull/5168
5,168
Fix CI require beam
closed
2
2022-10-26T16:49:33
2022-10-27T09:25:19
2022-10-27T09:23:26
albertvillanova
[]
This PR: - Fixes the CI `require_beam`: before it was requiring PyTorch instead ```python def require_beam(test_case): if not config.TORCH_AVAILABLE: test_case = unittest.skip("test requires PyTorch")(test_case) return test_case ``` - Fixes a missing `require_beam` in `test_beam_base...
true
1,424,124,477
https://api.github.com/repos/huggingface/datasets/issues/5167
https://github.com/huggingface/datasets/pull/5167
5,167
Add ffmpeg4 installation instructions in warnings
closed
3
2022-10-26T14:21:14
2022-10-27T09:01:12
2022-10-27T08:58:58
polinaeterna
[]
Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users). Looks pretty ugly because I didn't find a way to check the `ffmpeg` version from Python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work), so the warning is raised on each decoding. Any suggestions on how to make it...
true
1,423,629,582
https://api.github.com/repos/huggingface/datasets/issues/5166
https://github.com/huggingface/datasets/pull/5166
5,166
Support dill 0.3.6
closed
11
2022-10-26T08:24:59
2022-10-28T05:41:05
2022-10-28T05:38:14
albertvillanova
[]
This PR: - ~~Unpins dill to allow installing dill>=0.3.6~~ - ~~Removes the fix on dill for >=0.3.6 because they implemented a deterministic mode (to be confirmed by @anivegesana)~~ - Pins dill<0.3.7 to allow latest dill 0.3.6 - Implements a fix for dill `save_function` for dill 0.3.6 - Additionally had to implemen...
true
1,423,616,677
https://api.github.com/repos/huggingface/datasets/issues/5165
https://github.com/huggingface/datasets/issues/5165
5,165
Memory explosion when trying to access 4d tensors in datasets cast to torch or np
open
0
2022-10-26T08:14:47
2022-10-26T08:14:47
null
clefourrier
[]
### Describe the bug When trying to access an item by index, in a datasets.Dataset cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or above) tensors. ### Steps to reproduce the bug MWE: ```python from datasets import load_dataset import numpy as np de...
false
1,422,813,247
https://api.github.com/repos/huggingface/datasets/issues/5164
https://github.com/huggingface/datasets/pull/5164
5,164
WIP: drop labels in Image and Audio folders by default
closed
2
2022-10-25T17:21:49
2022-11-16T14:21:16
2022-11-02T14:03:02
polinaeterna
[]
Will fix https://github.com/huggingface/datasets/issues/5153 and redundant label display for most of the image datasets on the Hub (which are used just to store files). TODO: discuss adding `drop_labels` (and `drop_metadata`) params to yaml
true
1,422,540,337
https://api.github.com/repos/huggingface/datasets/issues/5163
https://github.com/huggingface/datasets/pull/5163
5,163
Reduce default max `writer_batch_size`
closed
1
2022-10-25T14:14:52
2022-10-27T12:19:27
2022-10-27T12:16:47
mariosasko
[]
Reduce the default writer_batch_size from 10k to 1k examples. Additionally, align the default values of `batch_size` and `writer_batch_size` in `Dataset.cast` with the values from the corresponding docstring.
true
1,422,461,112
https://api.github.com/repos/huggingface/datasets/issues/5162
https://github.com/huggingface/datasets/issues/5162
5,162
Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6
closed
7
2022-10-25T13:23:50
2022-11-14T08:25:37
2022-10-28T05:38:15
Rijgersberg
[]
### Describe the bug When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears. It is caused by a transitive dependency conflict between `datasets` and `multiprocess`. ### Steps to reproduce the bug ```bash $ echo "dataset...
false
1,422,371,748
https://api.github.com/repos/huggingface/datasets/issues/5161
https://github.com/huggingface/datasets/issues/5161
5,161
Dataset can't cache model's outputs
closed
1
2022-10-25T12:19:00
2022-11-03T16:12:52
2022-11-03T16:12:51
jongjyh
[]
### Describe the bug Hi, I try to cache some outputs of a teacher model (knowledge distillation) by using the map function of the Dataset library, but every time I run my code I still recompute all the sequences. I tested a Bert model like this and got a different hash every single run, so any idea how to deal with this? ### Ste...
false
1,422,193,938
https://api.github.com/repos/huggingface/datasets/issues/5160
https://github.com/huggingface/datasets/issues/5160
5,160
Automatically add filename for image/audio folder
open
10
2022-10-25T09:56:49
2022-10-26T16:51:46
null
patrickvonplaten
[ "enhancement" ]
### Feature request When creating a custom audio or image dataset, it would be great to automatically have access to the filename. It should be both: a) Automatically displayed in the viewer b) Automatically added as a column to the dataset when doing `load_dataset` In `diffusers` our tests rely quite heavily on i...
false
1,422,172,080
https://api.github.com/repos/huggingface/datasets/issues/5159
https://github.com/huggingface/datasets/pull/5159
5,159
fsspec lock reset in multiprocessing
closed
1
2022-10-25T09:41:59
2022-11-03T20:51:15
2022-11-03T20:48:53
lhoestq
[]
`fsspec` added a clean way of resetting its lock - instead of doing it manually
true
1,422,059,287
https://api.github.com/repos/huggingface/datasets/issues/5158
https://github.com/huggingface/datasets/issues/5158
5,158
Fix language and license tag names in all Hub datasets
closed
6
2022-10-25T08:19:29
2022-10-25T11:27:26
2022-10-25T10:42:19
albertvillanova
[ "dataset contribution" ]
While working on this: - #5137 we realized there are still many datasets with deprecated "languages" and "licenses" tag names (instead of "language" and "license"). This is a blocking issue: no subsequent PR can be opened to modify their metadata: a ValueError will be thrown. We should fix the "language" and ...
false
1,421,703,577
https://api.github.com/repos/huggingface/datasets/issues/5157
https://github.com/huggingface/datasets/issues/5157
5,157
Consistent caching between python and jupyter
closed
2
2022-10-25T01:34:33
2022-11-02T15:43:22
2022-11-02T15:43:22
gpucce
[ "enhancement" ]
### Feature request I hope this is not my mistake. Currently, if I use `load_dataset` from a python session on a custom dataset to do the preprocessing, it will be saved in the cache and loaded from the cache in other python sessions; however, calling the same from a jupyter notebook does not work, meaning th...
false
1,421,667,125
https://api.github.com/repos/huggingface/datasets/issues/5156
https://github.com/huggingface/datasets/issues/5156
5,156
Unable to download dataset using Azure Data Lake Gen 2
closed
4
2022-10-25T00:43:18
2024-02-15T09:48:36
2022-11-17T23:37:08
clarissesimoes
[]
### Describe the bug When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is showed: ``` Traceback (most recent call last): File "download_hf_dataset.py", line 143, in <module> main() File "download_hf_dataset.py", line 102, in mai...
false
1,421,278,748
https://api.github.com/repos/huggingface/datasets/issues/5155
https://github.com/huggingface/datasets/pull/5155
5,155
TextConfig: added "errors"
closed
3
2022-10-24T18:56:52
2022-11-03T13:38:13
2022-11-03T13:35:35
NightMachinery
[]
This patch adds the ability to set the `errors` option of `open` for loading text datasets. I needed it because some data I had scraped had bad bytes in it, so I needed `errors='ignore'`.
true
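Assumed usage once this option exists (builder-config kwargs are forwarded through `load_dataset`, and the file name here is illustrative):

```python
from datasets import load_dataset

# Skip undecodable bytes instead of raising UnicodeDecodeError while reading
# the text files; errors="ignore" is passed down to the underlying open().
ds = load_dataset("text", data_files={"train": "scraped.txt"}, errors="ignore")
```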
1,421,161,992
https://api.github.com/repos/huggingface/datasets/issues/5154
https://github.com/huggingface/datasets/pull/5154
5,154
Test latest fsspec in CI
closed
2
2022-10-24T17:18:13
2023-09-24T10:06:06
2022-10-25T09:30:45
lhoestq
[]
Following the discussion in https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 I think we need to test the latest fsspec in the CI
true
1,420,833,457
https://api.github.com/repos/huggingface/datasets/issues/5153
https://github.com/huggingface/datasets/issues/5153
5,153
default Image/AudioFolder infers labels when there are no metadata files even if there is only one dir
closed
1
2022-10-24T13:28:18
2022-11-15T16:31:10
2022-11-15T16:31:09
polinaeterna
[ "bug" ]
### Describe the bug By default FolderBasedBuilder infers labels if there are no metadata files, even if it's meaningless (for example, when they are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios). As this is a corner case for quick expl...
false
1,420,808,919
https://api.github.com/repos/huggingface/datasets/issues/5152
https://github.com/huggingface/datasets/issues/5152
5,152
refactor FolderBasedBuilder and Image/AudioFolder tests
open
0
2022-10-24T13:11:52
2022-10-24T13:11:52
null
polinaeterna
[ "refactoring" ]
Tests for FolderBasedBuilder, ImageFolder and AudioFolder are mostly duplicating each other. They need to be refactored and Audio/ImageFolder should have only tests specific to the loader.
false
1,420,791,163
https://api.github.com/repos/huggingface/datasets/issues/5151
https://github.com/huggingface/datasets/issues/5151
5,151
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
open
1
2022-10-24T12:59:18
2022-11-04T14:55:20
null
polinaeterna
[ "enhancement" ]
Now one can push only different splits within one default config of a dataset. Would be nice to allow something like: ``` ds.push_to_hub(repo_name, config=config_name) ``` I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow to create different configs fo...
false
1,420,684,999
https://api.github.com/repos/huggingface/datasets/issues/5150
https://github.com/huggingface/datasets/issues/5150
5,150
Problems after upgrading to 2.6.1
open
10
2022-10-24T11:32:36
2024-05-12T07:40:03
null
pietrolesci
[]
### Describe the bug Loading a dataset_dict from disk with `load_from_disk` is now creating a `KeyError "length"` that was not occurring in v2.5.2. Context: - Each individual dataset in the dict is created with `Dataset.from_pandas` - The dataset_dict is create from a dict of `Dataset`s, e.g., `DatasetDict({"tr...
false
1,420,415,639
https://api.github.com/repos/huggingface/datasets/issues/5149
https://github.com/huggingface/datasets/pull/5149
5,149
Make iter_files deterministic
closed
1
2022-10-24T08:16:27
2022-10-27T09:53:23
2022-10-27T09:51:09
albertvillanova
[]
Fix #5145.
true
1,420,219,222
https://api.github.com/repos/huggingface/datasets/issues/5148
https://github.com/huggingface/datasets/issues/5148
5,148
Cannot find the rvl_cdip dataset
closed
2
2022-10-24T04:57:42
2022-10-24T12:23:47
2022-10-24T06:25:28
santule
[]
Hi, I am trying to use load_dataset to load the official "rvl_cdip" dataset but getting an error. dataset = load_dataset("rvl_cdip") Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdi...
false
1,419,522,275
https://api.github.com/repos/huggingface/datasets/issues/5147
https://github.com/huggingface/datasets/issues/5147
5,147
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
open
4
2022-10-22T21:46:38
2022-11-01T22:19:07
null
falcaopetri
[ "enhancement" ]
### Feature request `dataset.map` accepts a `fn_kwargs` that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint. I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing. Of co...
false
1,418,331,282
https://api.github.com/repos/huggingface/datasets/issues/5146
https://github.com/huggingface/datasets/pull/5146
5,146
Delete duplicate issue template file
closed
1
2022-10-21T13:18:46
2022-10-21T13:52:30
2022-10-21T13:50:04
albertvillanova
[]
A conflict between two PRs: - #5116 - #5136 was not properly resolved, resulting in a duplicate issue template. This PR removes the duplicate template.
true
1,418,005,452
https://api.github.com/repos/huggingface/datasets/issues/5145
https://github.com/huggingface/datasets/issues/5145
5,145
Dataset order is not deterministic with ZIP archives and `iter_files`
closed
8
2022-10-21T09:00:03
2022-10-27T09:51:49
2022-10-27T09:51:10
fxmarty
[]
### Describe the bug For the `beans` dataset (did not try on other), the order of samples is not the same on different machines. Tested on my local laptop, github actions machine, and ec2 instance. The three yield a different order. ### Steps to reproduce the bug In a clean docker container or conda environmen...
false
1,417,974,731
https://api.github.com/repos/huggingface/datasets/issues/5144
https://github.com/huggingface/datasets/issues/5144
5,144
Inconsistent documentation on map remove_columns
closed
3
2022-10-21T08:37:53
2022-11-15T14:15:10
2022-11-15T14:15:10
zhaowei-wang-nlp
[ "documentation", "duplicate", "good first issue", "hacktoberfest" ]
### Describe the bug The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`: When you remove a column, it is only removed after the example has been provided to the mapped function. So it seems that the `remove_columns` parameter remo...
false
1,416,837,186
https://api.github.com/repos/huggingface/datasets/issues/5143
https://github.com/huggingface/datasets/issues/5143
5,143
DownloadManager Git LFS support
closed
2
2022-10-20T15:29:29
2022-10-20T17:17:10
2022-10-20T17:17:10
Muennighoff
[ "enhancement" ]
### Feature request Maybe I'm mistaken but the `DownloadManager` does not support extracting git lfs files out of the box right? Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns lfs files afaict. Is there a good way to write a dataset loading script for a repo with lfs files? ##...
false
1,416,317,678
https://api.github.com/repos/huggingface/datasets/issues/5142
https://github.com/huggingface/datasets/pull/5142
5,142
Deprecate num_proc parameter in DownloadManager.extract
closed
6
2022-10-20T09:52:52
2022-10-25T18:06:56
2022-10-25T15:56:45
ayushthe1
[]
Fixes #5132: deprecates the `num_proc` parameter in `DownloadManager.extract` by passing the `num_proc` parameter to `map_nested`.
true
1,415,479,438
https://api.github.com/repos/huggingface/datasets/issues/5141
https://github.com/huggingface/datasets/pull/5141
5,141
Raise ImportError instead of OSError
closed
2
2022-10-19T19:30:05
2022-10-25T15:59:25
2022-10-25T15:56:58
ayushthe1
[]
Fixes #5134: replaces OSError with ImportError if the required extraction library is not installed.
true
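A generic sketch of the pattern this change applies; the function and package names below are illustrative, not the actual `datasets` extraction code.

```python
def extract_zst(archive_path: str, output_path: str) -> None:
    try:
        import zstandard  # optional dependency needed only for .zst archives
    except ImportError as err:
        # Raising ImportError (rather than OSError) tells the user exactly
        # what is missing and how to fix it.
        raise ImportError("Please pip install zstandard to extract zst files") from err
    with open(archive_path, "rb") as src, open(output_path, "wb") as dst:
        zstandard.ZstdDecompressor().copy_stream(src, dst)
```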
1,415,075,530
https://api.github.com/repos/huggingface/datasets/issues/5140
https://github.com/huggingface/datasets/pull/5140
5,140
Make the KeyHasher FIPS compliant
closed
0
2022-10-19T14:25:52
2022-11-07T16:20:43
2022-11-07T16:20:43
vvalouch
[]
MD5 is not FIPS compliant, thus I am proposing this minimal change to make the datasets package FIPS compliant
true
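One way such a minimal change can look on Python 3.9+ (an illustration of the idea, not necessarily the exact patch): flag the MD5 call as non-cryptographic so FIPS-enabled OpenSSL builds allow it.

```python
import hashlib

def key_digest(key: bytes) -> str:
    # usedforsecurity=False marks the digest as a plain fingerprint, which
    # keeps MD5 usable on FIPS-restricted systems (Python 3.9+ only).
    return hashlib.md5(key, usedforsecurity=False).hexdigest()

print(key_digest(b"some key"))
```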
1,414,642,723
https://api.github.com/repos/huggingface/datasets/issues/5137
https://github.com/huggingface/datasets/issues/5137
5,137
Align task tags in dataset metadata
closed
14
2022-10-19T09:41:42
2022-11-10T05:25:58
2022-10-25T06:17:00
albertvillanova
[ "dataset contribution" ]
## Describe Once we have agreed on a common naming for task tags for all open source projects, we should align on them. ## Steps - [x] Align task tags in canonical datasets - [x] task_categories: 4 datasets - [x] task_ids (by @lhoestq) - [x] Open PRs in community datasets - [x] task_categories: 451 datas...
false
1,414,492,139
https://api.github.com/repos/huggingface/datasets/issues/5136
https://github.com/huggingface/datasets/pull/5136
5,136
Update docs once dataset scripts transferred to the Hub
closed
1
2022-10-19T07:58:27
2022-10-20T08:12:21
2022-10-20T08:10:00
albertvillanova
[]
Todo: - [x] Update docs: - [x] Datasets on GitHub (legacy) - [x] Load: offline - [x] About dataset load: - [x] Maintaining integrity - [x] Security - [x] Update docstrings: - [x] Inspect: - [x] get_dataset_config_info - [x] get_dataset_split_names - [x] Load: - [x] dataset_modu...
true
1,414,413,519
https://api.github.com/repos/huggingface/datasets/issues/5135
https://github.com/huggingface/datasets/issues/5135
5,135
Update docs once dataset scripts transferred to the Hub
closed
0
2022-10-19T06:58:19
2022-10-20T08:10:01
2022-10-20T08:10:01
albertvillanova
[ "documentation" ]
## Describe the bug As discussed in: - https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701 we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub): - #4974 Concretely: - [x] Datasets on GitHub (legacy): https://huggingface.co/docs/dat...
false