Dataset columns (name: type, observed range):

- id: int64 (599M to 3.26B)
- number: int64 (1 to 7.7k)
- title: string (lengths 1 to 290)
- body: string (lengths 0 to 228k, nullable)
- state: string (2 values)
- html_url: string (lengths 46 to 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42, nullable)
- user: dict
- labels: list (lengths 0 to 4)
- is_pull_request: bool (2 classes)
- comments: list (lengths 0 to 0)
1,438,544,617
5,211
Update Overview.ipynb google colab
- removed metrics stuff
- added image example
- added audio example (with ffmpeg instructions)
- updated the "add a new dataset" section
closed
https://github.com/huggingface/datasets/pull/5211
2022-11-07T15:23:52
2022-11-29T15:59:48
2022-11-29T15:54:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,438,492,507
5,210
Tweak readme
Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security
closed
https://github.com/huggingface/datasets/pull/5210
2022-11-07T14:51:23
2022-11-24T11:35:07
2022-11-24T11:26:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,438,367,678
5,209
Implement ability to define splits in metadata section of dataset card
### Feature request If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see a bunch of folders that contain various CSV files. I’d like the dataset viewer to show these files instead of only one dataset like it currently does (and also let people load them as splits instead of loading through `data_files`). E.g. GLUE has various splits on the viewer, but it’s overkill to ask people to implement a loading script, so it would be better to let them define these in the README file instead. Also pinging @polinaeterna @lhoestq @adrinjalali
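For reference, a minimal sketch of the existing `data_files` workaround the request alludes to (the split name is arbitrary, and only `reg_cat/house_sales.csv` is a file known from this discussion to exist in that repo):

```python
from datasets import load_dataset

# Workaround today: map individual CSV files to named splits by hand via data_files.
# The feature request is to declare this mapping once in the dataset card instead.
ds = load_dataset(
    "inria-soda/tabular-benchmark",
    data_files={"house_sales": "reg_cat/house_sales.csv"},
)
print(ds)  # DatasetDict with a single "house_sales" split
```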
closed
https://github.com/huggingface/datasets/issues/5209
2022-11-07T13:27:16
2023-07-21T14:36:02
2023-07-21T14:36:01
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,438,035,707
5,208
Refactor CI hub fixtures to use monkeypatch instead of patch
Minor refactoring of CI to use `pytest` `monkeypatch` instead of `unittest` `patch`.
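A rough illustration of the kind of change involved (the fixture name and patched attribute are illustrative, not the actual CI fixtures):

```python
import pytest

# Before: unittest-style patching, scoped with a context manager or decorator.
# from unittest.mock import patch
# with patch.object(config, "HF_ENDPOINT", "https://hub-ci.huggingface.co"):
#     ...

# After: pytest's monkeypatch fixture, automatically undone after each test.
@pytest.fixture
def ci_hub_config(monkeypatch):
    monkeypatch.setattr("datasets.config.HF_ENDPOINT", "https://hub-ci.huggingface.co")
```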
closed
https://github.com/huggingface/datasets/pull/5208
2022-11-07T09:25:05
2022-11-08T06:51:20
2022-11-08T06:49:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,437,858,506
5,207
Connection error of the HuggingFace's dataset Hub due to SSLError with proxy
### Describe the bug It's weird. I could not normally connect the dataset Hub of HuggingFace due to a SSLError in my office. Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy), I'm getting the SSLError issue. What should I do to download the datanet stored in HuggingFace normally? I welcome any comments. I think those comments will be helpful to me. * Dataset address - https://huggingface.co/datasets/moyix/debian_csrc/viewer/moyix--debian_csrc * Log message ``` ............ OMISSION .............. Traceback (most recent call last): File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 587, in <module> main() File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 278, in main raw_datasets = load_dataset( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset builder_instance = load_dataset_builder( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory raise e1 from None File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})") ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError) [2022-11-07 15:23:38,476] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 6760 [2022-11-07 15:23:38,476] [ERROR] [launch.py:324:sigkill_handler] ['/home/geunsik-lim/anaconda3/envs/deepspeed/bin/python', '-u', './transformers/examples/pytorch/language-modeling/run_clm.py', '--local_rank=0', '--model_name_or_path=Salesforce/codegen-350M-multi', '--per_device_train_batch_size=1', '--learning_rate', '2e-5', '--num_train_epochs', '1', '--output_dir=./codegen-350M-finetuned', '--overwrite_output_dir', '--dataset_name', 'moyix/debian_csrc', '--cache_dir', '/data/home/geunsik-lim/.cache', '--tokenizer_name', 'Salesforce/codegen-350M-multi', '--block_size', '2048', '--gradient_accumulation_steps', '32', '--do_train', '--fp16', '--deepspeed', 'ds_config_zero2.json'] exits with return code = 1 real 0m7.742s user 0m4.930s ``` ### Steps to reproduce the bug Steps to reproduce this behavior. 
``` (deepspeed) geunsik-lim@ai02:~/qtlab$ ./test_debian_csrc_dataset.py Traceback (most recent call last): File "/data/home/geunsik-lim/qtlab/./test_debian_csrc_dataset.py", line 6, in <module> dataset = load_dataset("moyix/debian_csrc") File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset builder_instance = load_dataset_builder( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory raise e1 from None File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})") ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError) (deepspeed) geunsik-lim@ai02:~/qtlab$ (deepspeed) geunsik-lim@ai02:~/qtlab$ (deepspeed) geunsik-lim@ai02:~/qtlab$ (deepspeed) geunsik-lim@ai02:~/qtlab$ cat ./test_debian_csrc_dataset.py #!/usr/bin/env python from datasets import load_dataset dataset = load_dataset("moyix/debian_csrc") ``` 1. Adde proxy address of a company in /etc/profile 2. Download dataset with load_dataset() function of datasets package that is provided by HuggingFace. 3. In this case, the address would be "moyix--debian_csrc". 4. I get the "`ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError`)" error message. ### Expected behavior * error message: ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError) ### Environment info * software version information: ``` (deepspeed) geunsik-lim@ai02:~$ (deepspeed) geunsik-lim@ai02:~$ conda list -f pytorch # packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed: # # Name Version Build Channel pytorch 1.13.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch (deepspeed) geunsik-lim@ai02:~$ conda list -f python # packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed: # # Name Version Build Channel python 3.10.6 haa1d7c7_1 (deepspeed) geunsik-lim@ai02:~$ conda list -f datasets # packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed: # # Name Version Build Channel datasets 2.6.1 py_0 huggingface (deepspeed) geunsik-lim@ai02:~$ uname -a Linux ai02 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux (deepspeed) geunsik-lim@ai02:~$ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS" ```
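Not part of the report, but a common workaround for SSL errors behind corporate proxies is to point `requests` (which `datasets` uses for downloads) at the proxy's CA bundle in addition to the proxy variables; the host and path below are hypothetical placeholders:

```python
import os

# Hypothetical proxy host and CA bundle path; substitute your company's values.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/corporate-ca.pem"

from datasets import load_dataset

dataset = load_dataset("moyix/debian_csrc")
```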
open
https://github.com/huggingface/datasets/issues/5207
2022-11-07T06:56:23
2025-03-08T09:04:10
null
{ "login": "leemgs", "id": 82404, "type": "User" }
[]
false
[]
1,437,223,894
5,206
Use logging instead of printing to console
### Describe the bug Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L830)) generated by the `DatasetBuilder` are printed to the console instead of passed to `datasets` logger. ### Steps to reproduce the bug ```python >> import datasets >> datasets.load_dataset("some-dataset") Downloading and preparing dataset csv/data to <path>... Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 7729.06it/s] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 527.23it/s] Dataset csv downloaded and prepared to <path>. Subsequent calls will reuse this data. ``` ### Expected behavior The logs should not be printed to the console directly but passed to the logger so that the user can redirect them wherever he wants. ### Environment info - `datasets` version: 2.6.1 - Platform: macOS-13.0-x86_64-i386-64bit - Python version: 3.9.15 - PyArrow version: 10.0.0 - Pandas version: 1.5.1
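Once these messages go through the library logger, users can control them with the existing verbosity helpers, for example:

```python
from datasets.utils.logging import set_verbosity_error, set_verbosity_warning

# Keep warnings and errors on the console but hide informational messages.
set_verbosity_warning()

# Or silence library messages entirely.
set_verbosity_error()
```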
closed
https://github.com/huggingface/datasets/issues/5206
2022-11-05T23:48:02
2022-11-06T00:06:00
2022-11-06T00:05:59
{ "login": "bilelomrani1", "id": 16692099, "type": "User" }
[]
false
[]
1,437,221,987
5,205
Add missing `DownloadConfig.use_auth_token` value
This PR solves https://github.com/huggingface/datasets/issues/5204 Now the `token` is propagated so that `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets in the Hub.
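A simplified sketch of the idea (not the literal diff): the token passed to `push_to_hub` has to end up on the `DownloadConfig` used when fetching the existing `README.md`.

```python
from datasets import DownloadConfig

token = "<HF_TOKEN_HERE>"  # the value the user passes to push_to_hub(..., token=...)

# Before the fix the config was built without the token; after the fix it is propagated:
download_config = DownloadConfig(use_auth_token=token)
assert download_config.use_auth_token == token
```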
closed
https://github.com/huggingface/datasets/pull/5205
2022-11-05T23:36:36
2022-11-08T08:13:00
2022-11-07T16:20:24
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,437,221,259
5,204
`push_to_hub` not propagating `token` through `DownloadConfig`
### Describe the bug When trying to upload a new πŸ€— Dataset to the Hub via Python, and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it just works for the first time, assuming that the dataset didn't exist before. But when trying to run `Dataset.push_to_hub` again over the same dataset, instead of updating it, it throws a `ConnectionError` when trying to retrieve the `README.md` that may contain some metadata about the dataset, so as to also update it, but since the `token` is not propagated, the `DownloadConfig` provided to the `datasets.utils.file_utils.get_from_cache` function doesn't contain the `use_auth_token` value set to `token`, it's just using the default one which is None/False. So on, when uploading a dataset via Python with `push_to_hub` with the `token` as a parameter with the HuggingFace API Token as value, it can just be uploaded when the dataset is new, otherwise it fails with to `ConnectionError` due to the `token` not being propagated as `use_auth_token`. ### Steps to reproduce the bug Let's create a new dataset in our HF account via Python as: ```python from datasets import Dataset data = {"a": [1, 2, 3], "b": [4, 5, 6]} ds = Dataset.from_dict(data) ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>) ``` When we create the `Dataset` for the first time it works and there are no issues, but when trying to actually upload a new version of the same dataset (same name under the same username), we encounter the following issue: ```python from datasets import Dataset data = {"a": [1, 2, 3], "b": [4, 5, 6]} ds = Dataset.from_dict(data) ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>) >>> ConnectionError: Couldn't reach https://huggingface.co/datasets/alvarobartt/demo/resolve/main/README.md (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`')) ``` ### Expected behavior Ideally, the `token` parameter provided to `push_to_hub` should be propagated and used to download the `README.md` when trying to update a `Dataset`, instead of throwing that exception, so that the authentication can be done directly through code without running `huggingface-cli login`as mentioned at https://huggingface.co/docs/datasets/upload_dataset#upload-with-python. ### Environment info - `datasets` version: 2.6.1 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 10.0.0 - Pandas version: 1.5.1
closed
https://github.com/huggingface/datasets/issues/5204
2022-11-05T23:32:20
2022-11-08T10:12:09
2022-11-08T10:12:08
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
false
[]
1,436,710,518
5,203
Update canonical links to Hub links
This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200.
closed
https://github.com/huggingface/datasets/pull/5203
2022-11-04T22:50:50
2022-11-07T18:43:05
2022-11-07T18:40:19
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,435,886,090
5,202
CI fails after bulk edit of canonical datasets
``` ______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', config_name = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, config_name, expected_splits", [ ("squad", "plain_text", ["train", "validation"]), ("dalle-mini/wit", "dalle-mini--wit", ["train"]), ("paws", "labeled_final", ["train", "test", "validation"]), ], ) def test_get_dataset_config_info(path, config_name, expected_splits): info = get_dataset_config_info(path, config_name=config_name) assert info.config_name == config_name > assert list(info.splits.keys()) == expected_splits E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation'] E At index 0 diff: 'test' != 'train' E Full diff: E - ['train', 'test', 'validation'] E + ['test', 'train', 'validation'] tests/test_inspect.py:45: AssertionError _ test_get_dataset_info[paws-expected_configs2-expected_splits_in_first_config2] _ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws' expected_configs = ['labeled_final', 'labeled_swap', 'unlabeled_final'] expected_splits_in_first_config = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, expected_configs, expected_splits_in_first_config", [ ("squad", ["plain_text"], ["train", "validation"]), ("dalle-mini/wit", ["dalle-mini--wit"], ["train"]), ("paws", ["labeled_final", "labeled_swap", "unlabeled_final"], ["train", "test", "validation"]), ], ) def test_get_dataset_info(path, expected_configs, expected_splits_in_first_config): infos = get_dataset_infos(path) assert list(infos.keys()) == expected_configs expected_config = expected_configs[0] assert expected_config in infos info = infos[expected_config] assert info.config_name == expected_config > assert list(info.splits.keys()) == expected_splits_in_first_config E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation'] E At index 0 diff: 'test' != 'train' E Full diff: E - ['train', 'test', 'validation'] E + ['test', 'train', 'validation'] tests/test_inspect.py:90: AssertionError ______ test_get_dataset_split_names[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', expected_config = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, expected_config, expected_splits", [ ("squad", "plain_text", ["train", "validation"]), ("dalle-mini/wit", "dalle-mini--wit", ["train"]), ("paws", "labeled_final", ["train", "test", "validation"]), ], ) def test_get_dataset_split_names(path, expected_config, expected_splits): infos = get_dataset_infos(path) assert expected_config in infos info = infos[expected_config] assert info.config_name == expected_config > assert list(info.splits.keys()) == expected_splits E AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation'] E At index 0 diff: 'test' != 'train' E Full diff: E - ['train', 'test', 'validation'] E + ['test', 'train', 'validation'] ```
closed
https://github.com/huggingface/datasets/issues/5202
2022-11-04T10:51:20
2023-02-16T09:11:10
2023-02-16T09:11:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,435,881,554
5,201
Do not sort splits in dataset info
I suggest not sorting splits by their names in the dataset_info in the README, so that they are displayed in the order specified in the loading script. Otherwise the `test` split is displayed first, see this repo: https://huggingface.co/datasets/paws What do you think? I did add sorting in the tests to fix CI (for the same dataset).
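The test-side change amounts to comparing split names order-insensitively, along these lines (illustrative, not the exact test code):

```python
def assert_split_names_match(info, expected_splits):
    # Compare split names without depending on the order they are stored in.
    assert sorted(info.splits) == sorted(expected_splits)
```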
closed
https://github.com/huggingface/datasets/pull/5201
2022-11-04T10:47:21
2022-11-04T14:47:37
2022-11-04T14:45:09
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,435,831,559
5,200
Some links to canonical datasets in the docs are outdated
As we don't have canonical datasets in the GitHub repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGLUE here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub.
closed
https://github.com/huggingface/datasets/issues/5200
2022-11-04T10:06:21
2022-11-07T18:40:20
2022-11-07T18:40:20
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
1,434,818,836
5,199
Deprecate dummy data generation command
Deprecate the `dummy_data` CLI command.
closed
https://github.com/huggingface/datasets/pull/5199
2022-11-03T15:05:54
2022-11-04T14:01:50
2022-11-04T13:59:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,434,699,165
5,198
Add note about the name of a dataset script
Add a note that a dataset script should have the same name as its repo/dir; a bit related to this issue: https://github.com/huggingface/datasets/issues/5193. Also fixed two minor issues in the audio docs (broken links).
closed
https://github.com/huggingface/datasets/pull/5198
2022-11-03T13:51:32
2022-11-04T12:47:59
2022-11-04T12:46:01
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,434,676,150
5,197
[zstd] Use max window log size
ZstdDecompressor has a parameter `max_window_size` that limits the maximum memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed with the `zstd --ultra` flag. This PR changes `max_window_size` to zstd's maximum window size. Note that `zstd.WINDOWLOG_MAX` is the log2 of the maximum window size.
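A minimal sketch of what this means in terms of the `zstandard` package (the file name is just a placeholder):

```python
import zstandard as zstd

# Allow decompression of archives produced with `zstd --ultra --long=31`:
# 2 ** WINDOWLOG_MAX is the largest window size the library supports.
dctx = zstd.ZstdDecompressor(max_window_size=2 ** zstd.WINDOWLOG_MAX)

with open("data.jsonl.zst", "rb") as f, dctx.stream_reader(f) as reader:
    first_chunk = reader.read(1 << 20)
```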
open
https://github.com/huggingface/datasets/pull/5197
2022-11-03T13:35:58
2022-11-03T13:45:19
null
{ "login": "reyoung", "id": 728699, "type": "User" }
[]
true
[]
1,434,401,646
5,196
Use hfh hf_hub_url function
Small refactoring to use `hf_hub_url` function from `huggingface_hub`. This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`. This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood).

EDIT: ~~Finally, we use our `config.HUB_DATASETS_URL` when using `hfh.hf_hub_url`~~

There is a breaking change: the `hfh` `hf_hub_url` function uses
- `hfh` `HUGGINGFACE_CO_URL_TEMPLATE` URL template, different from the `datasets` `config.HUB_DATASETS_URL`
- also, `hfh` `DEFAULT_REVISION`, instead of `datasets` `config.HUB_DEFAULT_VERSION`
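For context, this is the `huggingface_hub` helper being adopted (the repo id is just an example):

```python
from huggingface_hub import hf_hub_url

url = hf_hub_url(repo_id="squad", filename="dataset_infos.json", repo_type="dataset")
# -> https://huggingface.co/datasets/squad/resolve/main/dataset_infos.json
print(url)
```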
closed
https://github.com/huggingface/datasets/pull/5196
2022-11-03T10:08:09
2022-12-06T11:38:17
2022-11-09T07:15:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,434,290,689
5,195
[wip testing docs]
null
closed
https://github.com/huggingface/datasets/pull/5195
2022-11-03T08:37:34
2023-04-04T15:10:37
2023-04-04T15:10:33
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,434,206,951
5,194
Fix docs about dataset_info in YAML
This PR fixes some misalignment in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card:
- #4926

Related to:
- #5193
closed
https://github.com/huggingface/datasets/pull/5194
2022-11-03T07:10:23
2022-11-03T13:31:27
2022-11-03T13:29:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,433,883,780
5,193
"One or several metadata. were found, but not in the same directory or in a parent directory"
### Describe the bug When loading my own dataset, on loading it I get an error. Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data And the error after loading with: ```python from datasets import load_dataset load_dataset("corentinm7/MyoQuant-SDH-Data") ``` ```python Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.34k/3.34k [00:00<00:00, 4.45MB/s] Using custom data configuration SDH_16k-53e7301a92ab0025 Downloading and preparing dataset None/SDH_16k to /home/corentin/.cache/huggingface/datasets/corentinm7___imagefolder/SDH_16k-53e7301a92ab0025/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f... Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.28M/3.28M [00:00<00:00, 4.31MB/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:01<00:00, 1.75s/it] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.13G/1.13G [00:15<00:00, 74.3MB/s] Downloading data files: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:16<00:00, 16.09s/it] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:13<00:00, 13.16s/it] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/load.py", line 1742, in load_dataset builder_instance.download_and_prepare( File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 814, in download_and_prepare self._download_and_prepare( File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1423, in _download_and_prepare super()._download_and_prepare( File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 905, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1374, in _prepare_split for key, record in logging.tqdm( File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 394, in _generate_examples raise ValueError( ValueError: One or several metadata. were found, but not in the same directory or in a parent directory of /home/corentin/.cache/huggingface/datasets/downloads/extracted/60c4aa8d4da3065bb3d310de4373dffd73bd4dc331aedcb4ee867febe4fdb7cd/validation/sick/2_CG_SDH_TAM_Bin1cKO_ko_pla_4_1640.tif. ``` However the test command is working fine. ```datasets-cli test hugging_face_play/ds_test/SDH_16k.py --save_info --all_configs --force_redownload``` ``` Using custom data configuration SDH_16k Testing builder 'SDH_16k' (1/1) Downloading and preparing dataset sdh_16k/SDH_16k to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d... 
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.13G/1.13G [00:14<00:00, 76.5MB/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:15<00:00, 15.66s/it] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3.28M/3.28M [00:02<00:00, 1.44MB/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:03<00:00, 3.21s/it] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 11586.48it/s] Extracting data files: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:13<00:00, 13.42s/it] Dataset sdh_16k downloaded and prepared to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d. Subsequent calls will reuse this data. 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 605.27it/s] Dataset card saved at hugging_face_play/ds_test/README.md Test successful. ``` ### Steps to reproduce the bug Simply run on python ```python from datasets import load_dataset load_dataset("corentinm7/MyoQuant-SDH-Data") ``` ### Expected behavior As the test command worked, this error should not appear ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.6 - PyArrow version: 10.0.0 - Pandas version: 1.5.1
closed
https://github.com/huggingface/datasets/issues/5193
2022-11-02T22:46:25
2022-11-03T13:39:16
2022-11-03T13:35:44
{ "login": "lambda-science", "id": 20109584, "type": "User" }
[]
false
[]
1,433,199,790
5,192
Drop labels in Image and Audio folders if files are on different levels in directory or if there is only one label
Will close https://github.com/huggingface/datasets/issues/5153

Drop labels by default (`drop_labels=None`) when:
* there are files on different levels of the directory hierarchy (checked via their path depth)
* all files are in the same directory (= only one label was inferred)

The first case fixes repos like this:
```
repo
    image3.jpg
    image4.jpg
    data
        image1.jpg
        image2.jpg
```
The second case fixes repos like this:
```
repo
    image1.jpg
    image2.jpg
    image3.jpg
```
This is mostly to fix the viewer for people who just drop images into the root dir via the Hub interface. I added tests for both cases on local and remote files. **I also changed the data files for the old `drop_labels` test** (`test_generate_examples_drop_labels`). The files I provide to `test_generate_examples_drop_labels` now have a "canonical" classification structure (two dirs) in order not to change the logic of the test (= not to check the two cases addressed in this PR).
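Since `drop_labels=None` only changes the default, users can still force either behaviour explicitly; `drop_labels` is an existing parameter of the folder-based builders (the `data_dir` path below is a placeholder):

```python
from datasets import load_dataset

# Force label inference even for a flat directory of images...
with_labels = load_dataset("imagefolder", data_dir="path/to/images", drop_labels=False)

# ...or drop the inferred labels regardless of the directory layout.
without_labels = load_dataset("imagefolder", data_dir="path/to/images", drop_labels=True)
```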
closed
https://github.com/huggingface/datasets/pull/5192
2022-11-02T14:01:41
2022-11-15T16:32:53
2022-11-15T16:31:07
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
true
[]
1,433,191,658
5,191
Make torch.Tensor and spacy models cacheable
Override `Pickler.save` to implement deterministic reduction (lazily registered; inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) functions for `torch.Tensor` and spaCy models. Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/3178
closed
https://github.com/huggingface/datasets/pull/5191
2022-11-02T13:56:18
2022-11-02T17:20:48
2022-11-02T17:18:42
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,433,014,626
5,190
`path` is `None` when downloading a custom audio dataset from the Hub
### Describe the bug I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature desribed in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None` Here's an example: ```python from datasets import load_dataset ds = load_dataset("lewtun/audio-test-push") ds["train"][0] # { # "audio": { # "path": None, <-- Is this expected? # "array": array( # [ # 3.97140226e-07, # 7.30310290e-07, # 7.56406735e-07, # ..., # -1.19636677e-01, # -1.16811886e-01, # -1.12441722e-01, # ] # ), # "sampling_rate": 44100, # }, # "song_id": 0, # "genre_id": 0, # "genre": "Electronic", # } ``` Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :) ### Steps to reproduce the bug 1. Create an audio dataset with the `audiofolder` feature 2. Push the dataset to the Hub with `push_to_hub()` 3. Download the Hub dataset and inspect the `audio.path` feature ### Expected behavior `audio.path` points to the file associated with the audio data ### Environment info - `datasets` version: 2.6.2.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
closed
https://github.com/huggingface/datasets/issues/5190
2022-11-02T11:51:25
2022-11-02T12:55:02
2022-11-02T12:55:02
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
false
[]
1,432,769,143
5,189
Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
### Feature request Sorry for cryptic name but I'd like to explain using code itself. When I want to load a specific dataset from a repository (for instance, this: https://huggingface.co/datasets/inria-soda/tabular-benchmark) ```python from datasets import load_dataset dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True) print(next(iter(dataset["train"]))) ``` `datasets` library is essentially designed for people who'd like to use benchmark datasets on various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, having train and test splits usually ends up model overfitting to validation split so usually the users would like to do validation techniques like `StratifiedKFoldCrossValidation` or when they tune for hyperparameters they do `GridSearchCrossValidation` so often the behavior is to create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced but the split is done by authors. It's a bit confusing for average tabular user to try and load a dataset and see `"train"` so it would be nice if we would not load dataset into a split called `train `by default. ```diff from datasets import load_dataset dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True) -print(next(iter(dataset["train"]))) +print(next(iter(dataset))) ``` ### Motivation I explained it above πŸ˜… ### Your contribution I think this is quite a big change that seems small (e.g. how to determine datasets that will not be load to train split?), it's best if we discuss first!
open
https://github.com/huggingface/datasets/issues/5189
2022-11-02T09:15:02
2022-12-06T12:13:17
null
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,432,477,139
5,188
add: segmentation guide.
Closes #5181 I have opened a PR on Hub (https://huggingface.co/datasets/huggingface/documentation-images/discussions/5) to include the images in our central Hub repository. Once the PR is merged I will edit the image links. I have also prepared a [Colab Notebook](https://colab.research.google.com/drive/1BMDCfOTBnyshoME5RSxn5iQy-TWeFbOA?usp=sharing) in case anyone wants to play. - [x] Replace the image links
closed
https://github.com/huggingface/datasets/pull/5188
2022-11-02T04:34:36
2022-11-04T18:25:57
2022-11-04T18:23:34
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,432,375,375
5,187
chore: add notebook links to img cls and obj det.
Closes https://github.com/huggingface/datasets/issues/5182
closed
https://github.com/huggingface/datasets/pull/5187
2022-11-02T02:30:09
2022-11-03T01:52:24
2022-11-03T01:49:56
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
1,432,045,011
5,186
Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
### Describe the bug When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message ```ValueError: Please pass `features` or at least one example when writing data``` when I don't have `sqlalchemy` installed. ### Steps to reproduce the bug Make a new sqlite db with `sqlite3` and `pandas` from a remote [URL](https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv). ```python import sqlite3 import pandas as pd from datasets import Dataset conn = sqlite3.connect('us_covid_data.db') df = pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv') df.to_sql('states', conn, if_exists='replace') ``` Then if you try to query this DB like this: ```python ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db") ``` You run into the error I described above: ```ValueError: Please pass `features` or at least one example when writing data``` However, if you try to pass features, as the error suggests, then you get an error that tells you the underlying problem... ```python from datasets import Dataset, Features, Value features = Features({ 'date': Value('date32'), 'label': Value('string'), 'fips': Value('int32'), 'cases': Value('int32'), 'deaths': Value('int32') }) ds = Dataset.from_sql( '''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db", features=features ) ``` Which results in the actual underlying error: `ImportError: Using URI string without sqlalchemy installed.` ### Expected behavior Instead of `ValueError` about needing to pass features, we should provide the actual underlying error about not having SQLAlchemy installed when it isn't found in the environment. ### Environment info - `datasets` version: 2.6.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 10.0.0 - Pandas version: 1.2.5
closed
https://github.com/huggingface/datasets/issues/5186
2022-11-01T20:25:51
2022-11-15T18:24:39
2022-11-15T18:24:39
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
false
[]
1,432,021,611
5,185
Allow passing a subset of output features to Dataset.map
### Feature request Currently, map does one of two things to the features (if I'm not mistaken): * when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise * when you pass a full specification of features, output features are set to this However, sometimes you want to just pass some of the output types, particularly when the first of these modes makes an incorrect type. This currently crashes. ### Motivation To give a little background: this problem appears in converting labels to ids, where the labels happen to be floats rather than strings Consider the following use of map to convert from float to int ```python data = Dataset.from_dict({'y':[1.0,2.0,3.0]}) mapped = data.map(lambda r: {'y': int(r['y'])}) mapped['y'] # is floats, not ints ``` The result is a float again, since after the mapping operation it forces the old datatypes back on the data. Passing `features=Features({"y": Value(dtype="int64")})` to map works in principle, but then extending it a little to e.g. ```python def format_data(r): return {**tokenizer(r["text"]), "y": int(r["y"])} data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]}) mapped = data.map( format_data, features=Features({'y': Value(dtype="int64")}), remove_columns=["text"], ) ``` Results in a crash in dataset internals, as it expects either all or no output features to be specified. Of course one can pass a full feature specification, but this becomes tokenizer specific and very awkward. ### Your contribution I've looked at `write_batch` and particularly `col_type = features[col] if features else None`, but checking for `col in features` here makes it fail elsewhere, but the structure makes it hard to understand how and why. I do not think I would have the time myself to get to the bottom of this anytime soon.
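Not mentioned in the issue, but a workaround that already exists is to map without `features` and then fix the single offending column with `cast_column` (a sketch based on the first example above):

```python
from datasets import Dataset, Value

data = Dataset.from_dict({"y": [1.0, 2.0, 3.0]})
mapped = data.map(lambda r: {"y": int(r["y"])}).cast_column("y", Value("int64"))
print(mapped.features["y"])  # Value(dtype='int64')
```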
open
https://github.com/huggingface/datasets/issues/5185
2022-11-01T20:07:20
2022-11-01T20:07:34
null
{ "login": "sanderland", "id": 48946947, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,431,418,066
5,183
Loading an external dataset in a format similar to conll2003
I'm trying to load a custom dataset into a Dataset object. It's similar to conll2003 but with 2 columns only (word, entity). I used the following script:

    features = datasets.Features(
        {"tokens": datasets.Sequence(datasets.Value("string")),
         "ner_tags": datasets.Sequence(datasets.features.ClassLabel(names=["B-PER", .... etc.]))}
    )

    from datasets import Dataset

    INPUT_COLUMNS = "tokens ner_tags".split(" ")

    def read_conll(file):
        #all_labels = []
        example = {col: [] for col in INPUT_COLUMNS}
        idx = 0
        with open(file) as f:
            for line in f:
                if line:
                    if line.startswith("-DOCSTART-") and example["tokens"] != []:
                        print(idx, example)
                        yield idx, example
                        idx += 1
                        example = {col: [] for col in INPUT_COLUMNS}
                    elif line == "\n" or (line.startswith("-DOCSTART-") and example["tokens"] == []):
                        continue
                    else:
                        row_cols = line.split(" ")
                        for i, col in enumerate(example):
                            example[col] = row_cols[i].rstrip()

    dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)

The following error happened:

    /usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in <genexpr>(.0)
        285     for key in unique_values(itertools.chain(*dicts)):  # set merge all keys
        286         # Will raise KeyError if the dict don't have the same keys
    --> 287     yield key, tuple(d[key] for d in dicts)
        288
    TypeError: tuple indices must be integers or slices, not str

What does this mean and what should I modify?
closed
https://github.com/huggingface/datasets/issues/5183
2022-11-01T13:18:29
2022-11-02T11:57:50
2022-11-02T11:57:50
{ "login": "Taghreed7878", "id": 112555442, "type": "User" }
[]
false
[]
1,431,029,547
5,182
Add notebook / other resource links to the task-specific data loading guides
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model? For example, here in [https://huggingface.co/docs/datasets/image_classification] we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb. Applies to https://huggingface.co/docs/datasets/object_detection as well. Cc: @osanseviero @nateraw
closed
https://github.com/huggingface/datasets/issues/5182
2022-11-01T07:57:26
2022-11-03T01:49:57
2022-11-03T01:49:57
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,431,027,102
5,181
Add a guide for semantic segmentation
Currently, we have these guides for object detection and image classification: * https://huggingface.co/docs/datasets/object_detection * https://huggingface.co/docs/datasets/image_classification I am proposing adding a similar guide for semantic segmentation. I am happy to contribute a PR for it. Cc: @osanseviero @nateraw
closed
https://github.com/huggingface/datasets/issues/5181
2022-11-01T07:54:50
2022-11-04T18:23:36
2022-11-04T18:23:36
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
1,431,012,438
5,180
An example or recommendations for creating large image datasets?
I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do? As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset). Cc @lhoestq
open
https://github.com/huggingface/datasets/issues/5180
2022-11-01T07:38:38
2022-11-02T10:17:11
null
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[]
false
[]
1,430,826,100
5,179
`map()` fails midway due to format incompatibility
### Describe the bug I am using the `emotion` dataset from Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset. ```py def get_test_accuracy(model): def fn(batch): inputs = {k:v.to(device) for k,v in batch.items() if k in tokenizer.model_input_names} with torch.no_grad(): output = model(**inputs) pred_label = torch.argmax(output.logits, axis=-1) return {"predicted_label": pred_label.cpu().numpy()} return fn ``` This is how the `get_test_accuracy()` is being used: ```py emotions = load_dataset("emotion") def tokenize(batch): return tokenizer(batch["text"], padding=True, truncation=True) emotions_encoded = emotions.map(tokenize, batched=True) emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"]) new_dataset = emotions_encoded["validation"].map( accuracy_fn, batched=True, batch_size=128 ) ``` Complete code is available in the Colab Notebook provided below. The `map()` process fails midway giving: ```shell AttributeError Traceback (most recent call last) <ipython-input-8-ad24ac288eb4> in <module> 2 3 new_dataset = emotions_encoded["validation"].map( ----> 4 accuracy_fn, batched=True, batch_size=128 5 ) 7 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2588 new_fingerprint=new_fingerprint, 2589 disable_tqdm=disable_tqdm, -> 2590 desc=desc, 2591 ) 2592 else: /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 582 self: "Dataset" = kwargs.pop("self") 583 # apply actual function --> 584 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 585 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 586 for dataset in datasets: /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 549 } 550 # apply actual function --> 551 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 552 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 553 # re-apply format to the output /usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 478 # Call actual function 479 --> 480 out = func(self, *args, **kwargs) 481 482 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2970 indices, 2971 check_same_num_examples=len(input_dataset.list_indexes()) > 0, -> 2972 offset=offset, 2973 ) 2974 except NumExamplesMismatchError: /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 2850 if with_rank: 2851 additional_args += (rank,) -> 2852 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 2853 if update_data is None: 2854 # Check if the function returns updated examples 
<ipython-input-6-4e0d280426f6> in fn(batch) 1 def get_test_accuracy(model): 2 def fn(batch): ----> 3 inputs = {k:v.to(device) for k,v in batch.items() 4 if k in tokenizer.model_input_names} 5 with torch.no_grad(): <ipython-input-6-4e0d280426f6> in <dictcomp>(.0) 2 def fn(batch): 3 inputs = {k:v.to(device) for k,v in batch.items() ----> 4 if k in tokenizer.model_input_names} 5 with torch.no_grad(): 6 output = model(**inputs) AttributeError: 'list' object has no attribute 'to' ``` As you'd notice in the notebook, the process fails _midway_ and not at the beginning. Is this expected? ### Steps to reproduce the bug Colab Notebook: https://colab.research.google.com/gist/sayakpaul/d1570d537faf39040d02d77b1ed7de07/scratchpad.ipynb ### Expected behavior The mapping process should complete as is. If you switch the `split` to `test` it works as expected. ### Environment info Colab
closed
https://github.com/huggingface/datasets/issues/5179
2022-11-01T03:57:59
2022-11-08T11:35:26
2022-11-08T11:35:26
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,430,800,810
5,178
Unable to download the Chinese `wikipedia`, the dumpstatus.json not found!
### Describe the bug I tried: `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` and `data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')` but both got: `FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json` the full report is: ``` FileNotFoundError Traceback (most recent call last) <ipython-input-13-d07c5021090c> in <module> 1 from datasets import load_dataset 2 ----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')<?, ?it/s] /opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1740 1741 # Download and prepare data -> 1742 builder_instance.download_and_prepare( 1743 download_config=download_config, 1744 download_mode=download_mode, /opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) 812 **download_and_prepare_kwargs, 813 } --> 814 self._download_and_prepare( 815 dl_manager=dl_manager, 816 verify_infos=verify_infos, /opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1645 options=beam_options, 1646 ) -> 1647 super()._download_and_prepare( 1648 dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs 1649 ) # TODO handle verify_infos in beam datasets /opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 881 split_dict = SplitDict(dataset_name=self.name) 882 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 883 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 884 885 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline) 943 info_url = _base_url(lang) + _INFO_FILE 944 # Use dictionary since testing mock always returns the same result. --> 945 downloaded_files = dl_manager.download_and_extract({"info": info_url}) 946 947 xml_urls = [] /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls) 431 extracted_path(s): `str`, extracted paths of given URL(s). 
432 """ --> 433 return self.extract(self.download(url_or_urls)) 434 435 def get_recorded_sizes_checksums(self): /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls) 308 309 start_time = datetime.now() --> 310 downloaded_path_or_paths = map_nested( 311 download_func, 312 url_or_urls, /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc) 427 num_proc = 1 428 if num_proc <= 1 or len(iterable) < parallel_min_length: --> 429 mapped = [ 430 _single_map_nested((function, obj, types, None, True, None)) 431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 428 if num_proc <= 1 or len(iterable) < parallel_min_length: 429 mapped = [ --> 430 _single_map_nested((function, obj, types, None, True, None)) 431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 432 ] /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 329 # Singleton first to spare some computation 330 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 331 return function(data_struct) 332 333 # Reduce logging to keep things readable in multiprocessing with tqdm /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config) 335 # append the relative path to the base_path 336 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 337 return cached_path(url_or_filename, download_config=download_config) 338 339 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]): /opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 186 if is_remote_url(url_or_filename): 187 # URL, so get it from the cache (downloading if necessary) --> 188 output_path = get_from_cache( 189 url_or_filename, 190 cache_dir=cache_dir, /opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 533 ) 534 elif response is not None and response.status_code == 404: --> 535 raise FileNotFoundError(f"Couldn't find file at {url}") 536 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 537 if head_error is not None: FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json ``` ### Steps to reproduce the bug `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` ### Expected behavior download the data ### Environment info python3.6 latest datasets/transformers version
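Not stated in the report, but old snapshots are regularly removed from dumps.wikimedia.org, so the usual fix is to point at a dump date that is still listed there (the date below is only an example and may itself have expired since):

```python
from datasets import load_dataset

data = load_dataset("wikipedia", language="zh", date="20221101", beam_runner="DirectRunner")
```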
closed
https://github.com/huggingface/datasets/issues/5178
2022-11-01T03:17:55
2022-11-02T08:27:15
2022-11-02T08:24:29
{ "login": "beyondguo", "id": 37113676, "type": "User" }
[]
false
[]
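For the report above, the 404 on `dumpstatus.json` typically means the requested dump date is no longer hosted on dumps.wikimedia.org (Wikimedia keeps only the most recent dumps online). A minimal sketch of how one could first check which dates are currently available before loading; the date used in the call is a placeholder, not a guaranteed-available dump:
```python
import requests
from datasets import load_dataset

# dumps.wikimedia.org lists the dump dates currently available for a given wiki
index_page = requests.get("https://dumps.wikimedia.org/zhwiki/").text
print(index_page)  # inspect the listed folders, e.g. "20221101/"

# then load with a date that is actually listed (placeholder value below)
data = load_dataset(
    "wikipedia",
    language="zh",
    date="20221101",
    beam_runner="DirectRunner",
)
```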
1,430,238,556
5,177
Update create image dataset docs
Based on @osanseviero and community feedback, it wasn't super clear how to upload a dataset to the Hub after creating something like an image captioning dataset. This PR adds a brief section on how to upload the dataset with `push_to_hub`.
closed
https://github.com/huggingface/datasets/pull/5177
2022-10-31T17:45:56
2022-11-02T17:15:22
2022-11-02T17:13:02
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
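As a rough illustration of the workflow the PR above documents (the repo id and local folder are placeholders, and the metadata file is assumed to follow the `imagefolder` conventions):
```python
from datasets import load_dataset

# local folder with images plus a metadata.csv containing a "file_name" column
# and, e.g., a "text" column with captions
dataset = load_dataset("imagefolder", data_dir="path/to/captioned_images")

# upload the dataset to the Hub
dataset.push_to_hub("my-username/my-image-captioning-dataset")
```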
1,430,214,539
5,176
prepare dataset for cloud storage doesn't work
### Describe the bug Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading and storing huggingface dataset to cloud storage. ``` from datasets import load_dataset, load_dataset_builder dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH') dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet") ``` The above code successfully downloaded dataset, however, it returns error from `download_and_prepare`. > Traceback (most recent call last): > File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module> > dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet") > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare > fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths > cls = get_filesystem_class(protocol) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class > register_implementation(protocol, _import_class(bit["class"])) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 257, in _import_class > mod = importlib.import_module(mod) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/importlib/__init__.py", line 127, in import_module > return _bootstrap._gcd_import(name[level:], package, level) > File "<frozen importlib._bootstrap>", line 1030, in _gcd_import > File "<frozen importlib._bootstrap>", line 1007, in _find_and_load > File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked > File "<frozen importlib._bootstrap>", line 680, in _load_unlocked > File "<frozen importlib._bootstrap_external>", line 850, in exec_module > File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed > File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module> > dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet") > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare > fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths > cls = get_filesystem_class(protocol) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class > register_implementation(protocol, _import_class(bit["class"])) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 258, in _import_class > return getattr(mod, name) > AttributeError: partially initialized module 'gcsfs' has no attribute 'GCSFileSystem' (most likely due to a circular import) ### Steps to reproduce the bug 1. pip install datasets==2.6.1 gcsfs==2022.8.2 2. 
Running the following code will reproduce the issue (change `LOCAL_PATH` and `Bucket_NAME` accordingly) ``` from datasets import load_dataset, load_dataset_builder dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH') dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet") ``` ### Expected behavior The dataset should be downloaded successfully and uploaded to cloud storage. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - PyArrow version: 7.0.0 - Pandas version: 1.5.1
closed
https://github.com/huggingface/datasets/issues/5176
2022-10-31T17:28:57
2023-03-28T09:11:46
2023-03-28T09:11:45
{ "login": "araonblake", "id": 27285078, "type": "User" }
[]
false
[]
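Judging from the traceback above, the calling script is itself named `gcsfs.py`, which shadows the `gcsfs` package and explains the "partially initialized module" error. A minimal sketch of the intended flow, assuming the script is renamed and that GCS credentials are available in the environment (bucket name and local path are placeholders):
```python
# save as prepare_wikipedia.py (not gcsfs.py, which would shadow the gcsfs package)
from datasets import load_dataset_builder

builder = load_dataset_builder("wikipedia", "20220301.en", cache_dir="LOCAL_PATH")
builder.download_and_prepare(
    "gs://BUCKET_NAME/wikipedia",
    file_format="parquet",
)
```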
1,428,696,231
5,175
Loading an external NER dataset
I need to use huggingface datasets to load a custom dataset similar to conll2003 but with more entities, and each of the files contains only two columns: word and NER tag. I tried this code snippet that I found here as an answer to a similar issue: from datasets import Dataset INPUT_COLUMNS = "ID Text NER".split() def read_conll(file): example = {col: [] for col in INPUT_COLUMNS} idx = 0 with open(file) as f: for line in f: if line.startswith("-DOCSTART-") or line == "\n" or not line: if example[next(iter(example))]: yield idx, example idx += 1 example = {col: [] for col in INPUT_COLUMNS} else: row_cols = line.split() for i, col in enumerate(example): example[col] = row_cols[i].rstrip() train = Dataset.from_generator(read_conll, gen_kwargs={"file": "some_path"}) But the following error happened: ValueError: Please pass `features` or at least one example when writing data
closed
https://github.com/huggingface/datasets/issues/5175
2022-10-30T09:31:55
2022-11-01T13:15:49
2022-11-01T13:15:49
{ "login": "Taghreed7878", "id": 112555442, "type": "User" }
[]
false
[]
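A hedged sketch of how the generator above could be rewritten so that it yields one plain dict per sentence (lists of words and tags), which is what `Dataset.from_generator` expects; the column names and the two-column whitespace format are assumptions based on the description:
```python
from datasets import Dataset

def read_conll(file):
    tokens, ner_tags = [], []
    with open(file, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("-DOCSTART-") or not line:
                if tokens:
                    yield {"tokens": tokens, "ner_tags": ner_tags}
                    tokens, ner_tags = [], []
            else:
                # assumes two whitespace-separated columns: word and NER tag
                word, tag = line.split()
                tokens.append(word)
                ner_tags.append(tag)
    if tokens:  # flush the last sentence if the file does not end with a blank line
        yield {"tokens": tokens, "ner_tags": ner_tags}

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "path/to/train.txt"})
```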
1,427,216,416
5,174
Preserve None in list type cast in PyArrow 10
The `ListArray` type in PyArrow 10.0.0 supports the `mask` parameter, which allows us to preserve Nones in nested lists in `cast` instead of replacing them with empty lists. Fix https://github.com/huggingface/datasets/issues/3676
closed
https://github.com/huggingface/datasets/pull/5174
2022-10-28T12:48:30
2022-10-28T13:15:33
2022-10-28T13:13:18
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
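A small illustration of the behaviour the PR above targets, i.e. a `None` row of a nested list column surviving a cast instead of being turned into an empty list (assumes `pyarrow>=10` and a version of `datasets` that includes this fix):
```python
from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict({"nested": [[1, 2], None, [3]]})

# force a cast of the list values to a different integer type
ds = ds.cast(Features({"nested": Sequence(Value("int32"))}))

# with mask support, the second row is expected to stay None
print(ds["nested"])  # e.g. [[1, 2], None, [3]]
```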
1,425,880,441
5,173
Raise ffmpeg warnings only once
Our warnings look nice now. The `librosa` warning that was raised at each decoding: ``` /usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead. warnings.warn("PySoundFile failed. Trying audioread instead.") ``` is suppressed with `filterwarnings("ignore")` in a context manager. That means the first warning is also ignored (setting `filterwarnings("once")` didn't work!), so I added info to our message that audioread is used for decoding. Hope it's enough. Tests failed at first because they used to check that the warning was raised at (each) decoding in the `librosa` case, but now we throw only one warning (at the first decoding). I removed this check for warnings, do you think it's fine?
closed
https://github.com/huggingface/datasets/pull/5173
2022-10-27T15:58:33
2022-10-28T16:03:05
2022-10-28T16:00:51
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,425,523,114
5,172
Inconsistency behavior between handling local file protocol and other FS protocols
### Describe the bug These lines are used during load_from_disk: ``` if is_remote_filesystem(fs): dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path) else: fs = fsspec.filesystem("file") dest_dataset_dict_path = dataset_dict_path ``` If a local FS is given, then it will use the URL as the path name. If a remote FS is given, then it will use the path of the URL. This is an inconsistent behavior when handling a file: when using a remote FS, you must write a URL, but for a local FS, even if you passed LocalFileSystem as `fs`, you still can't use a `file://` URL. It will be recognized as a directory named `file:`. ### Steps to reproduce the bug ``` import fsspec.core url = "hdfs:///somewhere/MNIST" # url = "file:///somewhere/MNIST" fs, path = fsspec.core.url_to_fs(url) fs.ls(path) # this will always work load_from_disk(path, fs) # only works for local FS load_from_disk(url, fs) # only works for remote FS ``` ### Expected behavior One of `url` or `path` should always work. I think extracting the path from the given URL by using `fsspec.core.url_to_fs` instead of using `is_remote_filesystem` and `extract_path_from_uri` will fix this, since: ``` fsspec.core.url_to_fs("/somewhere/MNIST") -> LocalFs, '/somewhere/MNIST' fsspec.core.url_to_fs("file:///somewhere/MNIST") -> LocalFs, '/somewhere/MNIST' fsspec.core.url_to_fs("hdfs:///somewhere/MNIST") -> HDFS, '/somewhere/MNIST' ``` and ``` fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST") ``` In theory, this wouldn't break anything, since giving a local path or a remote URI still works. It would only affect local URIs (making them work too). ### Environment info - `datasets` version: 2.5.1 - Platform: Linux-5.4.205.1**HIDDEN** - Python version: 3.7.10 - PyArrow version: 8.0.0 - Pandas version: 1.2.4
open
https://github.com/huggingface/datasets/issues/5172
2022-10-27T12:03:20
2024-05-08T19:31:13
null
{ "login": "leoleoasd", "id": 37735580, "type": "User" }
[]
false
[]
1,425,355,111
5,171
Add PB and TB in convert_file_size_to_int
null
closed
https://github.com/huggingface/datasets/pull/5171
2022-10-27T09:50:31
2022-10-27T12:14:27
2022-10-27T12:12:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,425,301,835
5,170
[Caching] Deterministic hashing of torch tensors
Currently this fails ```python import torch from datasets.fingerprint import Hasher t = torch.tensor([1.]) def func(x): return t + x hash1 = Hasher.hash(func) t = torch.tensor([1.]) hash2 = Hasher.hash(func) assert hash1 == hash2 ``` Also as noticed in https://discuss.huggingface.co/t/dataset-cant-cache-models-outputs/24945, using a model in a `map` function doesn't work well with caching. Indeed the `bert-base-uncased` model has a different hash every time you reload it. Supporting torch tensors may also help in this case. This can be fixed by registering a custom pickling functions for torch tensors - as we did for other objects such as CodeType, FunctionType and Regex in `py_utils.py`
closed
https://github.com/huggingface/datasets/issues/5170
2022-10-27T09:15:15
2022-11-02T17:18:43
2022-11-02T17:18:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,425,075,254
5,169
Add "ipykernel" to list of `co_filename`s to remove
Should resolve #5157
closed
https://github.com/huggingface/datasets/pull/5169
2022-10-27T05:56:17
2022-11-02T15:46:00
2022-11-02T15:43:20
{ "login": "gpucce", "id": 32967787, "type": "User" }
[]
true
[]
1,424,368,572
5,168
Fix CI require beam
This PR: - Fixes the CI `require_beam`: before it was requiring PyTorch instead ```python def require_beam(test_case): if not config.TORCH_AVAILABLE: test_case = unittest.skip("test requires PyTorch")(test_case) return test_case ``` - Fixes a missing `require_beam` in `test_beam_based_builder_download_and_prepare_as_parquet` - Refactors `require_beam` to use `pytest` (`skipif`) instead
closed
https://github.com/huggingface/datasets/pull/5168
2022-10-26T16:49:33
2022-10-27T09:25:19
2022-10-27T09:23:26
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,424,124,477
5,167
Add ffmpeg4 installation instructions in warnings
Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users). Looks pretty ugly because I didn't find a way to check `ffmpeg` version from python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work`), so the warning is raised on each decoding. Any suggestions on how to make it look nice are welcome! This is how it looks on Colab: ![image](https://user-images.githubusercontent.com/16348744/198052412-d48018d1-4416-4aa5-9114-f7f9b4af031f.png)
closed
https://github.com/huggingface/datasets/pull/5167
2022-10-26T14:21:14
2022-10-27T09:01:12
2022-10-27T08:58:58
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,423,629,582
5,166
Support dill 0.3.6
This PR: - ~~Unpins dill to allow installing dill>=0.3.6~~ - ~~Removes the fix on dill for >=0.3.6 because they implemented a deterministic mode (to be confirmed by @anivegesana)~~ - Pins dill<0.3.7 to allow latest dill 0.3.6 - Implements a fix for dill `save_function` for dill 0.3.6 - Additionally had to implement a fix for dill `save_code` and `_save_regex` for dill 0.3.6 - Fixes the CI so that the latest dill version is tested (besides the minimum 0.3.1.1 required by apache-beam 2.42.0) Fix #5162.
closed
https://github.com/huggingface/datasets/pull/5166
2022-10-26T08:24:59
2022-10-28T05:41:05
2022-10-28T05:38:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,423,616,677
5,165
Memory explosion when trying to access 4d tensors in datasets cast to torch or np
### Describe the bug When trying to access an item by index, in a datasets.Dataset cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or above) tensors. ### Steps to reproduce the bug MWE: ```python from datasets import load_dataset import numpy as np def create_4d_tensor(item): i = item["num_nodes"] item["x_big"] = np.random.rand(i, 2*i, int(i/2), 1) + 1 # we create a big 4d tensor return item if __name__ == "__main__": dataset = load_dataset(path=f"graphs-datasets/PROTEINS") # This works print(dataset["train"].format) print(dataset["train"][0].keys()) dataset = dataset.map( create_4d_tensor, batched=False, writer_batch_size=100, ) # This works print(dataset["train"].format) print(dataset["train"][0].keys()) dataset.set_format("torch") print(dataset["train"].format) # This gets killed :( print(dataset["train"][0].keys()) ``` The problem likely comes from `format_table` [here](https://cs.github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/src/datasets/arrow_dataset.py#L2328) ### Expected behavior No memory explosion when trying to access dataset items after cast. ### Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
open
https://github.com/huggingface/datasets/issues/5165
2022-10-26T08:14:47
2022-10-26T08:14:47
null
{ "login": "clefourrier", "id": 22726840, "type": "User" }
[]
false
[]
1,422,813,247
5,164
WIP: drop labels in Image and Audio folders by default
will fix https://github.com/huggingface/datasets/issues/5153 and the redundant label display for most of the image datasets on the Hub (which are used just to store files) TODO: discuss adding `drop_labels` (and `drop_metadata`) params to yaml
closed
https://github.com/huggingface/datasets/pull/5164
2022-10-25T17:21:49
2022-11-16T14:21:16
2022-11-02T14:03:02
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
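For reference, the `drop_labels` switch the PR above builds on can already be passed explicitly when loading a folder-based dataset; a short sketch (the data path is a placeholder):
```python
from datasets import load_dataset

# explicitly disable label inference for an image folder used purely as file storage
ds = load_dataset("imagefolder", data_dir="path/to/images", drop_labels=True)
print(ds["train"].features)  # expected to contain only the "image" feature
```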
1,422,540,337
5,163
Reduce default max `writer_batch_size`
Reduce the default writer_batch_size from 10k to 1k examples. Additionally, align the default values of `batch_size` and `writer_batch_size` in `Dataset.cast` with the values from the corresponding docstring.
closed
https://github.com/huggingface/datasets/pull/5163
2022-10-25T14:14:52
2022-10-27T12:19:27
2022-10-27T12:16:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,422,461,112
5,162
Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6
### Describe the bug When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears. It is caused by a transitive dependency conflict between `datasets` and `multiprocess`. ### Steps to reproduce the bug ```bash $ echo "datasets" > requirements.in $ pip install pip-tools $ pip-compile requirements.in Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets==2.6.1->-r requirements.in (line 1)) Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6 Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1 There are incompatible versions in the resolved dependencies: dill<0.3.6 (from datasets==2.6.1->-r requirements.in (line 1)) dill>=0.3.6 (from multiprocess==0.70.14->datasets==2.6.1->-r requirements.in (line 1)) ``` ### Expected behavior A correctly generated file `requirements.txt` with pinned dependencies ### Environment info Tested with versions `2.6.1, 2.6.0, 2.5.2` on Python 3.8 and 3.10 on Ubuntu 20.04LTS and Python 3.10 on MacOS 12.6 (M1).
closed
https://github.com/huggingface/datasets/issues/5162
2022-10-25T13:23:50
2022-11-14T08:25:37
2022-10-28T05:38:15
{ "login": "Rijgersberg", "id": 8604946, "type": "User" }
[]
false
[]
1,422,371,748
5,161
Dataset can’t cache model’s outputs
### Describe the bug Hi, I try to cache some outputs of a teacher model (knowledge distillation) by using the map function of the Datasets library, but every time I run my code, all the sequences are still recomputed. I tested a BERT model like this and got a different hash every single run, so any idea how to deal with this? ### Steps to reproduce the bug 1. run the code below 2. get a different hash ``` from transformers import BertModel from transformers import AutoTokenizer import torch token = ['hello'] model = BertModel.from_pretrained("bert-base-uncased").eval() tok = AutoTokenizer.from_pretrained("bert-base-uncased") def abcd(): with torch.no_grad(): out = model(**tok(token,return_tensors='pt'))[0] # out = tok(token) return out from datasets.fingerprint import Hasher my_func = abcd print(Hasher.hash(my_func)) print(abcd()) ``` ### Expected behavior I want to cache all the model outputs ### Environment info datasets: 2.5.0
closed
https://github.com/huggingface/datasets/issues/5161
2022-10-25T12:19:00
2022-11-03T16:12:52
2022-11-03T16:12:51
{ "login": "jongjyh", "id": 37979232, "type": "User" }
[]
false
[]
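One commonly suggested workaround for the issue above is to bypass the non-reproducible hash by supplying the fingerprint yourself; a sketch (the fingerprint string is arbitrary, but it must be changed manually whenever the processing logic changes, otherwise a stale cache is reused):
```python
import torch
from transformers import AutoTokenizer, BertModel
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def embed(batch):
    # teacher forward pass whose hash is not reproducible across runs
    with torch.no_grad():
        out = model(**tok(batch["sentence"], padding=True, truncation=True, return_tensors="pt"))
    return {"feature": out.last_hidden_state.mean(dim=1).numpy()}

dataset = load_dataset("glue", "sst2", split="train")
dataset = dataset.map(
    embed,
    batched=True,
    batch_size=32,
    new_fingerprint="sst2-bert-base-uncased-mean-pool-v1",  # manual, deterministic cache key
)
```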
1,422,193,938
5,160
Automatically add filename for image/audio folder
### Feature request When creating a custom audio or image dataset, it would be great to automatically have access to the filename. It should be both: a) Automatically displayed in the viewer b) Automatically added as a column to the dataset when doing `load_dataset` In `diffusers` our tests rely quite heavily on images and audio files now, and it's a bit tedious at the moment to download specific images from a datasets repo. E.g. we have a dataset of images for tests in `diffusers`: https://huggingface.co/datasets/hf-internal-testing/diffusers-images where it would be extremely nice to have direct access to the filename both visually on the datasets page (@severo ) as well as via the `load_dataset` function. We currently have some awkward functionality to download images by path name: https://github.com/huggingface/diffusers/blob/2fb8fafa4b761f6fc144cf75a6f6f0ea6af3a1c1/src/diffusers/utils/testing_utils.py#L131 It would be much nicer to just go through `load_dataset(...)` ### Motivation Intuitively, the filename is something people understand directly. E.g. if you upload a folder of images online, it's nice if you recognize the image as well as the filename next to it directly and are able to use it right away. The label, on the other hand, is less intuitive to understand, as you haven't added it yourself. ### Your contribution Not sure if I have the time to add it myself anytime soon, but it would help us a lot for `diffusers`.
open
https://github.com/huggingface/datasets/issues/5160
2022-10-25T09:56:49
2022-10-26T16:51:46
null
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
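Until something like the request above is built in, one workaround is to disable image decoding so that the underlying file path stays accessible; a sketch (the data path is a placeholder):
```python
import os
from datasets import load_dataset, Image

ds = load_dataset("imagefolder", data_dir="path/to/images", split="train")

# with decoding disabled, each example exposes {"path": ..., "bytes": ...}
ds_raw = ds.cast_column("image", Image(decode=False))
print(ds_raw[0]["image"]["path"])

# optionally materialise the filename as its own column
ds_raw = ds_raw.map(lambda ex: {"file_name": os.path.basename(ex["image"]["path"])})
```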
1,422,172,080
5,159
fsspec lock reset in multiprocessing
`fsspec` added a clean way of resetting its lock - instead of doing it manually
closed
https://github.com/huggingface/datasets/pull/5159
2022-10-25T09:41:59
2022-11-03T20:51:15
2022-11-03T20:48:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,422,059,287
5,158
Fix language and license tag names in all Hub datasets
While working on this: - #5137 we realized there are still many datasets with deprecated "languages" and "licenses" tag names (instead of "language" and "license"). This is a blocking issue: no subsequent PR can be opened to modify their metadata: a ValueError will be thrown. We should fix the "language" and "license" tag names in all Hub datasets. TODO: - [x] Fix language and license tag names in 402 Hub datasets CC: @julien-c
closed
https://github.com/huggingface/datasets/issues/5158
2022-10-25T08:19:29
2022-10-25T11:27:26
2022-10-25T10:42:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
false
[]
1,421,703,577
5,157
Consistent caching between python and jupyter
### Feature request I hope this is not my mistake: currently, if I use `load_dataset` from a python session on a custom dataset to do the preprocessing, it will be saved in the cache, and in other python sessions it will be loaded from the cache. However, calling the same from a jupyter notebook does not work, meaning the preprocessing starts from scratch. If adjusting the hashes is impossible, is there a way to manually set the dataset fingerprint to "force" this behaviour? ### Motivation If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent, so one can create the dataset once and then try small things in jupyter without preprocessing everything again. ### Your contribution I am happy to try a PR if you give me some pointers on where the changes should happen
closed
https://github.com/huggingface/datasets/issues/5157
2022-10-25T01:34:33
2022-11-02T15:43:22
2022-11-02T15:43:22
{ "login": "gpucce", "id": 32967787, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
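A sketch of the kind of manual control asked about above: pinning the cache file of a `map` call so that a plain python session and a jupyter notebook both reuse it, regardless of how the function is hashed (the cache path is a placeholder):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

def preprocess(example):
    return {"n_chars": len(example["text"])}

# an explicit cache_file_name makes the cache location deterministic across sessions
ds = ds.map(
    preprocess,
    cache_file_name="/tmp/rotten_tomatoes_train_preprocessed.arrow",
    load_from_cache_file=True,
)
```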
1,421,667,125
5,156
Unable to download dataset using Azure Data Lake Gen 2
### Describe the bug When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is showed: ``` Traceback (most recent call last): File "download_hf_dataset.py", line 143, in <module> main() File "download_hf_dataset.py", line 102, in main builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet") File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/datasets/builder.py", line 671, in download_and_prepare fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/core.py", line 639, in get_fs_token_paths fs = cls(**options) File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/spec.py", line 76, in __call__ obj = super().__call__(*args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'account_name' ``` If I don't pass the storage_options argument (leave it as None), it requires the credentials used in ADL Gen 1: `TypeError: __init__() missing 3 required positional arguments: 'tenant_id', 'client_id', and 'client_secret'` Thus, it is not possible to download a dataset from the cloud using Azure Data Lake (adl) Gen2. ### Steps to reproduce the bug Assuming that you have an account on Azure and at Storage Account that can be used for reproduce: 1. Create a dict with the format to connect to Azure Data Lake Gen 2 ``` storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY) # gen 2 filesystem ``` 2. Create a dataset builder for any HF hosted dataset ``` builder = load_dataset_builder(dataset_name) ``` 3. Try to download the dataset passing the storage_options as an argument ``` save_dir = 'adl://my_save_dir' builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet") ``` ### Expected behavior Not seeing the error mentioned above and being able to download the dataset to the provided path on ADL ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
closed
https://github.com/huggingface/datasets/issues/5156
2022-10-25T00:43:18
2024-02-15T09:48:36
2022-11-17T23:37:08
{ "login": "clarissesimoes", "id": 87379512, "type": "User" }
[]
false
[]
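For the report above: in `adlfs`, the `adl://` protocol maps to the Gen1 filesystem (hence the tenant/client/secret arguments), while Gen2 storage accounts are reached through `abfs://`. A hedged sketch using the Gen2 protocol (dataset, container and credentials are placeholders, and `adlfs` must be installed):
```python
from datasets import load_dataset_builder

storage_options = {"account_name": "ACCOUNT_NAME", "account_key": "ACCOUNT_KEY"}

builder = load_dataset_builder("glue", "sst2")
builder.download_and_prepare(
    "abfs://my-container/glue-sst2",
    storage_options=storage_options,
    max_shard_size="250MB",
    file_format="parquet",
)
```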
1,421,278,748
5,155
TextConfig: added "errors"
This patch adds the ability to set the `errors` option of `open` for loading text datasets. I needed it because some data I had scraped had bad bytes in it, so I needed `errors='ignore'`.
closed
https://github.com/huggingface/datasets/pull/5155
2022-10-24T18:56:52
2022-11-03T13:38:13
2022-11-03T13:35:35
{ "login": "NightMachinery", "id": 36224762, "type": "User" }
[]
true
[]
1,421,161,992
5,154
Test latest fsspec in CI
Following the discussion in https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 I think we need to test the latest fsspec in the CI
closed
https://github.com/huggingface/datasets/pull/5154
2022-10-24T17:18:13
2023-09-24T10:06:06
2022-10-25T09:30:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,420,833,457
5,153
default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir
### Describe the bug By default FolderBasedBuilder infers labels if there are no metadata files, even if it's meaningless (for example, when the files are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios), as this is a corner case used for quick exploration of images or audios on the Hub. ### Steps to reproduce the bug If you have a directory like this: ``` repo image1.jpg image2.jpg image3.jpg ``` or ``` repo data image1.jpg image2.jpg image3.jpg ``` doing `ds = load_dataset(repo)` would create a `label` feature: ```python print(ds["train"][0]) >> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 0} ``` Also, if you have the following structure: ``` repo data image1.jpg image2.jpg image3.jpg image4.jpg image5.jpg image6.jpg ``` it will infer two labels: ```python print(ds["train"][0]) print(ds["train"][-1]) >> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 1} >> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x415 at 0x7FB5326555B0>, 'label': 0} ``` ### Expected behavior We should have only one base feature (Image/Audio) in such cases. ### Environment info all versions of `datasets`
closed
https://github.com/huggingface/datasets/issues/5153
2022-10-24T13:28:18
2022-11-15T16:31:10
2022-11-15T16:31:09
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,420,808,919
5,152
refactor FolderBasedBuilder and Image/AudioFolder tests
Tests for FolderBasedBuilder, ImageFolder and AudioFolder mostly duplicate each other. They need to be refactored, and Audio/ImageFolder should have only tests specific to the loader.
open
https://github.com/huggingface/datasets/issues/5152
2022-10-24T13:11:52
2022-10-24T13:11:52
null
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
false
[]
1,420,791,163
5,151
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
Currently one can only push different splits within one default config of a dataset. It would be nice to allow something like: ``` ds.push_to_hub(repo_name, config=config_name) ``` I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow creating different configs for packaged-module datasets.
open
https://github.com/huggingface/datasets/issues/5151
2022-10-24T12:59:18
2022-11-04T14:55:20
null
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,420,684,999
5,150
Problems after upgrading to 2.6.1
### Describe the bug Loading a dataset_dict from disk with `load_from_disk` is now raising a `KeyError: "length"` that was not occurring in v2.5.2. Context: - Each individual dataset in the dict is created with `Dataset.from_pandas` - The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})` - The pandas dataframe, besides text columns, has a column with a dictionary inside and potentially different keys in each row. Correctly, the `Dataset.from_pandas` function adds `key: None` to all dictionaries in each row so that the schema can be correctly inferred. ### Steps to reproduce the bug Steps to reproduce: - Upgrade to datasets==2.6.1 - Create a dataset from a pandas dataframe with `Dataset.from_pandas` - Create a dataset_dict from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})` - Save to disk with the `save` function ### Expected behavior Same as in v2.5.2, that is, loading from disk without errors ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
open
https://github.com/huggingface/datasets/issues/5150
2022-10-24T11:32:36
2024-05-12T07:40:03
null
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[]
false
[]
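A minimal sketch of the reproduction steps described above (the report itself contains no code, so the column contents here are invented):
```python
import pandas as pd
from datasets import Dataset, DatasetDict, load_from_disk

df = pd.DataFrame(
    {
        "text": ["first example", "second example"],
        # dict column with potentially different keys in each row
        "meta": [{"source": "web"}, {"source": "web", "lang": "en"}],
    }
)

train_ds = Dataset.from_pandas(df)
val_ds = Dataset.from_pandas(df)

dd = DatasetDict({"train": train_ds, "validation": val_ds})
dd.save_to_disk("tmp_dataset_dict")

reloaded = load_from_disk("tmp_dataset_dict")  # raises KeyError: 'length' on 2.6.1 according to the report
```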
1,420,415,639
5,149
Make iter_files deterministic
Fix #5145.
closed
https://github.com/huggingface/datasets/pull/5149
2022-10-24T08:16:27
2022-10-27T09:53:23
2022-10-27T09:51:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,420,219,222
5,148
Cannot find the rvl_cdip dataset
Hi, I am trying to use load_dataset to load the official "rvl_cdip" dataset but getting an error. dataset = load_dataset("rvl_cdip") Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdip/rvl_cdip.py Regards,
closed
https://github.com/huggingface/datasets/issues/5148
2022-10-24T04:57:42
2022-10-24T12:23:47
2022-10-24T06:25:28
{ "login": "santule", "id": 20509836, "type": "User" }
[]
false
[]
1,419,522,275
5,147
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
### Feature request `dataset.map` accepts a `fn_kwargs` that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint. I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` shoud/shouldn't be taken into account during hashing. Of course, users should be aware to properly use this new feature, just like the internal usages of `fingerprint_transform` [does](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700). ### Motivation This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680. Nonetheless, consider a more general processing function that accepts a kwarg that does not influence it's output: ```python def fn(example, verbose=False): ... ``` Then `dataset.map(fn, verbose=True)` would not benefit from dataset caching. I'm not sure if other methods in the `Dataset` API could benefit from this feature. ### Your contribution Based on `fingerprint_transform `'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably another arg name). This would then be used by `fingerprint_transform.wrapper` to better/more flexibly hash the transformation. I could contribute with a PR if this feature and approach look good to you.
open
https://github.com/huggingface/datasets/issues/5147
2022-10-22T21:46:38
2022-11-01T22:19:07
null
{ "login": "falcaopetri", "id": 8387736, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,418,331,282
5,146
Delete duplicate issue template file
A conflict between two PRs: - #5116 - #5136 was not properly resolved, resulting in a duplicate issue template. This PR removes the duplicate template.
closed
https://github.com/huggingface/datasets/pull/5146
2022-10-21T13:18:46
2022-10-21T13:52:30
2022-10-21T13:50:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,418,005,452
5,145
Dataset order is not deterministic with ZIP archives and `iter_files`
### Describe the bug For the `beans` dataset (did not try on other), the order of samples is not the same on different machines. Tested on my local laptop, github actions machine, and ec2 instance. The three yield a different order. ### Steps to reproduce the bug In a clean docker container or conda environment with datasets==2.6.1, run ```python from datasets import load_dataset from pprint import pprint data = load_dataset("beans", split="validation") pprint(data["image_file_path"]) ``` ### Expected behavior The order of the images is the same on all machines. ### Environment info On the EC2 instance: ``` - `datasets` version: 2.6.1 - Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.7.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 - Numpy version: not checked ``` On my local laptop: ``` - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35 - Python version: 3.9.12 - PyArrow version: 7.0.0 - Pandas version: 1.3.5 - Numpy version: 1.23.1 ``` On github actions: ``` - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5 - Python version: 3.8.14 - PyArrow version: 9.0.0 - Pandas version: 1.5.1 - Numpy version: 1.23.4 ```
closed
https://github.com/huggingface/datasets/issues/5145
2022-10-21T09:00:03
2022-10-27T09:51:49
2022-10-27T09:51:10
{ "login": "fxmarty", "id": 9808326, "type": "User" }
[]
false
[]
1,417,974,731
5,144
Inconsistent documentation on map remove_columns
### Describe the bug The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`: When you remove a column, it is only removed after the example has been provided to the mapped function. So it seems that the `remove_columns` parameter removes columns after the mapped function. However, another page, [the documentation of the function map](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns), says: Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in remove_columns, these columns will be kept. So one page says "after the mapped function" and another says "before the mapped function." Is there something wrong? ### Steps to reproduce the bug Not about code. ### Expected behavior Consistent descriptions of the behavior of the parameter `remove_columns` in the function `map`. ### Environment info datasets v2.6.0
closed
https://github.com/huggingface/datasets/issues/5144
2022-10-21T08:37:53
2022-11-15T14:15:10
2022-11-15T14:15:10
{ "login": "zhaowei-wang-nlp", "id": 22047467, "type": "User" }
[ { "name": "documentation", "color": "0075ca" }, { "name": "duplicate", "color": "cfd3d7" }, { "name": "good first issue", "color": "7057ff" }, { "name": "hacktoberfest", "color": "DF8D62" } ]
false
[]
1,416,837,186
5,143
DownloadManager Git LFS support
### Feature request Maybe I'm mistaken but the `DownloadManager` does not support extracting git lfs files out of the box right? Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns lfs files afaict. Is there a good way to write a dataset loading script for a repo with lfs files? ### Motivation / ### Your contribution /
closed
https://github.com/huggingface/datasets/issues/5143
2022-10-20T15:29:29
2022-10-20T17:17:10
2022-10-20T17:17:10
{ "login": "Muennighoff", "id": 62820084, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,416,317,678
5,142
Deprecate num_proc parameter in DownloadManager.extract
Fixes #5132: deprecated the `num_proc` parameter in `DownloadManager.extract` by passing the `num_proc` parameter to `map_nested`.
closed
https://github.com/huggingface/datasets/pull/5142
2022-10-20T09:52:52
2022-10-25T18:06:56
2022-10-25T15:56:45
{ "login": "ayushthe1", "id": 114604338, "type": "User" }
[]
true
[]
1,415,479,438
5,141
Raise ImportError instead of OSError
Fixes #5134: replaced OSError with ImportError if the required extraction library is not installed.
closed
https://github.com/huggingface/datasets/pull/5141
2022-10-19T19:30:05
2022-10-25T15:59:25
2022-10-25T15:56:58
{ "login": "ayushthe1", "id": 114604338, "type": "User" }
[]
true
[]
1,415,075,530
5,140
Make the KeyHasher FIPS compliant
MD5 is not FIPS compliant, thus I am proposing this minimal change to make the datasets package FIPS compliant.
closed
https://github.com/huggingface/datasets/pull/5140
2022-10-19T14:25:52
2022-11-07T16:20:43
2022-11-07T16:20:43
{ "login": "vvalouch", "id": 22592860, "type": "User" }
[]
true
[]
1,414,642,723
5,137
Align task tags in dataset metadata
## Describe Once we have agreed on a common naming for task tags for all open source projects, we should align on them. ## Steps - [x] Align task tags in canonical datasets - [x] task_categories: 4 datasets - [x] task_ids (by @lhoestq) - [x] Open PRs in community datasets - [x] task_categories: 451 datasets - [x] task_ids: 556 datasets
closed
https://github.com/huggingface/datasets/issues/5137
2022-10-19T09:41:42
2022-11-10T05:25:58
2022-10-25T06:17:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
false
[]
1,414,492,139
5,136
Update docs once dataset scripts transferred to the Hub
Todo: - [x] Update docs: - [x] Datasets on GitHub (legacy) - [x] Load: offline - [x] About dataset load: - [x] Maintaining integrity - [x] Security - [x] Update docstrings: - [x] Inspect: - [x] get_dataset_config_info - [x] get_dataset_split_names - [x] Load: - [x] dataset_module_factory - [x] load_dataset_builder - [x] load_dataset - [x] Remove `ADD_NEW_DATASET.md` - [x] Update `.github/ISSUE_TEMPLATE/config.yml` Fix #5135.
closed
https://github.com/huggingface/datasets/pull/5136
2022-10-19T07:58:27
2022-10-20T08:12:21
2022-10-20T08:10:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,414,413,519
5,135
Update docs once dataset scripts transferred to the Hub
## Describe the bug As discussed in: - https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701 we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub): - #4974 Concretely: - [x] Datasets on GitHub (legacy): https://huggingface.co/docs/datasets/main/en/share#datasets-on-github-legacy - [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md - ... This PR complements the work of: - #5067 This PR is a follow-up of PRs: - #3777 CC: @julien-c
closed
https://github.com/huggingface/datasets/issues/5135
2022-10-19T06:58:19
2022-10-20T08:10:01
2022-10-20T08:10:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
1,413,623,687
5,134
Raise ImportError instead of OSError if required extraction library is not installed
According to the official Python docs, `OSError` should be thrown in the following situations: > This exception is raised when a system function returns a system-related error, including I/O failures such as β€œfile not found” or β€œdisk full” (not for illegal argument types or other incidental errors). Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed.
closed
https://github.com/huggingface/datasets/issues/5134
2022-10-18T17:53:46
2022-10-25T15:56:59
2022-10-25T15:56:59
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" }, { "name": "hacktoberfest", "color": "DF8D62" } ]
false
[]
1,413,623,462
5,133
Tensor operation not functioning in dataset mapping
## Describe the bug I'm doing a torch.mean() operation in data preprocessing, and it's not working. ## Steps to reproduce the bug ``` from transformers import pipeline import torch import numpy as np from datasets import load_dataset device = 'cuda:0' raw_dataset = load_dataset("glue", "sst2") feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device) def extracted_data(examples): # feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device) # feature = torch.mean(feature, dim=1) feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1) print(feature.shape) return {'feature': feature} extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16) ``` ## Results When running with torch.mean(), the shape printed out is [16, seq_len, 768], which is exactly the same before the operation. While numpy works just fine, which gives [16, 768]. ## Environment info - `datasets` version: 2.6.1 - Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31 - Python version: 3.10.6 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
closed
https://github.com/huggingface/datasets/issues/5133
2022-10-18T17:53:35
2022-10-19T04:15:45
2022-10-19T04:15:44
{ "login": "xinghaow99", "id": 50691954, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,413,607,306
5,132
Deprecate `num_proc` parameter in `DownloadManager.extract`
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
closed
https://github.com/huggingface/datasets/issues/5132
2022-10-18T17:41:05
2022-10-25T15:56:46
2022-10-25T15:56:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" }, { "name": "hacktoberfest", "color": "DF8D62" } ]
false
[]
1,413,534,863
5,131
WikiText 103 tokenizer hangs
See issue here: https://github.com/huggingface/transformers/issues/19702
closed
https://github.com/huggingface/datasets/issues/5131
2022-10-18T16:44:00
2023-08-08T08:42:40
2023-07-21T14:41:51
{ "login": "TrentBrick", "id": 12433427, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,413,435,000
5,130
Avoid extra cast in `class_encode_column`
Pass the updated features to `map` to avoid the `cast` in `class_encode_column`.
closed
https://github.com/huggingface/datasets/pull/5130
2022-10-18T15:31:24
2022-10-19T11:53:02
2022-10-19T11:50:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,413,031,664
5,129
unexpected `cast` or `class_encode_column` result after `rename_column`
## Describe the bug When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it will convert all the values in this column into one value. I also ran this script in version 2.5.2, and this bug does not appear. So I switched to the older version. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("amazon_reviews_multi", "en") data = dataset['train'] data = data.remove_columns( [ "review_id", "product_id", "reviewer_id", "review_title", "language", "product_category", ] ) data = data.rename_column("review_body", "text") data1 = data.class_encode_column("stars") print(set(data1.data.columns[0])) # output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>} data = data.rename_column("stars", "label") print(set(data.data.columns[0])) # output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>} data2 = data.class_encode_column("label") print(set(data2.data.columns[0])) # output: {<pyarrow.Int64Scalar: 0>} ``` ## Expected results The last print should be: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>} ## Actual results But it outputs: {<pyarrow.Int64Scalar: 0>} ## Environment info - `datasets` version: 2.6.1 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.10.6 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
closed
https://github.com/huggingface/datasets/issues/5129
2022-10-18T11:15:24
2022-10-19T03:02:26
2022-10-19T03:02:26
{ "login": "quaeast", "id": 35144675, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,412,783,855
5,128
Make filename matching more robust
Fix #5046
closed
https://github.com/huggingface/datasets/pull/5128
2022-10-18T08:22:48
2022-10-28T13:07:38
2022-10-28T13:05:06
{ "login": "riccardobucco", "id": 9295277, "type": "User" }
[]
true
[]
1,411,897,544
5,127
[WIP] WebDataset export
I added a first draft of the `IterableDataset.to_wds` method. You can use it to save a dataset loaded in streaming mode as a webdataset locally. The API can be further improved to allow exporting to a cloud storage like the HF Hub. I also included sharding with a default max shard size of 500MB (uncompressed), and it is single-processed for now. Choosing the number of shards is not implemented yet - though if we know the size of the `IterableDataset` this is probably doable. For example ```python >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> ds.to_wds("output_dir", compress=True) >>> import webdataset as wds >>> ds = wds.WebDataset("output_dir/rotten_tomatoes-train-000000.tar.gz").decode() >>> next(iter(ds)) {'__key__': '0', '__url__': 'output_dir/rotten_tomatoes-train-000000.tar.gz', 'label.cls': 1, 'text.txt': 'the rock is destined to be the 21st century\'s new ..., jean-claud van damme or steven segal .'} ``` ### Implementation details The WebDataset format is made of TAR archives containing a series of files per example. For example, one pair of `image.jpg` and `label.cls` for image classification. WebDataset automatically decodes serialized data based on the extension of the files, and outputs a dictionary. For example `{"image.png": np.array(...), "label.cls": 0}` if you choose the numpy decoding. To use the automatic decoding, I store each field of each example as a file with its corresponding extension (jpg, json, cls, etc.) While this is useful to end up with a dictionary with one key per column and appropriate decoding, it can create huge TAR archives if the dataset is made of small samples of text - probably because of useless TAR metadata for each file. This also makes loading super slow: iterating on SQuAD takes 50sec vs 7sec using `datasets` in streaming mode. I haven't taken a look at alternatives for text datasets made out of small samples, but for image datasets this can already be used to run some benchmarks.
closed
https://github.com/huggingface/datasets/pull/5127
2022-10-17T16:50:22
2024-01-11T06:27:04
2024-01-08T14:25:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,411,757,124
5,126
Fix class name of symbolic link
Fix #5098
closed
https://github.com/huggingface/datasets/pull/5126
2022-10-17T15:11:02
2022-11-14T14:40:18
2022-11-14T14:40:18
{ "login": "riccardobucco", "id": 9295277, "type": "User" }
[]
true
[]
1,411,602,813
5,125
Add `pyproject.toml` for `black`
Add `pyproject.toml` as a config file for the `black` tool to support VS Code's auto-formatting on save (and to be more consistent with the other HF projects).
closed
https://github.com/huggingface/datasets/pull/5125
2022-10-17T13:38:47
2024-11-20T13:36:11
2022-10-17T14:21:09
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,411,159,725
5,124
Install tensorflow-macos dependency conditionally
Fix #5118.
closed
https://github.com/huggingface/datasets/pull/5124
2022-10-17T08:45:08
2022-10-19T09:12:17
2022-10-19T09:10:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,410,828,756
5,123
datasets freezes with streaming mode in multiple-gpu
## Describe the bug Hi. I am using this dataloader, which is for processing large datasets in streaming mode mentioned in one of examples of huggingface. I am using it to read c4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L22 During using multi-gpu in accelerator in one node, the code freezes, but works for 1 GPU: ``` 10/16/2022 14:18:46 - INFO - datasets.info - Loading Dataset Infos from /home/jack/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 Steps: 0%| | 0/400000 [00:00<?, ?it/s]10/16/2022 14:18:47 - INFO - torch.utils.data.dataloader - Shared seed (135290893754684706) sent to store on rank 0 ``` # Code to reproduce please run this code with `accelerate launch code.py` ``` from accelerate import Accelerator from accelerate.logging import get_logger from datasets import load_dataset from torch.utils.data.dataloader import DataLoader import torch from datasets import load_dataset from transformers import AutoTokenizer import torch from accelerate.logging import get_logger from torch.utils.data import IterableDataset from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe logger = get_logger(__name__) class ConstantLengthDataset(IterableDataset): """ Iterable dataset that returns constant length chunks of tokens from stream of text files. Args: tokenizer (Tokenizer): The processor used for proccessing the data. dataset (dataset.Dataset): Dataset with text files. infinite (bool): If True the iterator is reset after dataset reaches end else stops. max_seq_length (int): Length of token sequences to return. num_of_sequences (int): Number of token sequences to keep in buffer. chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer. """ def __init__( self, tokenizer, dataset, infinite=False, max_seq_length=1024, num_of_sequences=1024, chars_per_token=3.6, ): self.tokenizer = tokenizer # self.concat_token_id = tokenizer.bos_token_id self.dataset = dataset self.max_seq_length = max_seq_length self.epoch = 0 self.infinite = infinite self.current_size = 0 self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences self.content_field = "text" def __iter__(self): iterator = iter(self.dataset) more_examples = True while more_examples: buffer, buffer_len = [], 0 while True: if buffer_len >= self.max_buffer_size: break try: buffer.append(next(iterator)[self.content_field]) buffer_len += len(buffer[-1]) except StopIteration: if self.infinite: iterator = iter(self.dataset) self.epoch += 1 logger.info(f"Dataset epoch: {self.epoch}") else: more_examples = False break tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"] all_token_ids = [] for tokenized_input in tokenized_inputs: all_token_ids.extend(tokenized_input) for i in range(0, len(all_token_ids), self.max_seq_length): input_ids = all_token_ids[i : i + self.max_seq_length] if len(input_ids) == self.max_seq_length: self.current_size += 1 yield torch.tensor(input_ids) def shuffle(self, buffer_size=1000): return ShufflerIterDataPipe(self, buffer_size=buffer_size) def create_dataloaders(tokenizer, accelerator): ds_kwargs = {"streaming": True} # In distributed training, the load_dataset function gaurantees that only one process # can concurrently download the dataset. 
datasets = load_dataset( "c4", "en", cache_dir="cache_dir", **ds_kwargs, ) train_data, valid_data = datasets["train"], datasets["validation"] with accelerator.main_process_first(): train_data = train_data.shuffle(buffer_size=10000, seed=None) train_dataset = ConstantLengthDataset( tokenizer, train_data, infinite=True, max_seq_length=256, ) valid_dataset = ConstantLengthDataset( tokenizer, valid_data, infinite=False, max_seq_length=256, ) train_dataset = train_dataset.shuffle(buffer_size=10000) train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True) eval_dataloader = DataLoader(valid_dataset, batch_size=160) return train_dataloader, eval_dataloader def main(): # Accelerator. logging_dir = "data_save_dir/log" accelerator = Accelerator( gradient_accumulation_steps=1, mixed_precision="bf16", log_with="tensorboard", logging_dir=logging_dir, ) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: accelerator.init_trackers("test") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") # Load datasets and create dataloaders. train_dataloader, _ = create_dataloaders(tokenizer, accelerator) train_dataloader = accelerator.prepare(train_dataloader) for step, batch in enumerate(train_dataloader, start=1): print(step) accelerator.end_training() if __name__ == "__main__": main() ``` ## Results expected Being able to run the code for streamining datasets with multi-gpu ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: linux - Python version: 3.9.12 - PyArrow version: 9.0.0 @lhoestq I do not have any idea why this freezing happens, and I removed the streaming mode and this was working fine, so I know this is caused by streaming mode of the dataloader part not working well with multi-gpu setting. Since datasets are large, I hope to keep the streamining mode. I very much appreciate your help.
open
https://github.com/huggingface/datasets/issues/5123
2022-10-17T03:28:16
2023-05-14T06:55:20
null
{ "login": "jackfeinmann5", "id": 59409879, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,410,732,403
5,122
Add warning
Fixes: #5105 I think removing the directory with a warning is a better solution for this issue, because if we decide to keep existing files in the directory, then we would have to deal with the case of providing the same directory for several datasets, which we know is not possible since `dataset_info.json` exists in that directory.
closed
https://github.com/huggingface/datasets/pull/5122
2022-10-17T01:30:37
2022-11-05T12:23:53
2022-11-05T12:23:53
{ "login": "Salehbigdeli", "id": 34204311, "type": "User" }
[]
true
[]
1,410,681,067
5,121
Bugfix ignore function when creating new_fingerprint for caching
maybe fixes: #5109
closed
https://github.com/huggingface/datasets/pull/5121
2022-10-17T00:03:43
2022-10-17T12:39:36
2022-10-17T12:39:36
{ "login": "Salehbigdeli", "id": 34204311, "type": "User" }
[]
true
[]
1,410,641,221
5,120
Fix `tqdm` zip bug
This PR solves #5117, by wrapping the entire `zip` clause in tqdm. For more information, please checkout this Stack Overflow thread: https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together
closed
https://github.com/huggingface/datasets/pull/5120
2022-10-16T22:19:18
2022-10-23T10:27:53
2022-10-19T08:53:17
{ "login": "david1542", "id": 9879252, "type": "User" }
[]
true
[]
1,410,561,363
5,119
[TYPO] Update new_dataset_script.py
null
closed
https://github.com/huggingface/datasets/pull/5119
2022-10-16T17:36:49
2022-10-19T09:48:19
2022-10-19T09:45:59
{ "login": "cakiki", "id": 3664563, "type": "User" }
[]
true
[]
1,410,547,373
5,118
Installing `datasets` on M1 computers
## Describe the bug I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`. On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1? ## Steps to reproduce the bug Fresh clone this project (on m1), create a virtualenv and run this: ```python pip install -e ".[dev]" ``` ## Expected results Installation should be smooth, and all the dependencies should be installed on M1. ## Actual results You should receive an error, saying pip couldn't find a version that matches this pattern: ``` tensorflow>=2.3,!=2.6.0,!=2.6.1 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.2.dev0 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
closed
https://github.com/huggingface/datasets/issues/5118
2022-10-16T16:50:08
2022-10-19T09:10:08
2022-10-19T09:10:08
{ "login": "david1542", "id": 9879252, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,409,571,346
5,117
Progress bars turn red and never complete to 100%
## Describe the bug Progress bars after transformative operations turn red and are never completed to 100% ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('rotten_tomatoes', split='test').filter(lambda o: True) ``` ## Expected results The progress bar should reach 100% and be green ## Actual results The progress bar turns red and never reaches 100% ## Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.14 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
closed
https://github.com/huggingface/datasets/issues/5117
2022-10-14T16:12:30
2024-06-19T19:03:42
2022-10-23T12:58:41
{ "login": "echatzikyriakidis", "id": 63857529, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,409,549,471
5,116
Use yaml for issue templates + revamp
Use YAML instead of markdown (more expressive) for the issue templates. In addition, update their structure/fields to be more aligned with Transformers. PS: also removes the "add_dataset" PR template, as we no longer accept such PRs.
closed
https://github.com/huggingface/datasets/pull/5116
2022-10-14T15:53:13
2022-10-19T13:05:49
2022-10-19T13:03:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,409,250,020
5,115
Fix iter_batches
The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size < `max_chunksize`, so `iter_batches` can return batches smaller than the `batch_size` specified by the user. Therefore, batched `map` couldn't always use batches of the right size, e.g. this fails because it runs only on one batch of one element: ```python from datasets import Dataset, concatenate_datasets ds = concatenate_datasets([Dataset.from_dict({"a": [i]}) for i in range(10)]) ds2 = ds.map(lambda _: {}, batched=True) assert list(ds2) == list(ds) ``` This was introduced in https://github.com/huggingface/datasets/pull/5030 Close https://github.com/huggingface/datasets/issues/5111 This will require a patch release along with https://github.com/huggingface/datasets/pull/5113 TODO: - [x] fix tests - [x] add more tests
closed
https://github.com/huggingface/datasets/pull/5115
2022-10-14T12:06:14
2022-10-14T15:02:15
2022-10-14T14:59:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,409,236,738
5,114
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
## Describe the bug The function `load_from_disk` fails when using a remote filesystem because of a wrong temporary path generation in the load_from_disk method of arrow_dataset.py: ```python if is_remote_filesystem(fs): src_dataset_path = extract_path_from_uri(dataset_path) dataset_path = Dataset._build_local_temp_path(src_dataset_path) fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True) ``` If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`. Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice we have train twice). Instead of downloading the remote folder, we should be downloading all the files in the folder for the path to be right: ```python fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True) ``` ## Steps to reproduce the bug ```python fs = gcsfs.GCSFileSystem(**storage_options) dataset = load_from_disk("common_voice_processed") # loading local dataset previously saved locally, works fine dataset.save_to_disk(output_dir, fs=fs) # works fine dataset = load_from_disk(output_dir, fs=fs) # crashes ``` ## Expected results The dataset is loaded ## Actual results FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-2.6.1.dev0 - Platform: mac os monterey 12.5.1 - Python version: 3.8.13 - PyArrow version: pyarrow==9.0.0
open
https://github.com/huggingface/datasets/issues/5114
2022-10-14T11:54:53
2022-11-19T07:13:10
null
{ "login": "bruno-hays", "id": 48770768, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,409,207,607
5,113
Fix filter indices when batched
This PR fixes a bug introduced by: - #5030 Fix #5112.
closed
https://github.com/huggingface/datasets/pull/5113
2022-10-14T11:30:03
2022-10-24T06:21:09
2022-10-14T12:11:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,409,143,409
5,112
Bug with filtered indices
## Describe the bug As reported by @PartiallyTyped (and by @Muennighoff): - https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524 There is an issue with the indices of a filtered dataset. ## Steps to reproduce the bug ```python ds = Dataset.from_dict({"num": [0, 1, 2, 3]}) ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2) assert all(item["num"] % 2 == 0 for item in ds) ``` ## Expected results The indices of the filtered dataset should correspond to the examples with "language" equal to "english". ## Actual results Indices of items with other languages are included in the filtered dataset indices ## Preliminary investigation It seems to be a bug introduced by: - #5030
closed
https://github.com/huggingface/datasets/issues/5112
2022-10-14T10:35:47
2022-10-14T13:55:03
2022-10-14T12:11:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,408,143,170
5,111
map and filter not working properly in multiprocessing with the new release 2.6.0
## Describe the bug When mapping is used on a dataset with more than one process, there is a weird behavior when trying to use `filter`: it seems that only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5.2. In the code below, the data is filtered differently when we increase the `num_proc` used in `map`, although the datasets before and after mapping have identical elements. ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset def preprocess(example): return example ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)]) ds1 = ds.map(preprocess, num_proc=2) ds2 = ds.map(preprocess) # the datasets elements are the same for i in range(len(ds1)): assert ds1[i]==ds2[i] print(f'Target column before filtering {ds1["autogenerated"]}') print(f'Target column before filtering {ds2["autogenerated"]}') print(f"datasets version {datasets.__version__}") ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"]) ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"]) # all elements in the target column are False so they should all be kept, but for ds1 only the first 5=num_samples/num_proc are kept print(ds_filtered_1) print(ds_filtered_2) ``` ``` Target column before filtering [False, False, False, False, False, False, False, False, False, False] Target column before filtering [False, False, False, False, False, False, False, False, False, False] Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 5 }) Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 10 }) ``` ## Expected results Increasing `num_proc` in mapping shouldn't alter filtering. With the previous version 2.5.2 this doesn't happen. ## Actual results Filtering doesn't work properly when we increase `num_proc` in mapping but don't pass the same `num_proc` when calling `filter` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.0 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/5111
2022-10-13T17:00:55
2022-10-17T08:26:59
2022-10-14T14:59:59
{ "login": "loubnabnl", "id": 44069155, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,407,434,706
5,109
Map caching not working for some class methods
## Describe the bug The cache loading is not working as expected for some class methods with a model stored in an attribute. The new fingerprint for `_map_single` is not the same at each run. The hasher generates a different hash for the class method. This comes from the `dumps` function in `datasets.utils.py_utils`, which generates a different dump at each run. ## Steps to reproduce the bug ```python from datasets import load_dataset from transformers import AutoConfig, AutoModel, AutoTokenizer dataset = load_dataset("ethos", "binary") BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2" class Object: def __init__(self): config = AutoConfig.from_pretrained(BASE_MODELNAME) self.bert = AutoModel.from_config(config=config, add_pooling_layer=False) self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME) def tokenize(self, examples): tokenized_texts = self.tok( examples["text"], padding="max_length", truncation=True, max_length=256, ) return tokenized_texts instance = Object() result = dict() for phase in ["train"]: result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2) ``` ## Expected results Load the cache instead of recomputing the result. ## Actual results The result is recomputed from scratch at each run. The cache works fine when deleting the `bert` attribute. ## Environment info - `datasets` version: 2.5.3.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.13 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
closed
https://github.com/huggingface/datasets/issues/5109
2022-10-13T09:12:58
2022-10-17T10:38:45
2022-10-17T10:38:45
{ "login": "Mouhanedg56", "id": 23029765, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,407,044,107
5,108
Fix a typo in arrow_dataset.py
null
closed
https://github.com/huggingface/datasets/pull/5108
2022-10-13T02:33:55
2022-10-14T09:47:28
2022-10-14T09:47:27
{ "login": "yangky11", "id": 5431913, "type": "User" }
[]
true
[]