column                     dtype    lengths or classes
url                        string   lengths 58-61
repository_url             string   1 class
labels_url                 string   lengths 72-75
comments_url               string   lengths 67-70
events_url                 string   lengths 65-68
html_url                   string   lengths 46-51
id                         int64    values 599M-1.83B
node_id                    string   lengths 18-32
number                     int64    values 1-6.09k
title                      string   lengths 1-290
labels                     list     -
state                      string   2 classes
locked                     bool     1 class
milestone                  dict     -
comments                   int64    values 0-54
created_at                 string   lengths 20-20
updated_at                 string   lengths 20-20
closed_at                  string   lengths 20-20
active_lock_reason         null     -
body                       string   lengths 0-228k
reactions                  dict     -
timeline_url               string   lengths 67-70
performed_via_github_app   null     -
state_reason               string   3 classes
draft                      bool     2 classes
pull_request               dict     -
is_pull_request            bool     2 classes
comments_text              list     -
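The records below are GitHub REST API issue objects from the huggingface/datasets repository, one field per line in the column order above. As an orientation only, here is a minimal sketch (assuming network access and the `requests` package, neither of which this dump itself requires) of fetching one of the listed issues and reading the same fields that appear as columns:

```python
import requests

# Fetch one of the issues listed below from the public GitHub API.
# The JSON payload carries the same fields as the columns above
# (url, labels_url, state, created_at, body, reactions, pull_request, ...).
resp = requests.get("https://api.github.com/repos/huggingface/datasets/issues/3421")
resp.raise_for_status()
issue = resp.json()

print(issue["title"], "-", issue["state"])
print(sorted(issue.keys()))
```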
https://api.github.com/repos/huggingface/datasets/issues/3421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3421/comments
https://api.github.com/repos/huggingface/datasets/issues/3421/events
https://github.com/huggingface/datasets/pull/3421
1,077,966,571
PR_kwDODunzps4vuvJK
3,421
Adding mMARCO dataset
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
7
2021-12-13T00:56:43Z
2022-10-03T09:37:15Z
2022-10-03T09:37:15Z
null
Adding mMARCO (v1.1) to HF datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3421/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3421/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3421.diff", "html_url": "https://github.com/huggingface/datasets/pull/3421", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3421.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3421" }
true
[ "Hi @albertvillanova we've made a major overhaul of the loading script including all configurations we're making available. Could you please review it again?", "@albertvillanova :ping_pong: ", "Thanks @lhbonifacio for adding this dataset.\r\nHi there, i got an error about mmarco:\r\nConnectionError: Couldn't re...
https://api.github.com/repos/huggingface/datasets/issues/3447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3447/comments
https://api.github.com/repos/huggingface/datasets/issues/3447/events
https://github.com/huggingface/datasets/issues/3447
1,082,539,790
I_kwDODunzps5Ahj8O
3,447
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2021-12-16T18:51:13Z
2022-02-17T14:16:27Z
2022-02-17T14:16:27Z
null
## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make `datasets` "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download a "custom data configuration" for JSON, even though I had already run the program once and cached all data into the same --cache_dir.

"Downloading" is not an issue when running on local disk, but it often crashes with cloud storage because (1) multiple GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes due to storage throttling. 99% of the time, when the main process releases FileLocker, the file is not actually ready for access in cloud storage yet, which triggers "FileNotFound" errors for all other processes. Another way to resolve the problem would be perfectly reliable cloud storage, but that's out of scope here.

## Steps to reproduce the bug
```
export HF_DATASETS_OFFLINE=1
python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2
```

## Expected results
datasets should stop all "downloading" behavior and reuse the cached JSON configuration. I think the problem is that part of the cache directory path, "default-471372bed4b51b53", is randomly generated and can change if some parameters change, and I didn't find a way to use a fixed path to make datasets reuse the cached data every time.

## Actual results
The logging shows datasets is still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426".
```
12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53
12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426)
Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|██████████████████████████████████████| 2/2 [00:00<00:00, 17623.13it/s]
12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
100%|██████████████████████████████████████| 2/2 [00:00<00:00, 1206.99it/s]
12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums.
12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train
12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation
12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes.
Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data.
100%|██████████████████████████████████████| 2/2 [00:00<00:00, 53.54it/s]
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux
- Python version: 3.8.10
- PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3447/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3447/timeline
null
completed
null
null
false
[ "Hi ! Indeed it says \"downloading and preparing\" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case", "@lhoestq Thank you for explaining. I am sorry but I was not clear abou...
https://api.github.com/repos/huggingface/datasets/issues/2645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2645/comments
https://api.github.com/repos/huggingface/datasets/issues/2645/events
https://github.com/huggingface/datasets/issues/2645
944,374,284
MDU6SXNzdWU5NDQzNzQyODQ=
2,645
load_dataset processing failed with OS error after downloading a dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2021-07-14T12:23:53Z
2021-07-15T09:34:02Z
2021-07-15T09:34:02Z
null
## Describe the bug
After downloading a dataset like opus100, loading it fails with `OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS`.

## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```

## Expected results
There is no error when running load_dataset.

## Actual results
Specify the actual results or traceback.
```
Traceback (most recent call last):
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prep
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 989, in _prepare_split
    example = self.info.features.encode_example(record)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 952, in encode_example
    example = cast_to_python_objects(example)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 219, in cast_to_python_ob
    return _cast_to_python_objects(obj)[0]
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 165, in _cast_to_python_o
    import torch
  File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module>
    _load_global_deps()
  File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/home/anaconda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen: cannot load any more object with static TLS

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "download_hub_opus100.py", line 9, in <module>
    this_dataset = load_dataset('opus100', language_pair)
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepa
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 658, in _download_and_prep
    + str(e)
OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS
```

## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid
- Python version: 3.6.6
- PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2645/timeline
null
completed
null
null
false
[ "Hi ! It looks like an issue with pytorch.\r\n\r\nCould you try to run `import torch` and see if it raises an error ?", "> Hi ! It looks like an issue with pytorch.\r\n> \r\n> Could you try to run `import torch` and see if it raises an error ?\r\n\r\nIt works. Thank you!" ]
https://api.github.com/repos/huggingface/datasets/issues/2928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2928/comments
https://api.github.com/repos/huggingface/datasets/issues/2928/events
https://github.com/huggingface/datasets/pull/2928
997,941,506
PR_kwDODunzps4r0yUb
2,928
Update BibTeX entry
[]
closed
false
null
0
2021-09-16T08:39:20Z
2021-09-16T12:35:34Z
2021-09-16T12:35:34Z
null
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2928/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2928/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2928.diff", "html_url": "https://github.com/huggingface/datasets/pull/2928", "merged_at": "2021-09-16T12:35:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2928.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2928" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5825/comments
https://api.github.com/repos/huggingface/datasets/issues/5825/events
https://github.com/huggingface/datasets/issues/5825
1,697,327,483
I_kwDODunzps5lKyl7
5,825
FileNotFound even though exists
[]
open
false
null
3
2023-05-05T09:49:55Z
2023-05-07T17:43:46Z
null
null
### Describe the bug
I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl which works fine in my web browser, but somehow not with datasets. Am I doing something wrong?
```
Downloading builder script: 100% 2.82k/2.82k [00:00<00:00, 64.2kB/s]
Downloading readme: 100% 12.6k/12.6k [00:00<00:00, 585kB/s]
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
[<ipython-input-2-4b45446a91d5>](https://localhost:8080/#) in <cell line: 4>()
      2 lang = "ur"
      3 fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
----> 4 dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")

6 frames
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions)
    291 if allowed_extensions is not None:
    292     error_msg += f" with any supported extension {list(allowed_extensions)}"
--> 293 raise FileNotFoundError(error_msg)
    294 return sorted(out)
    295

FileNotFoundError: Unable to find 'https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl' at /content/https:/huggingface.co/datasets/bigscience/xP3/resolve/main
```

### Steps to reproduce the bug
```
!pip install -q datasets
from datasets import load_dataset
lang = "ur"
fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
```

### Expected behavior
Correctly downloads.

### Environment info
latest versions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5825/timeline
null
null
null
null
false
[ "Hi! \r\n\r\nThis would only work if `bigscience/xP3` was a no-code dataset, but it isn't (it has a Python builder script).\r\n\r\nBut this should work: \r\n```python\r\nload_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_a...
https://api.github.com/repos/huggingface/datasets/issues/3275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3275/comments
https://api.github.com/repos/huggingface/datasets/issues/3275/events
https://github.com/huggingface/datasets/pull/3275
1,053,698,898
PR_kwDODunzps4uiN9t
3,275
Force data files extraction if download_mode='force_redownload'
[]
closed
false
null
0
2021-11-15T14:00:24Z
2021-11-15T14:45:23Z
2021-11-15T14:45:23Z
null
Avoids weird issues when redownloading a dataset due to cached data not being fully updated. With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 (not a fix, but a workaround) can be fixed as follows: ```python dset = load_dataset(..., download_mode="force_redownload") ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3275/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3275.diff", "html_url": "https://github.com/huggingface/datasets/pull/3275", "merged_at": "2021-11-15T14:45:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3275.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3275" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2047/comments
https://api.github.com/repos/huggingface/datasets/issues/2047/events
https://github.com/huggingface/datasets/pull/2047
830,626,430
MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3
2,047
Multilingual dIalogAct benchMark (miam)
[]
closed
false
null
4
2021-03-12T23:02:55Z
2021-03-23T10:36:34Z
2021-03-19T10:47:13Z
null
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2047/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2047.diff", "html_url": "https://github.com/huggingface/datasets/pull/2047", "merged_at": "2021-03-19T10:47:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/2047.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2047" }
true
[ "Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)", "I will run isort again. Hopefully it resolves the current check_code_quality test failure.", "Once the review period is over, feel free to open a PR to add all the missing information ;)", "Hi! I will follow up right ...
https://api.github.com/repos/huggingface/datasets/issues/630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/630/comments
https://api.github.com/repos/huggingface/datasets/issues/630/events
https://github.com/huggingface/datasets/issues/630
701,636,350
MDU6SXNzdWU3MDE2MzYzNTA=
630
Text dataset not working with large files
[]
closed
false
null
11
2020-09-15T06:02:36Z
2020-09-25T22:21:43Z
2020-09-25T22:21:43Z
null
```
Traceback (most recent call last):
  File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
    main()
  File "examples/language-modeling/run_language_modeling.py", line 262, in main
    get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
  File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset
    dataset = load_dataset("text", data_files=file_path, split='train+test')
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split
    for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
  File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables
    convert_options=self.config.convert_options,
  File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```
**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**

It gives the same message for both 200MB and 10GB .txt files, but not for a 700MB file. Can't upload the files due to size and copyright problems, sorry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/630/timeline
null
completed
null
null
false
[ "Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.", "Can you give us some stats on the data files you use as inputs?", "Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\\n```\r\n\r\nAlso, it gets...
https://api.github.com/repos/huggingface/datasets/issues/1983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1983/comments
https://api.github.com/repos/huggingface/datasets/issues/1983/events
https://github.com/huggingface/datasets/issues/1983
821,746,008
MDU6SXNzdWU4MjE3NDYwMDg=
1,983
The size of CoNLL-2003 is not consistant with the official release.
[]
closed
false
null
4
2021-03-04T04:41:34Z
2022-10-05T13:13:26Z
2022-10-05T13:13:26Z
null
Thanks for sharing the dataset! When I use conll-2003, I have some questions. The statistics of conll-2003 in this repo are: #train 14041, #dev 3250, #test 3453. The official statistics are: #train 14987, #dev 3466, #test 3684. Looking forward to your reply~
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1983/timeline
null
completed
null
null
false
[ "Hi,\r\n\r\nif you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered ...
https://api.github.com/repos/huggingface/datasets/issues/5894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5894/comments
https://api.github.com/repos/huggingface/datasets/issues/5894/events
https://github.com/huggingface/datasets/pull/5894
1,724,774,910
PR_kwDODunzps5RSjot
5,894
Force overwrite existing filesystem protocol
[]
closed
false
null
2
2023-05-24T21:41:53Z
2023-05-25T06:52:08Z
2023-05-25T06:42:33Z
null
Fix #5876
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5894/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5894/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5894.diff", "html_url": "https://github.com/huggingface/datasets/pull/5894", "merged_at": "2023-05-25T06:42:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/5894.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5894" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/3583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3583/comments
https://api.github.com/repos/huggingface/datasets/issues/3583/events
https://github.com/huggingface/datasets/issues/3583
1,105,195,144
I_kwDODunzps5B3_CI
3,583
Add The Medical Segmentation Decathlon Dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc",...
open
false
null
5
2022-01-16T21:42:25Z
2022-03-18T10:44:42Z
null
null
## Adding a Dataset
- **Name:** *The Medical Segmentation Decathlon Dataset*
- **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects.
- **Paper:** [link to the dataset paper if available](https://arxiv.org/abs/2106.05735)
- **Data:** http://medicaldecathlon.com/
- **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key data sets will help increase the supply of HF resources for starting an initial community. (cc @osanseviero @abidlabs)

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3583/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3583/timeline
null
null
null
null
false
[ "Hello! I have recently been involved with a medical image segmentation project myself and was going through the `The Medical Segmentation Decathlon Dataset` as well. \r\nI haven't yet had experience adding datasets to this repository yet but would love to get started. Should I take this issue?\r\nIf yes, I've got ...
https://api.github.com/repos/huggingface/datasets/issues/817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/817/comments
https://api.github.com/repos/huggingface/datasets/issues/817/events
https://github.com/huggingface/datasets/issues/817
739,145,369
MDU6SXNzdWU3MzkxNDUzNjk=
817
Add MRQA dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
1
2020-11-09T15:52:19Z
2020-12-04T15:44:42Z
2020-12-04T15:44:41Z
null
## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task.
- **Paper:** https://arxiv.org/abs/1910.09753
- **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019
- **Motivation:** Out-of-domain generalization is becoming (has become) a de facto evaluation for NLU systems

Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/817/timeline
null
completed
null
null
false
[ "Done! cf #1117 and #1022" ]
https://api.github.com/repos/huggingface/datasets/issues/915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/915/comments
https://api.github.com/repos/huggingface/datasets/issues/915/events
https://github.com/huggingface/datasets/issues/915
753,118,481
MDU6SXNzdWU3NTMxMTg0ODE=
915
Shall we change the hashing to encoding to reduce potential replicated cache files?
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
open
false
null
2
2020-11-30T03:50:46Z
2020-12-24T05:11:49Z
null
null
Hi there. For now, we are using `xxhash` to hash the transformations into a fingerprint, and we save a copy of the processed dataset to disk whenever there is a new hash value. However, some transformations are idempotent or commutative with each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example using `base64.urlsafe_b64encode`. That way, before saving a new copy, we can decode the transformation chain and normalize it so that potential reuse is not missed. As the main targets of this project are really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we could avoid some of these writes. If you are interested in this, I'd love to help :).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/915/timeline
null
null
null
null
false
[ "This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?", "@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equiv...
https://api.github.com/repos/huggingface/datasets/issues/4327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4327/comments
https://api.github.com/repos/huggingface/datasets/issues/4327/events
https://github.com/huggingface/datasets/issues/4327
1,233,840,020
I_kwDODunzps5JiueU
4,327
`wikipedia` pre-processed datasets
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2022-05-12T11:25:42Z
2022-08-31T08:26:57Z
2022-08-31T08:26:57Z
null
## Describe the bug
The [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset readme says that certain subsets are preprocessed. However, it seems like they are not available. When I try to load them it takes a really long time, and it seems like it's processing them.

## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", "20220301.en")
```

## Expected results
The dataset loads.

## Actual results
It takes a very long time to load after downloading (after `Downloading data files: 100%`); it takes hours and then gets killed. Tried `wikipedia.simple` and it got processed after ~30 mins.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4327/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4327/timeline
null
completed
null
null
false
[ "Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 22...
https://api.github.com/repos/huggingface/datasets/issues/1707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1707/comments
https://api.github.com/repos/huggingface/datasets/issues/1707/events
https://github.com/huggingface/datasets/pull/1707
781,507,545
MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2
1,707
Added generated READMEs for datasets that were missing one.
[]
closed
false
null
1
2021-01-07T18:10:06Z
2021-01-18T14:32:33Z
2021-01-18T14:32:33Z
null
This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible. Code is available here for the moment: https://github.com/madlag/datasets_readme_generator . We will move it to a Hugging Face repository and to https://huggingface.co/datasets/card-creator/ later.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1707/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1707/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1707.diff", "html_url": "https://github.com/huggingface/datasets/pull/1707", "merged_at": "2021-01-18T14:32:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1707.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1707" }
true
[ "Looks like we need to trim the ones with too many configs, will look into it tomorrow!" ]
https://api.github.com/repos/huggingface/datasets/issues/3951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3951/comments
https://api.github.com/repos/huggingface/datasets/issues/3951/events
https://github.com/huggingface/datasets/issues/3951
1,171,568,814
I_kwDODunzps5F1Liu
3,951
Forked streaming datasets try to `open` data urls rather than use network
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-03-16T21:21:02Z
2022-06-10T20:47:26Z
2022-06-10T20:47:26Z
null
## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else.

## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
import torch.utils.data

# work around #3950
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
    pass

def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset:
    return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling)

if __name__ == '__main__':
    freeze_support()
    ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
    ds = _ensure_format(ds)
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")
    Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```

## Expected results
I'd expect the dataset to load the url correctly and produce examples.

## Actual results
```
warnings.warn(
***** Running training *****
  Num examples = 8000
  Num Epochs = 9223372036854775807
  Instantaneous batch size per device = 8
  Total train batch size (w. parallel, distributed & accumulation) = 8
  Gradient Accumulation steps = 1
  Total optimization steps = 1000
  0%|          | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module>
    Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
    for step, inputs in enumerate(epoch_iterator):
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
    data.append(next(self.dataset_iter))
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__
    for key, example in self._iter():
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter
    yield from ex_iterable
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__
    yield from self.generate_examples_fn(**self.kwargs)
  File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples
    with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz'

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
  File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15.
  0%|          | 0/1000 [00:02<?, ?it/s]
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3951/timeline
null
completed
null
null
false
[ "Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transf...
https://api.github.com/repos/huggingface/datasets/issues/3007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3007/comments
https://api.github.com/repos/huggingface/datasets/issues/3007/events
https://github.com/huggingface/datasets/pull/3007
1,014,775,450
PR_kwDODunzps4sns-n
3,007
Correct a typo
[]
closed
false
null
0
2021-10-04T06:15:47Z
2021-10-04T09:27:57Z
2021-10-04T09:27:57Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3007/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3007/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3007.diff", "html_url": "https://github.com/huggingface/datasets/pull/3007", "merged_at": "2021-10-04T09:27:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/3007.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3007" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4560/comments
https://api.github.com/repos/huggingface/datasets/issues/4560/events
https://github.com/huggingface/datasets/pull/4560
1,283,558,873
PR_kwDODunzps46TY9n
4,560
Add evaluation metadata to imagenet-1k
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
2
2022-06-24T10:12:41Z
2022-09-23T09:39:53Z
2022-09-23T09:37:03Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4560/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4560/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4560.diff", "html_url": "https://github.com/huggingface/datasets/pull/4560", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4560.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4560" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
https://api.github.com/repos/huggingface/datasets/issues/2282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2282/comments
https://api.github.com/repos/huggingface/datasets/issues/2282/events
https://github.com/huggingface/datasets/pull/2282
870,900,332
MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3
2,282
Initialize imdb dataset from don't stop pretraining paper
[]
closed
false
null
0
2021-04-29T11:17:56Z
2021-04-29T11:43:51Z
2021-04-29T11:43:51Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2282/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2282/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2282.diff", "html_url": "https://github.com/huggingface/datasets/pull/2282", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2282.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2282" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4981/comments
https://api.github.com/repos/huggingface/datasets/issues/4981/events
https://github.com/huggingface/datasets/issues/4981
1,375,086,773
I_kwDODunzps5R9ii1
4,981
Can't create a dataset with `float16` features
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
7
2022-09-15T21:03:24Z
2023-03-22T21:40:09Z
null
null
## Describe the bug
I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error.

The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases?

Thanks!

## Steps to reproduce the bug
All of the following raise the following error with the same exact (as far as I can tell) traceback:

```python
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```

```python
from datasets import Dataset, Features, Value

Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))

import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))

import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```

## Expected results
A dataset with `float16` features is successfully created.

## Actual results
```python
---------------------------------------------------------------------------
ArrowNotImplementedError                  Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
    865 mapping = features.encode_batch(mapping)
    866 mapping = {
    867     col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
    868     for col, data in mapping.items()
    869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
    871 if info.features is None:
    872     info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
    734 @classmethod
    735 def from_pydict(cls, *args, **kwargs):
    736     """
    737     Construct a Table from Arrow arrays or columns
   (...)
    748     :class:`datasets.table.Table`:
    749     """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
    192 # otherwise we can finally use the user's type
    193 elif type is not None:
    194     # We use cast_array_to_feature to support casting to custom types like Audio and Image
    195     # Also, when trying type "string", we don't want to convert integers or floats to "string".
    196     # We only do it if trying_type is False - since this is what the user asks for.
--> 197     out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
    198 return out
    199 except (TypeError, pa.lib.ArrowInvalid) as e:  # handle type errors and overflows

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
   1681     return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
   1682 else:
->  1683     return func(array, *args, **kwargs)

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
   1851     return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
   1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
->  1853     return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
   1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
   1681     return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
   1682 else:
->  1683     return func(array, *args, **kwargs)

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)
   1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
   1761     raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
->  1762 return array.cast(pa_type)
   1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options)
    387 else:
    388     options = CastOptions.safe(target_type)
--> 389 return call_function("cast", [arr], options)

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()

ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```

## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4981/timeline
null
null
null
null
false
[ "Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",...
https://api.github.com/repos/huggingface/datasets/issues/3888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3888/comments
https://api.github.com/repos/huggingface/datasets/issues/3888/events
https://github.com/huggingface/datasets/issues/3888
1,165,435,529
I_kwDODunzps5FdyKJ
3,888
IterableDataset columns and feature types
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" }, { "color": "...
open
false
null
8
2022-03-10T16:19:12Z
2022-11-29T11:39:24Z
null
null
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't have to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`.

However it's often useful to know the column names and types. This helps you know what's inside your dataset without having to manually check a few examples, and it is useful to prepare a processing pipeline or to train models.

Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`

Things we can consider, for each point above:
1.a infer the type automatically from the first samples of the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features`, renaming/removing the corresponding ones

The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution is worth it. Maybe prefetching could also be done only when explicitly asked by the user.

cc @mariosasko @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3888/timeline
null
null
null
null
false
[ "#self-assign", "@alvarobartt I've assigned you the issue since I'm not actively working on it.", "Cool thanks @mariosasko I'll try to fix it in the upcoming days, thanks!", "@lhoestq so in order to address what’s not completed in this issue, do you think it makes sense to add a param `features` to `IterableD...
https://api.github.com/repos/huggingface/datasets/issues/3562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3562/comments
https://api.github.com/repos/huggingface/datasets/issues/3562/events
https://github.com/huggingface/datasets/pull/3562
1,098,341,351
PR_kwDODunzps4wwa44
3,562
Allow multiple task templates of the same type
[]
closed
false
null
0
2022-01-10T20:32:07Z
2022-01-11T14:16:47Z
2022-01-11T14:16:47Z
null
Add support for multiple task templates of the same type. Fixes (partially) #2520. CC: @lewtun
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3562/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3562/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3562.diff", "html_url": "https://github.com/huggingface/datasets/pull/3562", "merged_at": "2022-01-11T14:16:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3562.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3562" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/6030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6030/comments
https://api.github.com/repos/huggingface/datasets/issues/6030/events
https://github.com/huggingface/datasets/pull/6030
1,803,864,744
PR_kwDODunzps5Vd0ZG
6,030
fixed typo in comment
[]
closed
false
null
2
2023-07-13T22:49:57Z
2023-07-14T14:21:58Z
2023-07-14T14:13:38Z
null
This mistake was a bit confusing, so I thought it was worth sending a PR over.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6030/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6030/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6030.diff", "html_url": "https://github.com/huggingface/datasets/pull/6030", "merged_at": "2023-07-14T14:13:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6030.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6030" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/2896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2896/comments
https://api.github.com/repos/huggingface/datasets/issues/2896/events
https://github.com/huggingface/datasets/pull/2896
993,613,113
MDExOlB1bGxSZXF1ZXN0NzMxNzcwMTE3
2,896
add multi-proc in `to_csv`
[]
closed
false
null
2
2021-09-10T21:35:09Z
2021-10-28T05:47:33Z
2021-10-26T16:00:42Z
null
This PR extends the multi-proc method used in #2747 for `to_json` to `to_csv` as well. Results on my machine after benchmarking on the `ascent_kb` dataset (~45% improvement compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_size 425.6553490161896 Time taken on 1 num_proc, 50000 batch_size 623.5897650718689 Time taken on 4 num_proc, 50000 batch_size 380.0402421951294 Time taken on 4 num_proc, 100000 batch_size 361.7168130874634 ``` This is a WIP, as writing tests for this PR is still pending. I'm also exploring [this](https://arrow.apache.org/docs/python/csv.html#incremental-writing) approach, for which I'm using `pyarrow-5.0.0`.
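For illustration, once this lands, usage could look like the sketch below (the file names and parameter values are placeholders, not the benchmark settings above):

```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.csv", split="train")

# Export with multiple processes; num_proc > 1 splits the dataset into shards
# that are written concurrently, mirroring the to_json behaviour from #2747.
ds.to_csv("exported.csv", num_proc=4, batch_size=10_000)
```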
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2896/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2896/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2896.diff", "html_url": "https://github.com/huggingface/datasets/pull/2896", "merged_at": "2021-10-26T16:00:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2896.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2896" }
true
[ "I think you can just add a test `test_dataset_to_csv_multiproc` in `tests/io/test_csv.py` and we'll be good", "Hi @lhoestq, \r\nI've added `test_dataset_to_csv` apart from `test_dataset_to_csv_multiproc` as no test was there to check generated CSV file when `num_proc=1`. Please let me know if anything is also re...
https://api.github.com/repos/huggingface/datasets/issues/4350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4350/comments
https://api.github.com/repos/huggingface/datasets/issues/4350/events
https://github.com/huggingface/datasets/pull/4350
1,235,505,104
PR_kwDODunzps43zKIV
4,350
Add a new metric: CTC_Consistency
[]
closed
false
null
1
2022-05-13T17:31:19Z
2022-05-19T10:23:04Z
2022-05-19T10:23:03Z
null
Add CTC_Consistency metric. Do I also need to modify the `test_metric_common.py` file to make it run in the tests?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4350/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4350/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4350.diff", "html_url": "https://github.com/huggingface/datasets/pull/4350", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4350.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4350" }
true
[ "Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated to a separate library called `evaluate`: https://github.com/huggingface/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. Thank you." ]
https://api.github.com/repos/huggingface/datasets/issues/5413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5413/comments
https://api.github.com/repos/huggingface/datasets/issues/5413/events
https://github.com/huggingface/datasets/issues/5413
1,524,591,837
I_kwDODunzps5a32zd
5,413
concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers
[]
closed
false
null
1
2023-01-08T17:01:52Z
2023-01-26T09:27:21Z
2023-01-26T09:27:21Z
null
### Describe the bug When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails: ``` File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5499, in _concatenate_map_style_datasets table = concat_tables([dset._data for dset in dsets], axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1778, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1483, in from_tables blocks = _extend_blocks(blocks, table_blocks, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1477, in _extend_blocks result[i].extend(row_blocks) IndexError: list index out of range ``` ### Steps to reproduce the bug dataset = concatenate_datasets([dataset1, dataset2], axis = 1) ### Expected behavior The datasets are correctly concatenated. ### Environment info datasets==2.8.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5413/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5413/timeline
null
completed
null
null
false
[ "Hi ! Thanks for reporting :)\r\n\r\nI managed to reproduce the hub using\r\n```python\r\n\r\nfrom datasets import concatenate_datasets, Dataset, load_from_disk\r\n\r\nDataset.from_dict({\"a\": range(9)}).save_to_disk(\"tmp/ds1\")\r\nds1 = load_from_disk(\"tmp/ds1\")\r\nds1 = concatenate_datasets([ds1, ds1])\r\n\r\...
https://api.github.com/repos/huggingface/datasets/issues/1797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1797/comments
https://api.github.com/repos/huggingface/datasets/issues/1797/events
https://github.com/huggingface/datasets/issues/1797
797,357,901
MDU6SXNzdWU3OTczNTc5MDE=
1,797
Connection error
[]
closed
false
null
1
2021-01-30T07:32:45Z
2021-08-04T18:09:37Z
2021-08-04T18:09:37Z
null
Hi, I am hitting the error below; any help would be appreciated, thanks. `train_data = datasets.load_dataset("xsum", split="train")` `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1797/timeline
null
completed
null
null
false
[ "Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4800/comments
https://api.github.com/repos/huggingface/datasets/issues/4800/events
https://github.com/huggingface/datasets/pull/4800
1,331,288,128
PR_kwDODunzps48yIss
4,800
support LargeListArray in pyarrow
[]
open
false
null
17
2022-08-08T03:58:46Z
2022-10-20T16:34:04Z
null
null
```python import numpy as np import datasets a = np.zeros((5000000, 768)) res = datasets.Dataset.from_dict({'embedding': a}) ''' File '/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/arrow_writer.py', line 178, in __arrow_array__ out = numpy_to_pyarrow_listarray(data) File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/features/features.py", line 1173, in numpy_to_pyarrow_listarray offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32()) File "pyarrow/array.pxi", line 312, in pyarrow.lib.array File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Integer value 2147483904 not in range: -2147483648 to 2147483647 ''' ``` Loading a large numpy array currently raises the error above as the type of offsets is `int32`. And pyarrow has supported [LargeListArray](https://arrow.apache.org/docs/python/generated/pyarrow.LargeListArray.html) for this case.
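For context, this is roughly the PyArrow type the fix relies on: `LargeListArray` uses 64-bit offsets, so it can index past the 2,147,483,647 limit of `int32` offsets (a standalone sketch, not the actual patch):

```python
import numpy as np
import pyarrow as pa

values = pa.array(np.zeros(10, dtype=np.float64))
# int64 offsets instead of int32: each entry marks where a row's values start/end
offsets = pa.array(np.array([0, 5, 10], dtype=np.int64), type=pa.int64())
large_list = pa.LargeListArray.from_arrays(offsets, values)
print(large_list.type)  # large_list<item: double>
```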
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4800/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4800.diff", "html_url": "https://github.com/huggingface/datasets/pull/4800", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4800" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4800). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this! Can you run `make style` at the repo root to fix the code quality error in CI and add a test?", "Hi, I have fixed...
https://api.github.com/repos/huggingface/datasets/issues/4184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4184/comments
https://api.github.com/repos/huggingface/datasets/issues/4184/events
https://github.com/huggingface/datasets/pull/4184
1,208,592,669
PR_kwDODunzps42cB2j
4,184
[Librispeech] Add 'all' config
[]
closed
false
null
27
2022-04-19T16:27:56Z
2022-08-29T06:35:57Z
2022-04-22T09:45:17Z
null
Add `"all"` config to Librispeech. Closes #4179
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4184/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4184/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4184.diff", "html_url": "https://github.com/huggingface/datasets/pull/4184", "merged_at": "2022-04-22T09:45:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/4184.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4184" }
true
[ "Fix https://github.com/huggingface/datasets/issues/4179", "_The documentation is not available anymore as the PR was closed or merged._", "Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do st...
https://api.github.com/repos/huggingface/datasets/issues/4408
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4408/comments
https://api.github.com/repos/huggingface/datasets/issues/4408/events
https://github.com/huggingface/datasets/pull/4408
1,248,687,574
PR_kwDODunzps44ecNI
4,408
Update imagenet gate
[]
closed
false
null
1
2022-05-25T20:32:19Z
2022-05-25T20:45:11Z
2022-05-25T20:36:47Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4408/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4408.diff", "html_url": "https://github.com/huggingface/datasets/pull/4408", "merged_at": "2022-05-25T20:36:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/4408.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4408" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1446/comments
https://api.github.com/repos/huggingface/datasets/issues/1446/events
https://github.com/huggingface/datasets/pull/1446
761,060,323
MDExOlB1bGxSZXF1ZXN0NTM1Nzg1NDk1
1,446
Add Bing Coronavirus Query Set
[]
closed
false
null
0
2020-12-10T09:20:46Z
2020-12-11T17:03:08Z
2020-12-11T17:03:07Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1446/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1446/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1446.diff", "html_url": "https://github.com/huggingface/datasets/pull/1446", "merged_at": "2020-12-11T17:03:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1446.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1446" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5155/comments
https://api.github.com/repos/huggingface/datasets/issues/5155/events
https://github.com/huggingface/datasets/pull/5155
1,421,278,748
PR_kwDODunzps5BcCYr
5,155
TextConfig: added "errors"
[]
closed
false
null
3
2022-10-24T18:56:52Z
2022-11-03T13:38:13Z
2022-11-03T13:35:35Z
null
This patch adds the ability to set the `errors` option of `open` when loading text datasets. I needed it because some data I had scraped contained bad bytes, so I needed `errors='ignore'`.
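As a usage sketch (assuming the new field is passed through `load_dataset`'s config kwargs like the existing `encoding` option, and with a placeholder file name):

```python
from datasets import load_dataset

# "errors" is forwarded to the built-in open(); it accepts the standard
# Python error handlers such as "strict", "ignore" or "replace".
ds = load_dataset(
    "text",
    data_files="scraped.txt",
    encoding="utf-8",
    errors="ignore",  # skip undecodable bytes instead of raising
    split="train",
)
```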
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5155/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5155/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5155.diff", "html_url": "https://github.com/huggingface/datasets/pull/5155", "merged_at": "2022-11-03T13:35:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/5155.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5155" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)", "[**@lhoestq**](https://github.com/lhoestq) commented on [Oct 27, 2022, 4:08 PM GMT+3:30](https://github.com/huggingface/datase...
https://api.github.com/repos/huggingface/datasets/issues/2171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2171/comments
https://api.github.com/repos/huggingface/datasets/issues/2171/events
https://github.com/huggingface/datasets/pull/2171
851,090,662
MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw
2,171
Fixed the link to wikiauto training data.
[]
closed
false
null
3
2021-04-06T07:13:11Z
2021-04-06T16:05:42Z
2021-04-06T16:05:09Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2171/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2171/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2171.diff", "html_url": "https://github.com/huggingface/datasets/pull/2171", "merged_at": "2021-04-06T16:05:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2171.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2171" }
true
[ "Also you can ignore the CI failing on `docs`, this has been fixed on master :)", "@lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR!", "Ok !" ]
https://api.github.com/repos/huggingface/datasets/issues/1357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1357/comments
https://api.github.com/repos/huggingface/datasets/issues/1357/events
https://github.com/huggingface/datasets/pull/1357
760,023,525
MDExOlB1bGxSZXF1ZXN0NTM0OTIzMzA4
1,357
Youtube caption corrections
[]
closed
false
null
10
2020-12-09T05:52:34Z
2020-12-15T18:12:56Z
2020-12-15T18:12:56Z
null
This PR adds a new dataset of YouTube captions, errors, and corrections. This dataset was created in just the last week, inspired by this sprint!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1357/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1357/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1357.diff", "html_url": "https://github.com/huggingface/datasets/pull/1357", "merged_at": "2020-12-15T18:12:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1357.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1357" }
true
[ "Sorry about forgetting flake8.\r\nRather than use up the circleci resources on a new push with only formatting changes, I will wait to push until the results from all tests finish and/or any feedback comes in... probably tomorrow for me.", "\r\nSo... my normal work is with mercurial and seem to have clearly fork...
https://api.github.com/repos/huggingface/datasets/issues/3318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3318/comments
https://api.github.com/repos/huggingface/datasets/issues/3318/events
https://github.com/huggingface/datasets/pull/3318
1,062,369,717
PR_kwDODunzps4u9m-k
3,318
Finish transition to PyArrow 3.0.0
[]
closed
false
null
0
2021-11-24T12:30:14Z
2021-11-24T15:35:05Z
2021-11-24T15:35:04Z
null
Finish transition to PyArrow 3.0.0 that was started in #3098.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3318/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3318/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3318.diff", "html_url": "https://github.com/huggingface/datasets/pull/3318", "merged_at": "2021-11-24T15:35:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3318.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3318" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2609/comments
https://api.github.com/repos/huggingface/datasets/issues/2609/events
https://github.com/huggingface/datasets/pull/2609
939,616,682
MDExOlB1bGxSZXF1ZXN0Njg1ODA3MTMz
2,609
Fix potential DuplicatedKeysError
[]
closed
false
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
1
2021-07-08T08:38:04Z
2021-07-12T14:13:16Z
2021-07-09T16:42:08Z
null
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote it as a good practice that keys be generated programmatically so they are unique, instead of being read from the data (which might not be unique).
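A small illustration of the practice being promoted here (a hypothetical `_generate_examples`, not code from this PR): derive the key from a running index rather than from a field in the data.

```python
def _generate_examples(self, filepath):
    """Yield (key, example) pairs with programmatically unique keys."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            # enumerate() guarantees unique keys; an ID column read from the
            # file might contain duplicates and raise DuplicatedKeysError.
            yield idx, {"text": line.rstrip("\n")}
```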
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2609/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2609/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2609.diff", "html_url": "https://github.com/huggingface/datasets/pull/2609", "merged_at": "2021-07-09T16:42:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2609.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2609" }
true
[ "Finally, I'm splitting this PR." ]
https://api.github.com/repos/huggingface/datasets/issues/1487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1487/comments
https://api.github.com/repos/huggingface/datasets/issues/1487/events
https://github.com/huggingface/datasets/pull/1487
762,794,921
MDExOlB1bGxSZXF1ZXN0NTM3MzA2MTEx
1,487
added conv_ai_3 dataset
[]
closed
false
null
4
2020-12-11T19:26:26Z
2020-12-28T09:38:40Z
2020-12-28T09:38:39Z
null
Dataset: https://github.com/aliannejadi/ClariQ/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1487/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1487.diff", "html_url": "https://github.com/huggingface/datasets/pull/1487", "merged_at": "2020-12-28T09:38:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1487" }
true
[ "@lhoestq Thank you for suggesting changes. I fixed all the changes you suggested. Can you please review it again? ", "@lhoestq Thank you for reviewing and suggesting changes. I made the requested changes. Can you please review it again?", "Thanks @lhoestq for reviewing it again. I made the required changes. Ca...
https://api.github.com/repos/huggingface/datasets/issues/1089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1089/comments
https://api.github.com/repos/huggingface/datasets/issues/1089/events
https://github.com/huggingface/datasets/pull/1089
756,823,690
MDExOlB1bGxSZXF1ZXN0NTMyMzA0MDM2
1,089
add sharc_modified
[]
closed
false
null
0
2020-12-04T05:49:49Z
2020-12-04T10:41:30Z
2020-12-04T10:31:44Z
null
Adding modified ShARC dataset https://github.com/nikhilweee/neural-conv-qa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1089/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1089/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1089.diff", "html_url": "https://github.com/huggingface/datasets/pull/1089", "merged_at": "2020-12-04T10:31:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/1089.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1089" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2876/comments
https://api.github.com/repos/huggingface/datasets/issues/2876/events
https://github.com/huggingface/datasets/pull/2876
990,001,079
MDExOlB1bGxSZXF1ZXN0NzI4NjU3MDc2
2,876
Extend support for streaming datasets that use pathlib.Path.glob
[]
closed
false
null
2
2021-09-07T13:43:45Z
2021-09-10T09:50:49Z
2021-09-10T09:50:48Z
null
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2876/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2876.diff", "html_url": "https://github.com/huggingface/datasets/pull/2876", "merged_at": "2021-09-10T09:50:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2876.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2876" }
true
[ "I am thinking that ideally we should call `fs.glob()` instead...", "Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs." ]
https://api.github.com/repos/huggingface/datasets/issues/5994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5994/comments
https://api.github.com/repos/huggingface/datasets/issues/5994/events
https://github.com/huggingface/datasets/pull/5994
1,776,829,004
PR_kwDODunzps5UB1cA
5,994
Fix select_columns columns order
[]
closed
false
null
4
2023-06-27T12:32:46Z
2023-06-27T15:40:47Z
2023-06-27T15:32:43Z
null
Fix the order of the columns in `dataset.features` when the order changes with `dataset.select_columns()`. I also fixed the same issue for `dataset.flatten()`. Closes https://github.com/huggingface/datasets/issues/5993
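A quick sketch of the behavior being fixed, with toy data and the output expected after this fix:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1], "b": ["x"], "c": [0.5]})
reordered = ds.select_columns(["c", "a"])
# The features should follow the requested order, not the original one.
print(list(reordered.features))  # expected: ['c', 'a']
```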
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5994/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5994.diff", "html_url": "https://github.com/huggingface/datasets/pull/5994", "merged_at": "2023-06-27T15:32:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/5994.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5994" }
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
https://api.github.com/repos/huggingface/datasets/issues/2984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2984/comments
https://api.github.com/repos/huggingface/datasets/issues/2984/events
https://github.com/huggingface/datasets/issues/2984
1,010,484,326
I_kwDODunzps48OsRm
2,984
Exceeded maximum rows when reading large files
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2021-09-29T04:49:22Z
2021-10-12T06:05:42Z
2021-10-12T06:05:42Z
null
## Describe the bug A clear and concise description of what the bug is. When using `load_dataset` with json files, if the files are too large, there will be "Exceeded maximum rows" error. ## Steps to reproduce the bug ```python dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a single file ``` ## Expected results No error ## Actual results ``` ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 134 with open(file, encoding="utf-8") as f: --> 135 dataset = json.load(f) 136 except json.JSONDecodeError: ~/anaconda3/envs/python/lib/python3.9/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, ~/anaconda3/envs/python/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: ~/anaconda3/envs/python/lib/python3.9/json/decoder.py in decode(self, s, _w) 339 if end != len(s): --> 340 raise JSONDecodeError("Extra data", s, end) 341 return obj JSONDecodeError: Extra data: line 2 column 1 (char 20321) During handling of the above exception, another exception occurred: ArrowInvalid Traceback (most recent call last) <ipython-input-20-ab3718a6482f> in <module> ----> 1 dataset = load_dataset('json', data_files=data_files) ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 841 842 # Download and prepare data --> 843 builder_instance.download_and_prepare( 844 download_config=download_config, 845 download_mode=download_mode, ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 606 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 607 if not downloaded_from_gcs: --> 608 self._download_and_prepare( 609 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 610 ) ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 684 try: 685 # Prepare split will record examples associated to the split --> 686 self._prepare_split(split_generator, **prepare_split_kwargs) 687 except OSError as e: 688 raise OSError( ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1153 generator = self._generate_tables(**split_generator.gen_kwargs) 1154 with ArrowWriter(features=self.info.features, path=fpath) as writer: -> 1155 for key, table in utils.tqdm( 1156 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET) 1157 ): ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 135 dataset = json.load(f) 136 except json.JSONDecodeError: --> 137 raise e 138 raise ValueError( 139 f"Not able to read records in the JSON file at {file}. 
" ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 114 while True: 115 try: --> 116 pa_table = paj.read_json( 117 BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) 118 ) ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Exceeded maximum rows ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: 3.9 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2984/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2984/timeline
null
completed
null
null
false
[ "Hi @zijwang, thanks for reporting this issue.\r\n\r\nYou did not mention which `datasets` version you are using, but looking at the code in the stack trace, it seems you are using an old version.\r\n\r\nCould you please update `datasets` (`pip install -U datasets`) and check if the problem persists?" ]
https://api.github.com/repos/huggingface/datasets/issues/2730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2730/comments
https://api.github.com/repos/huggingface/datasets/issues/2730/events
https://github.com/huggingface/datasets/issues/2730
955,987,834
MDU6SXNzdWU5NTU5ODc4MzQ=
2,730
Update CommonVoice with new release
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
3
2021-07-29T15:59:59Z
2021-08-07T16:19:19Z
null
null
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220). - **Paper:** https://discourse.mozilla.org/t/common-voice-2021-mid-year-dataset-release/83812 - **Data:** https://commonvoice.mozilla.org/en/datasets - **Motivation:** More data and more varied. I think we just need to add configs in the existing dataset script. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2730/timeline
null
null
null
null
false
[ "cc @patrickvonplaten?", "Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj \r\n", "Also see...
https://api.github.com/repos/huggingface/datasets/issues/2806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2806/comments
https://api.github.com/repos/huggingface/datasets/issues/2806/events
https://github.com/huggingface/datasets/pull/2806
971,625,449
MDExOlB1bGxSZXF1ZXN0NzEzMzM5NDUw
2,806
Fix streaming tar files from canonical datasets
[]
closed
false
null
5
2021-08-16T11:10:28Z
2021-10-13T09:04:03Z
2021-10-13T09:04:02Z
null
Previous PR #2800 implemented support for streaming remote tar files when passing the parameter `data_files`: it required a glob string `"*"`. However, this glob string raises an error when streaming canonical datasets (with a `join` after the `open`). This PR fixes this issue and allows streaming tar files both from: - canonical dataset scripts and - data files. This PR also adds support for compressed tar files: `.tar.gz`, `.tar.bz2`,...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2806/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2806/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2806.diff", "html_url": "https://github.com/huggingface/datasets/pull/2806", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2806.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2806" }
true
[ "In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n...
https://api.github.com/repos/huggingface/datasets/issues/1488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1488/comments
https://api.github.com/repos/huggingface/datasets/issues/1488/events
https://github.com/huggingface/datasets/pull/1488
762,860,679
MDExOlB1bGxSZXF1ZXN0NTM3MzY1ODUz
1,488
Adding NELL
[]
closed
false
null
2
2020-12-11T20:25:25Z
2021-01-07T08:37:07Z
2020-12-21T14:45:00Z
null
NELL is a knowledge base and knowledge graph along with sentences used to create the KB. See http://rtw.ml.cmu.edu/rtw/ for more details.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1488/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1488.diff", "html_url": "https://github.com/huggingface/datasets/pull/1488", "merged_at": "2020-12-21T14:44:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1488.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1488" }
true
[ "hi @lhoestq, I wanted to push another change to this branch b/c I found a bug in the parsing. I need to swap arg1 and arg2. I tried to git push -u origin nell but it didn't work. So I tried to do git push --force -u origin nell which seems to work, but nothing is happening to this branch. I think this is because i...
https://api.github.com/repos/huggingface/datasets/issues/1333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1333/comments
https://api.github.com/repos/huggingface/datasets/issues/1333/events
https://github.com/huggingface/datasets/pull/1333
759,687,836
MDExOlB1bGxSZXF1ZXN0NTM0NjQ4OTI4
1,333
Add Tanzil Dataset
[]
closed
false
null
0
2020-12-08T18:45:15Z
2020-12-10T11:17:56Z
2020-12-10T11:14:43Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1333/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1333/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1333.diff", "html_url": "https://github.com/huggingface/datasets/pull/1333", "merged_at": "2020-12-10T11:14:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/1333.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1333" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/855/comments
https://api.github.com/repos/huggingface/datasets/issues/855/events
https://github.com/huggingface/datasets/pull/855
743,690,839
MDExOlB1bGxSZXF1ZXN0NTIxNTQ2Njkx
855
Fix kor nli csv reader
[]
closed
false
null
0
2020-11-16T09:53:41Z
2020-11-16T13:59:14Z
2020-11-16T13:59:12Z
null
The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason. I fixed that by iterating through the lines directly instead of using a csv reader. I also changed the feature names to match the other NLI datasets (i.e. use "premise", "hypothesis", "label" features) Fix #821
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/855/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/855/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/855.diff", "html_url": "https://github.com/huggingface/datasets/pull/855", "merged_at": "2020-11-16T13:59:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/855.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/855" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2847/comments
https://api.github.com/repos/huggingface/datasets/issues/2847/events
https://github.com/huggingface/datasets/pull/2847
981,589,693
MDExOlB1bGxSZXF1ZXN0NzIxNjA3OTA0
2,847
fix regex to accept negative timezone
[]
closed
false
null
0
2021-08-27T20:54:05Z
2021-09-13T20:39:50Z
2021-09-07T09:34:23Z
null
fix #2846
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2847/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2847.diff", "html_url": "https://github.com/huggingface/datasets/pull/2847", "merged_at": "2021-09-07T09:34:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2847.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2847" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/773/comments
https://api.github.com/repos/huggingface/datasets/issues/773/events
https://github.com/huggingface/datasets/issues/773
731,684,153
MDU6SXNzdWU3MzE2ODQxNTM=
773
Adding CC-100: Monolingual Datasets from Web Crawl Data
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
4
2020-10-28T18:20:41Z
2022-01-26T13:22:54Z
2020-12-14T10:20:07Z
null
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/773/timeline
null
completed
null
null
false
[ "cc @aconneau ;) ", "These dataset files are no longer available. https://data.statmt.org/cc-100/ files provided in this link are no longer available. Can anybody fix that issue?\r\n@abhishekkrthakur @yjernite ", "Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)", "...
https://api.github.com/repos/huggingface/datasets/issues/3337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3337/comments
https://api.github.com/repos/huggingface/datasets/issues/3337/events
https://github.com/huggingface/datasets/issues/3337
1,066,232,936
I_kwDODunzps4_jWxo
3,337
Typing of Dataset.__getitem__ could be improved.
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2021-11-29T16:20:11Z
2021-12-14T10:28:54Z
2021-12-14T10:28:54Z
null
## Describe the bug The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload) ## Steps to reproduce the bug Let's have a file `test.py` ```python from typing import List, Dict, Any from datasets import Dataset ds = Dataset.from_dict({ 'a': [1,2,3], 'b': ["1", "2", "3"] }) one_colum: List[str] = ds['a'] some_index: Dict[Any, Any] = ds[1] ``` ## Expected results Running `mypy test.py` should not give any error. ## Actual results ``` test.py:10: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "List[str]") test.py:11: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "Dict[Any, Any]") Found 2 errors in 1 file (checked 1 source file) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.1
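For illustration, the kind of `typing.overload` stubs being suggested might look like this (a sketch, not the signatures that were eventually added):

```python
from typing import Any, Dict, List, Union, overload

class Dataset:
    @overload
    def __getitem__(self, key: str) -> List[Any]: ...       # column access
    @overload
    def __getitem__(self, key: int) -> Dict[str, Any]: ...  # row access
    def __getitem__(self, key: Union[str, int]) -> Union[List[Any], Dict[str, Any]]:
        raise NotImplementedError  # placeholder body for the sketch
```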
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3337/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3337/timeline
null
completed
null
null
false
[ "Hi ! Thanks for the suggestion, I didn't know about this decorator.\r\n\r\nIf you are interesting in contributing, feel free to open a pull request to add the overload methods for each typing combination :) To assign you to this issue, you can comment `#self-assign` in this thread.\r\n\r\n`Dataset.__getitem__` is ...
https://api.github.com/repos/huggingface/datasets/issues/6
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6/comments
https://api.github.com/repos/huggingface/datasets/issues/6/events
https://github.com/huggingface/datasets/issues/6
600,330,836
MDU6SXNzdWU2MDAzMzA4MzY=
6
Error when citation is not given in the DatasetInfo
[]
closed
false
null
3
2020-04-15T14:14:54Z
2020-04-29T09:23:22Z
2020-04-29T09:23:22Z
null
The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__ citation_pprint = _indent('"""{}"""'.format(self.citation.strip())) AttributeError: 'NoneType' object has no attribute 'strip' ``` I propose to do the following change in the `info.py` file. The method: ```python def __repr__(self): splits_pprint = _indent("\n".join(["{"] + [ " '{}': {},".format(k, split.num_examples) for k, split in sorted(self.splits.items()) ] + ["}"])) features_pprint = _indent(repr(self.features)) citation_pprint = _indent('"""{}"""'.format(self.citation.strip())) return INFO_STR.format( name=self.name, version=self.version, description=self.description, total_num_examples=self.splits.total_num_examples, features=features_pprint, splits=splits_pprint, citation=citation_pprint, homepage=self.homepage, supervised_keys=self.supervised_keys, # Proto add a \n that we strip. license=str(self.license).strip()) ``` Becomes: ```python def __repr__(self): splits_pprint = _indent("\n".join(["{"] + [ " '{}': {},".format(k, split.num_examples) for k, split in sorted(self.splits.items()) ] + ["}"])) features_pprint = _indent(repr(self.features)) ## the strip is done only is the citation is given citation_pprint = self.citation if self.citation: citation_pprint = _indent('"""{}"""'.format(self.citation.strip())) return INFO_STR.format( name=self.name, version=self.version, description=self.description, total_num_examples=self.splits.total_num_examples, features=features_pprint, splits=splits_pprint, citation=citation_pprint, homepage=self.homepage, supervised_keys=self.supervised_keys, # Proto add a \n that we strip. license=str(self.license).strip()) ``` And now it is ok. @thomwolf are you ok with this fix?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6/timeline
null
completed
null
null
false
[ "Yes looks good to me.\r\nNote that we may refactor quite strongly the `info.py` to make it a lot simpler (it's very complicated for basically a dictionary of info I think)", "No, problem ^^ It might just be a temporary fix :)", "Fixed." ]
https://api.github.com/repos/huggingface/datasets/issues/2748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2748/comments
https://api.github.com/repos/huggingface/datasets/issues/2748/events
https://github.com/huggingface/datasets/pull/2748
958,889,041
MDExOlB1bGxSZXF1ZXN0NzAyMDg4NTk4
2,748
Generate metadata JSON for wikihow dataset
[]
closed
false
null
0
2021-08-03T08:55:40Z
2021-08-03T10:17:51Z
2021-08-03T10:17:51Z
null
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2748/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2748.diff", "html_url": "https://github.com/huggingface/datasets/pull/2748", "merged_at": "2021-08-03T10:17:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/2748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2748" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1205/comments
https://api.github.com/repos/huggingface/datasets/issues/1205/events
https://github.com/huggingface/datasets/pull/1205
757,942,403
MDExOlB1bGxSZXF1ZXN0NTMzMjA4NDI1
1,205
add lst20 with manual download
[]
closed
false
null
2
2020-12-06T14:49:10Z
2020-12-09T16:33:10Z
2020-12-09T16:33:10Z
null
passed on local: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20 ``` Not sure how to test: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20 ``` ``` LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand. It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries. At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with 16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is considered large enough for developing joint neural models for NLP. Manually download at https://aiforthai.in.th/corpus.php ```
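Since the corpus has to be downloaded manually, loading would presumably follow the usual manual-download pattern, pointing `data_dir` at the extracted corpus (the path below is a placeholder):

```python
from datasets import load_dataset

# After registering at aiforthai.in.th and extracting the corpus locally:
ds = load_dataset("lst20", data_dir="/path/to/LST20_Corpus")
```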
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1205/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1205/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1205.diff", "html_url": "https://github.com/huggingface/datasets/pull/1205", "merged_at": "2020-12-09T16:33:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/1205.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1205" }
true
[ "The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead", "@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https://github.com/huggingface/datasets/pull/1253 too. Will fix them before review." ]
https://api.github.com/repos/huggingface/datasets/issues/5883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5883/comments
https://api.github.com/repos/huggingface/datasets/issues/5883/events
https://github.com/huggingface/datasets/pull/5883
1,719,527,597
PR_kwDODunzps5RAkYi
5,883
Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset`
[]
closed
false
null
29
2023-05-22T11:51:07Z
2023-06-08T11:09:03Z
2023-06-06T16:49:15Z
null
## What's in this PR? This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, to convert a 🤗HuggingFace Dataset as a TensorFlow Dataset. The main bug solved in this PR comes with the string-encoding, since for safety purposes the internal conversion of `numpy.arrays` when `dtype` is unicode/string, is to convert it into `numpy.bytes`, more information in the docstring of https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#L210. That's triggered when using `tensorflow.numpy_function` as it's applying another type cast besides the one that `datasets` does, so the casting is applied at least twice per entry/batch. So this means that the definition of the `numpy.unicode_` dtype when the data in the batch is a string, is ignored, and replaced by `numpy.bytes_`. Besides that, some other minor things have been fixed: * Made `batch_size` an optional parameter in `to_tf_dataset` * Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map` * Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy` * Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf` ## What's missing in this PR? I can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5883/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5883/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5883.diff", "html_url": "https://github.com/huggingface/datasets/pull/5883", "merged_at": "2023-06-06T16:49:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5883.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5883" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n\r...
https://api.github.com/repos/huggingface/datasets/issues/410
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/410/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/410/comments
https://api.github.com/repos/huggingface/datasets/issues/410/events
https://github.com/huggingface/datasets/pull/410
659,242,871
MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3
410
20newsgroup
[]
closed
false
null
0
2020-07-17T13:07:57Z
2020-07-20T07:05:29Z
2020-07-20T07:05:28Z
null
Add 20Newsgroup dataset. #353
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/410/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/410/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/410.diff", "html_url": "https://github.com/huggingface/datasets/pull/410", "merged_at": "2020-07-20T07:05:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/410.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/410" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4835/comments
https://api.github.com/repos/huggingface/datasets/issues/4835/events
https://github.com/huggingface/datasets/pull/4835
1,336,994,835
PR_kwDODunzps49FJg9
4,835
Fix documentation card of ethos dataset
[]
closed
false
null
1
2022-08-12T09:51:06Z
2022-08-12T13:13:55Z
2022-08-12T12:59:39Z
null
Fix documentation card of ethos dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4835/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4835/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4835.diff", "html_url": "https://github.com/huggingface/datasets/pull/4835", "merged_at": "2022-08-12T12:59:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/4835.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4835" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/6059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6059/comments
https://api.github.com/repos/huggingface/datasets/issues/6059/events
https://github.com/huggingface/datasets/issues/6059
1,816,537,176
I_kwDODunzps5sRihY
6,059
Provide ability to load label mappings from file
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
0
2023-07-22T02:04:19Z
2023-07-22T02:04:19Z
null
null
### Feature request My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works find for classification of a handful of labels but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file. It is possible to pass a file to ClassLabel but I cannot see an easy way of using this with `GeneratorBasedBuilder` since `self._info` is called before the `dl_manager` is constructed so even if my dataset contains say `label_mappings.json` there's no way of loading it in order to construct the `datasets.DatasetInfo` I can see other uses to accessing the `download_manager` from `self._info` - i.e. if the files contain a schema (i.e. `arrow` or `parquet` files) the `datasets.DatasetInfo` could be inferred. The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it. ``` class TestDatasetBuilder(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "text": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["label_1", "label_2"]), } ), task_templates=[TextClassification(text_column="text", label_column="label")], ) def _split_generators(self, dl_manager): train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL) test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}), datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}), ] def _generate_examples(self, filepath): """Generate AG News examples.""" with open(filepath, encoding="utf-8") as csv_file: csv_reader = csv.DictReader(csv_file) for id_, row in enumerate(csv_reader): yield id_, row ``` ### Motivation Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset. ### Your contribution I'm willing to work on a PR with guidence.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6059/timeline
null
null
null
null
false
[]
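A hedged sketch of the workaround space for issue #6059 above: reading the label names from a local JSON file before building `datasets.Features`. The file name and JSON layout are assumptions for illustration, not part of the `datasets` API.

```python
# Sketch only: assumes a local label_mappings.json shaped like {"label_1": 0, "label_2": 1, ...}
import json
import datasets

def load_label_names(path: str = "label_mappings.json") -> list:
    with open(path, encoding="utf-8") as f:
        mapping = json.load(f)
    # order the names by their integer id so the ClassLabel ids stay stable
    return [name for name, _ in sorted(mapping.items(), key=lambda kv: kv[1])]

features = datasets.Features(
    {
        "text": datasets.Value("string"),
        "label": datasets.ClassLabel(names=load_label_names()),
    }
)
```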
https://api.github.com/repos/huggingface/datasets/issues/1276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1276/comments
https://api.github.com/repos/huggingface/datasets/issues/1276/events
https://github.com/huggingface/datasets/pull/1276
758,965,936
MDExOlB1bGxSZXF1ZXN0NTM0MDQyODYy
1,276
add One Million Posts Corpus
[]
closed
false
null
1
2020-12-08T00:50:08Z
2020-12-11T18:28:18Z
2020-12-11T18:28:18Z
null
- **Name:** One Million Posts Corpus - **Description:** The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). - **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711 - **Data:** https://github.com/OFAI/million-post-corpus - **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations. ### Checkbox - [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [X] Fill the `_DESCRIPTION` and `_CITATION` variables - [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [X] Generate the metadata file `dataset_infos.json` for all configurations - [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [X] Both tests for the real data and the dummy data pass.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1276/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1276.diff", "html_url": "https://github.com/huggingface/datasets/pull/1276", "merged_at": "2020-12-11T18:28:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1276.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1276" }
true
[ "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/1100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1100/comments
https://api.github.com/repos/huggingface/datasets/issues/1100/events
https://github.com/huggingface/datasets/pull/1100
756,998,433
MDExOlB1bGxSZXF1ZXN0NTMyNDQ2ODc1
1,100
Urdu fake news
[]
closed
false
null
0
2020-12-04T10:41:20Z
2020-12-04T11:19:00Z
2020-12-04T11:19:00Z
null
Added the Bend the Truth Urdu fake news dataset. More information <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.

{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1100/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1100/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1100.diff", "html_url": "https://github.com/huggingface/datasets/pull/1100", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1100.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1100" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4969/comments
https://api.github.com/repos/huggingface/datasets/issues/4969/events
https://github.com/huggingface/datasets/pull/4969
1,369,334,740
PR_kwDODunzps4-wPOk
4,969
Fix data URL and metadata of vivos dataset
[]
closed
false
null
1
2022-09-12T06:12:34Z
2022-09-12T07:16:15Z
2022-09-12T07:14:19Z
null
After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130 This PR updates their data URL and some metadata (homepage, citation and license). Fix #4936.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4969/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4969/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4969.diff", "html_url": "https://github.com/huggingface/datasets/pull/4969", "merged_at": "2022-09-12T07:14:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/4969.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4969" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2663/comments
https://api.github.com/repos/huggingface/datasets/issues/2663/events
https://github.com/huggingface/datasets/issues/2663
946,552,273
MDU6SXNzdWU5NDY1NTIyNzM=
2,663
[`to_json`] add multi-proc sharding support
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
2
2021-07-16T19:41:50Z
2021-09-13T13:56:37Z
2021-09-13T13:56:37Z
null
As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally in `to_json` via `num_proc` argument. I guess `num_proc` will be the number of shards? I think the user will need to use this feature wisely, since too many processes writing to say normal style HD is likely to be slower than one process. I'm not sure whether the user should be responsible to concatenate the shards at the end or `datasets`, either way works for my needs. The code I was using: ``` from multiprocessing import cpu_count, Process, Queue [...] filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count()) DATASET_NAME = "oscar" SHARDS = 10 def process_shard(idx): print(f"Sharding {idx}") ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True) # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling print(f"Saving {DATASET_NAME}-{idx}.jsonl") ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False) queue = Queue() processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)] for p in processes: p.start() for p in processes: p.join() ``` Thank you! @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2663/timeline
null
completed
null
null
false
[ "Hi @stas00, \r\nI want to work on this issue and I was thinking why don't we use `imap` [in this loop](https://github.com/huggingface/datasets/blob/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84/src/datasets/io/json.py#L99)? This way, using offset (which is being used to slice the pyarrow table) we can convert pyarrow ...
https://api.github.com/repos/huggingface/datasets/issues/2351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2351/comments
https://api.github.com/repos/huggingface/datasets/issues/2351/events
https://github.com/huggingface/datasets/pull/2351
889,584,953
MDExOlB1bGxSZXF1ZXN0NjQyNzI5NDIz
2,351
simplify faiss index save
[]
closed
false
null
0
2021-05-12T03:54:10Z
2021-05-17T13:41:41Z
2021-05-17T13:41:41Z
null
Fixes #2350 In some cases, Faiss GPU index objects have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on CPU. In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transforms cause it. I propose, instead of using the index object to get the device, to infer it from the `FaissIndex.device` field, as is done in `.add_vectors`. Here we assume that `.device` always corresponds to the index placement, which seems reasonable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2351/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2351/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2351.diff", "html_url": "https://github.com/huggingface/datasets/pull/2351", "merged_at": "2021-05-17T13:41:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2351.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2351" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1268/comments
https://api.github.com/repos/huggingface/datasets/issues/1268/events
https://github.com/huggingface/datasets/pull/1268
758,871,252
MDExOlB1bGxSZXF1ZXN0NTMzOTY0OTQ4
1,268
new pr for Turkish NER
[]
closed
false
null
3
2020-12-07T21:40:26Z
2020-12-09T13:45:05Z
2020-12-09T13:45:05Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1268/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1268.diff", "html_url": "https://github.com/huggingface/datasets/pull/1268", "merged_at": "2020-12-09T13:45:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/1268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1268" }
true
[ "Can you run `make style` to fix the code format ?\r\n\r\nAlso it looks like the file `file_downloaded/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.DUMP` is missing inside the dummy_data.zip\r\n\r\n\r\n(note that `TWNERTC_TC_Coarse...
https://api.github.com/repos/huggingface/datasets/issues/3271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3271/comments
https://api.github.com/repos/huggingface/datasets/issues/3271/events
https://github.com/huggingface/datasets/pull/3271
1,053,482,919
PR_kwDODunzps4uhgi1
3,271
Decode audio from remote
[]
closed
false
null
0
2021-11-15T10:25:56Z
2021-11-16T11:35:58Z
2021-11-16T11:35:58Z
null
Currently the Audio feature type can only decode local audio files, not remote files. To fix this, I replaced `open` with our `xopen` function, which is compatible with remote files, in audio.py. cc @albertvillanova @mariosasko
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3271/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3271.diff", "html_url": "https://github.com/huggingface/datasets/pull/3271", "merged_at": "2021-11-16T11:35:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3271.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3271" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2725/comments
https://api.github.com/repos/huggingface/datasets/issues/2725/events
https://github.com/huggingface/datasets/pull/2725
955,020,776
MDExOlB1bGxSZXF1ZXN0Njk4ODMwNjYw
2,725
Pass use_auth_token to request_etags
[]
closed
false
null
0
2021-07-28T16:13:29Z
2021-07-28T16:38:02Z
2021-07-28T16:38:02Z
null
Fix #2724.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2725/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2725/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2725.diff", "html_url": "https://github.com/huggingface/datasets/pull/2725", "merged_at": "2021-07-28T16:38:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2725.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2725" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4663/comments
https://api.github.com/repos/huggingface/datasets/issues/4663/events
https://github.com/huggingface/datasets/pull/4663
1,299,298,693
PR_kwDODunzps47H19n
4,663
Add text decorators
[]
closed
false
null
1
2022-07-08T17:51:48Z
2022-07-18T18:33:14Z
2022-07-18T18:20:49Z
null
This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. The goal is to make it easier for users to discover these guides! ![underline](https://user-images.githubusercontent.com/59462357/178044392-9596693e-9a4a-479a-a282-f1edbd90be1a.png) TODO: - [x] Open PR to support new Tailwind classes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4663/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4663.diff", "html_url": "https://github.com/huggingface/datasets/pull/4663", "merged_at": "2022-07-18T18:20:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/4663.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4663" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/3478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3478/comments
https://api.github.com/repos/huggingface/datasets/issues/3478/events
https://github.com/huggingface/datasets/pull/3478
1,087,860,180
PR_kwDODunzps4wPMWq
3,478
Extend support for streaming datasets that use os.walk
[]
closed
false
null
1
2021-12-23T16:42:55Z
2021-12-24T10:50:20Z
2021-12-24T10:50:19Z
null
This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function. This PR adds support for streaming mode to datasets: 1. autshumato 1. code_x_glue_cd_code_to_text 1. code_x_glue_tc_nl_code_search_adv 1. nchlt CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3478/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3478.diff", "html_url": "https://github.com/huggingface/datasets/pull/3478", "merged_at": "2021-12-24T10:50:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3478.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3478" }
true
[ "Nice. I'll update the dataset viewer once merged, and test on these four datasets" ]
https://api.github.com/repos/huggingface/datasets/issues/1429
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1429/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1429/comments
https://api.github.com/repos/huggingface/datasets/issues/1429/events
https://github.com/huggingface/datasets/pull/1429
760,737,818
MDExOlB1bGxSZXF1ZXN0NTM1NTE5MjY5
1,429
extract rar files
[]
closed
false
null
0
2020-12-09T23:01:10Z
2020-12-18T15:03:37Z
2020-12-18T15:03:37Z
null
Unfortunately, I didn't find any native Python libraries for extracting rar files, so the user has to install `unrar` manually (e.g. `sudo apt-get install unrar`). Discussion with @yjernite is in the Slack channel.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1429/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1429/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1429.diff", "html_url": "https://github.com/huggingface/datasets/pull/1429", "merged_at": "2020-12-18T15:03:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/1429.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1429" }
true
[]
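A small sketch of the kind of extraction helper PR #1429 above needs, shelling out to the system `unrar` binary; this is an illustration under the assumption that `unrar` is installed, not the PR's actual implementation.

```python
import os
import shutil
import subprocess

def extract_rar(archive_path: str, output_dir: str) -> None:
    """Extract a .rar archive by calling the external `unrar` tool."""
    if shutil.which("unrar") is None:
        raise OSError("`unrar` not found; install it, e.g. `sudo apt-get install unrar`.")
    os.makedirs(output_dir, exist_ok=True)
    dest = os.path.join(output_dir, "")  # trailing separator so unrar treats it as a folder
    # "x" keeps the directory structure stored in the archive; "-o+" overwrites existing files
    subprocess.run(["unrar", "x", "-o+", archive_path, dest], check=True)
```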
https://api.github.com/repos/huggingface/datasets/issues/2463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2463/comments
https://api.github.com/repos/huggingface/datasets/issues/2463/events
https://github.com/huggingface/datasets/pull/2463
915,454,788
MDExOlB1bGxSZXF1ZXN0NjY1MjY3NTA2
2,463
Fix proto_qa download link
[]
closed
false
null
0
2021-06-08T20:23:16Z
2021-06-10T12:49:56Z
2021-06-10T08:31:10Z
null
Fixes #2459 Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2463/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2463.diff", "html_url": "https://github.com/huggingface/datasets/pull/2463", "merged_at": "2021-06-10T08:31:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2463" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4174/comments
https://api.github.com/repos/huggingface/datasets/issues/4174/events
https://github.com/huggingface/datasets/pull/4174
1,205,575,941
PR_kwDODunzps42SnJS
4,174
Fix when map function modifies input in-place
[]
closed
false
null
1
2022-04-15T13:23:15Z
2022-04-15T14:52:07Z
2022-04-15T14:45:58Z
null
When `function` modifies its input in place, the guarantee that columns in `remove_columns` are contained in `input` no longer holds. Therefore we need to relax the way we pop elements by checking whether that column exists.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4174/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4174/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4174.diff", "html_url": "https://github.com/huggingface/datasets/pull/4174", "merged_at": "2022-04-15T14:45:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/4174.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4174" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/191/comments
https://api.github.com/repos/huggingface/datasets/issues/191/events
https://github.com/huggingface/datasets/pull/191
624,394,936
MDExOlB1bGxSZXF1ZXN0NDIyODI3MDMy
191
[Squad es] add dataset_infos
[]
closed
false
null
0
2020-05-25T16:35:52Z
2020-05-25T16:39:59Z
2020-05-25T16:39:58Z
null
@mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/191/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/191/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/191.diff", "html_url": "https://github.com/huggingface/datasets/pull/191", "merged_at": "2020-05-25T16:39:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/191.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/191" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/180/comments
https://api.github.com/repos/huggingface/datasets/issues/180/events
https://github.com/huggingface/datasets/pull/180
622,556,861
MDExOlB1bGxSZXF1ZXN0NDIxMzk5Nzg2
180
Add hall of fame
[]
closed
false
null
0
2020-05-21T14:53:48Z
2020-05-22T16:35:16Z
2020-05-22T16:35:14Z
null
powered by https://github.com/sourcerer-io/hall-of-fame
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/180/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/180/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/180.diff", "html_url": "https://github.com/huggingface/datasets/pull/180", "merged_at": "2020-05-22T16:35:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/180.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/180" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2143/comments
https://api.github.com/repos/huggingface/datasets/issues/2143/events
https://github.com/huggingface/datasets/pull/2143
844,313,228
MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0
2,143
task casting via load_dataset
[]
closed
false
null
0
2021-03-30T10:00:42Z
2021-06-11T13:20:41Z
2021-06-11T13:20:36Z
null
WIP: not satisfied with the API yet; as a dataset implementer, it means I need to write a boilerplate function and classes for each `<dataset><task>` "facet".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2143/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2143/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2143.diff", "html_url": "https://github.com/huggingface/datasets/pull/2143", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2143.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2143" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4048/comments
https://api.github.com/repos/huggingface/datasets/issues/4048/events
https://github.com/huggingface/datasets/issues/4048
1,183,804,576
I_kwDODunzps5Gj2yg
4,048
Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "descript...
closed
false
null
3
2022-03-28T18:12:04Z
2022-04-08T12:29:30Z
2022-04-08T12:29:30Z
null
## Describe the bug When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m. Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata. Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first. ## Steps to reproduce the bug ```python load_dataset('amazon_us_reviews', 'PC_v1_00') ``` ## Expected results Dataset is downloaded and extracted successfully. ## Actual results A split size exception is thrown. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4048/timeline
null
completed
null
null
false
[ "Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.", "Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSize...
https://api.github.com/repos/huggingface/datasets/issues/4512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4512/comments
https://api.github.com/repos/huggingface/datasets/issues/4512/events
https://github.com/huggingface/datasets/pull/4512
1,273,378,129
PR_kwDODunzps45xEDN
4,512
Add links to vision tasks scripts in ADD_NEW_DATASET template
[]
closed
false
null
2
2022-06-16T10:35:35Z
2022-07-08T14:07:50Z
2022-07-08T13:56:23Z
null
Add links to vision dataset scripts in the ADD_NEW_DATASET template.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4512/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4512.diff", "html_url": "https://github.com/huggingface/datasets/pull/4512", "merged_at": "2022-07-08T13:56:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/4512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4512" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI failure is unrelated to the PR's changes. Merging." ]
https://api.github.com/repos/huggingface/datasets/issues/3686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3686/comments
https://api.github.com/repos/huggingface/datasets/issues/3686/events
https://github.com/huggingface/datasets/issues/3686
1,127,137,290
I_kwDODunzps5DLsAK
3,686
`Translation` features cannot be `flatten`ed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-02-08T11:33:48Z
2022-03-18T17:28:13Z
2022-03-18T17:28:13Z
null
## Describe the bug (`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8] ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("europa_ecdc_tm", "en2fr", split="train[:10]") print(dataset.features) # {'translation': Translation(languages=['en', 'fr'], id=None)} print(dataset[0]) # {'translation': {'en': 'Vaccination against hepatitis C is not yet available.', 'fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.'}} dataset.flatten() ``` ## Expected results `dataset.flatten` should flatten the `Translation` column as if it were a dict of `Value("string")` ```python dataset[0] # {'translation.en': 'Vaccination against hepatitis C is not yet available.', 'translation.fr': 'Aucune vaccination contre l’hépatite C n’est encore disponible.' } dataset.features # {'translation.en': Value("string"), 'translation.fr': Value("string")} ``` ## Actual results ```python In [31]: dset.flatten() --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-31-bb88eb5276ee> in <module> ----> 1 dset.flatten() [...]\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms [...]\site-packages\datasets\arrow_dataset.py in flatten(self, new_fingerprint, max_depth) 1294 break 1295 dataset.info.features = self.features.flatten(max_depth=max_depth) -> 1296 dataset._data = update_metadata_with_features(dataset._data, dataset.features) 1297 logger.info(f'Flattened dataset from depth {depth} to depth {1 if depth + 1 < max_depth else "unknown"}.') 1298 dataset._fingerprint = new_fingerprint [...]\site-packages\datasets\arrow_dataset.py in update_metadata_with_features(table, features) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) [...]\site-packages\datasets\arrow_dataset.py in <dictcomp>(.0) 534 def update_metadata_with_features(table: Table, features: Features): 535 """To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.""" --> 536 features = Features({col_name: features[col_name] for col_name in table.column_names}) 537 if table.schema.metadata is None or b"huggingface" not in table.schema.metadata: 538 pa_metadata = ArrowWriter._build_metadata(DatasetInfo(features=features)) KeyError: 'translation.en' ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3686/timeline
null
completed
null
null
false
[ "Thanks for reporting, @SBrandeis! Some additional feature types that don't behave as expected when flattened: `Audio`, `Image` and `TranslationVariableLanguages`" ]
https://api.github.com/repos/huggingface/datasets/issues/3835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3835/comments
https://api.github.com/repos/huggingface/datasets/issues/3835/events
https://github.com/huggingface/datasets/issues/3835
1,161,029,205
I_kwDODunzps5FM-ZV
3,835
The link given on the gigaword does not work
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2022-03-07T07:56:42Z
2022-03-15T12:30:23Z
2022-03-15T12:30:23Z
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3835/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3835/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3668/comments
https://api.github.com/repos/huggingface/datasets/issues/3668/events
https://github.com/huggingface/datasets/issues/3668
1,122,261,736
I_kwDODunzps5C5Fro
3,668
Couldn't cast array of type string error with cast_column
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
5
2022-02-02T18:33:29Z
2022-07-19T13:36:24Z
2022-07-19T13:36:24Z
null
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264037/152214027-9c42a71a-dd24-463c-a346-57e0287e5a8f.png) This was working with datasets version 1.17.1.dev0 but now with version 1.18.3 produces the error above. ## Steps to reproduce the bug load dataset: ![image](https://user-images.githubusercontent.com/25264037/152216145-159553b6-cddc-4f0b-8607-7e76b600e22a.png) remove columns: ![image](https://user-images.githubusercontent.com/25264037/152214707-7c7e89d1-87d8-4b4f-8cfc-5d7223d35644.png) run my fix_path function. This also creates the audio column that is referring to the absolute file path of the audio ![image](https://user-images.githubusercontent.com/25264037/152214773-51f71ccf-d31b-4449-b63a-1af56436e49f.png) Then I concatenate few other datasets and finally try the cast_column method ![image](https://user-images.githubusercontent.com/25264037/152215032-f341ec86-9d6d-48c9-943b-e2efe37a4d98.png) but get error: ![image](https://user-images.githubusercontent.com/25264037/152215073-b85bd057-98e8-413c-9b05-51e9805f2c24.png) ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: OVH Cloud, AI Training section, container for Huggingface Robust Speech Recognition event image(baaastijn/ovh_huggingface) ![image](https://user-images.githubusercontent.com/25264037/152215161-b4ff7bfb-2736-4afb-9223-761a3338d23c.png) - Python version: 3.8.8 - PyArrow version: ![image](https://user-images.githubusercontent.com/25264037/152215936-4d365760-557e-456b-b5eb-ad1d15cf5073.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3668/timeline
null
completed
null
null
false
[ "Hi ! I wasn't able to reproduce the error, are you still experiencing this ? I tried calling `cast_column` on a string column containing paths.\r\n\r\nIf you manage to share a reproducible code example that would be perfect", "Hi,\r\n\r\nI think my team mate got this solved. Clolsing it for now and will reopen i...
https://api.github.com/repos/huggingface/datasets/issues/3432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3432/comments
https://api.github.com/repos/huggingface/datasets/issues/3432/events
https://github.com/huggingface/datasets/pull/3432
1,079,910,769
PR_kwDODunzps4v1NGS
3,432
Correctly indent builder config in dataset script docs
[]
closed
false
null
0
2021-12-14T15:39:47Z
2021-12-14T17:35:17Z
2021-12-14T17:35:17Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3432/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3432/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3432.diff", "html_url": "https://github.com/huggingface/datasets/pull/3432", "merged_at": "2021-12-14T17:35:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/3432.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3432" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4218/comments
https://api.github.com/repos/huggingface/datasets/issues/4218/events
https://github.com/huggingface/datasets/pull/4218
1,214,748,226
PR_kwDODunzps42vTA0
4,218
Make code for image downloading from image urls cacheable
[]
closed
false
null
1
2022-04-25T16:17:59Z
2022-04-26T17:00:24Z
2022-04-26T13:38:26Z
null
Fix #4199
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4218/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4218/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4218.diff", "html_url": "https://github.com/huggingface/datasets/pull/4218", "merged_at": "2022-04-26T13:38:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/4218.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4218" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/242/comments
https://api.github.com/repos/huggingface/datasets/issues/242/events
https://github.com/huggingface/datasets/issues/242
631,733,683
MDU6SXNzdWU2MzE3MzM2ODM=
242
UnicodeDecodeError when downloading GLUE-MNLI
[]
closed
false
null
2
2020-06-05T16:30:01Z
2020-06-09T16:06:47Z
2020-06-08T08:45:03Z
null
When I run ```python dataset = nlp.load_dataset('glue', 'mnli') ``` I get an encoding error (could it be because I'm using Windows?) : ```python # Lots of error log lines later... ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\5256cc2368cf84497abef1f1a5f66648522d5854b225162148cb8fc78a5a91cc\glue.py in _generate_examples(self, data_file, split, mrpc_files) 529 --> 530 for n, row in enumerate(reader): 531 if is_cola_non_test: ~\Miniconda3\envs\nlp\lib\csv.py in __next__(self) 110 self.fieldnames --> 111 row = next(self.reader) 112 self.line_num = self.reader.line_num ~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final) 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 6744: character maps to <undefined> ``` Anyway this can be solved by specifying to decode in UTF when reading the csv file. I am proposing a PR if that's okay.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/242/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/242/timeline
null
completed
null
null
false
[ "It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure", "On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts wou...
https://api.github.com/repos/huggingface/datasets/issues/2613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2613/comments
https://api.github.com/repos/huggingface/datasets/issues/2613/events
https://github.com/huggingface/datasets/pull/2613
940,759,852
MDExOlB1bGxSZXF1ZXN0Njg2Nzg0MzY0
2,613
Use ndarray.item instead of ndarray.tolist
[]
closed
false
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
0
2021-07-09T13:19:35Z
2021-07-12T14:12:57Z
2021-07-09T13:50:05Z
null
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#numpy-ndarray-item PS. Sorry for the duplicate work here. I should have read the numpy docs more carefully in #2612
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2613/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2613/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2613.diff", "html_url": "https://github.com/huggingface/datasets/pull/2613", "merged_at": "2021-07-09T13:50:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2613.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2613" }
true
[]
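A quick illustration of the distinction behind PR #2613 above: for a zero-dimensional NumPy array both calls return a Python scalar, but `item` says what is meant.

```python
import numpy as np

x = np.array(5)          # 0-d array
assert x.item() == 5     # returns the scalar directly
assert x.tolist() == 5   # also returns a scalar here, despite the name
```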
https://api.github.com/repos/huggingface/datasets/issues/1937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1937/comments
https://api.github.com/repos/huggingface/datasets/issues/1937/events
https://github.com/huggingface/datasets/issues/1937
815,163,943
MDU6SXNzdWU4MTUxNjM5NDM=
1,937
CommonGen dataset page shows an error OSError: [Errno 28] No space left on device
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
2
2021-02-24T06:47:33Z
2021-02-26T11:10:06Z
2021-02-26T11:10:06Z
null
The page of the CommonGen data https://huggingface.co/datasets/viewer/?dataset=common_gen shows ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1937/timeline
null
completed
null
null
false
[ "Facing the same issue for [Squad](https://huggingface.co/datasets/viewer/?dataset=squad) and [TriviaQA](https://huggingface.co/datasets/viewer/?dataset=trivia_qa) datasets as well.", "We just fixed the issue, thanks for reporting !" ]
https://api.github.com/repos/huggingface/datasets/issues/4247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4247/comments
https://api.github.com/repos/huggingface/datasets/issues/4247/events
https://github.com/huggingface/datasets/issues/4247
1,218,320,882
I_kwDODunzps5Inhny
4,247
The data preview of XGLUE
[]
closed
false
null
3
2022-04-28T07:30:50Z
2022-04-29T08:23:28Z
2022-04-28T16:08:03Z
null
It seems that something is wrong with the data preview of XGLUE.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4247/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4247/timeline
null
completed
null
null
false
[ "![image](https://user-images.githubusercontent.com/49108847/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n", "Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for XGLUE dataset (...
https://api.github.com/repos/huggingface/datasets/issues/1351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1351/comments
https://api.github.com/repos/huggingface/datasets/issues/1351/events
https://github.com/huggingface/datasets/pull/1351
759,902,770
MDExOlB1bGxSZXF1ZXN0NTM0ODI0NTcw
1,351
added craigslist_bargains
[]
closed
false
null
0
2020-12-09T01:02:31Z
2020-12-10T14:14:34Z
2020-12-10T14:14:34Z
null
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/) (cleaned-up version of #1278)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1351/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1351/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1351.diff", "html_url": "https://github.com/huggingface/datasets/pull/1351", "merged_at": "2020-12-10T14:14:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/1351.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1351" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2761/comments
https://api.github.com/repos/huggingface/datasets/issues/2761/events
https://github.com/huggingface/datasets/issues/2761
961,568,287
MDU6SXNzdWU5NjE1NjgyODc=
2,761
Error loading C4 realnewslike dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
4
2021-08-05T08:16:58Z
2021-08-08T19:44:34Z
2021-08-08T19:44:34Z
null
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s]Traceback (most recent call last): File "run_mlm_tf.py", line 794, in <module> main() File "run_mlm_tf.py", line 425, in main raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset builder_instance.download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare self._download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] ## Environment info - `datasets` version: 1.10.2 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2761/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2761/timeline
null
completed
null
null
false
[ "Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.", "@bhavitvyamalik @lhoestq ...
https://api.github.com/repos/huggingface/datasets/issues/5524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5524/comments
https://api.github.com/repos/huggingface/datasets/issues/5524/events
https://github.com/huggingface/datasets/pull/5524
1,580,219,454
PR_kwDODunzps5JvbMw
5,524
[INVALID PR]
[]
closed
false
null
1
2023-02-10T19:35:50Z
2023-02-10T19:51:45Z
2023-02-10T19:49:12Z
null
Hi to whoever is reading this! 🤗 ## What's in this PR? ~~Basically, I've removed the 🤗`datasets` installation as `python -m pip install ".[quality]"` in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose e.g. to check that the Python package installation succeeds before running the tests over the matrix of os?~~ ~~So I just wanted to check whether the time was reduced doing this (which I assume it will), plus whether this is something that can be improved, or just discarded in case you're also using that step to make sure that the package can be installed.~~ ## What's missing? ~~I was just wondering whether you consider replacing `isort` and `flake8` with `ruff` (if possible), since it's way faster, more information at [`ruff`](https://github.com/charliermarsh/ruff). Before creating this PR the average time of the `check_code_quality` job was around 40s.~~ ## Edit Sorry for the inconvenience this may have caused, didn't realise that the config is defined in `setup.cfg` and `pyproject.toml`, so running those without installing the Python package leads to failure, my bad 😞
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5524/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5524.diff", "html_url": "https://github.com/huggingface/datasets/pull/5524", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5524.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5524" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5541/comments
https://api.github.com/repos/huggingface/datasets/issues/5541/events
https://github.com/huggingface/datasets/issues/5541
1,588,633,555
I_kwDODunzps5esJ_T
5,541
Flattening indices in selected datasets is extremely inefficient
[]
closed
false
null
3
2023-02-17T01:52:24Z
2023-02-22T13:15:20Z
2023-02-17T11:12:33Z
null
### Describe the bug If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down the operations on the flat dataset, e.g., saving/loading the dataset to disk becomes really slow. Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping. ### Steps to reproduce the bug The following script reproduces the issue: ```python import gc import os import psutil import tempfile import time from datasets import Dataset DATASET_SIZE = 5000000 def profile(func): def wrapper(*args, **kwargs): mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) start = time.time() # Run function here out = func(*args, **kwargs) end = time.time() mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s") return out return wrapper def main(): ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)]) print(f"Num chunks for original ds: {ds.data['col'].num_chunks}") with tempfile.TemporaryDirectory() as tmpdir: path1 = os.path.join(tmpdir, 'ds1') print("Original ds save/load") profile(ds.save_to_disk)(path1) ds_loaded = profile(Dataset.load_from_disk)(path1) print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}") print("") ds_select = ds.select(reversed(range(len(ds)))) print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}") del ds del ds_loaded gc.collect() # This would happen anyway when we call save_to_disk ds_select = profile(ds_select.flatten_indices)() print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}") print("") path2 = os.path.join(tmpdir, 'ds2') print("Selected ds save/load") profile(ds_select.save_to_disk)(path2) del ds_select gc.collect() ds_select_loaded = profile(Dataset.load_from_disk)(path2) print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}") if __name__ == '__main__': main() ``` Sample result: ``` Num chunks for original ds: 1 Original ds save/load save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s Num chunks for original ds after reloading: 5000 Num chunks for selected ds: 1 flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s Num chunks for selected ds after flattening: 5000000 Selected ds save/load save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s Num chunks for selected ds after reloading: 5000000 ``` ### Expected behavior Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping. ### Environment info - `datasets` version: 2.9.1.dev0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5541/timeline
null
completed
null
null
false
[ "Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -...
https://api.github.com/repos/huggingface/datasets/issues/5385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5385/comments
https://api.github.com/repos/huggingface/datasets/issues/5385/events
https://github.com/huggingface/datasets/issues/5385
1,508,535,532
I_kwDODunzps5Z6mzs
5,385
Is `fs=` deprecated in `load_from_disk()` as well?
[]
closed
false
null
3
2022-12-22T21:00:45Z
2023-01-23T10:50:05Z
2023-01-23T10:50:04Z
null
### Describe the bug The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec: https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340 Is there a reason the same thing shouldn't also apply to `datasets.load.load_from_disk()`? https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/load.py#L1779 ### Steps to reproduce the bug n/a ### Expected behavior n/a ### Environment info n/a
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5385/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5385/timeline
null
completed
null
null
false
[ "Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR? ", "> Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR?\r\n\r\nYeah I can do that sometime next week. Should the storage_options be a new arg here? I’ll look around for anywh...
https://api.github.com/repos/huggingface/datasets/issues/2653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2653/comments
https://api.github.com/repos/huggingface/datasets/issues/2653/events
https://github.com/huggingface/datasets/issues/2653
945,102,321
MDU6SXNzdWU5NDUxMDIzMjE=
2,653
Add SD task for SUPERB
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "closed_at": "2021-09-02T05:34:03Z", "closed_issues": 2, "created_at": "2021-07-09T05:49:00Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/7", "id": 6931350, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/7/labels", "node_id": "MDk6TWlsZXN0b25lNjkzMTM1MA==", "number": 7, "open_issues": 0, "state": "closed", "title": "1.11", "updated_at": "2021-09-02T05:34:03Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/7" }
2
2021-07-15T07:51:40Z
2021-08-04T17:03:52Z
2021-08-04T17:03:52Z
null
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [ ] README: tags + description sections Related to #2619. cc: @lewtun
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2653/timeline
null
completed
null
null
false
[ "Note that this subset requires us to:\r\n\r\n* generate the LibriMix corpus from LibriSpeech\r\n* prepare the corpus for diarization\r\n\r\nAs suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data\r\n...
https://api.github.com/repos/huggingface/datasets/issues/8
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/8/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/8/comments
https://api.github.com/repos/huggingface/datasets/issues/8/events
https://github.com/huggingface/datasets/pull/8
601,783,243
MDExOlB1bGxSZXF1ZXN0NDA0OTg0NDUz
8
Fix issue 6: error when the citation is missing in the DatasetInfo
[]
closed
false
null
0
2020-04-17T08:04:26Z
2020-04-29T09:27:11Z
2020-04-20T13:24:12Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/8/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/8/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/8.diff", "html_url": "https://github.com/huggingface/datasets/pull/8", "merged_at": "2020-04-20T13:24:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/8.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/8" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1356/comments
https://api.github.com/repos/huggingface/datasets/issues/1356/events
https://github.com/huggingface/datasets/pull/1356
759,994,457
MDExOlB1bGxSZXF1ZXN0NTM0ODk3OTQ1
1,356
Add StackOverflow StackSample dataset
[]
closed
false
null
5
2020-12-09T04:59:51Z
2020-12-21T14:48:21Z
2020-12-21T14:48:21Z
null
This PR adds the StackOverflow StackSample dataset from Kaggle: https://www.kaggle.com/stackoverflow/stacksample Ran through all of the steps. However, since my dataset requires manually downloading the data, I was unable to run the pytest on the real dataset (the dummy data pytest passed).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1356/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1356/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1356.diff", "html_url": "https://github.com/huggingface/datasets/pull/1356", "merged_at": "2020-12-21T14:48:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1356.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1356" }
true
[ "@lhoestq Thanks for the review and suggestions! I've added your comments and pushed the changes. I'm having issues with the dummy data still. When I run the dummy data test\r\n\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample\r\n```\r\nI g...
https://api.github.com/repos/huggingface/datasets/issues/4694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4694/comments
https://api.github.com/repos/huggingface/datasets/issues/4694/events
https://github.com/huggingface/datasets/issues/4694
1,306,958,380
I_kwDODunzps5N5pos
4,694
Distributed data parallel training for streaming datasets
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
6
2022-07-17T01:29:43Z
2023-04-26T18:21:09Z
null
null
### Feature request Is there any documentation for using `load_dataset(streaming=True)` in (multi-node, multi-GPU) DDP training? ### Motivation Given a bunch of data files, they are expected to be split across different GPUs. Is there a guide or documentation? ### Your contribution Does it require manually splitting the data files for each worker in `DatasetBuilder._split_generator()`? What is `IterableDatasetShard` expected to do?
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4694/timeline
null
null
null
null
false
[ "Hi ! According to https://huggingface.co/docs/datasets/use_with_pytorch#stream-data you can use the pytorch DataLoader with `num_workers>0` to distribute the shards across your workers (it uses `torch.utils.data.get_worker_info()` to get the worker ID and select the right subsets of shards to use)\r\n\r\n<s> EDIT:...
https://api.github.com/repos/huggingface/datasets/issues/3129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3129/comments
https://api.github.com/repos/huggingface/datasets/issues/3129/events
https://github.com/huggingface/datasets/pull/3129
1,032,234,167
PR_kwDODunzps4tezlA
3,129
Support Audio feature for TAR archives in sequential access
[]
closed
false
null
7
2021-10-21T08:56:51Z
2021-11-17T17:42:08Z
2021-11-17T17:42:07Z
null
Add Audio feature support for TAR archived files in sequential access. Fix #3128.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3129/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3129/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3129.diff", "html_url": "https://github.com/huggingface/datasets/pull/3129", "merged_at": "2021-11-17T17:42:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3129.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3129" }
true
[ "Also do you think we can adapt `cast_column` to keep the same value for this new parameter when the user only wants to change the sampling rate ?", "Thanks for your comments, @lhoestq, I will address them afterwards.\r\n\r\nBut, I think it is more important/urgent first address the current blocking non-passing t...
https://api.github.com/repos/huggingface/datasets/issues/4169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4169/comments
https://api.github.com/repos/huggingface/datasets/issues/4169/events
https://github.com/huggingface/datasets/issues/4169
1,203,995,869
I_kwDODunzps5Hw4Td
4,169
Timit_asr dataset cannot be previewed recently
[]
closed
false
null
5
2022-04-14T03:28:31Z
2023-02-03T04:54:57Z
2022-05-06T16:06:51Z
null
## Dataset viewer issue for '*timit_asr*' **Link:** *https://huggingface.co/datasets/timit_asr* Issue: the timit_asr dataset has recently stopped being previewable. Am I the one who added this dataset? No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4169/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4169/timeline
null
completed
null
null
false
[ "Thanks for reporting. The bug has already been detected, and we hope to fix it soon.", "TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it", "> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might ta...
https://api.github.com/repos/huggingface/datasets/issues/4576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4576/comments
https://api.github.com/repos/huggingface/datasets/issues/4576/events
https://github.com/huggingface/datasets/pull/4576
1,285,698,576
PR_kwDODunzps46aSN_
4,576
Include `metadata.jsonl` in resolved data files
[]
closed
false
null
5
2022-06-27T12:01:29Z
2022-07-01T12:44:55Z
2022-06-30T10:15:32Z
null
Include `metadata.jsonl` in resolved data files. Fix #4548 @lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts for nested metadata files also, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so not sure if this is what we should do.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4576/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4576/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4576.diff", "html_url": "https://github.com/huggingface/datasets/pull/4576", "merged_at": "2022-06-30T10:15:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4576.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4576" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files ...
https://api.github.com/repos/huggingface/datasets/issues/4675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4675/comments
https://api.github.com/repos/huggingface/datasets/issues/4675/events
https://github.com/huggingface/datasets/issues/4675
1,302,193,649
I_kwDODunzps5NneXx
4,675
Unable to use dataset with PyTorch dataloader
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
1
2022-07-12T15:04:04Z
2022-07-14T14:17:46Z
null
null
## Describe the bug When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below. ## Steps to reproduce the bug ```python from datasets import load_dataset from torch.utils.data import DataLoader ds = load_dataset( "para_crawl", name="enfr", cache_dir="/tmp/test/", split="train", keep_in_memory=True, ) dataloader = DataLoader(ds.with_format("torch"), num_workers=32) print(next(iter(dataloader))) ``` Is there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-/ Thanks in advance for your help! ## Expected results The code should run with no error ## Actual results ``` AttributeError: 'str' object has no attribute 'dtype' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4675/timeline
null
null
null
null
false
[ "Hi! `para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumP...
https://api.github.com/repos/huggingface/datasets/issues/5454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5454/comments
https://api.github.com/repos/huggingface/datasets/issues/5454/events
https://github.com/huggingface/datasets/issues/5454
1,552,890,419
I_kwDODunzps5cjzoz
5,454
Save and resume the state of a DataLoader
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
open
false
null
2
2023-01-23T10:58:54Z
2023-01-24T01:45:48Z
null
null
It would be nice, when using `datasets` with a PyTorch DataLoader, to be able to resume training from a DataLoader state (e.g. to resume a training run that crashed). What I have in mind (but lmk if you have other ideas or comments): For map-style datasets, this requires a PyTorch Sampler state that can be saved and reloaded per node and worker. For iterable datasets, this requires saving the state of the dataset iterator, which includes: - the current shard idx and row position in the current shard - the epoch number - the rng state - the shuffle buffer Right now you can already resume the data loading of an iterable dataset by using `IterableDataset.skip`, but it takes a lot of time because it re-iterates over all the past data until it reaches the resuming point. cc @stas00 @sgugger
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5454/timeline
null
null
null
null
false
[ "Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.", "Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra fe...
https://api.github.com/repos/huggingface/datasets/issues/5811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5811/comments
https://api.github.com/repos/huggingface/datasets/issues/5811/events
https://github.com/huggingface/datasets/issues/5811
1,689,919,046
I_kwDODunzps5kuh5G
5,811
load_dataset: TypeError: 'NoneType' object is not callable, on local dataset filename changes
[]
open
false
null
1
2023-04-30T13:27:17Z
2023-05-05T17:44:03Z
null
null
### Describe the bug I've adapted Databrick's [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py) to train using a local dataset, which has been working. Upon changing the filenames of the `.json` & `.py` files in my local dataset directory, `dataset = load_dataset(path_or_dataset)["train"]` throws the error: ```python 2023-04-30 09:10:52 INFO [training.trainer] Loading dataset from dushowxa-characters Traceback (most recent call last): File "/data/dushowxa-dolly/train_dushowxa.py", line 26, in <module> load_training_dataset() File "/data/dushowxa-dolly/training/trainer.py", line 89, in load_training_dataset dataset = load_dataset(path_or_dataset)["train"] File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1773, in load_dataset builder_instance = load_dataset_builder( File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1528, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( TypeError: 'NoneType' object is not callable ``` The local dataset filenames were of the form `dushowxa-characters/expanse-dushowxa-characters.json` and are now of the form `dushowxa-characters/dushowxa-characters.json` (the word `expanse-` was removed from the filenames). Is this perhaps a dataset caching issue? I have attempted to manually clear caches, but to no effect: ```sh rm -rfv ~/.cache/huggingface/datasets/* rm -rfv ~/.cache/huggingface/modules/* ``` ### Steps to reproduce the bug Run `python3 train_dushowxa.py` (adapted from Databrick's [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py)). ### Expected behavior Training succeeds as before local dataset filenames were changed. ### Environment info Ubuntu 22.04, Python 3.10.6, venv ```python accelerate>=0.16.0,<1 click>=8.0.4,<9 datasets>=2.10.0,<3 deepspeed>=0.9.0,<1 transformers[torch]>=4.28.1,<5 langchain>=0.0.139 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5811/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5811/timeline
null
null
null
null
false
[ "This error means a `DatasetBuilder` subclass that generates the dataset could not be found inside the script, so make sure `dushowxa-characters/dushowxa-characters.py `is a valid dataset script (assuming `path_or_dataset` is `dushowxa-characters`)\r\n\r\nAlso, we should improve the error to make it more obvious wh...
https://api.github.com/repos/huggingface/datasets/issues/2302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2302/comments
https://api.github.com/repos/huggingface/datasets/issues/2302/events
https://github.com/huggingface/datasets/pull/2302
873,961,435
MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3
2,302
Add SubjQA dataset
[]
closed
false
null
4
2021-05-02T14:51:20Z
2021-05-10T09:21:19Z
2021-05-10T09:21:19Z
null
Hello datasetters 🙂! Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance). I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2 Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2302/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2302/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2302.diff", "html_url": "https://github.com/huggingface/datasets/pull/2302", "merged_at": "2021-05-10T09:21:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2302.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2302" }
true
[ "I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and 🤞 ?", "Hi @lewtun, thanks for adding this dataset!\r\n\r\nIf the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card ...
https://api.github.com/repos/huggingface/datasets/issues/2147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2147/comments
https://api.github.com/repos/huggingface/datasets/issues/2147/events
https://github.com/huggingface/datasets/pull/2147
844,687,831
MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4
2,147
Render docstring return type as inline
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
0
2021-03-30T14:55:43Z
2021-03-31T13:11:05Z
2021-03-31T13:11:05Z
null
This documentation setting avoids rendering the return type on a separate line under `Return type`. See e.g. the current docs for `Dataset.to_csv`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2147/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2147/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2147.diff", "html_url": "https://github.com/huggingface/datasets/pull/2147", "merged_at": "2021-03-31T13:11:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2147.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2147" }
true
[]