| Column | Type | Range / classes |
| --- | --- | --- |
| id | int64 | 953M – 3.35B |
| number | int64 | 2.72k – 7.75k |
| title | string | lengths 1 – 290 |
| state | string | 2 values |
| created_at | timestamp[s] | 2021-07-26 12:21:17 – 2025-08-23 00:18:43 |
| updated_at | timestamp[s] | 2021-07-26 13:27:59 – 2025-08-23 12:34:39 |
| closed_at | timestamp[s] | 2021-07-26 13:27:59 – 2025-08-20 16:35:55 |
| html_url | string | lengths 49 – 51 |
| pull_request | dict | |
| user_login | string | lengths 3 – 26 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 – 30 |
1,317,822,345
4,744
Remove instructions to generate dummy data from our docs
closed
2022-07-26T07:32:58
2022-08-02T23:50:30
2022-08-02T23:50:30
https://github.com/huggingface/datasets/issues/4744
null
albertvillanova
false
[ "Note that for me personally, conceptually all the dummy data (even for \"canonical\" datasets) should be superseded by `datasets-server`, which performs some kind of CI/CD of datasets (including the canonical ones)", "I totally agree: next step should be rethinking if dummy data makes sense for canonical datasets (once we have datasets-server) and eventually remove it.\r\n\r\nBut for now, we could at least start by removing the indication to generate dummy data from our docs." ]
1,317,362,561
4,743
Update map docs
closed
2022-07-25T20:59:35
2022-07-27T16:22:04
2022-07-27T16:10:04
https://github.com/huggingface/datasets/pull/4743
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4743", "html_url": "https://github.com/huggingface/datasets/pull/4743", "diff_url": "https://github.com/huggingface/datasets/pull/4743.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4743.patch", "merged_at": "2022-07-27T16:10:04" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,317,260,663
4,742
Dummy data nowhere to be found
closed
2022-07-25T19:18:42
2022-11-04T14:04:24
2022-11-04T14:04:10
https://github.com/huggingface/datasets/issues/4742
null
BramVanroy
false
[ "Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst of all, please note that you do not need the dummy data: this was the case when we were adding datasets to the `datasets` library (on this GitHub repo), so that we could test the correct loading of all datasets with our CI. However, this is no longer the case for datasets on the Hub.\r\n- We should definitely update our docs.\r\n\r\nSecond, the dummy data is generated locally:\r\n- in your case, the dummy data will be generated inside the directory: `./datasets/hebban-reviews/dummy`\r\n- please note the preceding `./datasets` directory: the reason for this is that the command to generate the dummy data was specifically created for our `datasets` library, and therefore assumes our directory structure: commands are run from the root directory of our GitHub repo, and datasets scripts are under `./datasets` \r\n\r\n\r\n ", "I have opened an Issue to update the instructions on dummy data generation:\r\n- #4744", "Dummy data generation is deprecated now, so I think we can close this issue." ]
1,316,621,272
4,741
Fix to dict conversion of `DatasetInfo`/`Features`
closed
2022-07-25T10:41:27
2022-07-25T12:50:36
2022-07-25T12:37:53
https://github.com/huggingface/datasets/pull/4741
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4741", "html_url": "https://github.com/huggingface/datasets/pull/4741", "diff_url": "https://github.com/huggingface/datasets/pull/4741.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4741.patch", "merged_at": "2022-07-25T12:37:53" }
mariosasko
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,316,478,007
4,740
Fix multiprocessing in map_nested
closed
2022-07-25T08:44:19
2022-07-28T10:53:23
2022-07-28T10:40:31
https://github.com/huggingface/datasets/pull/4740
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4740", "html_url": "https://github.com/huggingface/datasets/pull/4740", "diff_url": "https://github.com/huggingface/datasets/pull/4740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4740.patch", "merged_at": "2022-07-28T10:40:31" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq as a workaround to preserve previous behavior, the parameter `multiprocessing_min_length=16` is passed from `download` to `map_nested`, so that multiprocessing is only used if at least 16 files to be downloaded.\r\n\r\nNote that there is a small breaking change (I think previously it was unintended behavior, so that I have fixed it):\r\n- Before (with default `num_proc=16`) if there were 16 files to be downloaded, multiprocessing was not used\r\n- Now (with default `num_proc=16`) if there are 16 files to be downloaded, multiprocessing is used", "Thanks for the workaround !" ]
1,316,400,915
4,739
Deprecate metrics
closed
2022-07-25T07:35:55
2022-07-28T11:44:27
2022-07-28T11:32:16
https://github.com/huggingface/datasets/pull/4739
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4739", "html_url": "https://github.com/huggingface/datasets/pull/4739", "diff_url": "https://github.com/huggingface/datasets/pull/4739.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4739.patch", "merged_at": "2022-07-28T11:32:16" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I mark this as Draft because the deprecated version number needs being updated after the latest release.", "Perhaps now is the time to also update the `inspect_metric` from `evaluate` with the changes introduced in https://github.com/huggingface/datasets/pull/4433 (cc @lvwerra) ", "What do you think of including what changes users have to do to switch to `evaluate` in the warning message ?\r\n(basically replace `datasets.load_metric` by `evaluate.load`)\r\n\r\nI think it can help users migrate to `evaluate` and silence the warnings" ]
1,315,222,166
4,738
Use CI unit/integration tests
closed
2022-07-22T16:48:00
2022-07-26T20:19:22
2022-07-26T20:07:05
https://github.com/huggingface/datasets/pull/4738
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4738", "html_url": "https://github.com/huggingface/datasets/pull/4738", "diff_url": "https://github.com/huggingface/datasets/pull/4738.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4738.patch", "merged_at": "2022-07-26T20:07:05" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think this PR can be merged. Willing to see it in action.\r\n\r\nCC: @lhoestq " ]
1,315,011,004
4,737
Download error on scene_parse_150
closed
2022-07-22T13:28:28
2022-09-01T15:37:11
2022-09-01T15:37:11
https://github.com/huggingface/datasets/issues/4737
null
juliensimon
false
[ "Hi! The server with the data seems to be down. I've reported this issue (https://github.com/CSAILVision/sceneparsing/issues/34) in the dataset repo. ", "The URL seems to work now, and therefore the script as well." ]
1,314,931,996
4,736
Dataset Viewer issue for deepklarity/huggingface-spaces-dataset
closed
2022-07-22T12:14:18
2022-07-22T13:46:38
2022-07-22T13:46:38
https://github.com/huggingface/datasets/issues/4736
null
dk-crazydiv
false
[ "Thanks for reporting. You're right, workers were under-provisioned due to a manual error, and the job queue was full. It's fixed now." ]
1,314,501,641
4,735
Pin rouge_score test dependency
closed
2022-07-22T07:18:21
2022-07-22T07:58:14
2022-07-22T07:45:18
https://github.com/huggingface/datasets/pull/4735
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4735", "html_url": "https://github.com/huggingface/datasets/pull/4735", "diff_url": "https://github.com/huggingface/datasets/pull/4735.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4735.patch", "merged_at": "2022-07-22T07:45:18" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,314,495,382
4,734
Package rouge-score cannot be imported
closed
2022-07-22T07:15:05
2022-07-22T07:45:19
2022-07-22T07:45:18
https://github.com/huggingface/datasets/issues/4734
null
albertvillanova
false
[ "We have added a comment on an existing issue opened in their repo: https://github.com/google-research/google-research/issues/1212#issuecomment-1192267130\r\n- https://github.com/google-research/google-research/issues/1212" ]
1,314,479,616
4,733
rouge metric
closed
2022-07-22T07:06:51
2022-07-22T09:08:02
2022-07-22T09:05:35
https://github.com/huggingface/datasets/issues/4733
null
asking28
false
[ "Fixed by:\r\n- #4735" ]
1,314,371,566
4,732
Document better that loading a dataset passing its name does not use the local script
closed
2022-07-22T06:07:31
2022-08-23T16:32:23
2022-08-23T16:32:23
https://github.com/huggingface/datasets/issues/4732
null
albertvillanova
false
[ "Thanks for the feedback!\r\n\r\nI think since this issue is closely related to loading, I can add a clearer explanation under [Load > local loading script](https://huggingface.co/docs/datasets/main/en/loading#local-loading-script).", "That makes sense but I think having a line about it under https://huggingface.co/docs/datasets/installation#source the \"source\" header here would be useful. My mental model of `pip install -e .` does not include the fact that the source files aren't actually being used. ", "Thanks for sharing your perspective. I think the `load_dataset` function is the only one that pulls from GitHub, and since this use-case is very specific, I don't think we need to include such a broad clarification in the Installation section.\r\n\r\nFeel free to check out the linked PR and let me know if it needs any additional explanation 😊" ]
1,313,773,348
4,731
docs: ✏️ fix TranslationVariableLanguages example
closed
2022-07-21T20:35:41
2022-07-22T07:01:00
2022-07-22T06:48:42
https://github.com/huggingface/datasets/pull/4731
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4731", "html_url": "https://github.com/huggingface/datasets/pull/4731", "diff_url": "https://github.com/huggingface/datasets/pull/4731.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4731.patch", "merged_at": "2022-07-22T06:48:42" }
severo
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,313,421,263
4,730
Loading imagenet-1k validation split takes much more RAM than expected
closed
2022-07-21T15:14:06
2022-07-21T16:41:04
2022-07-21T16:41:04
https://github.com/huggingface/datasets/issues/4730
null
fxmarty
false
[ "My bad, `482 * 418 * 50000 * 3 / 1000000 = 30221 MB` ( https://stackoverflow.com/a/42979315 ).\r\n\r\nMeanwhile `256 * 256 * 50000 * 3 / 1000000 = 9830 MB`. We are loading the non-cropped images and that is why we take so much RAM." ]
1,313,374,015
4,729
Refactor Hub tests
closed
2022-07-21T14:43:13
2022-07-22T15:09:49
2022-07-22T14:56:29
https://github.com/huggingface/datasets/pull/4729
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4729", "html_url": "https://github.com/huggingface/datasets/pull/4729", "diff_url": "https://github.com/huggingface/datasets/pull/4729.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4729.patch", "merged_at": "2022-07-22T14:56:29" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,312,897,454
4,728
load_dataset gives "403" error when using Financial Phrasebank
closed
2022-07-21T08:43:32
2022-08-04T08:32:35
2022-08-04T08:32:35
https://github.com/huggingface/datasets/issues/4728
null
rohitvincent
false
[ "Hi @rohitvincent, thanks for reporting.\r\n\r\nUnfortunately I'm not able to reproduce your issue:\r\n```python\r\nIn [2]: from datasets import load_dataset, DownloadMode\r\n ...: load_dataset(path='financial_phrasebank',name='sentences_allagree', download_mode=\"force_redownload\")\r\nDownloading builder script: 6.04kB [00:00, 2.87MB/s] \r\nDownloading metadata: 13.7kB [00:00, 7.24MB/s] \r\nDownloading and preparing dataset financial_phrasebank/sentences_allagree (download: 665.91 KiB, generated: 296.26 KiB, post-processed: Unknown size, total: 962.17 KiB) to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 682k/682k [00:00<00:00, 7.66MB/s]\r\nDataset financial_phrasebank downloaded and prepared to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 918.80it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label'],\r\n num_rows: 2264\r\n })\r\n})\r\n```\r\n\r\nAre you able to access the link? https://www.researchgate.net/profile/Pekka-Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip", "Yes was able to download from the link manually. But still, get the same error when I use load_dataset.", "Fixed once data files are hosted on the Hub:\r\n- #4598" ]
1,312,645,391
4,727
Dataset Viewer issue for TheNoob3131/mosquito-data
closed
2022-07-21T05:24:48
2022-07-21T07:51:56
2022-07-21T07:45:01
https://github.com/huggingface/datasets/issues/4727
null
thenerd31
false
[ "The preview is working OK:\r\n\r\n![Screenshot from 2022-07-21 09-46-09](https://user-images.githubusercontent.com/8515462/180158929-bd8faad4-6392-4fc1-8d9c-df38aa9f8438.png)\r\n\r\n" ]
1,312,082,175
4,726
Fix broken link to the Hub
closed
2022-07-20T22:57:27
2022-07-21T14:33:18
2022-07-21T08:00:54
https://github.com/huggingface/datasets/pull/4726
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4726", "html_url": "https://github.com/huggingface/datasets/pull/4726", "diff_url": "https://github.com/huggingface/datasets/pull/4726.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4726.patch", "merged_at": "2022-07-21T08:00:54" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,311,907,096
4,725
the_pile datasets URL broken.
closed
2022-07-20T20:57:30
2022-07-22T06:09:46
2022-07-21T07:38:19
https://github.com/huggingface/datasets/issues/4725
null
TrentBrick
false
[ "Thanks for reporting, @TrentBrick. We are addressing the change with their data host server.\r\n\r\nOn the meantime, if you would like to work with your fixed local copy of the_pile script, you should use:\r\n```python\r\nload_dataset(\"path/to/your/local/the_pile/the_pile.py\",...\r\n```\r\ninstead of just `load_dataset(\"the_pile\",...`.\r\n\r\nThe latter downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~/.cache/huggingface/modules`) and uses that.", "@TrentBrick, I have checked the URLs and both hosts work, the original (https://the-eye.eu/) and the mirror (https://mystic.the-eye.eu/). See e.g.:\r\n- https://mystic.the-eye.eu/public/AI/pile/\r\n- https://mystic.the-eye.eu/public/AI/pile_preliminary_components/\r\n\r\nPlease, let me know if you still find any issue loading this dataset by using current server URLs.", "Great this is working now. Re the download from GitHub... I'm sure thought went into doing this but could it be made more clear maybe here? https://huggingface.co/docs/datasets/installation for example under installing from source? I spent over an hour questioning my sanity as I kept trying to edit this file, uninstall and reinstall the repo, git reset to previous versions of the file etc.", "Thanks for the quick reply and help too\r\n", "Thanks @TrentBrick for the suggestion about improving our docs: we should definitely do this if you find they are not clear enough.\r\n\r\nCurrently, our docs explain how to load a dataset from a local loading script here: [Load > Local loading script](https://huggingface.co/docs/datasets/loading#local-loading-script)\r\n\r\nI've opened an issue here:\r\n- #4732\r\n\r\nFeel free to comment on it any additional explanation/suggestion/requirement related to this problem." ]
1,311,127,404
4,724
Download and prepare as Parquet for cloud storage
closed
2022-07-20T13:39:02
2022-09-05T17:27:25
2022-09-05T17:25:27
https://github.com/huggingface/datasets/pull/4724
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4724", "html_url": "https://github.com/huggingface/datasets/pull/4724", "diff_url": "https://github.com/huggingface/datasets/pull/4724.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4724.patch", "merged_at": "2022-09-05T17:25:27" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added some docs for dask and took your comments into account\r\n\r\ncc @philschmid if you also want to take a look :)", "Just noticed that it would be more convenient to pass the output dir to download_and_prepare directly, to bypass the caching logic which prepares the dataset at `<cache_dir>/<name>/<version>/<hash>/`. And this way the cache is only used for the downloaded files. What do you think ?\r\n\r\n```python \r\n\r\nbuilder = load_datadet_builder(\"squad\")\r\n# or with a custom cache\r\nbuilder = load_datadet_builder(\"squad\", cache_dir=\"path/to/local/cache/for/downloaded/files\")\r\n\r\n# download and prepare to s3\r\nbuilder.download_and_prepare(\"s3://my_bucket/squad\")\r\n```", "Might be of interest: \r\nPyTorch and AWS introduced better support for S3 streaming in `torchtext`. \r\n![image](https://user-images.githubusercontent.com/32632186/183354186-a7f005e3-4167-4d80-ad1a-c62dd51ad7b6.png)\r\n", "Having thought about it a bit more, I also agree with @philschmid in that it's important to follow the existing APIs (pandas/dask), which means we should support the following at some point:\r\n\r\n* remote data files resolution for the packaged modules to support `load_dataset(\"<format>\", data_files=\"<fs_url>\")`\r\n* `to_<format>(\"<fs_url>\")`\r\n* `load_from_disk` and `save_to_disk` already expose the `fs` param, but it would be cool to support specifying `fsspec` URLs directly as the source/destination path (perhaps we can then deprecate `fs` to be fully aligned with pandas/dask)\r\n\r\nIMO these are the two main issues with the current approach:\r\n* relying on the builder API to generate the formatted files results in a non-friendly format due to how our caching works (a lot of nested subdirectories)\r\n* this approach still downloads the files needed to generate a dataset locally. Considering one of our goals is to align the streaming API with the non-streaming one, this could be avoided by running `to_<format>` on streamed/iterable datasets", "Alright I did the last change I wanted to do, here is the final API:\r\n\r\n```python\r\nbuilder = load_dataset_builder(...)\r\nbuilder.download_and_prepare(\"s3://...\", storage_options={\"token\": ...})\r\n```\r\n\r\nand it creates the arrow files directly in the specified directory, not in a nested subdirectory structure as we do in the cache !\r\n\r\n> this approach still downloads the files needed to generate a dataset locally. Considering one of our goals is to align the streaming API with the non-streaming one, this could be avoided by running to_<format> on streamed/iterable datasets\r\n\r\nYup this can be explored in some future work I think. Though to keep things simple and clear I would keep the streaming behaviors only when you load a dataset in streaming mode, and not include it in `download_and_prepare` (because it wouldn't be aligned with the name of the function, which imply to 1. download and 2. prepare ^^). Maybe an API like that can make sense for those who need full streaming\r\n\r\n```python\r\nds = load_dataset(..., streaming=True)\r\nds.to_parquet(\"s3://...\")\r\n```", "totally agree with your comment on the meaning of \"loading\", I'll update the docs", "I took your comments into account and reverted all the changes related to `cache_dir` to keep the support for remote `cache_dir` for beam datasets. I also updated the wording in the docs to not use \"load\" when it's not appropriate :)" ]
1,310,970,604
4,723
Refactor conftest fixtures
closed
2022-07-20T12:15:22
2022-07-21T14:37:11
2022-07-21T14:24:18
https://github.com/huggingface/datasets/pull/4723
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4723", "html_url": "https://github.com/huggingface/datasets/pull/4723", "diff_url": "https://github.com/huggingface/datasets/pull/4723.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4723.patch", "merged_at": "2022-07-21T14:24:18" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,310,785,916
4,722
Docs: Fix same-page haslinks
closed
2022-07-20T10:04:37
2022-07-20T17:02:33
2022-07-20T16:49:36
https://github.com/huggingface/datasets/pull/4722
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4722", "html_url": "https://github.com/huggingface/datasets/pull/4722", "diff_url": "https://github.com/huggingface/datasets/pull/4722.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4722.patch", "merged_at": "2022-07-20T16:49:36" }
mishig25
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,310,253,552
4,721
PyArrow Dataset error when calling `load_dataset`
open
2022-07-20T01:16:03
2022-07-22T14:11:47
null
https://github.com/huggingface/datasets/issues/4721
null
piraka9011
false
[ "Hi ! It looks like a bug in `pyarrow`. If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nTo achieve that you can try to lower the value of `max_shard_size` and also don't use `map` before `push_to_hub`.\r\n\r\nDo you have a minimum reproducible example that we can share with the Arrow team for further debugging ?", "> If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nYup, I did not encounter this bug when I was testing my script with a slice of <1000 samples for my dataset.\r\n\r\n> Do you have a minimum reproducible example...\r\n\r\nNot sure if I can get more minimal than the script I shared above. Are you asking for a sample json file?\r\nJust generate a random manifest list, I can add that to the above script if that's what you mean?\r\n", "Actually this is probably linked to this open issue: https://issues.apache.org/jira/browse/ARROW-5030.\r\n\r\nsetting `max_shard_size=\"2GB\"` should do the job (or `max_shard_size=\"1GB\"` if you want to be on the safe side, especially given that there can be some variance in the shard sizes if the dataset is not evenly distributed)" ]
1,309,980,195
4,720
Dataset Viewer issue for shamikbose89/lancaster_newsbooks
closed
2022-07-19T20:00:07
2022-09-08T16:47:21
2022-09-08T16:47:21
https://github.com/huggingface/datasets/issues/4720
null
shamikbose
false
[ "It seems like the list of splits could not be obtained:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"shamikbose89/lancaster_newsbooks\", \"default\")\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/shamikbose89--lancaster_newsbooks/2d1c63d269bf7b9342accce0a95960b1710ab4bc774248878bd80eb96c1afaf7/lancaster_newsbooks.py\", line 73, in _split_generators\r\n data_dir = dl_manager.download_and_extract(_URL)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 916, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 879, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 348, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 884, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 388, in _get_extraction_protocol\r\n return _get_extraction_protocol_with_magic_number(f)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 354, in _get_extraction_protocol_with_magic_number\r\n f.seek(0)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 684, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nping @huggingface/datasets ", "Oh, I removed the 'split' key from `kwargs`. I put it back in, but there's still the same error", "It looks like the data host doesn't support http range requests, which is necessary to glob inside a ZIP archive in streaming mode. Can you try hosting the dataset elsewhere ? 
Or download each file separately from https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 ?", "@lhoestq Thanks! That seems to have solved it. I can get the splits with the `get_dataset_split_names()` function. The dataset viewer is still not loading properly, though. The new error is\r\n```\r\nStatus code: 400\r\nException: BadZipFile\r\nMessage: File is not a zip file\r\n```\r\n\r\nPS. The dataset loads properly and can be accessed" ]
1,309,854,492
4,719
Issue loading TheNoob3131/mosquito-data dataset
closed
2022-07-19T17:47:37
2022-07-20T06:46:57
2022-07-20T06:46:02
https://github.com/huggingface/datasets/issues/4719
null
thenerd31
false
[ "I am also getting a ValueError: 'Couldn't cast' at the bottom. Is this because of some delimiter issue? My dataset is on the Huggingface Hub. If you could look at it, that would be greatly appreciated.", "Hi @thenerd31, thanks for reporting.\r\n\r\nPlease note that your issue is not caused by the Hugging Face Datasets library, but it has to do with the specific implementation of your dataset on the Hub.\r\n\r\nTherefore, I'm transferring this discussion to your own dataset Community tab: https://huggingface.co/datasets/TheNoob3131/mosquito-data/discussions/1" ]
1,309,520,453
4,718
Make Extractor accept Path as input
closed
2022-07-19T13:25:06
2022-07-22T13:42:27
2022-07-22T13:29:43
https://github.com/huggingface/datasets/pull/4718
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4718", "html_url": "https://github.com/huggingface/datasets/pull/4718", "diff_url": "https://github.com/huggingface/datasets/pull/4718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4718.patch", "merged_at": "2022-07-22T13:29:43" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,309,512,483
4,717
Dataset Viewer issue for LawalAfeez/englishreview-ds-mini
closed
2022-07-19T13:19:39
2022-07-20T08:32:57
2022-07-20T08:32:57
https://github.com/huggingface/datasets/issues/4717
null
lawalAfeez820
false
[ "It's currently working, as far as I understand\r\n\r\nhttps://huggingface.co/datasets/LawalAfeez/englishreview-ds-mini/viewer/LawalAfeez--englishreview-ds-mini/train\r\n\r\n<img width=\"1556\" alt=\"Capture d’écran 2022-07-19 à 09 24 01\" src=\"https://user-images.githubusercontent.com/1676121/179761130-2d7980b9-c0f6-4093-8b1d-f0a3872fef3f.png\">\r\n\r\n---\r\n\r\nWhat was your issue?" ]
1,309,455,838
4,716
Support "tags" yaml tag
closed
2022-07-19T12:34:31
2022-07-20T13:44:50
2022-07-20T13:31:56
https://github.com/huggingface/datasets/pull/4716
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4716", "html_url": "https://github.com/huggingface/datasets/pull/4716", "diff_url": "https://github.com/huggingface/datasets/pull/4716.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4716.patch", "merged_at": "2022-07-20T13:31:56" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "IMO `DatasetMetadata` shouldn't crash with attributes that it doesn't know, btw", "Yea this PR is mostly to have a validation that this field contains a list of strings.\r\n\r\nRegarding unknown fields, the tagging app currently returns an error if a field is unknown using the `DatasetMetadata`. We can change that though" ]
1,309,405,980
4,715
Fix POS tags
closed
2022-07-19T11:52:54
2022-07-19T12:54:34
2022-07-19T12:41:16
https://github.com/huggingface/datasets/pull/4715
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4715", "html_url": "https://github.com/huggingface/datasets/pull/4715", "diff_url": "https://github.com/huggingface/datasets/pull/4715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4715.patch", "merged_at": "2022-07-19T12:41:15" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI failures are about missing content in the dataset cards or bad tags, and this is unrelated to this PR. Merging :)" ]
1,309,265,682
4,714
Fix named split sorting and remove unnecessary casting
closed
2022-07-19T09:48:28
2022-07-22T09:39:45
2022-07-22T09:10:57
https://github.com/huggingface/datasets/pull/4714
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4714", "html_url": "https://github.com/huggingface/datasets/pull/4714", "diff_url": "https://github.com/huggingface/datasets/pull/4714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4714.patch", "merged_at": "2022-07-22T09:10:57" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "hahaha what a timing, I added my comment right after you merged x)\r\n\r\nyou can ignore my (nit), it's fine", "Sorry, just too sync... :sweat_smile: " ]
1,309,184,756
4,713
Document installation of sox OS dependency for audio
closed
2022-07-19T08:42:35
2022-07-21T08:16:59
2022-07-21T08:04:15
https://github.com/huggingface/datasets/pull/4713
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4713", "html_url": "https://github.com/huggingface/datasets/pull/4713", "diff_url": "https://github.com/huggingface/datasets/pull/4713.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4713.patch", "merged_at": "2022-07-21T08:04:15" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,309,177,302
4,712
Highlight non-commercial license in amazon_reviews_multi dataset card
closed
2022-07-19T08:36:20
2022-07-27T16:09:40
2022-07-27T15:57:41
https://github.com/huggingface/datasets/pull/4712
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4712", "html_url": "https://github.com/huggingface/datasets/pull/4712", "diff_url": "https://github.com/huggingface/datasets/pull/4712.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4712.patch", "merged_at": "2022-07-27T15:57:41" }
sbroadhurst-hf
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,309,138,570
4,711
Document how to create a dataset loading script for audio/vision
closed
2022-07-19T08:03:40
2023-07-25T16:07:52
2023-07-25T16:07:52
https://github.com/huggingface/datasets/issues/4711
null
albertvillanova
false
[ "I'm closing this issue as both the Audio and Image sections now have a \"Create dataset\" page that contains the info about writing the loading script version of a dataset." ]
1,308,958,525
4,710
Add object detection processing tutorial
closed
2022-07-19T04:23:46
2022-07-21T20:10:35
2022-07-21T19:56:42
https://github.com/huggingface/datasets/pull/4710
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4710", "html_url": "https://github.com/huggingface/datasets/pull/4710", "diff_url": "https://github.com/huggingface/datasets/pull/4710.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4710.patch", "merged_at": "2022-07-21T19:56:42" }
nateraw
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Great idea! Now that we have more than one task, it makes sense to separate image classification and object detection so it'll be easier for users to follow.", "@lhoestq do we want to do that in this PR, or should we merge it and let @stevhliu reorganize separately? " ]
1,308,633,093
4,709
WMT21 & WMT22
open
2022-07-18T21:05:33
2023-06-20T09:02:11
null
https://github.com/huggingface/datasets/issues/4709
null
Muennighoff
false
[ "Hi ! That would be awesome to have them indeed, thanks for opening this issue\r\n\r\nI just added you to the WMT org on the HF Hub if you're interested in adding those datasets.\r\n\r\nFeel free to create a dataset repository for each dataset and upload the data files there :) preferably in ZIP archives instead of TAR archives (the current WMT scripts don't support streaming TAR archives, so it would break the dataset preview). We've also had issues with the `statmt.org` host (data unavailable, slow download speed), that's why I think it's better if we re-host the files on the Hub.\r\n\r\n`wmt21` (and wmt22) can be added <s>in this GitHub repository I think</s> on the HF Hub under the `WMT` org (we'll move the previous ones to this org soon as well).\r\nTo add it, you can copy paste the code of the previous one (e.g. wmt19), and add the new data:\r\n- in wmt_utils.py, add the new data subsets. You need to provide the download URLs, as well as the target and source languages\r\n- in wmt21.py (renamed from wmt19.py), you can specify the subsets that WMT21 uses (i.e. the one you just added)\r\n- in wmt_utils.py, define the python function that must be used to parse the subsets you added. To do so, you must go in `_generate_examples` and chose the proper `sub_generator` based on the subset name. For example, the `paracrawl_v3` subset uses the `_parse_tmx` function:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/datasets/wmt19/wmt_utils.py#L834-L835\r\n\r\nHopefully the data is in a format that is already supported and there's no need to write a new `_parse_*` function for the new subsets. Let me know if you have questions or if I can help :)", "@Muennighoff , @lhoestq let me know if you want me to look into this. Happy to help bring WMT21 & WMT22 datasets into 🤗 ! ", "Hi @srhrshr :) Sure, feel free to create a dataset repository on the Hub and start from the implementation of WMT19 if you want. Then we can move the dataset under the WMT org (we'll move the other ones there as well).\r\n\r\nLet me know if you have questions or if I can help", "#self-assign", "#self-assign", "Hello @lhoestq ,\r\n\r\nWould it be possible for me to be granted in the WMT organization (on hf ofc) in order to facilitate dataset uploads? I've already initiated the joining process at this link: https://huggingface.co/wmt\r\n\r\nI appreciate your help with this. Thank you!", "Hi ! Cool I just added you" ]
1,308,279,700
4,708
Fix require torchaudio and refactor test requirements
closed
2022-07-18T17:24:28
2022-07-22T06:30:56
2022-07-22T06:18:11
https://github.com/huggingface/datasets/pull/4708
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4708", "html_url": "https://github.com/huggingface/datasets/pull/4708", "diff_url": "https://github.com/huggingface/datasets/pull/4708.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4708.patch", "merged_at": "2022-07-22T06:18:11" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,308,251,405
4,707
Dataset Viewer issue for TheNoob3131/mosquito-data
closed
2022-07-18T17:07:19
2022-07-18T19:44:46
2022-07-18T17:15:50
https://github.com/huggingface/datasets/issues/4707
null
thenerd31
false
[ "Thanks for reporting. I refreshed the dataset viewer and it now works as expected.\r\n\r\nhttps://huggingface.co/datasets/TheNoob3131/mosquito-data\r\n\r\n<img width=\"1135\" alt=\"Capture d’écran 2022-07-18 à 13 15 22\" src=\"https://user-images.githubusercontent.com/1676121/179566497-e47f1a27-fd84-4a8d-9d7f-2e0f2da803df.png\">\r\n\r\nWe will investigate why it occurred in the first place\r\n", "By chance, could you provide some details about the operations done on the dataset: was it private? gated?", "Yes, it was a private dataset, and when I made it public, the Dataset Preview did not work. \r\n\r\nHowever, now when I make the dataset private, it says that the Dataset Preview has been disabled. Why is this?", "Thanks for the details. For now, the dataset viewer is always disabled on private datasets (see https://huggingface.co/docs/hub/datasets-viewer for more details)", "Hi, it was working fine for a few hours, but then I can't see the dataset viewer again (public dataset). Why is this still happening?\r\nIt's the same error too:\r\n![image](https://user-images.githubusercontent.com/53668030/179602465-f220f971-d3aa-49ba-a31b-60510f4c2a89.png)\r\n", "OK? This is a bug, thanks for help spotting and reproducing it (it occurs when a dataset is switched to private, then to public). We will be working on it, meanwhile, I've restored the dataset viewer manually again." ]
1,308,198,454
4,706
Fix empty examples in xtreme dataset for bucc18 config
closed
2022-07-18T16:22:46
2022-07-19T06:41:14
2022-07-19T06:29:17
https://github.com/huggingface/datasets/pull/4706
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4706", "html_url": "https://github.com/huggingface/datasets/pull/4706", "diff_url": "https://github.com/huggingface/datasets/pull/4706.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4706.patch", "merged_at": "2022-07-19T06:29:17" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I guess the report link is this instead: https://huggingface.co/datasets/xtreme/discussions/1" ]
1,308,161,794
4,705
Fix crd3
closed
2022-07-18T15:53:44
2022-07-21T17:18:44
2022-07-21T17:06:30
https://github.com/huggingface/datasets/pull/4705
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4705", "html_url": "https://github.com/huggingface/datasets/pull/4705", "diff_url": "https://github.com/huggingface/datasets/pull/4705.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4705.patch", "merged_at": "2022-07-21T17:06:30" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,308,147,876
4,704
Skip tests only for lz4/zstd params if not installed
closed
2022-07-18T15:41:40
2022-07-19T13:02:31
2022-07-19T12:49:18
https://github.com/huggingface/datasets/pull/4704
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4704", "html_url": "https://github.com/huggingface/datasets/pull/4704", "diff_url": "https://github.com/huggingface/datasets/pull/4704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4704.patch", "merged_at": "2022-07-19T12:49:18" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,307,844,097
4,703
Make cast in `from_pandas` more robust
closed
2022-07-18T11:55:49
2022-07-22T11:17:42
2022-07-22T11:05:24
https://github.com/huggingface/datasets/pull/4703
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4703", "html_url": "https://github.com/huggingface/datasets/pull/4703", "diff_url": "https://github.com/huggingface/datasets/pull/4703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4703.patch", "merged_at": "2022-07-22T11:05:24" }
mariosasko
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,307,793,811
4,702
Domain specific dataset discovery on the Hugging Face hub
open
2022-07-18T11:14:03
2024-02-12T09:53:43
null
https://github.com/huggingface/datasets/issues/4702
null
davanstrien
false
[ "Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.", "> Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a good idea to make our current taxonomy more complex.\r\n\r\nThanks, for letting me know. Will you allow the topic tags to be user-generated or only chosen from a list?", "Thanks for opening this issue @davanstrien.\r\n\r\nAs we discussed last week, the tag approach would be in principle the simpler to be implemented, either the domain tag (with closed vocabulary: more reliable but also more rigid), or the topic tag (with open vocabulary: more flexible for user needs)", "Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n\r\n(where i suggested using `tags: - foo - bar` IIRC.\r\n\r\nThanks a ton!", "> Hi @davanstrien If i remember correctly this was also discussed inside a hf.co Discussion, would you be able to link it here too?\r\n> \r\n> (where i suggested using `tags: - foo - bar` IIRC.\r\n> \r\n> Thanks a ton!\r\n\r\nThis doesn't ring a bell - I did a quick search of https://discuss.huggingface.co but didn't find anything. \r\n\r\nThe `tags: ` approach sounds like a good option for this. It would be especially nice if these could suggest existing tags, but this probably won't be easily possible through the current interface. \r\n", "I opened a PR to add \"tags\" to the YAML validator:\r\nhttps://github.com/huggingface/datasets/pull/4716\r\n\r\nI also added \"tags\" to the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), with suggestions like \"bio\" or \"newspapers\"", "Thanks @lhoestq for the initiative.\r\n \r\nJust one question: are \"tags\" already supported on the Hub? \r\n\r\nI think they aren't. Thus, the Hub should support them so that they are properly displayed.", "I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)", "> I think they're not displayed, but at least it should enable users to filter by tag in using `huggingface_hub` or using the appropriate query params on the website (not sure if it's possible yet though)\r\n\r\nI think this would already be a helpful start. I'm happy to try this out with the datasets added to https://huggingface.co/organizations/biglam and use the `huggingface_hub` to filter those datasets using the tags. ", "Is this abandoned? \r\nI'm looking for a transport logistics dataset; how can I find one?", "@younes-io Full text search is probably your best bet: https://huggingface.co/search/full-text?type=dataset" ]
1,307,689,625
4,701
Added more information in the README about contributors of the Arabic Speech Corpus
closed
2022-07-18T09:48:03
2022-07-28T10:33:05
2022-07-28T10:33:05
https://github.com/huggingface/datasets/pull/4701
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4701", "html_url": "https://github.com/huggingface/datasets/pull/4701", "diff_url": "https://github.com/huggingface/datasets/pull/4701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4701.patch", "merged_at": "2022-07-28T10:33:04" }
nawarhalabi
true
[]
1,307,599,161
4,700
Support extract lz4 compressed data files
closed
2022-07-18T08:41:31
2022-07-18T14:43:59
2022-07-18T14:31:47
https://github.com/huggingface/datasets/pull/4700
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4700", "html_url": "https://github.com/huggingface/datasets/pull/4700", "diff_url": "https://github.com/huggingface/datasets/pull/4700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4700.patch", "merged_at": "2022-07-18T14:31:47" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,307,555,592
4,699
Fix Authentification Error while streaming
closed
2022-07-18T08:03:41
2022-07-20T13:10:44
2022-07-20T13:10:43
https://github.com/huggingface/datasets/pull/4699
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4699", "html_url": "https://github.com/huggingface/datasets/pull/4699", "diff_url": "https://github.com/huggingface/datasets/pull/4699.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4699.patch", "merged_at": null }
hkjeon13
true
[ "Hi, thanks for working on this, but the fix for this has already been merged in https://github.com/huggingface/datasets/pull/4608." ]
1,307,539,585
4,698
Enable streaming dataset to use the "all" split
closed
2022-07-18T07:47:39
2025-05-21T13:17:19
2025-05-21T13:17:19
https://github.com/huggingface/datasets/pull/4698
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4698", "html_url": "https://github.com/huggingface/datasets/pull/4698", "diff_url": "https://github.com/huggingface/datasets/pull/4698.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4698.patch", "merged_at": null }
cakiki
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4698). All of your documentation changes will be reflected on that endpoint.", "@albertvillanova \r\nAdding the validation split causes these two `assert_called_once` assertions to fail with `AssertionError: Expected 'ArrowWriter' to have been called once. Called 2 times`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/main/tests/test_builder.py#L548-L562\r\n\r\nIt might be better to create a new dummy generator for the streaming tests, WDYT? Alternatively we could test for `self.call_count` equalling 2.", "@cakiki have you read my comment in the issue page?\r\nhttps://github.com/huggingface/datasets/issues/4637#issuecomment-1175984812", "Streaming with `split=all` seems to be working, will fix the failing test next", "Not sure if marking the PR as \"ready for review\" actually notified you, so tagging @albertvillanova just in case :smiley_cat: ", "cc @lhoestq ", "Hi @cakiki, still interested in working on this? :) ", "@albertvillanova So sorry; I have no idea how this slipped through the cracks. Yes, I'd still like to work on this. Is it okay if I DM you on slack?", "Sure!! And nevermind!" ]
1,307,332,253
4,697
Trouble with streaming frgfm/imagenette vision dataset with TAR archive
closed
2022-07-18T02:51:09
2022-08-01T15:10:57
2022-08-01T15:10:57
https://github.com/huggingface/datasets/issues/4697
null
frgfm
false
[ "Hi @frgfm, thanks for reporting.\r\n\r\nAs the error message says, streaming mode is not supported out of the box when the dataset contains TAR archive files.\r\n\r\nTo make the dataset streamable, you have to use `dl_manager.iter_archive`.\r\n\r\nThere are several examples in other datasets, e.g. food101: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nAnd yes, as the link you pointed out, for the streaming to be possible, the metadata file must be loaded before all of the images:\r\n- either this is the case when iterating the archive (and you get the metadata file before the images)\r\n- or you have to extract the metadata file by hand and upload it separately to the Hub", "Hi @albertvillanova :wave:\r\n\r\nThanks! Yeah I saw that but since I didn't have any metadata, I wasn't sure whether I should create them myself.\r\n\r\nSo one last question:\r\nWhat is the metadata supposed to be for archives? The relative path of all files in it?\r\n_(Sorry I'm a bit confused since it's quite hard to debug using the single error message from the data preview :sweat_smile: )_", "Hi @frgfm, streaming a dataset that contains a TAR file requires some tweaks because (contrary to ZIP files), tha TAR archive does not allow random access to any of the contained member files. Instead they have to be accessed sequentially (in the order in which they were put into the TAR file when created) and yielded.\r\n\r\nSo when iterating over the TAR file content, when an image file is found, we need to yield it (and not keeping it in memory, which will require huge RAM memory for large datasets). But when yielding an image file, we also need to yield with it what we call \"metadata\": the class label, and other textual information (for example, for audio files, sometimes we also add info such as the speaker ID, their sex, their age,...).\r\n\r\nAll this information usually is stored in what we call the metadata file: either a JSON or a CSV/TSV file.\r\n\r\nBut if this is also inside the TAR archive, we need to find this file in the first place when iterating the TAR archive, so that we already have this information when we find an image file and we can yield the image file and its metadata info.\r\n\r\nTherefore:\r\n- either the TAR archive contains the metadata file as the first member when iterating it (something we cannot change as it is done at the creation of the TAR file)\r\n- or if not, then we need to have the metadata file elsewhere\r\n - in these cases, what we do (if the dataset license allows it) is:\r\n - we download the TAR file locally, we extract the metadata file and we host the metadata on the Hub\r\n - we modify the dataset loading script so that it first downloads the metadata file (and reads it) and only then starts iterating the content of the TAR archive file\r\n\r\nSee an example of this process we recently did for \"google/fleurs\" (their metadata files for \"train\" were at the end of the TAR archives, after all audio files): https://huggingface.co/datasets/google/fleurs/discussions/4\r\n- we uploaded the metadata file to the Hub\r\n- we adapted the loading script to use it", "Hi @albertvillanova :wave: \r\n\r\nThanks, since my last message, I went through the repo of https://huggingface.co/datasets/food101/blob/main/food101.py and managed to get it to work in the end :pray: \r\n\r\nHere it is: https://huggingface.co/datasets/frgfm/imagenette\r\n\r\nI appreciate you opening an issue to document the process, it might help a few!", "Great to see that you manage to make 
your dataset streamable. :rocket: \r\n\r\nI'm closing this issue, as for the docs update there is another issue opened:\r\n- #4711" ]
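A schematic of the `dl_manager.iter_archive` pattern recommended above, modeled on the food101 script; the folder-per-class label extraction is an assumption for illustration, not the actual imagenette code:

```python
def _generate_examples(self, archive):
    # `archive` comes from dl_manager.iter_archive(tar_path) and yields
    # (path, file_object) pairs in TAR order, so members are read sequentially
    for idx, (path, f) in enumerate(archive):
        if path.endswith((".jpg", ".jpeg", ".png")):
            # read the bytes immediately: the file object is only valid here
            yield idx, {
                "image": {"path": path, "bytes": f.read()},
                "label": path.split("/")[-2],  # assumed class-named folders
            }
```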
1,307,183,099
4,696
Cannot load LinCE dataset
closed
2022-07-17T19:01:54
2022-07-18T09:20:40
2022-07-18T07:24:22
https://github.com/huggingface/datasets/issues/4696
null
finiteautomata
false
[ "Hi @finiteautomata, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"lince\", \"ner_spaeng\")\r\nDownloading builder script: 20.8kB [00:00, 9.09MB/s] \r\nDownloading metadata: 31.2kB [00:00, 13.5MB/s] \r\nDownloading and preparing dataset lince/ner_spaeng (download: 2.93 MiB, generated: 18.45 MiB, post-processed: Unknown size, total: 21.38 MiB) to .../.cache/huggingface/datasets/lince/ner_spaeng/1.0.0/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.08M/3.08M [00:01<00:00, 2.73MB/s]\r\nDataset lince downloaded and prepared to .../.cache/huggingface/datasets/lince/ner_spaeng/1.0.0/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 630.66it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 33611\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 10085\r\n })\r\n test: Dataset({\r\n features: ['idx', 'words', 'lid', 'ner'],\r\n num_rows: 23527\r\n })\r\n})\r\n``` \r\n\r\nPlease note that for this dataset, the original data files are not hosted on the Hugging Face Hub, but on https://ritual.uh.edu\r\nAnd sometimes, the server might be temporarily unavailable, as your error message said (trying to connect to the server timed out):\r\n```\r\nConnectionError: Couldn't reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError(\"HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, 'Connection to ritual.uh.edu timed out. (connect timeout=100)'))\")))\r\n```\r\nIn these cases you could:\r\n- either contact the owners of the data server where the data is hosted to inform them about the issue in their server\r\n- or re-try after waiting some time: usually these issues are just temporary", "Great, thanks for checking out!" ]
1,307,134,701
4,695
Add MANtIS dataset
closed
2022-07-17T15:53:05
2022-09-30T14:39:30
2022-09-30T14:37:16
https://github.com/huggingface/datasets/pull/4695
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4695", "html_url": "https://github.com/huggingface/datasets/pull/4695", "diff_url": "https://github.com/huggingface/datasets/pull/4695.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4695.patch", "merged_at": null }
bhavitvyamalik
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your contribution, @bhavitvyamalik. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest that you create this dataset there. Please, feel free to tell us if you need some help." ]
1,306,958,380
4,694
Distributed data parallel training for streaming datasets
open
2022-07-17T01:29:43
2023-04-26T18:21:09
null
https://github.com/huggingface/datasets/issues/4694
null
cyk1337
false
[ "Hi ! According to https://huggingface.co/docs/datasets/use_with_pytorch#stream-data you can use the pytorch DataLoader with `num_workers>0` to distribute the shards across your workers (it uses `torch.utils.data.get_worker_info()` to get the worker ID and select the right subsets of shards to use)\r\n\r\n<s> EDIT: here is a code example </s>\r\n```python\r\n# ds = ds.with_format(\"torch\")\r\n# dataloader = DataLoader(ds, num_workers=num_workers)\r\n```\r\n\r\nEDIT: `with_format(\"torch\")` is not required, now you can just do\r\n```python\r\ndataloader = DataLoader(ds, num_workers=num_workers)\r\n```", "@cyk1337 does streaming datasets with multi-gpu works for you? I am testing on one node with multiple gpus, but this is freezing, https://github.com/huggingface/datasets/issues/5123 \r\nIn case you could make this work, could you share with me your data-loading codes?\r\nthank you", "+1", "This has been implemented in `datasets` 2.8:\r\n```python\r\nfrom datasets.distributed import split_dataset_by_node\r\n\r\nds = split_dataset_by_node(ds, rank=rank, world_size=world_size)\r\n```\r\n\r\ndocs: https://huggingface.co/docs/datasets/use_with_pytorch#distributed", "i'm having hanging issues with this when using DDP and allocating the datasets with `split_dataset_by_node` 🤔\r\n\r\n--- \r\n### edit\r\nI don't want to pollute this thread, but for the sake of following up, I observed hanging close to the final iteration of the dataloader. I think this was happening on the final shard. First, I removed the final shard and things worked. Then (including all shards), I reordered the list of shards: `load_dataset('json', data_files=reordered, streaming=True)` and no hang. \r\n\r\nI won't open an issue yet bc I am not quite sure about this observation.", "@wconnell would you mind opening a different bug issue and giving more details?\r\nhttps://github.com/huggingface/datasets/issues/new?assignees=&labels=&template=bug-report.yml\r\n\r\nThanks." ]
1,306,788,322
4,693
update `samsum` script
closed
2022-07-16T11:53:05
2022-09-23T11:40:11
2022-09-23T11:37:57
https://github.com/huggingface/datasets/pull/4693
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4693", "html_url": "https://github.com/huggingface/datasets/pull/4693", "diff_url": "https://github.com/huggingface/datasets/pull/4693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4693.patch", "merged_at": null }
bhavitvyamalik
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "We are closing PRs to dataset scripts because we are moving them to the Hub.\r\n\r\nThanks anyway.\r\n\r\n" ]
1,306,609,680
4,692
Unable to cast a column with `Image()` by using the `cast_column()` feature
closed
2022-07-15T22:56:03
2022-07-19T13:36:24
2022-07-19T13:36:24
https://github.com/huggingface/datasets/issues/4692
null
skrishnan99
false
[ "Hi, thanks for reporting! A PR (https://github.com/huggingface/datasets/pull/4614) has already been opened to address this issue." ]
1,306,389,656
4,691
Dataset Viewer issue for rajistics/indian_food_images
closed
2022-07-15T19:03:15
2022-07-18T15:02:03
2022-07-18T15:02:03
https://github.com/huggingface/datasets/issues/4691
null
rajshah4
false
[ "Hi, thanks for reporting. I triggered a refresh of the preview for this dataset, and it works now. I'm not sure what occurred.\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-07-18 à 11 01 52\" src=\"https://user-images.githubusercontent.com/1676121/179541327-f62ecd5e-a18a-4d91-b316-9e2ebde77a28.png\">\r\n\r\n" ]
1,306,321,975
4,690
Refactor base extractors
closed
2022-07-15T17:47:48
2022-07-18T08:46:56
2022-07-18T08:34:49
https://github.com/huggingface/datasets/pull/4690
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4690", "html_url": "https://github.com/huggingface/datasets/pull/4690", "diff_url": "https://github.com/huggingface/datasets/pull/4690.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4690.patch", "merged_at": "2022-07-18T08:34:49" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,306,230,203
4,689
Test extractors for all compression formats
closed
2022-07-15T16:29:55
2022-07-15T17:47:02
2022-07-15T17:35:24
https://github.com/huggingface/datasets/pull/4689
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4689", "html_url": "https://github.com/huggingface/datasets/pull/4689", "diff_url": "https://github.com/huggingface/datasets/pull/4689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4689.patch", "merged_at": "2022-07-15T17:35:24" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,306,100,488
4,688
Skip test_extractor only for zstd param if zstandard not installed
closed
2022-07-15T14:23:47
2022-07-15T15:27:53
2022-07-15T15:15:24
https://github.com/huggingface/datasets/pull/4688
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4688", "html_url": "https://github.com/huggingface/datasets/pull/4688", "diff_url": "https://github.com/huggingface/datasets/pull/4688.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4688.patch", "merged_at": "2022-07-15T15:15:24" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,306,021,415
4,687
Trigger CI also on push to main
closed
2022-07-15T13:11:29
2022-07-15T13:47:21
2022-07-15T13:35:23
https://github.com/huggingface/datasets/pull/4687
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4687", "html_url": "https://github.com/huggingface/datasets/pull/4687", "diff_url": "https://github.com/huggingface/datasets/pull/4687.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4687.patch", "merged_at": "2022-07-15T13:35:23" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,305,974,924
4,686
Align logging with Transformers (again)
closed
2022-07-15T12:24:29
2023-09-24T10:05:34
2023-07-11T18:29:27
https://github.com/huggingface/datasets/pull/4686
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4686", "html_url": "https://github.com/huggingface/datasets/pull/4686", "diff_url": "https://github.com/huggingface/datasets/pull/4686.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4686.patch", "merged_at": null }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4686). All of your documentation changes will be reflected on that endpoint.", "I wasn't aware of https://github.com/huggingface/datasets/pull/1845 before opening this PR. This issue seems much more complex now ..." ]
1,305,861,708
4,685
Fix mock fsspec
closed
2022-07-15T10:23:12
2022-07-15T13:05:03
2022-07-15T12:52:40
https://github.com/huggingface/datasets/pull/4685
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4685", "html_url": "https://github.com/huggingface/datasets/pull/4685", "diff_url": "https://github.com/huggingface/datasets/pull/4685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4685.patch", "merged_at": "2022-07-15T12:52:40" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,305,554,654
4,684
How to assign new values to Dataset?
closed
2022-07-15T04:17:57
2023-03-20T15:50:41
2022-10-10T11:53:38
https://github.com/huggingface/datasets/issues/4684
null
beyondguo
false
[ "Hi! One option is use `map` with a function that overwrites the labels (`dset = dset.map(lamba _: {\"label\": 0}, features=dset.features`)). Or you can use the `remove_column` + `add_column` combination (`dset = dset.remove_columns(\"label\").add_column(\"label\", [0]*len(data)).cast(dset.features)`, but note that this approach creates an in-memory table for the added column instead of writing to disk, which could be problematic for large datasets.", "Hi! I tried your proposed solution, but it does not solve my problem unfortunately. I am working with a set of protein sequences that have been tokenized with ESM, but some sequences are longer than `max_length`, they have been truncated in the tokenization. So now I want to truncate my labels as well, but that does not work with a mapping (e.g. `dset.map` as you suggested). Specifically, what I did was the following:\r\n\r\n```\r\ndef postprocess_tokenize(tokenized_data):\r\n \"\"\"\r\n adjust label lengths if they dont match.\r\n \"\"\"\r\n if len(tokenized_data['input_ids']) < len(tokenized_data['labels']):\r\n new_labels = tokenized_data['labels'][:len(tokenized_data['input_ids'])]\r\n tokenized_data[\"labels\"] = new_labels\r\n return tokenized_data\r\n\r\ntokenized_data = tokenized_data.map(postprocess_tokenize, batched=True) # this does not adjust the labels...\r\n```\r\n\r\nAny tips on how to do this properly?\r\n\r\nMore generally, I am wondering why the DataCollator supports padding but does not support truncation? Seems odd to me.\r\n\r\nThanks in advance!" ]
1,305,443,253
4,683
Update create dataset card docs
closed
2022-07-15T00:41:29
2022-07-18T17:26:00
2022-07-18T13:24:10
https://github.com/huggingface/datasets/pull/4683
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4683", "html_url": "https://github.com/huggingface/datasets/pull/4683", "diff_url": "https://github.com/huggingface/datasets/pull/4683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4683.patch", "merged_at": "2022-07-18T13:24:10" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,304,788,215
4,682
weird issue/bug with columns (dataset iterable/stream mode)
open
2022-07-14T13:26:47
2022-07-14T13:26:47
null
https://github.com/huggingface/datasets/issues/4682
null
eunseojo
false
[]
1,304,617,484
4,681
IndexError when loading ImageFolder
closed
2022-07-14T10:57:55
2022-07-25T12:37:54
2022-07-25T12:37:54
https://github.com/huggingface/datasets/issues/4681
null
johko
false
[ "Hi, thanks for reporting! If there are no examples in ImageFolder, the `label` column is of type `ClassLabel(names=[])`, which leads to an error in [this line](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/arrow_writer.py#L387) as `asdict(info)` calls `Features({..., \"label\": {'num_classes': 0, 'names': [], 'id': None, '_type': 'ClassLabel'}})`, which then calls `require_decoding` [here](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/features/features.py#L1516) on the dict value it does not expect.\r\n\r\nI see two ways to fix this:\r\n* custom `asdict` where `dict_factory` is also applied on the `dict` object itself besides dataclasses (the built-in implementation calls `type(dict_obj)` - this means we also need to fix `Features.to_dict` btw) \r\n* implement `DatasetInfo.to_dict` (though adding `to_dict` to a data class is a bit weird IMO)\r\n\r\n@lhoestq Which one of these approaches do you like more?\r\n", "Small pref for the first option, it feels weird to know that `Features()` can be called with a dictionary of types defined as dictionaries instead of type instances." ]
1,304,534,770
4,680
Dataset Viewer issue for codeparrot/xlcost-text-to-code
closed
2022-07-14T09:45:50
2022-07-18T16:37:00
2022-07-18T16:04:36
https://github.com/huggingface/datasets/issues/4680
null
loubnabnl
false
[ "There seems to be an issue with the `C++-snippet-level` config:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"codeparrot/xlcost-text-to-code\", \"C++-snippet-level\")\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 352, in get_dataset_config_info\r\n info.splits = {\r\nTypeError: 'NoneType' object is not iterable\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nI remove the dataset-viewer tag since it's not directly related.\r\n\r\nPinging @huggingface/datasets ", "Thanks I found that this subset wasn't properly defined the the config, I fixed it. Now I can see the subsets but I get this error for the viewer\r\n````\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: The split cache is empty.\r\n```", "Yes, the cache is being refreshed, hopefully, it will work in some minutes for all the splits. Some are already here:\r\n\r\nhttps://huggingface.co/datasets/codeparrot/xlcost-text-to-code/viewer/Python-snippet-level/train\r\n\r\n<img width=\"1533\" alt=\"Capture d’écran 2022-07-18 à 12 04 06\" src=\"https://user-images.githubusercontent.com/1676121/179553933-64d874fa-ada9-4b82-900e-082619523c20.png\">\r\n", "I think all the splits are working as expected now", "Perfect, thank you!" ]
1,303,980,648
4,679
Added method to remove excess nesting in a DatasetDict
closed
2022-07-13T21:49:37
2022-07-21T15:55:26
2022-07-21T10:55:02
https://github.com/huggingface/datasets/pull/4679
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4679", "html_url": "https://github.com/huggingface/datasets/pull/4679", "diff_url": "https://github.com/huggingface/datasets/pull/4679.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4679.patch", "merged_at": null }
CakeCrusher
true
[ "Hi ! I think the issue you linked is closed and suggests to use `remove_columns`.\r\n\r\nMoreover if you end up with a dataset with an unnecessarily nested data, please modify your processing functions to not output nested data, or use `map(..., batched=True)` if you function take batches as input", "Hi @lhoestq , you are right about the issues this pull has steered beyond that issue. I created this [colab notebook](https://colab.research.google.com/drive/16aLu6QrDSV_aUYRdpufl5E4iS08qkUGj?usp=sharing) to present the error. I tried using batch and that won't resolve it either. I'm looking into that error right now.", "I think you just need to pass one example at a time to your tokenizer, this way you don't end up with nested data:\r\n```python\r\n\r\ndef preprocessFunction(row):\r\n collatedContext = tokenizer.eos_token.join([row[\"context\"+str(i+1)] for i in range(int(AMT_OF_CONTEXT))])\r\n response = row[\"response\"]\r\n tokenizedContext = tokenizer(\r\n collatedContext, max_length=max_context_length, truncation=True # don't pass as a list here\r\n )\r\n with tokenizer.as_target_tokenizer():\r\n tokenized_response = tokenizer(\r\n response, max_length=max_response_length, truncation=True # don't pass a a list here\r\n )\r\n tokenizedContext[\"labels\"] = tokenized_response[\"input_ids\"]\r\n return tokenizedContext\r\n```", "Yes that is correct, the purpose of this pull is to advise of a more general solution like with `def remove_excess_nesting(self)` or maybe automate the solution (stas00 advised not to automate it as it could \"not be backwards compatible\").", "I'm not sure I understand how having `remove_excess_nesting` would make more sense than just fixing the preprocessFunction to simply not return nested samples, can you elaborate ?", "Figuring out the issue can be a bit difficult to figure out. Only until I added batch does it make a little more sense with the error\r\n\r\n> sequence item 0: expected str instance, list found\r\n\r\nbut batch was never intended.\r\n\r\nWhen you run the colab you will notice that only until collating do you learn there is this error. So i figured it would be better to address it during at the `DatasetDict` level.\r\nI think it would be ideal if the user could be notified at the preprocess function.", "I'm not arguing that `remove_excess_nesting` is the right solution but what I aim to address is dealing with unnecessary nesting as early as possible.", "> When you run the colab you will notice that only until collating do you learn there is this error.\r\n\r\nI think users can just check the `dataset.features` and they would notice that the data are nested\r\n```python\r\n{\r\n 'input_ids': Sequence(Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), length=-1, id=None)\r\n ...\r\n}\r\n```\r\n\r\nSometime nested data are intentional, so you can't know in advance if it's a user's mistake or something planned.", "Yes, I understand, it could be intentional and only the collator has problems with it. So, it is not worth handling it any differently in any other non-erroneous data. \r\n\r\nThat being said do you think there is any use for the `remove_excess_nesting` method? Or maybe it should be applied in a different way? If not feel free to close this PR. ", "I think users can write it and use `map` themselves if needed, it is pretty straightforward to implement.\r\n\r\nI'm closing this PR if you don't mind, and thank you for the discussion :)", "No problem @lhoestq , thanks for walking me through it." ]
1,303,741,432
4,678
Cant pass streaming dataset to dataloader after take()
open
2022-07-13T17:34:18
2022-07-14T13:07:21
null
https://github.com/huggingface/datasets/issues/4678
null
zankner
false
[ "Hi! Calling `take` on an iterable/streamable dataset makes it not possible to shard the dataset, which in turn disables multi-process loading (attempts to split the workload over the shards), so to go past this limitation, you can either use single-process loading in `DataLoader` (`num_workers=None`) or fetch the first `50_000/batch_size` batches in the loop." ]
1,302,258,440
4,677
Random 400 Client Error when pushing dataset
closed
2022-07-12T15:56:44
2023-02-07T13:54:10
2023-02-07T13:54:10
https://github.com/huggingface/datasets/issues/4677
null
msis
false
[ "did you ever fix this? I'm experiencing the same", "I am having the same issue. Even the simple example from the documentation gives me the 400 Error\r\n\r\n\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"stevhliu/demo\")\r\n> dataset.push_to_hub(\"processed_demo\")\r\n\r\n\r\n`requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/REDACTED/commit/main (Request ID: e-tPnYTiCdB5KPmSL86dQ)`\r\n\r\nI \"fixed\" it by initializing a new virtual environment with only datasets==2.5.2 installed.\r\n\r\nThe workaround consists of saving to disk then loading from disk and pushing to hub but from the new clean virtual environment." ]
1,302,202,028
4,676
Dataset.map gets stuck on _cast_to_python_objects
closed
2022-07-12T15:09:58
2022-10-03T13:01:04
2022-10-03T13:01:03
https://github.com/huggingface/datasets/issues/4676
null
srobertjames
false
[ "Are you able to reproduce this? My example is small enough that it should be easy to try.", "Hi! Thanks for reporting and providing a reproducible example. Indeed, by default, `datasets` performs an expensive cast on the values returned by `map` to convert them to one of the types supported by PyArrow (the underlying storage format used by `datasets`). This cast is not needed on NumPy arrays as PyArrow supports them natively, so one way to make this transform faster is to add `return_tensors=\"np\"` to the tokenizer call. \r\n\r\nI think we should mention this in the docs (cc @stevhliu)", "I tested this tokenize function and indeed noticed a casting. However it seems to only concerns the `offset_mapping` field, which contains a list of tuples, that is converted to a list of lists. Since `pyarrow` also supports tuples, we actually don't need to convert the tuples to lists. \r\n\r\nI think this can be changed here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/src/datasets/features/features.py#L382-L383\r\n\r\n```diff\r\n- if isinstance(obj, list): \r\n+ if isinstance(obj, (list, tuple)): \r\n```\r\n\r\nand here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/ede72d3f9796339701ec59899c7c31d2427046fb/src/datasets/features/features.py#L386-L387\r\n\r\n```diff\r\n- return obj if isinstance(obj, list) else [], isinstance(obj, tuple)\r\n+ return obj, False\r\n```\r\n\r\n@srobertjames can you try applying these changes and let us know if it helps ? If so, feel free to open a Pull Request to contribute this improvement if you want :)", "Wow, adding `return_tensors=\"np\"` sped up my example by a **factor 17x** of and completely eliminated the casting! I'd recommend not only to document it, but to make that the default.\r\n\r\nThe code at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb does not specify `return_tensors=\"np\"` but yet avoids the casting penalty. How does it do that? (The ntbk seems to do `return_overflowing_tokens=True, return_offsets_mapping=True,`).\r\n\r\nAlso, surprisingly enough, using `return_tensors=\"pt\"` (which is my eventual application) yields this error:\r\n```\r\nTypeError: Provided `function` which is applied to all elements of table returns a `dict` of types \r\n[<class 'torch.Tensor'>, <class 'torch.Tensor'>, <class 'torch.Tensor'>, <class 'torch.Tensor'>]. \r\nWhen using `batched=True`, make sure provided `function` returns a `dict` of types like \r\n`(<class 'list'>, <class 'numpy.ndarray'>)`.\r\n```", "Setting the output to `\"np\"` makes the whole pipeline fast because it moves the data buffers from rust to python to arrow using zero-copy, and also because it does eliminate the casting completely ;)\r\n\r\nHave you had a chance to try eliminating the tuple casting using the trick above ?", "@lhoestq I just benchmarked the two edits to `features.py` above, and they appear to solve the problem, bringing my original example to within 20% the speed of the output `\"np\"` example. Nice!\r\n\r\nFor a pull request, do you suggest simply following https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md ?", "Cool ! Sure feel free to follow these instructions to open a PR :) thanks !", "#take", "Resolved via https://github.com/huggingface/datasets/pull/4993." ]
1,302,193,649
4,675
Unable to use dataset with PyTorch dataloader
open
2022-07-12T15:04:04
2022-07-14T14:17:46
null
https://github.com/huggingface/datasets/issues/4675
null
BlueskyFR
false
[ "Hi! `para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumPy arrays \r\n2. convert NumPy arrays to Torch tensors. \r\n\r\nThe 2nd step is problematic for your case as `datasets` attempts to convert the array of dictionaries to a PyTorch tensor. One way to fix this is to use the [preprocessing logic](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/translation/run_translation.py#L440-L458) from the Transformers translation script. And on our side, I think we can replace a NumPy array of dicts with a dict of NumPy array if the feature type is `Translation`/`TranslationVariableLanguages` (one array for each language) to get the official PyTorch error message for strings in such case." ]
1,301,294,844
4,674
Issue loading datasets -- pyarrow.lib has no attribute
closed
2022-07-11T22:10:44
2023-02-28T18:06:55
2023-02-28T18:06:55
https://github.com/huggingface/datasets/issues/4674
null
margotwagner
false
[ "Hi @margotwagner, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug: in an environment with datasets-2.3.2 and pyarrow-8.0.0, I can load the datasets without any problem:\r\n```python\r\n>>> ds = load_dataset(\"glue\", \"cola\")\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n8.0.0\r\n>>> from pyarrow.lib import IpcReadOptions\r\n>>> IpcReadOptions\r\npyarrow.lib.IpcReadOptions\r\n```\r\n\r\nI think you may have a problem in your Python environment: maybe you have also an old version of pyarrow that has precedence when importing it.\r\n\r\nCould you please check this (just after you tried to load the dataset and got the error)?\r\n```python\r\n>>> import pyarrow\r\n>>> pyarrow.__version__\r\n``` " ]
1,301,010,331
4,673
load_datasets on csv returns everything as a string
closed
2022-07-11T17:30:24
2024-11-05T03:55:10
2022-07-12T13:33:08
https://github.com/huggingface/datasets/issues/4673
null
courtneysprouse
false
[ "Hi @courtneysprouse, thanks for reporting.\r\n\r\nYes, you are right: by default the \"csv\" loader loads all columns as strings. \r\n\r\nYou could tweak this behavior by passing the `feature` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kind of casts, due to lacking of implementation in PyArrow. For example:\r\n```python\r\nimport datasets\r\n\r\nfeatures = datasets.Features(\r\n {\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(datasets.Value(\"int32\")),\r\n }\r\n)\r\n\r\nnew_conll = datasets.load_dataset(\"csv\", data_files=\"ner_conll.csv\", features=features)\r\n```\r\ngives `ArrowNotImplementedError` error:\r\n```\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from string to list using function cast_list\r\n```\r\n\r\nOn the other hand, if you just would like to save and afterwards load your dataset, you could use `save_to_disk` and `load_from_disk` instead. These functions preserve all data types.\r\n```python\r\n>>> orig_conll.save_to_disk(\"ner_conll\")\r\n\r\n>>> from datasets import load_from_disk\r\n\r\n>>> new_conll = load_from_disk(\"ner_conll\")\r\n>>> new_conll\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3454\r\n })\r\n})\r\n>>> new_conll[\"train\"][0]\r\n{'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n 'id': '0',\r\n 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0],\r\n 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n 'tokens': ['EU',\r\n 'rejects',\r\n 'German',\r\n 'call',\r\n 'to',\r\n 'boycott',\r\n 'British',\r\n 'lamb',\r\n '.']}\r\n>>> new_conll[\"train\"].features\r\n{'chunk_tags': Sequence(feature=ClassLabel(num_classes=23, names=['O', 'B-ADJP', 'I-ADJP', 'B-ADVP', 'I-ADVP', 'B-CONJP', 'I-CONJP', 'B-INTJ', 'I-INTJ', 'B-LST', 'I-LST', 'B-NP', 'I-NP', 'B-PP', 'I-PP', 'B-PRT', 'I-PRT', 'B-SBAR', 'I-SBAR', 'B-UCP', 'I-UCP', 'B-VP', 'I-VP'], id=None), length=-1, id=None),\r\n 'id': Value(dtype='string', id=None),\r\n 'ner_tags': Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], id=None), length=-1, id=None),\r\n 'pos_tags': Sequence(feature=ClassLabel(num_classes=47, names=['\"', \"''\", '#', '$', '(', ')', ',', '.', ':', '``', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB'], id=None), length=-1, id=None),\r\n 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\n```", "Hi @albertvillanova!\r\n\r\nThanks so much for your suggestions! That worked! ", "> Hi @courtneysprouse, thanks for reporting.\r\n> \r\n> Yes, you are right: by default the \"csv\" loader loads all columns as strings.\r\n> \r\n> You could tweak this behavior by passing the `feature` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kind of casts, due to lacking of implementation in PyArrow. 
For example:\r\n> \r\n> ```python\r\n> import datasets\r\n> \r\n> features = datasets.Features(\r\n> {\r\n> \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n> \"ner_tags\": datasets.Sequence(datasets.Value(\"int32\")),\r\n> }\r\n> )\r\n> \r\n> new_conll = datasets.load_dataset(\"csv\", data_files=\"ner_conll.csv\", features=features)\r\n> ```\r\n> \r\n> gives `ArrowNotImplementedError` error:\r\n> \r\n> ```\r\n> /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n> \r\n> ArrowNotImplementedError: Unsupported cast from string to list using function cast_list\r\n> ```\r\n> \r\n> On the other hand, if you just would like to save and afterwards load your dataset, you could use `save_to_disk` and `load_from_disk` instead. These functions preserve all data types.\r\n> \r\n> ```python\r\n> >>> orig_conll.save_to_disk(\"ner_conll\")\r\n> \r\n> >>> from datasets import load_from_disk\r\n> \r\n> >>> new_conll = load_from_disk(\"ner_conll\")\r\n> >>> new_conll\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n> num_rows: 14042\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n> num_rows: 3251\r\n> })\r\n> test: Dataset({\r\n> features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n> num_rows: 3454\r\n> })\r\n> })\r\n> >>> new_conll[\"train\"][0]\r\n> {'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n> 'id': '0',\r\n> 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0],\r\n> 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n> 'tokens': ['EU',\r\n> 'rejects',\r\n> 'German',\r\n> 'call',\r\n> 'to',\r\n> 'boycott',\r\n> 'British',\r\n> 'lamb',\r\n> '.']}\r\n> >>> new_conll[\"train\"].features\r\n> {'chunk_tags': Sequence(feature=ClassLabel(num_classes=23, names=['O', 'B-ADJP', 'I-ADJP', 'B-ADVP', 'I-ADVP', 'B-CONJP', 'I-CONJP', 'B-INTJ', 'I-INTJ', 'B-LST', 'I-LST', 'B-NP', 'I-NP', 'B-PP', 'I-PP', 'B-PRT', 'I-PRT', 'B-SBAR', 'I-SBAR', 'B-UCP', 'I-UCP', 'B-VP', 'I-VP'], id=None), length=-1, id=None),\r\n> 'id': Value(dtype='string', id=None),\r\n> 'ner_tags': Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], id=None), length=-1, id=None),\r\n> 'pos_tags': Sequence(feature=ClassLabel(num_classes=47, names=['\"', \"''\", '#', '$', '(', ')', ',', '.', ':', '``', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB'], id=None), length=-1, id=None),\r\n> 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\n> ```\r\n\r\nIt seems that by default, the 'csv' loader doesn’t load all columns as strings. When I load a column with numbers that start with 0, datasets removes the leading 0 and converts this column to an integer type. How can I set it to load all columns as strings?" ]
1,300,911,467
4,672
Support extract 7-zip compressed data files
closed
2022-07-11T15:56:51
2022-07-15T13:14:27
2022-07-15T13:02:07
https://github.com/huggingface/datasets/pull/4672
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4672", "html_url": "https://github.com/huggingface/datasets/pull/4672", "diff_url": "https://github.com/huggingface/datasets/pull/4672.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4672.patch", "merged_at": "2022-07-15T13:02:07" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Cool! Can you please remove `Fix #3541` from the description as this PR doesn't add support for streaming/`iter_archive`, so it only partially addresses the issue?\r\n\r\nSide note:\r\nI think we can use `libarchive` (`libarchive-c` is a Python package with the bindings) for streaming 7z archives. The only issue with this lib is that it's tricky to install on Windows/Mac." ]
1,300,385,909
4,671
Dataset Viewer issue for wmt16
closed
2022-07-11T08:34:11
2022-09-13T13:27:02
2022-09-08T08:16:06
https://github.com/huggingface/datasets/issues/4671
null
lewtun
false
[ "Thanks for reporting, @lewtun.\r\n\r\n~We can't load the dataset locally, so I think this is an issue with the loading script (not the viewer).~\r\n\r\n We are investigating...", "Recently, there was a merged PR related to this dataset:\r\n- #4554\r\n\r\nWe are looking at this...", "Indeed, the above mentioned PR fixed the loading script (it was not working before).\r\n\r\nI'm forcing the refresh of the Viewer.", "Please note that the above mentioned PR also made an enhancement in the `datasets` library, required by this loading script. This enhancement will only be available to the Viewer once we make our next release.", "OK, it's working now.\r\n\r\nhttps://huggingface.co/datasets/wmt16/viewer/ro-en/test\r\n\r\n<img width=\"1434\" alt=\"Capture d’écran 2022-09-08 à 10 15 55\" src=\"https://user-images.githubusercontent.com/1676121/189071665-17d2d149-9b22-42bf-93ac-1a966c3f637a.png\">\r\n", "Thank you @severo !!" ]
1,299,984,246
4,670
Can't extract files from `.7z` zipfile using `download_and_extract`
closed
2022-07-10T18:16:49
2022-07-15T13:02:07
2022-07-15T13:02:07
https://github.com/huggingface/datasets/issues/4670
null
bhavitvyamalik
false
[ "Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce52/datasets/samsum/samsum.py#L106-L110\r\n", "Related to this issue: https://github.com/huggingface/datasets/issues/3541", "Sure, let me look into and check what can be done. Will keep you guys updated here!", "Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesn’t work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. Is it fine to use external library here?", "Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. " ]
1,299,848,003
4,669
loading oscar-corpus/OSCAR-2201 raises an error
closed
2022-07-10T07:09:30
2022-07-11T09:27:49
2022-07-11T09:27:49
https://github.com/huggingface/datasets/issues/4669
null
vitalyshalumov
false
[ "I had to use the appropriate token for use_auth_token. Thank you." ]
1,299,735,893
4,668
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
closed
2022-07-09T18:04:13
2022-07-11T07:47:47
2022-07-11T07:47:47
https://github.com/huggingface/datasets/issues/4668
null
ghost
false
[ "It seems like a private dataset. The viewer is currently not supported on the private datasets." ]
1,299,735,703
4,667
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
closed
2022-07-09T18:03:15
2022-07-11T07:47:15
2022-07-11T07:47:15
https://github.com/huggingface/datasets/issues/4667
null
ghost
false
[]
1,299,732,238
4,666
Issues with concatenating datasets
closed
2022-07-09T17:45:14
2022-07-12T17:16:15
2022-07-12T17:16:14
https://github.com/huggingface/datasets/issues/4666
null
ChenghaoMou
false
[ "Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow files), which would lead to the concatenation error as PyArrow does not support this sort of type promotion. This can be fixed as follows:\r\n```python\r\ntemp = load_dataset(\"json\", data_files={\"train\": \"output.jsonl\"}, features=squad[\"train\"].features)\r\n``` ", "That makes sense. I totally missed the `int64` and `int32` part. Thanks for pointing it out! Will close this issue for now." ]
1,299,652,638
4,665
Unable to create dataset having Python dataset script only
closed
2022-07-09T11:45:46
2022-07-11T07:10:09
2022-07-11T07:10:01
https://github.com/huggingface/datasets/issues/4665
null
aleSuglia
false
[ "Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions" ]
1,299,571,212
4,664
Add stanford dog dataset
closed
2022-07-09T04:46:07
2022-07-15T13:30:32
2022-07-15T13:15:42
https://github.com/huggingface/datasets/pull/4664
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4664", "html_url": "https://github.com/huggingface/datasets/pull/4664", "diff_url": "https://github.com/huggingface/datasets/pull/4664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4664.patch", "merged_at": null }
khushmeeet
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @khushmeeet, thanks for your contribution.\r\n\r\nBut wouldn't it be better to add this dataset to the Hub? \r\n- https://huggingface.co/docs/datasets/share\r\n- https://huggingface.co/docs/datasets/dataset_script", "Hi @albertvillanova \r\n\r\nDataset is added to Hub - https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset", "Great, so I guess we can close this issue, as the dataset is already available on the Hub.", "OK I read the discussion on:\r\n- #4504\r\n\r\nCurrently, priority is adding datasets to the Hub, not here on GitHub.\r\n\r\nIf you would like to contribute the loading script and all the metadata you generated (README + JSON files), you could:\r\n- Either make a PR to the existing dataset on the Hub\r\n- Create a new dataset on the Hub:\r\n - Either under your personal namespace\r\n - or even more professionally, under the namespace `stanfordSVL` (Stanford Vision and Learning Lab: https://svl.stanford.edu/)\r\n\r\nYou can use the Community tab to ping us if you need help or have any questions." ]
1,299,298,693
4,663
Add text decorators
closed
2022-07-08T17:51:48
2022-07-18T18:33:14
2022-07-18T18:20:49
https://github.com/huggingface/datasets/pull/4663
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4663", "html_url": "https://github.com/huggingface/datasets/pull/4663", "diff_url": "https://github.com/huggingface/datasets/pull/4663.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4663.patch", "merged_at": "2022-07-18T18:20:49" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,298,845,369
4,662
Fix: conll2003 - fix empty example
closed
2022-07-08T10:49:13
2022-07-08T14:14:53
2022-07-08T14:02:42
https://github.com/huggingface/datasets/pull/4662
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4662", "html_url": "https://github.com/huggingface/datasets/pull/4662", "diff_url": "https://github.com/huggingface/datasets/pull/4662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4662.patch", "merged_at": "2022-07-08T14:02:42" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,298,374,944
4,661
Concurrency bug when using same cache among several jobs
open
2022-07-08T01:58:11
2025-04-10T13:21:23
null
https://github.com/huggingface/datasets/issues/4661
null
ioana-blue
false
[ "I can confirm that if I run one job first that processes the dataset, then I can run any jobs in parallel with no problem (no write-concurrency anymore...). ", "Hi! That's weird. It seems like the error points to the `mkstemp` function, but the official docs state the following:\r\n```\r\nThere are no race conditions in the file’s creation, assuming that the platform properly implements the [os.O_EXCL](https://docs.python.org/3/library/os.html#os.O_EXCL) flag for [os.open()](https://docs.python.org/3/library/os.html#os.open)\r\n```\r\nSo this could mean your platform doesn't support that flag.\r\n\r\n~~Can you please check if wrapping the temp file creation (the line `tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)` in `_map_single`) with the `multiprocess.Lock` fixes the issue?~~\r\nPerhaps wrapping the temp file creation in `_map_single` with `filelock` could work:\r\n```python\r\nwith FileLock(lock_path):\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n```\r\nCan you please check if that helps?", "**Edit**: while writing my comment I took the time the read previous comments. By wrapping `dl_manager.download_and_extract` with a **FileLock** it works like a charm ! Thx @mariosasko \n\nOS : MacOS 14.7.4 (intel)\nPython : 3.12\ndatasets : 3.5.0\n\nAdding to this, I had a similar problem when 2 process concurrently load different subsets of the same dataset that needs to be extracted. The use case is similar as OP : running a benchmark.\n\nThe dataloader needs to download and extract a zip file, with ~20 ~100Mo files.\n\nWhen 2 processes executes `load_dataset(\"TESTLOAD.py\", name=\"a\", trust_remote_code=True)` at the same time it is fine (there must some lock on `(\"TESTLOAD.py\", \"a\")`).\nBut when running `load_dataset(\"TESTLOAD.py\", name=\"a\", trust_remote_code=True)` and `load_dataset(\"TESTLOAD.py\", name=\"b\", trust_remote_code=True)` (cf. `test.py`.\n\nHere is what I managed to understand as a table using the scripts below. 
Step 3 is attested by the `os.listdir` in `TESTLOAD.py`.\n\n| steps | process a | process b |\n|---|---|---|\n| 1 | download | wait |\n| 2 | end of download | wait |\n| 3 | extracting | FAIL to open a not yet extracted file |\n| 4 | end of extraction | KO |\n| 5 | OK | KO |\n\n<details>\n\n<summary>TESTLOAD.py (dataloader)</summary>\n\n```python\nimport os\nimport datasets\n# from filelock import FileLock\n_URL = \"/Users/ygallina/Documents/dr-benchmark/GSC-v1.1_big.zip\"\n\nclass TESTLOAD(datasets.GeneratorBasedBuilder):\n\n\tBUILDER_CONFIGS = [\n\t\tdatasets.BuilderConfig(name='a'),\n\t\tdatasets.BuilderConfig(name='b')\n\t]\n\n\tdef _info(self):\n\t\tfeatures = datasets.Features({\n\t\t\t\"id\": datasets.Value(\"string\"),\n\t\t})\n\t\treturn datasets.DatasetInfo(features=features)\n\n\tdef _split_generators(self, dl_manager):\n\t\tc2p = {'a': \"Medline_GSC_en_fr_man.xml\", 'b': \"Medline_GSC_en_es_man.xml\"}\n\t\t# with FileLock(\"path/to/tmp.lock\"):\n\t\tdata_dir = dl_manager.download_and_extract(_URL)\n\n\t\tprint(os.listdir(data_dir))\n\n\t\tdata_dir = data_dir + \"/\" + c2p[self.config.name]\n\t\treturn [datasets.SplitGenerator(\n\t\t\tname=datasets.Split.TRAIN,\n\t\t\tgen_kwargs={\"data_dir\": data_dir}\n\t\t)]\n\n\tdef _generate_examples(self, data_dir):\n\t\tf = open(data_dir)\n\t\tf.close()\n\t\tyield 0, {'id': data_dir}\n```\n\n</details>\n\n\n<details>\n\n<summary>test.py (main file)</summary>\n\nCommands to execute the test\n```bash\nrm -rf \"test\"\nHF_HOME=\"test\" python test.py\n```\n\n```python\nimport os\nfrom datasets import load_dataset\n\n# Forking to make sure access will be concurrent\n# Waiting to fork after imports (because it takes a while)\nchild = os.fork()\n\nif child:\n print('child')\n ds = load_dataset('TESTLOAD.py', 'a', trust_remote_code=True)\n print(f\"child, {ds['train'][0]}\")\nelse:\n print('parent')\n ds = load_dataset('TESTLOAD.py', 'b', trust_remote_code=True)\n print(f\"parent, {ds['train'][0]}\")\n```\n\n</details>" ]
1,297,128,387
4,660
Fix _resolve_single_pattern_locally on Windows with multiple drives
closed
2022-07-07T09:57:30
2022-07-07T17:03:36
2022-07-07T16:52:07
https://github.com/huggingface/datasets/pull/4660
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4660", "html_url": "https://github.com/huggingface/datasets/pull/4660", "diff_url": "https://github.com/huggingface/datasets/pull/4660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4660.patch", "merged_at": "2022-07-07T16:52:07" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Good catch ! Sorry I forgot (again) about windows paths when writing this x)" ]
1,297,094,140
4,659
Transfer CI to GitHub Actions
closed
2022-07-07T09:29:47
2022-07-12T11:30:20
2022-07-12T11:18:25
https://github.com/huggingface/datasets/pull/4659
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4659", "html_url": "https://github.com/huggingface/datasets/pull/4659", "diff_url": "https://github.com/huggingface/datasets/pull/4659.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4659.patch", "merged_at": "2022-07-12T11:18:25" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks a lot @albertvillanova ! I hope we're finally done with flakiness on windows ^^\r\n\r\nAlso thanks for paying extra attention to billing and avoiding running unnecessary jobs. Though for certain aspects (see my comments), I think it's worth having the extra jobs to make our life easier", "~@lhoestq I think you forgot to add your comments?~\r\n\r\nI had missed it among all the other comments...", "@lhoestq, I'm specially enthusiastic with the fail-fast policy: it was in my TODO list for a long time. I really think it will have a positive impact (I would love to know the spent time saving it will enable, besides the carbon footprint reduction). :wink: \r\n\r\nSo yes, as you said above, let's give it a try at least. If we encounter any inconvenience, we can easily disable it.\r\n\r\nQuestion: I guess I have to disable CircleCI CI before merging this PR?\r\n\r\n" ]
1,297,001,390
4,658
Transfer CI tests to GitHub Actions
closed
2022-07-07T08:10:50
2022-07-12T11:18:25
2022-07-12T11:18:25
https://github.com/huggingface/datasets/issues/4658
null
albertvillanova
false
[]
1,296,743,133
4,657
Add SQuAD2.0 Dataset
closed
2022-07-07T03:19:36
2022-07-12T16:14:52
2022-07-12T16:14:52
https://github.com/huggingface/datasets/issues/4657
null
omarespejel
false
[ "Hey, It's already present [here](https://huggingface.co/datasets/squad_v2) ", "Hi! This dataset is indeed already available on the Hub. Closing." ]
1,296,740,266
4,656
Add Amazon-QA Dataset
closed
2022-07-07T03:15:11
2022-07-14T02:20:12
2022-07-14T02:20:12
https://github.com/huggingface/datasets/issues/4656
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/Amazon-QA)." ]
1,296,720,896
4,655
Simple Wikipedia
closed
2022-07-07T02:51:26
2022-07-14T02:16:33
2022-07-14T02:16:33
https://github.com/huggingface/datasets/issues/4655
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/simple-wiki)." ]
1,296,716,119
4,654
Add Quora Question Triplets Dataset
closed
2022-07-07T02:43:42
2022-07-14T02:13:50
2022-07-14T02:13:50
https://github.com/huggingface/datasets/issues/4654
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/QQP_triplets)." ]
1,296,702,834
4,653
Add Altlex dataset
closed
2022-07-07T02:23:02
2022-07-14T02:12:39
2022-07-14T02:12:39
https://github.com/huggingface/datasets/issues/4653
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)." ]
1,296,697,498
4,652
Add Sentence Compression Dataset
closed
2022-07-07T02:13:46
2022-07-14T02:11:48
2022-07-14T02:11:48
https://github.com/huggingface/datasets/issues/4652
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/sentence-compression)." ]
1,296,689,414
4,651
Add Flickr 30k Dataset
closed
2022-07-07T01:59:08
2022-07-14T02:09:45
2022-07-14T02:09:45
https://github.com/huggingface/datasets/issues/4651
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/flickr30k-captions)." ]
1,296,680,037
4,650
Add SPECTER dataset
open
2022-07-07T01:41:32
2022-07-14T02:07:49
null
https://github.com/huggingface/datasets/issues/4650
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)" ]
1,296,673,712
4,649
Add PAQ dataset
closed
2022-07-07T01:29:42
2022-07-14T02:06:27
2022-07-14T02:06:27
https://github.com/huggingface/datasets/issues/4649
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/PAQ_pairs)" ]
1,296,659,335
4,648
Add WikiAnswers dataset
closed
2022-07-07T01:06:37
2022-07-14T02:03:40
2022-07-14T02:03:40
https://github.com/huggingface/datasets/issues/4648
null
omarespejel
false
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/WikiAnswers)" ]
1,296,311,270
4,647
Add Reddit dataset
open
2022-07-06T19:49:18
2022-07-06T19:49:18
null
https://github.com/huggingface/datasets/issues/4647
null
omarespejel
false
[]
1,296,027,785
4,645
Set HF_SCRIPTS_VERSION to main
closed
2022-07-06T15:43:21
2022-07-06T15:56:21
2022-07-06T15:45:05
https://github.com/huggingface/datasets/pull/4645
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4645", "html_url": "https://github.com/huggingface/datasets/pull/4645", "diff_url": "https://github.com/huggingface/datasets/pull/4645.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4645.patch", "merged_at": "2022-07-06T15:45:05" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,296,018,052
4,644
[Minor fix] Typo correction
closed
2022-07-06T15:37:02
2022-07-06T15:56:32
2022-07-06T15:45:16
https://github.com/huggingface/datasets/pull/4644
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4644", "html_url": "https://github.com/huggingface/datasets/pull/4644", "diff_url": "https://github.com/huggingface/datasets/pull/4644.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4644.patch", "merged_at": "2022-07-06T15:45:16" }
cakiki
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]