| id (int64, 953M to 3.35B) | number (int64, 2.72k to 7.75k) | title (string, length 1 to 290) | state (string, 2 classes) | created_at (timestamp[s], 2021-07-26 12:21:17 to 2025-08-23 00:18:43) | updated_at (timestamp[s], 2021-07-26 13:27:59 to 2025-08-23 12:34:39) | closed_at (timestamp[s], 2021-07-26 13:27:59 to 2025-08-20 16:35:55, nullable) | html_url (string, length 49 to 51) | pull_request (dict) | user_login (string, length 3 to 26) | is_pull_request (bool, 2 classes) | comments (list, length 0 to 30) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,422,172,080
| 5,159
|
fsspec lock reset in multiprocessing
|
closed
| 2022-10-25T09:41:59
| 2022-11-03T20:51:15
| 2022-11-03T20:48:53
|
https://github.com/huggingface/datasets/pull/5159
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5159",
"html_url": "https://github.com/huggingface/datasets/pull/5159",
"diff_url": "https://github.com/huggingface/datasets/pull/5159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5159.patch",
"merged_at": "2022-11-03T20:48:53"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,422,059,287
| 5,158
|
Fix language and license tag names in all Hub datasets
|
closed
| 2022-10-25T08:19:29
| 2022-10-25T11:27:26
| 2022-10-25T10:42:19
|
https://github.com/huggingface/datasets/issues/5158
| null |
albertvillanova
| false
|
[
"There are currently 402 datasets with deprecated \"languages\" or \"licenses\".",
"hey @albertvillanova ,i would love to work on this issue if you like.",
"Hi @ayushthe1, thanks for your offer.\r\n\r\nBut as you can see, I self-assigned this issue.\r\n\r\nI have already fixed 200 out of the 402 datasets. My script is still running and fixing the rest.\r\n\r\nFor example: https://huggingface.co/datasets/fhamborg/news_sentiment_newsmtsc/discussions/2/files",
"Thanks for your time. Will try next time. 😇",
"@ayushthe1 feel free to take one of the non-assigned open issues: https://github.com/huggingface/datasets/issues",
"This is done."
] |
1,421,703,577
| 5,157
|
Consistent caching between python and jupyter
|
closed
| 2022-10-25T01:34:33
| 2022-11-02T15:43:22
| 2022-11-02T15:43:22
|
https://github.com/huggingface/datasets/issues/5157
| null |
gpucce
| false
|
[
"Hi ! Maybe it's possible to have a consistent hash for a function defined in `__main__` and a function define in a notebook.\r\n\r\nHowever for functions imported from another location, pickle uses the location to identify the code, so in that case we can't do much I believe.\r\n\r\nWould it be ok for you if we only try to do this for functions in `__main__` / jupyter ?\r\n\r\nIf you'd like to contribute, you can read this part of the code and let me know if you have questions:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/py_utils.py#L617-L643\r\n\r\nI think the key here would be to also ignore the \"co_filename\" of functions defined in `__main__`",
"Seems like a good solution, I will start a PR and see if I understood the changes needed. Thanks!"
] |
1,421,667,125
| 5,156
|
Unable to download dataset using Azure Data Lake Gen 2
|
closed
| 2022-10-25T00:43:18
| 2024-02-15T09:48:36
| 2022-11-17T23:37:08
|
https://github.com/huggingface/datasets/issues/5156
| null |
clarissesimoes
| false
|
[
"Hi ! From the `adlfs` docs, there are two filesystems you can use:\r\n> To use the Gen1 filesystem:\r\n> - known_implementations[‘adl’] = {‘class’: ‘adlfs.AzureDatalakeFileSystem’}\r\n> \r\n> To use the Gen2 filesystem:\r\n> - known_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n\r\nIf I'm not mistaken you're using the second one - so you should use `abfs://` instead of `adl://`, and also run this at the beginning of your script:\r\n```python\r\nfrom fsspec.registry import known_implementations\r\nknown_implementations['abfs'] = {'class': 'adlfs.AzureDatalakeFileSystem'}\r\n```\r\n\r\n",
"Thank you @lhoestq . Great call.\r\nUsing the default class from `known_implementations` dict solved my problem\r\n```\r\nknown_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n```\r\nI'm closing this issue.",
"> Thank you @lhoestq . Great call. Using the default class from `known_implementations` dict solved my problem\r\n> \r\n> ```\r\n> known_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n> ```\r\n> \r\n> I'm closing this issue.\r\n\r\nHi so here `Saving serialized datasets\r\n\r\nAfter you have processed your dataset, you can save it to your cloud storage with [Dataset.save_to_disk()](https://huggingface.co/docs/datasets/v2.17.0/en/package_reference/main_classes#datasets.Dataset.save_to_disk):` what is the encoded dataset I have failed to save it ",
"Uploading failed ? Did you get an error message ?"
] |
1,421,278,748
| 5,155
|
TextConfig: added "errors"
|
closed
| 2022-10-24T18:56:52
| 2022-11-03T13:38:13
| 2022-11-03T13:35:35
|
https://github.com/huggingface/datasets/pull/5155
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5155",
"html_url": "https://github.com/huggingface/datasets/pull/5155",
"diff_url": "https://github.com/huggingface/datasets/pull/5155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5155.patch",
"merged_at": "2022-11-03T13:35:35"
}
|
NightMachinery
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)",
"[**@lhoestq**](https://github.com/lhoestq) commented on [Oct 27, 2022, 4:08 PM GMT+3:30](https://github.com/huggingface/datasets/pull/5155#issuecomment-1293464680 \"2022-10-27T12:38:04Z - Replied by Github Reply Comments\"):\r\n> Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)\r\n\r\nI ran this and force pushed the changes."
] |
1,421,161,992
| 5,154
|
Test latest fsspec in CI
|
closed
| 2022-10-24T17:18:13
| 2023-09-24T10:06:06
| 2022-10-25T09:30:45
|
https://github.com/huggingface/datasets/pull/5154
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5154",
"html_url": "https://github.com/huggingface/datasets/pull/5154",
"diff_url": "https://github.com/huggingface/datasets/pull/5154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5154.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"actually the latest fsspec is already installed "
] |
1,420,833,457
| 5,153
|
default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir
|
closed
| 2022-10-24T13:28:18
| 2022-11-15T16:31:10
| 2022-11-15T16:31:09
|
https://github.com/huggingface/datasets/issues/5153
| null |
polinaeterna
| false
|
[
"Makes sense! For the last structure, we could count the path segments (delimited by \"/\" for URLs and `os.sep` for local paths) to ensure all inferred labels are on the same level. Otherwise, I think it's safe to assume they are meaningless and ignore them.\r\n"
] |
1,420,808,919
| 5,152
|
refactor FolderBasedBuilder and Image/AudioFolder tests
|
open
| 2022-10-24T13:11:52
| 2022-10-24T13:11:52
| null |
https://github.com/huggingface/datasets/issues/5152
| null |
polinaeterna
| false
|
[] |
1,420,791,163
| 5,151
|
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
|
open
| 2022-10-24T12:59:18
| 2022-11-04T14:55:20
| null |
https://github.com/huggingface/datasets/issues/5151
| null |
polinaeterna
| false
|
[
"also asked in https://discuss.huggingface.co/t/create-multiple-dataset-configs-with-push-to-hub-method/25480"
] |
1,420,684,999
| 5,150
|
Problems after upgrading to 2.6.1
|
open
| 2022-10-24T11:32:36
| 2024-05-12T07:40:03
| null |
https://github.com/huggingface/datasets/issues/5150
| null |
pietrolesci
| false
|
[
"Hi! I can't reproduce the error following these steps. Can you please provide a reproducible example?",
"I faced the same issue:\r\n\r\n### Repro\r\n```\r\n!pip install datasets==2.6.1\r\nimport datasets as Dataset\r\ndataset = Dataset.from_pandas(dataframe)\r\ndataset.save_to_disk(local)\r\n\r\n!pip install datasets==2.5.2\r\nimport datasets as Dataset\r\ndataset = Dataset.load_from_disk(local)\r\n```\r\n\r\n",
"@Lokiiiiii And what are the contents of the \"dataframe\" in your example?",
"I bumped into the issue too. @Lokiiiiii thanks for steps. I \"solved\" if for now by `pip install datasets>=2.6.1` everywhere.",
"Hi all, \r\nI experienced the same issue. \r\nPlease note that the pull request is related to the IMDB example provided in the doc, and is a fix for that, in that context, to make sure that people can follow the doc example and have a working system. \r\nIt does not provide a fix for Datasets itself. ",
"im getting the same error.\r\n- using the base AWS HF container that uses a datasets <2.\r\n- updating the AWS HF container to use dataset 2.4\r\n",
"Same here, running on our SageMaker pipelines. It's only happening for some but not all of our saved Datasets.",
"I am also receiving this error on Sagemaker but not locally, I have noticed that this occurs when the `.dataset/` folder does not contain a single file like:\r\n\r\n`dataset.arrow`\r\n\r\nbut instead contains multiple files like:\r\n\r\n`data-00000-of-00002.arrow`\r\n`data-00001-of-00002.arrow`\r\n\r\nI think that it may have something to do with this recent PR that updated the behaviour of `dataset.save_to_disk` by introducing sharding: https://github.com/huggingface/datasets/pull/5268\r\n\r\nFor now I can get around this by forcing datasets==2.8.0 on machine that creates dataset and in the huggingface instance for training (by running this at the start of training script `os.system(\"pip install datasets==2.8.0\")`)\r\n\r\nTo ensure the dataset is a single shard when saving the dataset locally:\r\n\r\n```python3\r\ndataset.flatten_indices().save_to_disk('path/to/dataset', num_shards=1)\r\n```\r\n\r\n and then manually changing the name afterwards from `path/to/dataset/data-00000-of-00001.arrow` to `path/to/dataset/dataset.arrow` and updating the `path/to/dataset/state.json` to reflect this name change. i.e. by changing `state.json` to this:\r\n\r\n```javascript\r\n{\r\n \"_data_files\": [\r\n {\r\n \"filename\": \"dataset.arrow\"\r\n }\r\n ],\r\n \"_fingerprint\": \"420086f0636f8727\",\r\n \"_format_columns\": null,\r\n \"_format_kwargs\": {},\r\n \"_format_type\": null,\r\n \"_output_all_columns\": false,\r\n \"_split\": null\r\n}\r\n```",
"Does anyone know if this has been resolved?",
"I have the same issue in datasets version 2.3.2"
] |
1,420,415,639
| 5,149
|
Make iter_files deterministic
|
closed
| 2022-10-24T08:16:27
| 2022-10-27T09:53:23
| 2022-10-27T09:51:09
|
https://github.com/huggingface/datasets/pull/5149
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5149",
"html_url": "https://github.com/huggingface/datasets/pull/5149",
"diff_url": "https://github.com/huggingface/datasets/pull/5149.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5149.patch",
"merged_at": "2022-10-27T09:51:09"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,420,219,222
| 5,148
|
Cannot find the rvl_cdip dataset
|
closed
| 2022-10-24T04:57:42
| 2022-10-24T12:23:47
| 2022-10-24T06:25:28
|
https://github.com/huggingface/datasets/issues/5148
| null |
santule
| false
|
[
"Hi, @santule.\r\n\r\nWe have transferred all dataset scripts from GitHub to the Hugging Face Hub: https://huggingface.co/datasets\r\n- Concretely, you have \"rvl_cdip\" here: https://huggingface.co/datasets/rvl_cdip\r\n\r\nTo be able to load them, you should update your `datasets` library:\r\n```\r\npip install -U datasets\r\n```",
"thank you, it worked"
] |
1,419,522,275
| 5,147
|
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
|
open
| 2022-10-22T21:46:38
| 2022-11-01T22:19:07
| null |
https://github.com/huggingface/datasets/issues/5147
| null |
falcaopetri
| false
|
[
"Hi ! In the `transformers` issue the object to not hash is a `Pool` - I think you can instantiate it inside your function instead of passing it as a parameter. It's good practice that your function and all its fn_kwargs are picklable, in case you want to parallelize `map` using `num_proc>1`\r\n\r\nFor the other case `def fn(example, verbose=False):` however, I agree it would be nice to let the user specify that \"verbose\" needs to be ignored.\r\n\r\nDo you think providing a decorator could help ? Maybe\r\n```python\r\n@datasets.hashing.register(ignore_kwargs=[\"verbose\"])\r\ndef func(example, verbose=False):\r\n ...\r\n```",
"Hi @lhoestq! Thanks for your response.\r\n\r\nA `Pool` shouldn't be instantiated within the function, because there's a huge overhead in doing so. The main idea is that the same `Pool` should be used across all function calls. Parallel `map` is not helpful/desired in that specific scenario, because the heavy parallel computation is done by another lib (`pyctcdecode`, called within `transformer`'s model inference code).\r\n\r\nBut yes, it makes sense to be able to leverage parallel processing by just doing `num_proc>1` when possible.\r\n\r\nYour decorator suggestions seems like a pretty clean API to me. I didn't find a `datasets.hashing` module though. Would it be created for this specific purpose? Any downsides in just using `datasets.fingerprint`?\r\n\r\nAnd would `datasets.hashing.register` just add some metadata to `func` in your approach (so it could be inspected from `fingerprint_transform`)?\r\n\r\nAnd looking to the `datasets.Dataset` API, `.filter` would also benefited from this.",
"> Would it be created for this specific purpose? Any downsides in just using datasets.fingerprint?\r\n\r\nThis can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\n> And would datasets.hashing.register just add some metadata to func in your approach (so it could be inspected from fingerprint_transform)?\r\n\r\nYup that's the idea :)\r\n\r\n> And looking to the datasets.Dataset API, .filter would also benefited from this.\r\n\r\nIndeed !\r\n\r\n-----\r\n\r\nIf you would like to contribute this you can assign yourself to this issue by posting #self-assign\r\nAnd of course if you have questions or if I can help, feel free to ping me !",
"> This can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\nSure, it makes sense.\r\n\r\n---\r\n\r\nI don't plan to work on it right now, so I'll let it unassigned in case somebody wants to join. I'll get back at it as soon as possible though.\r\n"
] |
1,418,331,282
| 5,146
|
Delete duplicate issue template file
|
closed
| 2022-10-21T13:18:46
| 2022-10-21T13:52:30
| 2022-10-21T13:50:04
|
https://github.com/huggingface/datasets/pull/5146
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5146",
"html_url": "https://github.com/huggingface/datasets/pull/5146",
"diff_url": "https://github.com/huggingface/datasets/pull/5146.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5146.patch",
"merged_at": "2022-10-21T13:50:04"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,418,005,452
| 5,145
|
Dataset order is not deterministic with ZIP archives and `iter_files`
|
closed
| 2022-10-21T09:00:03
| 2022-10-27T09:51:49
| 2022-10-27T09:51:10
|
https://github.com/huggingface/datasets/issues/5145
| null |
fxmarty
| false
|
[
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co/datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank you for noticing indeed!",
"This is still a bug, so I'd keep this one open if you don't mind ;)",
"Besides the linked PR, to make the loading process fully deterministic, I believe we should also sort the data files [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L276) and [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L485) (e.g. fsspec's `LocalFileSystem.glob` relies on `os.scandir`, which yields the contents in arbitrary order). My concern is the overhead of these sorts... Maybe we could introduce a new flag to `load_dataset` similar to TFDS' [`shuffle_files`](https://www.tensorflow.org/datasets/determinism#determinism_when_reading) or sort only if the number of data files is small?",
"We already return the result sorted at the end of `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository` if I'm not mistaken",
"@lhoestq Oh, you are right. Feel free to ignore my comment.",
"I think the corresponding PR is ready to be merged :hugs: ",
"@albertvillanova Thanks for the fix!"
] |
1,417,974,731
| 5,144
|
Inconsistent documentation on map remove_columns
|
closed
| 2022-10-21T08:37:53
| 2022-11-15T14:15:10
| 2022-11-15T14:15:10
|
https://github.com/huggingface/datasets/issues/5144
| null |
zhaowei-wang-nlp
| false
|
[
"Thanks for reporting, @zhaowei-wang-nlp.\r\n\r\nYou are right, the documentation is confusing on the behavior of `remove_columns`. We should better explain it. ",
"This is a duplicate of https://github.com/huggingface/datasets/issues/2343.",
"I'm closing this issue because as @mariosasko pointed out, it is a duplicate of:\r\n- #2343"
] |
1,416,837,186
| 5,143
|
DownloadManager Git LFS support
|
closed
| 2022-10-20T15:29:29
| 2022-10-20T17:17:10
| 2022-10-20T17:17:10
|
https://github.com/huggingface/datasets/issues/5143
| null |
Muennighoff
| false
|
[
"Hey ! Actually it works, just pass the right URL ;)\r\nThe URL must be the one with “/resolve/”\r\n\r\ne.g. https://huggingface.co/datasets/imagenet-1k/resolve/main/data/test_images.tar.gz\r\n\r\nYou can even pass a relative path to the dl_manager instead, like `dl_manager.download(\"data/test_images.tar.gz\")`",
"Amazing it works, thanks!"
] |
1,416,317,678
| 5,142
|
Deprecate num_proc parameter in DownloadManager.extract
|
closed
| 2022-10-20T09:52:52
| 2022-10-25T18:06:56
| 2022-10-25T15:56:45
|
https://github.com/huggingface/datasets/pull/5142
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5142",
"html_url": "https://github.com/huggingface/datasets/pull/5142",
"diff_url": "https://github.com/huggingface/datasets/pull/5142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5142.patch",
"merged_at": "2022-10-25T15:56:45"
}
|
ayushthe1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @mariosasko . Can you please help me with why the tests keep failing. I have reviewed the code changes multiple times but can't spot any mistakes. ",
"You can fix this failure by formatting your code with the `make style` command (run it from the root of the cloned repo).",
"hey @mariosasko ,i cant understand how to use the `make style` command .I searched for it on the internet but cant find any results. \r\nSo i formatted the code using vs-code document formatter. Hope this helps.",
"`make style` runs the \"style\" target defined here: https://github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/Makefile#L12\r\n\r\nThis seems to be a good tutorial on Makefiles: https://opensource.com/article/18/8/what-how-makefile",
"\r\n\r\n\r\n\r\n> `make style` runs the \"style\" target defined here:\r\n> \r\n> https://github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/Makefile#L12\r\n> \r\n> This seems to be a good tutorial on Makefiles: https://opensource.com/article/18/8/what-how-makefile\r\n\r\nThanks! I will look into this :relaxed: "
] |
1,415,479,438
| 5,141
|
Raise ImportError instead of OSError
|
closed
| 2022-10-19T19:30:05
| 2022-10-25T15:59:25
| 2022-10-25T15:56:58
|
https://github.com/huggingface/datasets/pull/5141
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5141",
"html_url": "https://github.com/huggingface/datasets/pull/5141",
"diff_url": "https://github.com/huggingface/datasets/pull/5141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5141.patch",
"merged_at": "2022-10-25T15:56:58"
}
|
ayushthe1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @mariosasko ,i commited the changes as you said.\r\n\r\n"
] |
1,415,075,530
| 5,140
|
Make the KeyHasher FIPS compliant
|
closed
| 2022-10-19T14:25:52
| 2022-11-07T16:20:43
| 2022-11-07T16:20:43
|
https://github.com/huggingface/datasets/pull/5140
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5140",
"html_url": "https://github.com/huggingface/datasets/pull/5140",
"diff_url": "https://github.com/huggingface/datasets/pull/5140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5140.patch",
"merged_at": null
}
|
vvalouch
| true
|
[] |
1,414,642,723
| 5,137
|
Align task tags in dataset metadata
|
closed
| 2022-10-19T09:41:42
| 2022-11-10T05:25:58
| 2022-10-25T06:17:00
|
https://github.com/huggingface/datasets/issues/5137
| null |
albertvillanova
| false
|
[
"I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts",
"(Types.ts is not internal it's public)",
"I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...",
"For future reference: this fix had some complications\r\n\r\nWhen trying to open a PR to fix the task tags, an exception was thrown if:\r\n- the metadata contained \"languages\" or \"licenses\" (instead of \"language\" or \"license\")\r\n- the metadata contained a non-valid language: `en-US` (instead of `en`), `no` (instead of `'no'`),...\r\n- the metadata contained a non-valid license\r\n- either `task_categories` or `task_ids` was not an array (a dict for each config)\r\n- the metadata contained non-valid tag names\r\n\r\nErrors:\r\n```\r\nValueError: - Error: \"languages\" is deprecated. Use \"language\" instead.\r\n```\r\n```\r\nValueError: - Error: \"licenses\" is deprecated. Use \"license\" instead.\r\n```\r\n```\r\nValueError: - Error: \"language[17]\" must only contain lowercase characters\r\n```\r\n```\r\nValueError: - Error: \"language[0]\" with value \"cz, de, it\" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like \"code\", \"multilingual\". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.\r\n```\r\n```\r\nValueError: - Error: \"task_ids\" must be an array\r\n```",
"All Hub datasets are done.",
"great job! did you have feedback from Hub users/i.E. repo authors?",
"Yes, @julien-c. These are some of the feedbacks:\r\n- Most people just thank for the fix: [cahya/librivox-indonesia](https://huggingface.co/datasets/cahya/librivox-indonesia/discussions/1#6357cd8a292a050ebd705f84), [TurkuNLP/xlsum-fi](https://huggingface.co/datasets/TurkuNLP/xlsum-fi/discussions/1#6357828aa1f8ad1c31bcbe46), [coastalcph/fairlex](https://huggingface.co/datasets/coastalcph/fairlex/discussions/4#6351a527a8e595171ab1aef2)\r\n- Why are we changing their task names? [joelito/lextreme](https://huggingface.co/datasets/joelito/lextreme/discussions/1#6351b576fe367c0d9b12041b)\r\n - I take note of this for the next bulk operation; besides the PR title, we should also add a description to explain the reason for the change and also maybe putting a link to some pertinent GH Issue page\r\n- Some of them ask where to find the list of the supported task values is: [dennlinger/klexikon](https://huggingface.co/datasets/dennlinger/klexikon/discussions/3#6356b3ea80f8cb3ab777ac5c), [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad/discussions/1#635262467e4cc3135fd09f58)\r\n - Currently, the list is here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L85\r\n - Maybe we could made them more easily accessible\r\n- Some people do not agree about current \"hierarchy\":\r\n - text-scoring: [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/1#6357c1b128792d8cdd51e9f9) (but referring to [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/2/files))\r\n - Before \"text-scoring\" was a task_category, with task_ids [\"semantic-similarity-scoring\", \"sentiment-scoring\"]\r\n - Now all three are task_ids [\"text-scoring\", \"semantic-similarity-scoring\", \"sentiment-scoring\"] under the task_category \"text-classification\"\r\n - People complain that their scoring tasks are not classification task\r\n - binary-classification: why don't we have binary-classification? We have multi-class-classification, multi-label-classification and sentiment-classification, but not binary-classification\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?\r\n\r\nNOTE: I'm editing this comment to add more feedback",
"As someone with feedback on the updates (which I highly appreciate seeing included here :D), a few comments from a \"user perspective\": \r\n\r\n* I think the general confusion for me was also surrounding the hierarchy; it doesn't really become super clear (even when using the tagger space) that one is a subset of the other, especially since it seems to be still possible to include fine-grained tasks without the \"parent category\"?\r\n* The datasets explorer still shows tags that are no longer valid (e.g., super specific ones such as `summarization-other-paper-abstract-generation`, but also ones that should be `task_categories`, such as `summarization`). I'm assuming this will be fixed soon, but until then it can confuse people who don't understand why they suddenly can't use seemingly still valid tags anymore.\r\n* As I mentioned to @albertvillanova, having a dedicated page in the docs with explanations (especially wrt the difference between `task_categories` and `task_ids`) would be super helpful. However, I think it would have been sufficient to just include some description in the dataset PRs where you can link to the Github/other discussion on the topic :) That way, I can check myself what changes are expected to happen.\r\n\r\nThanks again for the streamlining process, I personally learned a fair bit about the tagging structure in the meantime!\r\nBest,\r\nDennis",
"Thanks to you both for your feedback! super useful! cc'ing @osanseviero too 🙂\r\n\r\n> The datasets explorer still shows tags that are no longer valid\r\n\r\nwait which explorer is that? is it https://huggingface.co/datasets/viewer/ ?\r\n",
"Sorry, this one: https://huggingface.co/datasets \r\nAnd then selecting the \"Fine-Grained Tasks\".",
"good feedback! we'll improve this",
"Super useful feedback, thanks a lot!",
"- Some people do not agree about current \"hierarchy\":\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?",
"@albertvillanova \r\nThank you for sharing our voice here!\r\n\r\nYes, we want `symbolic-regression` to be listed as a task. This task has been attracting attention from the machine learning/deep learning community, and unfortunately existing symbolic regression datasets are de-centralized in the community (hosted at individual platforms like author website, github, etc).\r\nIt would be great for the community if Hugging Face can support the task."
] |
1,414,492,139
| 5,136
|
Update docs once dataset scripts transferred to the Hub
|
closed
| 2022-10-19T07:58:27
| 2022-10-20T08:12:21
| 2022-10-20T08:10:00
|
https://github.com/huggingface/datasets/pull/5136
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5136",
"html_url": "https://github.com/huggingface/datasets/pull/5136",
"diff_url": "https://github.com/huggingface/datasets/pull/5136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5136.patch",
"merged_at": "2022-10-20T08:10:00"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,414,413,519
| 5,135
|
Update docs once dataset scripts transferred to the Hub
|
closed
| 2022-10-19T06:58:19
| 2022-10-20T08:10:01
| 2022-10-20T08:10:01
|
https://github.com/huggingface/datasets/issues/5135
| null |
albertvillanova
| false
|
[] |
1,413,623,687
| 5,134
|
Raise ImportError instead of OSError if required extraction library is not installed
|
closed
| 2022-10-18T17:53:46
| 2022-10-25T15:56:59
| 2022-10-25T15:56:59
|
https://github.com/huggingface/datasets/issues/5134
| null |
mariosasko
| false
|
[
"hey ,i would like to work on this issue . Please assign it to me.",
"hey @mariosasko , i made a pr for this issue. Could you please review it.\r\nAlso i found multiple `OSError` in `extract.py` file which i thought could be replaced too but wasn't sure about them.\r\nPlease do tell if that also needs to be done."
] |
1,413,623,462
| 5,133
|
Tensor operation not functioning in dataset mapping
|
closed
| 2022-10-18T17:53:35
| 2022-10-19T04:15:45
| 2022-10-19T04:15:44
|
https://github.com/huggingface/datasets/issues/5133
| null |
xinghaow99
| false
|
[
"Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .",
"> Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .\r\n\r\nThank you. "
] |
1,413,607,306
| 5,132
|
Depracate `num_proc` parameter in `DownloadManager.extract`
|
closed
| 2022-10-18T17:41:05
| 2022-10-25T15:56:46
| 2022-10-25T15:56:46
|
https://github.com/huggingface/datasets/issues/5132
| null |
mariosasko
| false
|
[
"I can take this! #self-assign",
"#self-assign",
"@lazarust i'm already working on this issue :smile: ",
"#self-assign",
"hey @mariosasko , i made a pr for this issue. Could you please review it."
] |
1,413,534,863
| 5,131
|
WikiText 103 tokenizer hangs
|
closed
| 2022-10-18T16:44:00
| 2023-08-08T08:42:40
| 2023-07-21T14:41:51
|
https://github.com/huggingface/datasets/issues/5131
| null |
TrentBrick
| false
|
[
"any updates on this? It happens to me on [OpenWikiText-20%](https://huggingface.co/datasets/Bingsu/openwebtext_20p) dataset, but not on [OpenWebText-10k](https://huggingface.co/datasets/stas/openwebtext-10k). This is really strange because I don't change anything else in my running script.\r\n\r\ntransformers version 4.18.0.dev0\r\ndatasets version 1.18.0"
] |
1,413,435,000
| 5,130
|
Avoid extra cast in `class_encode_column`
|
closed
| 2022-10-18T15:31:24
| 2022-10-19T11:53:02
| 2022-10-19T11:50:46
|
https://github.com/huggingface/datasets/pull/5130
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5130",
"html_url": "https://github.com/huggingface/datasets/pull/5130",
"diff_url": "https://github.com/huggingface/datasets/pull/5130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5130.patch",
"merged_at": "2022-10-19T11:50:46"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,413,031,664
| 5,129
|
unexpected `cast` or `class_encode_column` result after `rename_column`
|
closed
| 2022-10-18T11:15:24
| 2022-10-19T03:02:26
| 2022-10-19T03:02:26
|
https://github.com/huggingface/datasets/issues/5129
| null |
quaeast
| false
|
[
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, 方子东. I tried running the code with exact the same configuration (both datasets 2.5.2 and 2.6.1, python, pyarrow, pandas), but on Linux. The results seem to be the expected `{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}`.\r\nI don't have a Mac device. I can't verify whether this is a M1 chip-specific problem.",
"I've just tested the code on my M1 Mac, and it behaves as expected.",
"> Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...\r\n\r\nThank you for your attention and feel sorry to take your time. Since this is a bug of old version, I think mybe my problem is because `cast` operation directaly used cached data generated by older verion of `datasets`. I tried to deleted the cached data and I got expected result.\r\n"
] |
1,412,783,855
| 5,128
|
Make filename matching more robust
|
closed
| 2022-10-18T08:22:48
| 2022-10-28T13:07:38
| 2022-10-28T13:05:06
|
https://github.com/huggingface/datasets/pull/5128
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5128",
"html_url": "https://github.com/huggingface/datasets/pull/5128",
"diff_url": "https://github.com/huggingface/datasets/pull/5128.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5128.patch",
"merged_at": "2022-10-28T13:05:06"
}
|
riccardobucco
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I think we should also modify one of the metadata files in the `folder_based_builder` tests to make sure \"./\" is ignored now in the `file_name`\r\n\r\n@mariosasko what do you mean here? I'm not sure which metadata file I should modify here",
"You can modify this line for instance: https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/tests/packaged_modules/test_folder_based_builder.py#L135"
] |
1,411,897,544
| 5,127
|
[WIP] WebDataset export
|
closed
| 2022-10-17T16:50:22
| 2024-01-11T06:27:04
| 2024-01-08T14:25:43
|
https://github.com/huggingface/datasets/pull/5127
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5127",
"html_url": "https://github.com/huggingface/datasets/pull/5127",
"diff_url": "https://github.com/huggingface/datasets/pull/5127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5127.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5127). All of your documentation changes will be reflected on that endpoint.",
"Should we close this PR?"
] |
1,411,757,124
| 5,126
|
Fix class name of symbolic link
|
closed
| 2022-10-17T15:11:02
| 2022-11-14T14:40:18
| 2022-11-14T14:40:18
|
https://github.com/huggingface/datasets/pull/5126
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5126",
"html_url": "https://github.com/huggingface/datasets/pull/5126",
"diff_url": "https://github.com/huggingface/datasets/pull/5126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5126.patch",
"merged_at": "2022-11-14T14:40:18"
}
|
riccardobucco
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5126). All of your documentation changes will be reflected on that endpoint.",
"I have removed the reference to the Issue in the PR title, so that we avoid to have both references (to the issue and to the PR) in the merge commit to the main branch.\r\n\r\nInstead, it should be commented in the PR description, so that the PR is appropriately linked by GitHub to its corresponding Issue:\r\n\r\n> Fix #5098.",
"@albertvillanova What should I test in your opinion? Also, where should I save the test file and how should I name it? Thanks for your support",
"The regression test to be implemented should test what your PR fixes: that is, that `_resolve_single_pattern_locally` function does not resolve any symbolic link when passed a directory that does contain any.\r\n\r\nAs you are testing a function in `data_files.py`, the corresponding test should be in `tests/test_data_files.py`.\r\n\r\nYou could name the test something lilke: `test_resolve_single_pattern_locally_does_not_resolve_symbolic_links`\r\n\r\nYou could take inspiration from other tests there in that file."
] |
1,411,602,813
| 5,125
|
Add `pyproject.toml` for `black`
|
closed
| 2022-10-17T13:38:47
| 2024-11-20T13:36:11
| 2022-10-17T14:21:09
|
https://github.com/huggingface/datasets/pull/5125
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5125",
"html_url": "https://github.com/huggingface/datasets/pull/5125",
"diff_url": "https://github.com/huggingface/datasets/pull/5125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5125.patch",
"merged_at": "2022-10-17T14:21:09"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,411,159,725
| 5,124
|
Install tensorflow-macos dependency conditionally
|
closed
| 2022-10-17T08:45:08
| 2022-10-19T09:12:17
| 2022-10-19T09:10:06
|
https://github.com/huggingface/datasets/pull/5124
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5124",
"html_url": "https://github.com/huggingface/datasets/pull/5124",
"diff_url": "https://github.com/huggingface/datasets/pull/5124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5124.patch",
"merged_at": "2022-10-19T09:10:06"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,410,828,756
| 5,123
|
datasets freezes with streaming mode in multiple-gpu
|
open
| 2022-10-17T03:28:16
| 2023-05-14T06:55:20
| null |
https://github.com/huggingface/datasets/issues/5123
| null |
jackfeinmann5
| false
|
[
"@lhoestq I tested the script without accelerator, and I confirm this is due to datasets part as this gets similar results without accelerator.",
"Hi ! You said it works on 1 GPU but doesn't wortk without accelerator - what's the difference between running on 1 GPU and running without accelerator in your case ?",
"Hi @lhoestq \r\nthanks for coming back to me. Sorry for the confusion I made. I meant this works fine on 1 GPU, but on multi-gpu it is freezing. \"accelerator\" is not an issue as if you adapt the code without accelerator this still gets the same issue.\r\nIn order to test it. Please run \"accelerate config\", then use the setup for multi-gpu in one node.\r\nAfter that run \"accelerate launch code.py\" and then you would see the freezing occurs.",
"Hi @lhoestq \r\ncould you have the chance to reproduce the error by running the minimal example shared?\r\nthanks",
"I think you need to do `train_dataset = train_dataset.with_format(\"torch\")` to work with the DataLoader in a multiprocessing setup :)\r\n\r\nThe hang is probably caused by our streamign lib `fsspec` which doesn't work in multiprocessing out of the box - but we made it work with the PyTorch DataLoader when the dataset format is set to \"torch\"",
"Hi @lhoestq \r\nthanks for the response. I added the line suggested right before calling `with accelerator.main_process_first():` in the code above and I confirm this also freezes. to reproduce it please run \"accelerate launch code.py\". I was wondering if you could have more suggestions for me? I do not have an idea how to fix this or debug this freezing. many thanks.",
"Maybe the `fsspec` stuff need to be clearer even before - can you try to run this function at the very beginning of your script ?\r\n```python\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n_set_fsspec_for_multiprocess()\r\n```",
"Hi @lhoestq \r\nthank you. I tried it, I am getting `AttributeError: module 'fsspec' has no attribute 'asyn'`. which version of fsspect do you use?\r\nI am using \r\n```fsspec 2022.8.2 pypi_0 pypi```\r\nthank you.",
"Hi @lhoestq \r\nI solved `fsspec` error with this hack for now https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 but this is still freezing, I greatly appreciate if you could run this script on your side. Many thanks.\r\n\r\n```\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n\r\n_set_fsspec_for_multiprocess()\r\n\r\nfrom accelerate import Accelerator\r\nfrom accelerate.logging import get_logger\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data.dataloader import DataLoader\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\nimport torch\r\nfrom accelerate.logging import get_logger\r\nfrom torch.utils.data import IterableDataset\r\nfrom torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe\r\n\r\n\r\nlogger = get_logger(__name__)\r\n\r\n\r\nclass ConstantLengthDataset(IterableDataset):\r\n \"\"\"\r\n Iterable dataset that returns constant length chunks of tokens from stream of text files.\r\n Args:\r\n tokenizer (Tokenizer): The processor used for proccessing the data.\r\n dataset (dataset.Dataset): Dataset with text files.\r\n infinite (bool): If True the iterator is reset after dataset reaches end else stops.\r\n max_seq_length (int): Length of token sequences to return.\r\n num_of_sequences (int): Number of token sequences to keep in buffer.\r\n chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\r\n \"\"\"\r\n\r\n def __init__(\r\n self,\r\n tokenizer,\r\n dataset,\r\n infinite=False,\r\n max_seq_length=1024,\r\n num_of_sequences=1024,\r\n chars_per_token=3.6,\r\n ):\r\n self.tokenizer = tokenizer\r\n # self.concat_token_id = tokenizer.bos_token_id\r\n self.dataset = dataset\r\n self.max_seq_length = max_seq_length\r\n self.epoch = 0\r\n self.infinite = infinite\r\n self.current_size = 0\r\n self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences\r\n self.content_field = \"text\"\r\n\r\n def __iter__(self):\r\n iterator = iter(self.dataset)\r\n more_examples = True\r\n while more_examples:\r\n buffer, buffer_len = [], 0\r\n while True:\r\n if buffer_len >= self.max_buffer_size:\r\n break\r\n try:\r\n buffer.append(next(iterator)[self.content_field])\r\n buffer_len += len(buffer[-1])\r\n except StopIteration:\r\n if self.infinite:\r\n iterator = iter(self.dataset)\r\n self.epoch += 1\r\n logger.info(f\"Dataset epoch: {self.epoch}\")\r\n else:\r\n more_examples = False\r\n break\r\n tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\r\n all_token_ids = []\r\n for tokenized_input in tokenized_inputs:\r\n all_token_ids.extend(tokenized_input)\r\n for i in range(0, len(all_token_ids), self.max_seq_length):\r\n input_ids = all_token_ids[i : i + self.max_seq_length]\r\n if len(input_ids) == self.max_seq_length:\r\n self.current_size += 1\r\n yield torch.tensor(input_ids)\r\n\r\n def shuffle(self, buffer_size=1000):\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n\r\n\r\ndef create_dataloaders(tokenizer, accelerator):\r\n ds_kwargs = {\"streaming\": True}\r\n # In distributed training, the load_dataset function gaurantees that only one process\r\n 
# can concurrently download the dataset.\r\n datasets = load_dataset(\r\n \"c4\",\r\n \"en\",\r\n cache_dir=\"cache_dir\",\r\n **ds_kwargs,\r\n )\r\n train_data, valid_data = datasets[\"train\"], datasets[\"validation\"]\r\n with accelerator.main_process_first():\r\n train_data = train_data.shuffle(buffer_size=10000, seed=None)\r\n train_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n train_data,\r\n infinite=True,\r\n max_seq_length=256,\r\n )\r\n valid_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n valid_data,\r\n infinite=False,\r\n max_seq_length=256,\r\n )\r\n train_dataset = train_dataset.shuffle(buffer_size=10000)\r\n train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)\r\n eval_dataloader = DataLoader(valid_dataset, batch_size=160)\r\n return train_dataloader, eval_dataloader\r\n\r\n\r\ndef main():\r\n # Accelerator.\r\n logging_dir = \"data_save_dir/log\"\r\n accelerator = Accelerator(\r\n gradient_accumulation_steps=1,\r\n mixed_precision=\"bf16\",\r\n log_with=\"tensorboard\",\r\n logging_dir=logging_dir,\r\n )\r\n # We need to initialize the trackers we use, and also store our configuration.\r\n # The trackers initializes automatically on the main process.\r\n if accelerator.is_main_process:\r\n accelerator.init_trackers(\"test\")\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n # Load datasets and create dataloaders.\r\n train_dataloader, _ = create_dataloaders(tokenizer, accelerator)\r\n\r\n train_dataloader = accelerator.prepare(train_dataloader)\r\n for step, batch in enumerate(train_dataloader, start=1):\r\n print(step)\r\n accelerator.end_training()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line: \r\n```\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n```\r\n`ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.",
"> Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line:\r\n> \r\n> ```\r\n> return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n> ```\r\n> \r\n> `ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.\r\n\r\nI met the same issue for pytorch 1.12 and 1.13, is there a way to work around for this function for newer pytorch versions?"
] |
1,410,732,403
| 5,122
|
Add warning
|
closed
| 2022-10-17T01:30:37
| 2022-11-05T12:23:53
| 2022-11-05T12:23:53
|
https://github.com/huggingface/datasets/pull/5122
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5122",
"html_url": "https://github.com/huggingface/datasets/pull/5122",
"diff_url": "https://github.com/huggingface/datasets/pull/5122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5122.patch",
"merged_at": null
}
|
Salehbigdeli
| true
|
[
"As mentioned in https://github.com/huggingface/datasets/issues/5105 I think we just need to keep the existing files instead of deleting them.\r\nThe `dataset_info.json` file contains the split names anyway, so we know which files belong to the dataset, and which ones don't."
] |
1,410,681,067
| 5,121
|
Bugfix ignore function when creating new_fingerprint for caching
|
closed
| 2022-10-17T00:03:43
| 2022-10-17T12:39:36
| 2022-10-17T12:39:36
|
https://github.com/huggingface/datasets/pull/5121
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5121",
"html_url": "https://github.com/huggingface/datasets/pull/5121",
"diff_url": "https://github.com/huggingface/datasets/pull/5121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5121.patch",
"merged_at": null
}
|
Salehbigdeli
| true
|
[
"Adding \"function\" to the kwargs to ignore when computing the fingerprint will break `map` caching. Indeed passing two different function would result in two different datasets that have the same fingerprint - and the cache wouldn't be able to distinguish them.\r\n\r\nE.g this code would reload ds1 from the cache insetad of computing the dataset for ds2\r\n```python\r\nds = Dataset.from_dict({\"a\": [1, 2, 3]})\r\nds1 = ds.map(lambda x: {\"b\": 1})\r\nds2 = ds.map(lambda x: {\"b\": 2})\r\n```"
] |
1,410,641,221
| 5,120
|
Fix `tqdm` zip bug
|
closed
| 2022-10-16T22:19:18
| 2022-10-23T10:27:53
| 2022-10-19T08:53:17
|
https://github.com/huggingface/datasets/pull/5120
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5120",
"html_url": "https://github.com/huggingface/datasets/pull/5120",
"diff_url": "https://github.com/huggingface/datasets/pull/5120.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5120.patch",
"merged_at": "2022-10-19T08:53:17"
}
|
david1542
| true
|
[
"@albertvillanova Thanks for your comment. What do you think about creating 2 `pbar` for each case? I see the `pbar_iterable` is initialized differently. Maybe `pbar` can also be initialized like that.",
"@albertvillanova Another solution I implemented is to change `pbar_iterable` and add the `zip` to it. I updated the PR with this solution. Let me know what you think.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova Done :) Let me know what you think.",
"@albertvillanova Thanks :) I also don't see an easy way to test this. This was just a problem in the way `tqdm` was used. I'm not sure we should cover it in tests.",
"Hi, \r\n\r\nFirst of all, thanks for this PR. \r\nIt's the first time I join a discussion on GitHUB on problem resolution in libraries such as transformers, so I hope I comply to the best practices for an efficient communication...\r\n\r\nI am running `AutoTokenizer.from_pretrained` in a Google Colab notebook for using with BERT base. \r\nI am experiencing issue [5117](https://github.com/huggingface/datasets/issues/5117).\r\n\r\nEach time I run my notebook, I do:\r\n\r\n`! pip install transformers \r\n! pip install datasets \r\n! pip install huggingface_hub`\r\n\r\nAs I understand, the issue has been resolved and the solution merged to the released version of the code?\r\nSo I expect that the bug is resolved in my notebook, however this is not the case.\r\n\r\nDo I get something wrong? \r\nDo I have to implement some change in the source code myself?\r\n\r\nThanks in advance for your help!",
"@Cochonaki Hi :) The problem was fixed but there wasn't a release since then. I believe a new release should come out in the upcoming weeks. Maybe someone from the core maintainers can answer that :)\r\n\r\ncc: @albertvillanova ",
"Baby Haiti Coffee SE is born\n\nNH watch\n\nOn Sun, Oct 23, 2022 at 02:39 Dudu Lasry ***@***.***> wrote:\n\n> @Cochonaki <https://github.com/Cochonaki> Hi :) The problem was fixed but\n> there wasn't a release since then. I believe a new release should come out\n> in the upcoming weeks. Maybe someone from the core maintainers can answer\n> that :)\n>\n> cc: @albertvillanova <https://github.com/albertvillanova>\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5120#issuecomment-1288024546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAB4E2NCT7QO7W3PTQGDIKDWETMQ7ANCNFSM6AAAAAARGRBY2M>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n",
"Hi, @Cochonaki.\r\n\r\nAs @david1542 pointed out, we have not made a release since this bug was fixed. We will make one in the following weeks.\r\n\r\nIn the meantime, if you would like to incorporate the bug fix, you can install `datasets` from this repo main branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks a lot @albertvillanova and @david1542, it works now!\r\nI am really thankful for your help, that encourages me to participate more in this community.\r\nSee you around!",
"Welcome!!! 🤗"
] |
1,410,561,363
| 5,119
|
[TYPO] Update new_dataset_script.py
|
closed
| 2022-10-16T17:36:49
| 2022-10-19T09:48:19
| 2022-10-19T09:45:59
|
https://github.com/huggingface/datasets/pull/5119
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5119",
"html_url": "https://github.com/huggingface/datasets/pull/5119",
"diff_url": "https://github.com/huggingface/datasets/pull/5119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5119.patch",
"merged_at": "2022-10-19T09:45:59"
}
|
cakiki
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,410,547,373
| 5,118
|
Installing `datasets` on M1 computers
|
closed
| 2022-10-16T16:50:08
| 2022-10-19T09:10:08
| 2022-10-19T09:10:08
|
https://github.com/huggingface/datasets/issues/5118
| null |
david1542
| false
|
[
"Thanks for reporting, @david1542."
] |
1,409,571,346
| 5,117
|
Progress bars have color red and never completed to 100%
|
closed
| 2022-10-14T16:12:30
| 2024-06-19T19:03:42
| 2022-10-23T12:58:41
|
https://github.com/huggingface/datasets/issues/5117
| null |
echatzikyriakidis
| false
|
[
"Hi @echatzikyriakidis, thanks for submitting the issue.\r\nWhich shell are you using exactly? I tried to run the command you sent, but I don't see colors at all 🧐\r\n\r\nI tried from bash and zsh as well.",
"Hi @david1542 ,\r\n\r\nI use Google Colab.\r\n",
"Got it. I [created a PR](https://github.com/huggingface/datasets/pull/5120) that fixes this issue. Turns out that the wrapping logic for the inner loop was slightly incorrect.",
"Thank you!",
"Hello @mariosasko \r\n\r\nI am still facing this issue. Was this problem fixed?\r\n\r\n\r\n\r\nI cleared the hugging face cache before running, and no error message was given. Let me know if you need a minimal repro of my code."
] |
1,409,549,471
| 5,116
|
Use yaml for issue templates + revamp
|
closed
| 2022-10-14T15:53:13
| 2022-10-19T13:05:49
| 2022-10-19T13:03:22
|
https://github.com/huggingface/datasets/pull/5116
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5116",
"html_url": "https://github.com/huggingface/datasets/pull/5116",
"diff_url": "https://github.com/huggingface/datasets/pull/5116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5116.patch",
"merged_at": "2022-10-19T13:03:22"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,409,250,020
| 5,115
|
Fix iter_batches
|
closed
| 2022-10-14T12:06:14
| 2022-10-14T15:02:15
| 2022-10-14T14:59:58
|
https://github.com/huggingface/datasets/pull/5115
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5115",
"html_url": "https://github.com/huggingface/datasets/pull/5115",
"diff_url": "https://github.com/huggingface/datasets/pull/5115.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5115.patch",
"merged_at": "2022-10-14T14:59:58"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I also ran the code in https://github.com/huggingface/datasets/issues/5111 and it works fine now :)",
"This is ready for review :)"
] |
1,409,236,738
| 5,114
|
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
|
open
| 2022-10-14T11:54:53
| 2022-11-19T07:13:10
| null |
https://github.com/huggingface/datasets/issues/5114
| null |
bruno-hays
| false
|
[
"Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.",
"What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```"
] |
1,409,207,607
| 5,113
|
Fix filter indices when batched
|
closed
| 2022-10-14T11:30:03
| 2022-10-24T06:21:09
| 2022-10-14T12:11:44
|
https://github.com/huggingface/datasets/pull/5113
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5113",
"html_url": "https://github.com/huggingface/datasets/pull/5113",
"diff_url": "https://github.com/huggingface/datasets/pull/5113.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5113.patch",
"merged_at": "2022-10-14T12:11:44"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I think a patch release will be necessary.",
"I'm also fixing https://github.com/huggingface/datasets/issues/5111 which will lalso require a patch release"
] |
1,409,143,409
| 5,112
|
Bug with filtered indices
|
closed
| 2022-10-14T10:35:47
| 2022-10-14T13:55:03
| 2022-10-14T12:11:45
|
https://github.com/huggingface/datasets/issues/5112
| null |
albertvillanova
| false
|
[
"The issue is here:\r\nhttps://github.com/huggingface/datasets/blob/3ad9644b9a2e4558dd1d0f1e43c67658674e6228/src/datasets/arrow_dataset.py#L2964",
"@PartiallyTyped, @Muennighoff: the issue is fixed.\r\n\r\nWe are planning to make a patch release today.",
"Thanks a lot for the swift response! For a brief moment yesterday I thought I had gone insane 🤣On 14 Oct 2022, at 15:44, Albert Villanova del Moral ***@***.***> wrote:\n@PartiallyTyped, @Muennighoff: the issue is fixed.\nWe are planning to make a patch release today.\n\n—Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you were mentioned.Message ID: ***@***.***>"
] |
1,408,143,170
| 5,111
|
map and filter not working properly in multiprocessing with the new release 2.6.0
|
closed
| 2022-10-13T17:00:55
| 2022-10-17T08:26:59
| 2022-10-14T14:59:59
|
https://github.com/huggingface/datasets/issues/5111
| null |
loubnabnl
| false
|
[
"Same bug exists with `num_proc=1` on colab. `3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]` ",
"Thanks for reporting, @loubnabnl and for the additional information, @PartiallyTyped.\r\n\r\nHowever, I'm not able to reproduce this issue, neither locally nor on Colab:\r\n```\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\n```\r\nCC: @huggingface/datasets can anybody reproduce this?",
"This is the minimum reproducible example. I ran this on the premium instances of colab.\r\n\r\n```\r\n# !pip install datasets\r\nimport datasets\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"copenlu/answerable_tydiqa\").filter(\"english\".__eq__, input_columns=\"language\")\r\nassert all(map(\"english\".__eq__, ds[\"train\"][\"language\"]))\r\n```\r\n\r\nIn my case, the number of samples is correct, however, the samples selected when indexing are wrong.\r\n\r\n```python\r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 990\r\n })\r\n train: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 7389\r\n })\r\n})\r\n```\r\n\r\nThe number of rows is indeed correct, and i have checked it with a version that works.",
"I can reproduce the issue on my mac too \r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: macOS-12.2.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3\r\n```\r\nBut not on Colab with python 3.7, maybe related to python version? (didn't manage to install python 3.9)\r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.14\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.3.5\r\n```",
"I have the same issue, here's a simple notebook to reproduce: https://colab.research.google.com/drive/1Lvo9fg5DSpGUUgXW5JAutZ0bFsR-WV--?usp=sharing\r\n\r\n\r\n\r\n",
"I think there are 2 different issues here:\r\n- the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n- the issue reported by @PartiallyTyped is related just to \"filter\" (without multiprocessing) and I can reproduce it.",
"Could you create another issue for the @PartiallyTyped one please ?\r\n\r\nRegarding the OP issue, I also tried on colab or locally on py3.7 or py3.10 but didn't reproduce",
"I have created another issue for the one reported by @PartiallyTyped: \r\n- #5112 ",
"I managed to reproduce your issue @loubnabnl on colab by upgrading pyarrow to 9.0.0 instead of 6.0.1",
"I managed to have a _super_ minimal reproducible example:\r\n```python\r\n\r\nfrom datasets import Dataset, concatenate_datasets\r\n\r\nds = concatenate_datasets([Dataset.from_dict({\"a\": [i]}) for i in range(10)])\r\nds2 = ds.map(lambda _: {}, batched=True)\r\nassert list(ds2) == list(ds)\r\n```\r\n(filter uses a batched `map` under the hood)",
"> the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n\r\nSo finally it was related to PyArrow version! :+1: ",
"Doing a patch release asap :)",
"Did the patch release yesterday, lmk if you still have issues",
"It works now, thanks!\r\n"
] |
1,407,434,706
| 5,109
|
Map caching not working for some class methods
|
closed
| 2022-10-13T09:12:58
| 2022-10-17T10:38:45
| 2022-10-17T10:38:45
|
https://github.com/huggingface/datasets/issues/5109
| null |
Mouhanedg56
| false
|
[
"The hash used for caching is computed by pickling recursively the function passed to `map`. Maybe some objects don't have the same hash across sessions. In particular you can check the hash of your model using\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nobj = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nprint(Hasher.hash(obj))\r\n```\r\n\r\nYou can find mode info here: https://huggingface.co/docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument",
"Indeed, the hash is changing. The `dumps` function serialize the model object in different ways because the model object is not deterministic\r\n```python\r\nfrom datasets.utils.py_utils import dumps\r\nobj1 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nobj2 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\n\r\ndumps(bert) == dumps(bert2). # False\r\n```\r\n\r\n> You can find mode info here: https://huggingface.co/docs/datasets/about_cache\r\n> \r\n> You can also provide your own unique hash in map if you want, with the new_fingerprint argument\r\n\r\n\r\nThanks, the doc is so helpful. Indeed, we can fix the hash and get cache hit using `new_fingerprint`. Closing the issue."
] |
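A minimal sketch of the `new_fingerprint` workaround discussed in the thread above: pinning the fingerprint makes the cache reusable even when the mapped function (e.g. one closing over a non-deterministic model object) would hash differently across sessions. The dataset contents and the fingerprint string are assumptions for illustration.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})

def add_length(example):
    # Stand-in for a function that closes over a non-deterministic object.
    return {"length": len(example["text"])}

# Pinning the fingerprint bypasses the unstable automatic hash used for caching.
ds = ds.map(add_length, new_fingerprint="add_length-v1")
```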
1,407,044,107
| 5,108
|
Fix a typo in arrow_dataset.py
|
closed
| 2022-10-13T02:33:55
| 2022-10-14T09:47:28
| 2022-10-14T09:47:27
|
https://github.com/huggingface/datasets/pull/5108
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5108",
"html_url": "https://github.com/huggingface/datasets/pull/5108",
"diff_url": "https://github.com/huggingface/datasets/pull/5108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5108.patch",
"merged_at": "2022-10-14T09:47:27"
}
|
yangky11
| true
|
[] |
1,406,736,710
| 5,107
|
Multiprocessed dataset builder
|
closed
| 2022-10-12T19:59:17
| 2022-12-01T15:37:09
| 2022-11-09T17:11:43
|
https://github.com/huggingface/datasets/pull/5107
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5107",
"html_url": "https://github.com/huggingface/datasets/pull/5107",
"diff_url": "https://github.com/huggingface/datasets/pull/5107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5107.patch",
"merged_at": "2022-11-09T17:11:43"
}
|
TevenLeScao
| true
|
[
"I would also like to add a test, but am not sure whether it should go into `test_builder` (more natural imo) or `test_load` (which already contains a lot of the things I have to import to run my current testing setup). For reference, what I run to test that it works looks like:\r\n\r\n```\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\n\r\nimport datasets\r\nfrom datasets.builder import DatasetBuilder\r\nfrom datasets.features import Features, Value\r\n\r\nDATASET_LOADING_SCRIPT_NAME = \"__dummy_dataset1__\"\r\n\r\nDATASET_LOADING_SCRIPT_CODE = \"\"\"\r\nimport os\r\n\r\nimport datasets\r\nfrom datasets import DatasetInfo, Features, Split, SplitGenerator, Value\r\n\r\n\r\nclass __DummyDataset1__(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self) -> DatasetInfo:\r\n return DatasetInfo(features=Features({\"text\": Value(\"string\")}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [\r\n SplitGenerator(Split.TRAIN, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"train1.txt\"), os.path.join(dl_manager.manual_dir, \"train2.txt\")]}),\r\n SplitGenerator(Split.TEST, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"test.txt\")]}),\r\n ]\r\n\r\n def _generate_examples(self, filepaths, **kwargs):\r\n idx = 0\r\n for filepath in filepaths:\r\n with open(filepath, \"r\", encoding=\"utf-8\") as f:\r\n for line in f:\r\n yield idx, {\"text\": line.strip()}\r\n idx += 1\r\n\"\"\"\r\n\r\n\r\ndef dataset_loading_script_dir(tmp_path):\r\n script_name = DATASET_LOADING_SCRIPT_NAME\r\n script_dir = tmp_path / script_name\r\n script_dir.mkdir()\r\n script_path = script_dir / f\"{script_name}.py\"\r\n with open(script_path, \"w\") as f:\r\n f.write(DATASET_LOADING_SCRIPT_CODE)\r\n return str(script_dir)\r\n\r\n\r\ndef data_dir(tmp_path):\r\n data_dir = tmp_path / \"data_dir\"\r\n data_dir.mkdir()\r\n with open(data_dir / \"train1.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"train2.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"test.txt\", \"w\") as f:\r\n f.write(\"bar\\n\" * 10)\r\n return str(data_dir)\r\n\r\n\r\ndef load_dataset_builder_multiprocessed(tmp_path):\r\n builder = datasets.load_dataset_builder(\r\n os.path.join(dataset_loading_script_dir(tmp_path), DATASET_LOADING_SCRIPT_NAME + \".py\"),\r\n data_dir=data_dir(tmp_path),\r\n )\r\n assert isinstance(builder, DatasetBuilder)\r\n assert builder.name == DATASET_LOADING_SCRIPT_NAME\r\n assert builder.info.features == Features({\"text\": Value(\"string\")})\r\n builder.download_and_prepare(tmp_path / \"prepare_target\", max_shard_size=500, num_proc=2)\r\n\r\nif __name__ == \"__main__\":\r\n tmp_path = \"tmp\"\r\n if os.path.exists(tmp_path):\r\n raise FileExistsError(f\"path {tmp_path} already exists\")\r\n os.makedirs(tmp_path)\r\n try:\r\n load_dataset_builder_multiprocessed(Path(tmp_path))\r\n finally:\r\n # pass\r\n shutil.rmtree(tmp_path)\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5107). All of your documentation changes will be reflected on that endpoint.",
"Nice ! I think the test can go in `test_builder.py` :)",
"I've added sharded arrow dataset loading. Two WIP items in the PR:\r\n- ~~Order is not conserved (it seems like the sharded files are read in the wrong order)~~\r\n- the tqdm for preparing the splits is wrong (it compares against the size of the whole split rather than against the size of the multiprocessing shard, but I am not sure how to access the latter)\r\n\r\nAlso `naming.filenames_for_dataset_split` is not very elegant imo.\r\n\r\n@lvwerra if you don't care about order, as I do, it's functional for now but I'd still quite like to get to the bottom of this.",
"Found the ordering bug ! (`glob.glob` returning stuff in arbitrary order)",
"I fixed the tqdm to be less misleading, but it can't tell where to stop. I am a bit hesitant to add a top-level tqdm (on the shard iterator) since for most intents it will do 0 -> N shards straight, but I am not sure what is the best way to present that info here.",
"I'm continuing the PR :)",
"Did a few changes:\r\n- make shards naming consistent:\r\n - use `{builder_name}-{split_name}.{file_format}` when there's only 1 shard\r\n - otherwise use `{builder_name}-{split_name}-{shard_idx:05d}-of-{num_shards:05d}.{file_format}`\r\n- update the reader to support reading several shards\r\n - added a new `shard_lengths` field in `SplitInfo` (FYI it is saved in `dataset_info.json` next to the shards as usual)\r\n - it's None when there's only 1 shard\r\n - otherwise it's a list of integers that correspond to the number of rows per shard\r\n - implemented partial reading to only memory map the required shards\r\n - e.g. when someone asks for a partial split like `train[:10%]`\r\n- align the sharding for beam datasets\r\n - no more combining into 1 big arrow file\r\n- added a tqdm bar\r\n - only one single bar, handled by the main process\r\n - gathers progress updates from other processes using `iflatmap_unordered`\r\n - shows the number of examples (even for datasets prepared by generating arrow tables)\r\n- disabled multiprocessing by default - users must pass `num_proc` explicitly\r\n- tests\r\n- docs",
"Alright this is ready for review - sorry it ended up so big ^^'\r\n\r\nIf I can do anything to make it easier for your to review this PR @mariosasko let me know",
"Multiprocessing is disabled by default but we may show a warning to encourage users to pass `num_proc` if the dataset is split in many files. Let me know what you think",
"Hey, is this error seems to you guys natural? \r\n\r\nThe package built from `0d4e3907` commit tag, and here is the version displayed from the import ... \r\n```bash\r\n>>> datasets.__version__\r\n'2.6.1.dev0'\r\n>>> \r\n```\r\n\r\n```bash\r\n>>> data = load_dataset('dataset_loaders/rfw2latentplay', num_proc=14)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1523, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 1292, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 303, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 456, in _create_builder_config\r\n builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'num_proc'\r\n```\r\n\r\nLet me know if I can help fixing this ... \r\n",
"> Do we have some benchmarks to see the speed-up?\r\n\r\nOn my machine running `load_dataset(\"oscar-corpus/OSCAR-2201\", \"br\")` (which is split in shards) I go from 2-3k examples per sec to 4-5k examples per sec with num_proc=2 😉",
"> Hey, is this error seems to you guys natural?\r\n>\r\n> The package built from 0d4e3907 commit tag, and here is the version displayed from the import ...\r\n\r\nI don't know where you got the `0d4e3907` commit tag from, it doesn't seem to be in this PR. You should try installing from this PR, or wait for it to be merged on `main`",
"## Splits vs Shards\r\n\r\nMaybe it's a good idea to add some documentation on the `sharding` that can be achieved by passing `list` based arguments to the `SplitGenerator`s `gen_kwargs` ... \r\n\r\nI had to read the whole dataset generation source code to find this out ... \r\n\r\n\r\n",
"> Maybe it's a good idea to add some documentation on the sharding that can be achieved by passing list based arguments to the SplitGenerators gen_kwargs ...\r\n\r\nThis is part of this PR :) you can check the changes in docs/source/dataset_script.mdx",
"I took your comments into account @mariosasko thanks !\r\nLet me know if it's good for you now ;)",
"The doc CI should be fixed by now hopefully, merging !"
] |
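A minimal sketch of the usage this PR enables, assuming `num_proc` is forwarded from `load_dataset` to `download_and_prepare` as described in the thread: preparation is sharded across processes when the split generators' `gen_kwargs` contain lists. The dataset name is only the example benchmarked above (a gated dataset; any multi-file dataset works the same way).

```python
from datasets import load_dataset

# Multiprocessing is disabled by default; num_proc must be passed explicitly.
# It only helps when the dataset is split into several files/shards.
ds = load_dataset("oscar-corpus/OSCAR-2201", "br", num_proc=2)
```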
1,406,635,758
| 5,106
|
Fix task template reload from dict
|
closed
| 2022-10-12T18:33:49
| 2022-10-13T09:59:07
| 2022-10-13T09:56:51
|
https://github.com/huggingface/datasets/pull/5106
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5106",
"html_url": "https://github.com/huggingface/datasets/pull/5106",
"diff_url": "https://github.com/huggingface/datasets/pull/5106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5106.patch",
"merged_at": "2022-10-13T09:56:51"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Just wondering if there might be other data classes default values missed that could cause an issue... Apart from feature-like classes and tasks, I don't see any others though...\r\n\r\nI think we're good ! `asdict` is used on the DatasetInfo attributes like features, tasks etc. and they all support dict conversion properly now\r\n\r\n> And a question: but this information about the tasks is no longer being saved as YAML tags in the dataset card; won't be a problem with current datasets using task templates (with this information in their metadata JSON) once we replace the JSON by the YAML tags (which do not have this information about the task templates)?\r\n\r\nIn the long run we'll use the train_eval_index YAML tags instead, but I agree when removing the JSON files we should try to not break existing code that may rely on this"
] |
1,406,078,357
| 5,105
|
Specifying an existing folder in download_and_prepare deletes everything in it
|
open
| 2022-10-12T11:53:33
| 2022-10-20T11:53:59
| null |
https://github.com/huggingface/datasets/issues/5105
| null |
cakiki
| false
|
[
"cc @lhoestq ",
"Thanks for reporting, @cakiki.\r\n\r\nI would say the deletion of the dir is an expected behavior though...",
"`dask.to_parquet` has an \"overwrite\" parameter and default is `False`, we could also have something similar",
"Thank you both for your feedback!\r\n\r\n@albertvillanova I think I might have have the wrong mental model of what the function was meant to do. I thought it would be an API similar to the pandas `to_XX` write methods (Like the one @lhoestq mentions) so I just assumed it would download the dataframe to whichever folder I specififed (`\"./\"` in my case) so I could load it into a dask dataframe. I absolutely did not expect it to delete everything in my local directory, including the script where I called it from :smile: \r\n\r\nI think Quentin's proposed solution sounds like a reasonable feature!",
"actually there's already a `download_mode` parameter that defaults to `REUSE_DATASET_IF_EXISTS` - so I guess it's just a matter of not deleting files unrelated to the dataset, and to overwrite existing dataset files if the download mode is `REUSE_CACHE_IF_EXISTS` or `FORCE_REDOWNLOAD`"
] |
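A sketch of the `download_mode` behaviour mentioned in the last comment, assuming the proposed fix keeps files unrelated to the dataset untouched; the dataset name and output directory are hypothetical.

```python
from datasets import DownloadMode, load_dataset_builder

builder = load_dataset_builder("rotten_tomatoes")
# REUSE_DATASET_IF_EXISTS (the default) skips work if the dataset is already
# prepared; per the proposal it should not wipe unrelated files in the dir.
builder.download_and_prepare(
    "./prepared",  # hypothetical output dir
    download_mode=DownloadMode.REUSE_DATASET_IF_EXISTS,
)
```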
1,405,973,102
| 5,104
|
Fix loading how to guide (#5102)
|
closed
| 2022-10-12T10:34:42
| 2022-10-12T11:34:07
| 2022-10-12T11:31:55
|
https://github.com/huggingface/datasets/pull/5104
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5104",
"html_url": "https://github.com/huggingface/datasets/pull/5104",
"diff_url": "https://github.com/huggingface/datasets/pull/5104.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5104.patch",
"merged_at": "2022-10-12T11:31:55"
}
|
riccardobucco
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,405,956,311
| 5,103
|
url encode hub url (#5099)
|
closed
| 2022-10-12T10:22:12
| 2022-10-12T15:27:24
| 2022-10-12T15:24:47
|
https://github.com/huggingface/datasets/pull/5103
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5103",
"html_url": "https://github.com/huggingface/datasets/pull/5103",
"diff_url": "https://github.com/huggingface/datasets/pull/5103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5103.patch",
"merged_at": "2022-10-12T15:24:47"
}
|
riccardobucco
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,404,746,554
| 5,102
|
Error in creating a dataset from a Python generator
|
closed
| 2022-10-11T14:28:58
| 2022-10-12T11:31:56
| 2022-10-12T11:31:56
|
https://github.com/huggingface/datasets/issues/5102
| null |
yangxuhui
| false
|
[
"Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.",
"Can I work on this one?"
] |
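For reference, a self-contained sketch of the corrected call from the thread above; the generator body is an assumption for illustration.

```python
from datasets import Dataset

def my_gen():
    for i in range(1, 4):
        yield {"key": i}

# Dataset.from_generator takes the generator *function* itself,
# not an instantiated generator object.
dataset = Dataset.from_generator(my_gen)
```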
1,404,513,085
| 5,101
|
Free the "hf" filesystem protocol for `hffs`
|
closed
| 2022-10-11T11:57:21
| 2022-10-12T15:32:59
| 2022-10-12T15:30:38
|
https://github.com/huggingface/datasets/pull/5101
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5101",
"html_url": "https://github.com/huggingface/datasets/pull/5101",
"diff_url": "https://github.com/huggingface/datasets/pull/5101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5101.patch",
"merged_at": "2022-10-12T15:30:38"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,404,458,586
| 5,100
|
datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method
|
closed
| 2022-10-11T11:16:31
| 2022-10-11T13:48:26
| 2022-10-11T13:48:26
|
https://github.com/huggingface/datasets/issues/5100
| null |
jagochi
| false
|
[] |
1,404,370,191
| 5,099
|
datasets doesn't support # in data paths
|
closed
| 2022-10-11T10:05:32
| 2022-10-13T13:14:20
| 2022-10-13T13:14:20
|
https://github.com/huggingface/datasets/issues/5099
| null |
loubnabnl
| false
|
[
"`datasets` doesn't seem to urlencode the directory names here\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/file_utils.py#L109-L111\r\n\r\nfor example we should have\r\n```python\r\nfrom datasets.utils.file_utils import hf_hub_url\r\n\r\nurl = hf_hub_url(\"loubnabnl/bigcode_csharp\", \"data/c#/data_0003.jsonl\")\r\nprint(url)\r\n# Currently returns\r\n# https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c#/data_0003.jsonl\r\n# while it should be \r\n# https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/main/data/c%23/data_0003.jsonl\r\n```",
"I'll work on this :)",
"@loubnabnl The dataset you linked in the description of the bug does not work and returns a 404. Where can I find the dataset to reproduce the bug?",
"I think you can create a dataset repository on the Hub with a dummy file containing a `#`",
"Ah sorry it was private I just made it public, I can also help with this if needed",
"@lhoestq Should I url encode also repo_id and revision parameters? I'm not sure what are the valid characters there.\r\n\r\nPersonally, I would be cautious and only url encode the path parameter.",
"These are possible solutions (assuming `from urllib.parse import quote`):\r\n\r\n1) url encode only the path parameter:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=repo_id, path=quote(path), revision=revision)\r\n```\r\n2) url encode all parameters:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=quote(repo_id), path=quote(path), revision=quote(revision))\r\n```\r\n3) url encode the whole url:\r\n```\r\n# src/datasets/config.py\r\nHUB_DATASETS_PATH = \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nHUB_DATASETS_URL = HF_ENDPOINT + HUB_DATASETS_PATH\r\n```\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HF_ENDPOINT + quote(config.HUB_DATASETS_PATH.format(repo_id=repo_id, path=path, revision=revision))\r\n```",
"repo_id can only contain alphanumeric characters and _- so it doesn't need to be encoded.\r\n\r\nHowever I agree it's a good idea to also apply `quote` to the revision as well as in 2. !",
"Should be fixed by https://github.com/huggingface/datasets/issues/5099 - we'll do a release later today"
] |
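A quick demonstration of the `urllib.parse.quote` fix proposed above; the path mirrors the example from the thread.

```python
from urllib.parse import quote

path = "data/c#/data_0003.jsonl"
# quote() leaves "/" intact by default but percent-encodes "#"
print(quote(path))  # -> data/c%23/data_0003.jsonl
```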
1,404,058,518
| 5,098
|
Class label error when loading symbolic links using imagefolder
|
closed
| 2022-10-11T06:10:58
| 2022-11-14T14:40:20
| 2022-11-14T14:40:20
|
https://github.com/huggingface/datasets/issues/5098
| null |
horizon86
| false
|
[
"It can be solved temporarily by remove `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278",
"Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.",
"> Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.\r\n\r\nThanks for your reply!"
] |
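A small illustration of the difference discussed above (paths are hypothetical): `Path.resolve` follows symbolic links, while `os.path.abspath` only normalizes `.`/`..`, which is why the suggested fix keeps class labels derived from symlinked directory names intact.

```python
import os
from pathlib import Path

p = "dataset/class_a/img.png"  # hypothetical path where "class_a" is a symlink

print(Path(p).resolve())         # follows the symlink -> the real target path
print(Path(os.path.abspath(p)))  # normalizes "."/".." but keeps the link name
```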
1,403,679,353
| 5,097
|
Fatal error with pyarrow/libarrow.so
|
closed
| 2022-10-10T20:29:04
| 2022-10-11T06:56:01
| 2022-10-11T06:56:00
|
https://github.com/huggingface/datasets/issues/5097
| null |
catalys1
| false
|
[
"Thanks for reporting, @catalys1.\r\n\r\nThis seems a duplicate of:\r\n- #3310 \r\n\r\nThe source of the problem is in PyArrow:\r\n- [ARROW-15141: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-15141)\r\n- [ARROW-17501: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-17501)\r\n\r\nThe bug in their dependency is still unresolved:\r\n- https://github.com/aws/aws-sdk-cpp/issues/1809\r\n\r\nApparently, the `aws-sdk-cpp` PyArrow dependency needs to be pinned at version `1.8.186` if using conda. Have you updated it after installing PyArrow?\r\n```shell\r\nconda list aws-sdk-cpp\r\n```\r\n\r\nMaybe you should try to downgrade it to that version:\r\n```shell\r\nconda install -c conda-forge aws-sdk-cpp=1.8.186\r\n```"
] |
1,403,379,816
| 5,096
|
Transfer some canonical datasets under an organization namespace
|
closed
| 2022-10-10T15:44:31
| 2024-06-24T06:06:28
| 2024-06-24T06:02:45
|
https://github.com/huggingface/datasets/issues/5096
| null |
albertvillanova
| false
|
[
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 2.01MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default (download: 411 bytes, generated: 385 bytes, post-processed: Unknown size, total: 796 bytes) to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 293kB/s]\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 304.16it/s]\r\nOut[1]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"dummy-canonical-org/dummy_canonical_dataset\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 1.57MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 362.48it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n```",
"Cool ! 🚀 ",
"Maybe we should be a bit more proactive with these transfers. There are only ≈70 canonical models, so reaching that number with datasets would be great, too. It's not easy considering the current number of ≈750 canonical datasets, but doable.\r\n\r\nFor instance, it shouldn't be too hard to transfer these datasets (partial list; all of them have more than > 1k downloads):\r\n\r\n<details>\r\n\r\n<summary> Datasets to transfer </summary>\r\n\r\n```\r\nquickdraw -> google\r\nopenai_humaneval -> openai\r\nc4 -> allenai/c4 (the canonical version reads data from the org version)\r\nmbpp -> google (ask jaaustin (author) where to transfer the dataset)\r\ncompetition_math -> hendrycks (author)\r\ngsm8k -> openai\r\nai2_arc -> allenai\r\nimdb -> stanfordai\r\ngreek_legal_code -> chrispap (author)\r\nspider -> Yale-LILY\r\nsquad and squad_v2 -> rajpurkarlab (or rajpurkar, a member of the org and one of the authors)\r\ncppe-5 -> rishitdagli\r\nnews_commentary -> Helsinki-NLP\r\njfleg -> keisks (author)\r\npubmed_qa -> qiaojin (author)\r\nmedmcqa -> infinitylogesh (author)\r\ncifar10 and cifar100 -> UniversityofToronto\r\ncc100 -> gwenzek (author)\r\nasset -> facebook\r\nblbooks -> BritishLibraryLabs\r\ncapes -> FLSRDS (maybe the author?)\r\ncc_news -> fhamborg (author)\r\nclue -> CLUE benchmark\r\ncoqa -> stanfordnlp\r\nlambada -> germank (author)\r\nlibrispeech_asr -> openslr\r\ndrop -> allenai\r\nduorc -> salesforce (ask amritasaha87 (author) where to transfer)\r\nglue -> nyu-mll ?\r\ngo_emotions -> google\r\ncommonsense_qa -> tau\r\ndbpedia_14 -> JensLehmann (author?)\r\ndiscofuse -> google\r\nmc4 -> allenai/c4\r\nopenbookqa -> allenai\r\nropes -> allene\r\ntrivia_qa -> mandarjoshi (author)\r\nwikiann -> afshinrahimi (author)\r\nxtreme -> google\r\nxscr -> INK-USC\r\nyelp_review_full -> Yelp\r\ntruthful_qa -> jacobhilton22 (author)\r\nbigbench -> google\r\nxnli -> facebook\r\nsciq -> allenai\r\nsst2 -> stanfordnlp\r\nblimp -> alexwarstadt (author)\r\ntweet_eval -> cardiffnlp\r\nbeans -> AI-Lab-Makerere\r\nlex_glue -> coastalcph\r\namericas_nli -> abteen (author)\r\nopus_euconst -> tiedeman (author)\r\nmedical_questions_pairs -> curaihealth\r\nweb_questions -> joberant (author)\r\nanli -> facebook\r\nrace -> CarnegieMellonCS\r\nklue -> klue\r\nwino_bias -> uclanlp\r\nwiki_qa -> microsoft\r\nxcopa -> cambridgeltl\r\nindic_glue -> ai4bharat\r\nboolq -> google\r\nadversarial_qa -> mbartolo (author)\r\nnq_open -> google\r\nsnli -> stanfordnlp\r\nstsb_multi_mt -> PhilipMay (author)\r\nmulti_nli -> sleepinyourhat (author)\r\npaws -> google\r\npaws-x -> google\r\nms_marco - microsoft\r\nxquad -> deepmind\r\nnarrativeqa -> deepmind\r\nkilt_tasks -> facebook\r\nhate_speech_offensive -> tdavidson (author)\r\nwiki40b -> google\r\ncovost2 -> facebook\r\ncommon_gen -> INKLAB\r\nmulti_eurlex -> kiddothe2b (author)\r\nexams -> mhardalov (author)\r\ntiny_shakespeare -> karpathy (author)\r\nblbooksgenre -> BritishLibraryLabs ?\r\nfood101 -> ethz ?\r\nscitail -> allenai\r\nbillsum -> FiscalNote\r\nimppres -> facebook\r\nquartz -> allenai\r\nqasc -> allenai\r\nquail -> textmachinelab\r\nwiki_lingua -> esdurmus\r\ncos_e -> salesforce ?\r\ncivil_comments -> google ? 
(create a “jigsaw” org) \r\nxquad_r -> google\r\nwikitext-> metamind (or salesforce)\r\n\r\n// deprecate c4 and mc4 in favor of allenai/c4 (add a dataset script to the org version to make it easier to use?)\r\n```\r\n</details>\r\n\r\nAlso, a space that allows users to claim the existing canonical datasets (for themselves or their organizations) could be nice.\r\n\r\nWDYT?",
"Next week I can take care of some of them :) In most cases we just need to send an email to ask them if they're ok with it.\r\nLet's coordinate on slack ?",
"Yup, sounds good to me!",
"I can also continuing working on this if we agree this has become a priority now.",
"cool stuff! \r\n\r\nthis morning on my side i moved huggingface.co/ctrl (a not very used model) to its rightful entity",
"As a previous step before transferring the datasets, we decided we should convert them to Parquet, so that the viewer does not stop working (the viewer does not support datasets with scripts). \r\n\r\nDatasets converted to Parquet:\r\n- [x] adversarial_qa\r\n- [x] ai2_arc\r\n- [x] americas_nli\r\n- [x] anli\r\n- [x] asset\r\n- [x] beans\r\n- [ ] bigbench\r\n- [x] billsum\r\n- [ ] blbooks: it was already transferred to: TheBritishLibrary/blbooks\r\n- [ ] blbooksgenre: it was already transferred to: TheBritishLibrary/blbooksgenre\r\n- [x] blimp\r\n- [x] boolq\r\n- [ ] c4\r\n- [x] capes\r\n- [ ] cc100\r\n- [x] cc_news\r\n- [x] cifar10\r\n- [x] cifar100\r\n- [x] civil_comments\r\n- [x] clue\r\n- [x] common_gen\r\n- [x] commonsense_qa\r\n- [ ] competition_math: it was already transferred to: hendrycks/competition_math\r\n- [x] coqa\r\n- [x] cos_e\r\n- [ ] covost2: it requires manual download\r\n- [x] cppe-5\r\n- [x] dbpedia_14\r\n- [x] discofuse\r\n- [x] drop\r\n- [x] duorc\r\n- [x] exams\r\n- [x] food101\r\n- [x] glue\r\n- [x] go_emotions\r\n- [x] greek_legal_code\r\n- [x] gsm8k\r\n- [x] hate_speech_offensive\r\n- [x] imdb\r\n- [x] imppres\r\n- [x] indic_glue\r\n- [x] jfleg\r\n- [x] kilt_tasks\r\n- [x] klue\r\n- [x] lambada\r\n- [x] lex_glue\r\n- [ ] librispeech_asr\r\n- [x] mbpp\r\n- [ ] mc4\r\n- [x] medical_questions_pairs\r\n- [x] medmcqa\r\n- [x] ms_marco\r\n- [ ] multi_eurlex\r\n- [x] multi_nli\r\n- [ ] narrativeqa\r\n- [ ] news_commentary\r\n- [x] nq_open\r\n- [x] openai_humaneval\r\n- [x] openbookqa\r\n- [ ] opus_euconst\r\n- [x] paws\r\n- [x] paws-x\r\n- [x] pubmed_qa\r\n- [x] qasc\r\n- [x] quail\r\n- [x] quartz\r\n- [ ] quickdraw\r\n- [x] race\r\n- [x] ropes\r\n- [x] sciq\r\n- [x] scitail\r\n- [ ] snli\r\n- [x] spider\r\n- [x] squad\r\n- [x] squad_v2\r\n- [x] sst2\r\n- [x] stsb_multi_mt\r\n- [x] tiny_shakespeare\r\n- [x] trivia_qa\r\n- [x] truthful_qa\r\n- [x] tweet_eval\r\n- [x] web_questions\r\n- [ ] wiki40b\r\n- [x] wiki_lingua\r\n- [x] wiki_qa\r\n- [ ] wikiann\r\n- [x] wikitext\r\n- [x] wino_bias\r\n- [x] xcopa\r\n- [x] xcsr\r\n- [x] xnli\r\n- [x] xquad\r\n- [x] xquad_r\r\n- [ ] xtreme\r\n- [x] yelp_review_full\r\n",
"For `c4` and `mc4` I was thinking of adding the corresponding configs to `allenai/c4` and redirect `c4` and `mc4` to `allenai/c4`. I'll open a PR on `allenai/c4` if it's good for you",
"@davanstrien and @lhoestq, I have shared with you this spreadsheet: https://docs.google.com/spreadsheets/d/1GvNTd1UxmtTvEFOK-Eq6E3Str4FUWQuWZsEN0WVFirs/edit?usp=sharing\r\n\r\nThis way we can take datasets by batches to contact the authors and transfer to the organizations.",
"We have already transferred all canonical datasets under organization/user namespaces."
] |
1,403,221,408
| 5,095
|
Fix tutorial (#5093)
|
closed
| 2022-10-10T13:55:15
| 2022-10-10T17:50:52
| 2022-10-10T15:32:20
|
https://github.com/huggingface/datasets/pull/5095
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5095",
"html_url": "https://github.com/huggingface/datasets/pull/5095",
"diff_url": "https://github.com/huggingface/datasets/pull/5095.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5095.patch",
"merged_at": "2022-10-10T15:32:20"
}
|
riccardobucco
| true
|
[
"Oops I merged without linking to the hacktoberfest issue - not sure if it counts in this case\r\n\r\nsorry about that..\r\n\r\nNext time you can just mention \"Close #XXXX\" in your issue to link it",
"It should :) (the `hacktoberfest` repo topic is all that matters)"
] |
1,403,214,950
| 5,094
|
Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock
|
closed
| 2022-10-10T13:50:56
| 2023-07-24T15:29:13
| 2023-07-24T15:29:13
|
https://github.com/huggingface/datasets/issues/5094
| null |
RR-28023
| false
|
[
"Hi ! Could it be an Out of Memory issue that could have killed one of the processes ? can you check your memory ?",
"Hi! I don't think it is a memory issue. I'm monitoring the main and spawn python processes and threads with `htop` and the memory does not peak. Besides, the example I've posted above should not be that demanding in terms of memory, right? (I have 32GB of RAM). ",
"Indeed it should be fine. I couldn't reproduce the error though - I ran your script on my side and it works fine. What version of pytorch are you using ?",
"Interesting.. I'm using `torch 1.12.1`",
"I also tried on colab and it works fine 🤔 \r\nMaybe something is wrong with your installation of pytorch ?",
"Oh actually I just saw that you're using python 3.9\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/4113\r\n\r\nWe'll fix that as soon as we can, in the meantime you can try to use use single process, or use an older version of python maybe ?",
"I tried with python 3.7 and the issue persists. In collab, which also uses 3.7 I don't get the issue, so yes I guess is something on mu side... will post it here if I manage to fix it",
"Hi! Which version of transformers are you using? I test the code on Colab (so python 3.7) with transformers 4.23.1, torch 1.12.1 and pyarrow 9.0.0 (also 6.x), it worked without stuck.",
"Hi, I have the same problem in use **datasets.IterableDatasetDict.map()**\r\nmy pytorch is 2.0.0a0+gitc263bd4\r\nmy python is 3.8.16(default, Jun 12 2023, 17:37:21)\r\nwork on aarch64 in 16 node, each node with 4*nVidia-A100-40G\r\nevery node have 4 process execute code as ↓\r\n\r\n```\r\nfrom datasets import load_dataset, interleave_datasets, IterableDatasetDict, concatenate_datasets\r\n```\r\n...\r\n```\r\n model_args.cache_dir = '/home/scx/.cache'\r\n for dataset_name in data_args.datasets_name:\r\n train_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='train'\r\n ).select_columns('text')\r\n )\r\n valid_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='validation'\r\n ).select_columns('text')\r\n )\r\n train_dataset = interleave_datasets(train_datasets,\r\n probabilities=data_args.datasets_probabilities, \r\n seed=training_args.seed,\r\n stopping_strategy='all_exhausted')\r\n raw_datasets = IterableDatasetDict({'train': train_dataset, 'validation': valid_dataset})\r\n```\r\n...\r\n\r\n```\r\n tokenized_datasets = None\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n if not data_args.streaming:\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n remove_columns=column_names,\r\n )\r\n else:\r\n #TODO 20230722\r\n logger.info('{}: {}'.format(__file__, 'tokenized_datasets = raw_datasets.map('))\r\n logger.info('len raw_datasets: {}'.format(len(raw_datasets.items())))\r\n logger.info('raw_datasets:{}'.format(raw_datasets.items()))\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n batch_size=1000,\r\n remove_columns=column_names\r\n )\r\n logger.info('map ok!')\r\n logger.info('show train: {}'.format(next(iter(tokenized_datasets['train']))))\r\n logger.info('ok')\r\n # ### RAW CODE ###\r\n # tokenized_datasets = raw_datasets.map(\r\n # tokenize_function,\r\n # batched=True,\r\n # batch_size=1000,\r\n # remove_columns=column_names\r\n # )\r\n #TODO 20230722\r\n logger.info(\"Finish tokenization\")\r\n```\r\nthe output of my code is\r\n```\r\n07/22/2023 21:57:09 - INFO - __main__ - /demo/run_blue_space.py: tokenized_datasets = raw_datasets.map(\r\n07/22/2023 21:57:09 - INFO - __main__ - len raw_datasets: 2\r\n07/22/2023 21:57:09 - INFO - __main__ - raw_datasets:dict_items([('train', <datasets.iterable_dataset.IterableDataset object at 0x4005ee301190>), ('validation', <datasets.iterable_dataset.IterableDataset object at 0x4005ee5427f0>)])\r\n07/22/2023 21:57:09 - INFO - __main__ - map ok!\r\n07/22/2023 22:01:07 - INFO - __main__ - show train: {'input_ids': [14608, 26797, 31891, 34260, 12227, 33207, 5, 5, 31632, 26797, 31891, 34260, 12227, 33207, 7398, 28561, 31236, 31177, 31253, 33558, 31556, 31377, 72, 20732, 32383, 32295, 14027, 31178, 53, 61, 53, 55, 31189, 31146, 31321, 31235, 53, 61, 56, 58, 31189, 31145, 72, 53, 61, 58, 54, 31189, 54, 31245, 53, 60, 31224, 31896, 31178, 28561, 29331, 20732, 31888, 32637, 4426, 2824, 72, 53, 61, 60, 55, 31189, 53, 54, 31245, 53, 31224, 31896, 31178, 28561, 29331, 26137, 20732, 4426, 2824, 73, 54, 52, 52, 52, 
31189, 61, 31245, 59, 31224, 31896, 31178, 29331, 28561, 20732, 4426, 2824, 73, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n07/22/2023 22:01:07 - INFO - __main__ - ok\r\n```\r\n\r\n",
"@bio-punk `IterableDatasetDict.map` does not support multiprocessing (only `DatasetDict.map` and `Dataset.map` do), so please open a new issue as this doesn't seem to be related to the original issue. ",
"Closing as this issue doesn't seem to be related to `datasets`."
] |
1,402,939,660
| 5,093
|
Mismatch between tutorial and doc
|
closed
| 2022-10-10T10:23:53
| 2022-10-10T17:51:15
| 2022-10-10T17:51:14
|
https://github.com/huggingface/datasets/issues/5093
| null |
clefourrier
| false
|
[
"Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).",
"Can I work on this?",
"Fixed in https://github.com/huggingface/datasets/pull/5095"
] |
1,402,713,517
| 5,092
|
Use HTML relative paths for tiles in the docs
|
closed
| 2022-10-10T07:24:27
| 2022-10-11T13:25:45
| 2022-10-11T13:23:23
|
https://github.com/huggingface/datasets/pull/5092
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5092",
"html_url": "https://github.com/huggingface/datasets/pull/5092",
"diff_url": "https://github.com/huggingface/datasets/pull/5092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5092.patch",
"merged_at": "2022-10-11T13:23:23"
}
|
lewtun
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Good catch, @lewtun. Thanks for the fix.\r\n> \r\n> Do you know if there are other absolute paths in the docs that should be fixed as well?\r\n\r\nI found a few more in [0d4796b](https://github.com/huggingface/datasets/pull/5092/commits/0d4796b747e6620d9fcc17a8f74acc5cf4bba7be).\r\n\r\nHowever, I noticed that none of the cross-references (e.g. to API classes / methods) work locally, but that is probably just a limitation of the local build",
"Thanks."
] |
1,401,112,552
| 5,091
|
Allow connection objects in `from_sql` + small doc improvement
|
closed
| 2022-10-07T12:39:44
| 2022-10-09T13:19:15
| 2022-10-09T13:16:57
|
https://github.com/huggingface/datasets/pull/5091
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5091",
"html_url": "https://github.com/huggingface/datasets/pull/5091",
"diff_url": "https://github.com/huggingface/datasets/pull/5091.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5091.patch",
"merged_at": "2022-10-09T13:16:57"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,401,102,407
| 5,090
|
Review sync issues from GitHub to Hub
|
closed
| 2022-10-07T12:31:56
| 2022-10-08T07:07:36
| 2022-10-08T07:07:36
|
https://github.com/huggingface/datasets/issues/5090
| null |
albertvillanova
| false
|
[
"Nice!!"
] |
1,400,788,486
| 5,089
|
Resume failed process
|
open
| 2022-10-07T08:07:03
| 2022-10-07T08:07:03
| null |
https://github.com/huggingface/datasets/issues/5089
| null |
felix-schneider
| false
|
[] |
1,400,530,412
| 5,088
|
load_datasets("json", ...) don't read local .json.gz properly
|
open
| 2022-10-07T02:16:58
| 2022-10-07T14:43:16
| null |
https://github.com/huggingface/datasets/issues/5088
| null |
junwang-wish
| false
|
[
"Hi @junwang-wish, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce the bug. Which version of `datasets` are you using? Does the problem persist if you update `datasets`?\r\n```shell\r\npip install -U datasets\r\n``` ",
"Thanks @albertvillanova I updated `datasets` from `2.5.1` to `2.5.2` and tested copying the `json.gz` to a different directory and my mind was blown:\r\n\r\n```python\r\nfpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nproduces \r\n```python\r\nUsing custom data configuration default-0e6cf24134163e8b\r\nFound cached dataset json (/data/junwang/.cache/huggingface/datasets/json/default-0e6cf24134163e8b/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab)\r\n(1, 0)\r\n```\r\nbut then I ran below command to see if the same file in a different directory leads to same discrepancy\r\n```shell\r\ncp /data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz tmp_test.json.gz\r\n```\r\nand so I ran\r\n```python\r\nfpath = 'tmp_test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nand behold, I get \r\n```python\r\nUsing custom data configuration default-f679b32ab0008520\r\nDownloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n(1, 1)\r\n```\r\nThey match now !\r\n\r\nThis problem happens regardless of the shell I use (VScode jupyter extension or plain old Python REPL). \r\n\r\nI attached the `json.gz` here for reference: [test.json.gz](https://github.com/huggingface/datasets/files/9734843/test.json.gz)\r\n\r\n"
] |
1,400,487,967
| 5,087
|
Fix filter with empty indices
|
closed
| 2022-10-07T01:07:00
| 2022-10-07T18:43:03
| 2022-10-07T18:40:26
|
https://github.com/huggingface/datasets/pull/5087
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5087",
"html_url": "https://github.com/huggingface/datasets/pull/5087",
"diff_url": "https://github.com/huggingface/datasets/pull/5087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5087.patch",
"merged_at": "2022-10-07T18:40:26"
}
|
Mouhanedg56
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,400,216,975
| 5,086
|
HTTPError: 404 Client Error: Not Found for url
|
closed
| 2022-10-06T19:48:58
| 2022-10-07T15:12:01
| 2022-10-07T15:12:01
|
https://github.com/huggingface/datasets/issues/5086
| null |
keyuchen21
| false
|
[
"FYI @lewtun ",
"Hi @km5ar, thanks for reporting.\r\n\r\nThis should be fixed in the notebook:\r\n- the filename `datasets-issues-with-hf-doc-builder.jsonl` no longer exists on the repo; instead, current filename is `datasets-issues-with-comments.jsonl`\r\n- see: https://huggingface.co/datasets/lewtun/github-issues/tree/main\r\n\r\nAnyway, depending on your version of `datasets`, you can now use:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"lewtun/github-issues\")\r\nissues_dataset\r\n```\r\ninstead of:\r\n```python\r\nfrom huggingface_hub import hf_hub_url\r\n\r\ndata_files = hf_hub_url(\r\n repo_id=\"lewtun/github-issues\",\r\n filename=\"datasets-issues-with-hf-doc-builder.jsonl\",\r\n repo_type=\"dataset\",\r\n)\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\nissues_dataset\r\n```\r\n\r\nOutput:\r\n```python\r\nIn [25]: ds = load_dataset(\"lewtun/github-issues\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.5k/10.5k [00:00<00:00, 5.75MB/s]\r\nUsing custom data configuration lewtun--github-issues-cff5093ecc410ea2\r\nDownloading and preparing dataset json/lewtun--github-issues to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.2M/12.2M [00:00<00:00, 26.5MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.70s/it]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1589.96it/s]\r\nDataset json downloaded and prepared to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 133.95it/s]\r\n\r\nIn [26]: ds\r\nOut[26]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app', 'is_pull_request'],\r\n num_rows: 3019\r\n })\r\n})\r\n```",
"Thanks for reporting @km5ar and thank you @albertvillanova for the quick solution! I'll post a fix on the source too"
] |
1,400,113,569
| 5,085
|
Filtering on an empty dataset returns a corrupted dataset.
|
closed
| 2022-10-06T18:18:49
| 2022-10-07T19:06:02
| 2022-10-07T18:40:26
|
https://github.com/huggingface/datasets/issues/5085
| null |
gabegma
| false
|
[
"~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are returned without going through partial function on `map` method, which will not work to get indices for `filter`: we need to run `get_indices_from_mask_function` partial function on the dataset to get output = `{\"indices\": []}`. But this is complicated since functions used in args, in particular `get_indices_from_mask_function`, do not support empty datasets.\r\nWe can just handle empty datasets aside on filter method.",
"#self-assign",
"Thank you for solving this amazingly quickly!"
] |
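A minimal sketch of the behaviour the fix restores: filtering an already-empty dataset should yield a well-formed empty dataset rather than a corrupted one. The column name and contents are assumptions for illustration.

```python
from datasets import Dataset

ds = Dataset.from_dict({"sentence": ["a", "b", "c"]})
empty = ds.filter(lambda ex: False)          # 0 rows
still_empty = empty.filter(lambda ex: True)  # must stay a valid, empty dataset

print(len(empty), len(still_empty))          # 0 0
print(still_empty["sentence"])               # [] rather than an error
```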
1,400,016,229
| 5,084
|
IterableDataset formatting in numpy/torch/tf/jax
|
closed
| 2022-10-06T16:53:38
| 2023-09-24T10:06:51
| 2022-12-20T17:19:52
|
https://github.com/huggingface/datasets/pull/5084
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5084",
"html_url": "https://github.com/huggingface/datasets/pull/5084",
"diff_url": "https://github.com/huggingface/datasets/pull/5084.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5084.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084). All of your documentation changes will be reflected on that endpoint.",
"Actually I'm not happy with this implementation. It always require the iterable dataset to have definite `features`, which removes a lot of flexibility. So I think we need an actual formatting from python objects, not from arrow data.",
"Closing this one since it has too many conflicts and still require some work - it will be easier to open a new PR"
] |
1,399,842,514
| 5,083
|
Support numpy/torch/tf/jax formatting for IterableDataset
|
closed
| 2022-10-06T15:14:58
| 2023-10-09T12:42:15
| 2023-10-09T12:42:15
|
https://github.com/huggingface/datasets/issues/5083
| null |
lhoestq
| false
|
[
"hii @lhoestq, can you assign this issue to me? Though i am new to open source still I would love to put my best foot forward. I can see there isn't anyone right now assigned to this issue.",
"Hi @zutarich ! This issue was fixed by #5852 - sorry I forgot to close it\r\n\r\nFeel free to look for other issues and ping me or @mariosasko if you have questions :)\r\nAlso let us know if we can help find an issue that can correspond to what you're looking for"
] |
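A sketch of the feature once implemented (per #5852, mentioned in the thread); the dataset name is an example and exact behaviour may differ by version.

```python
from datasets import load_dataset

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
ids = ids.with_format("torch")  # "numpy"/"tf"/"jax" analogous

for example in ids.take(1):
    print(type(example["label"]))  # a torch tensor once formatting applies
```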
1,399,379,777
| 5,082
|
adding keep in memory
|
closed
| 2022-10-06T11:10:46
| 2022-10-07T14:35:34
| 2022-10-07T14:32:54
|
https://github.com/huggingface/datasets/pull/5082
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5082",
"html_url": "https://github.com/huggingface/datasets/pull/5082",
"diff_url": "https://github.com/huggingface/datasets/pull/5082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5082.patch",
"merged_at": "2022-10-07T14:32:54"
}
|
Mustapha-AJEGHRIR
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @mariosasko , I have added a test for the `keep_in_memory` version. I have also removed the `Compatible with temp_seed` part in the scope of `dset_shuffled`, please verify if that makes sense."
] |
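A minimal sketch of what the added parameter enables, assuming it mirrors the `keep_in_memory` option used elsewhere in the library: the shuffle's indices mapping stays in memory instead of being written to a cache file.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
# keep_in_memory avoids writing the shuffled indices mapping to disk
shuffled = ds.shuffle(seed=42, keep_in_memory=True)
print(shuffled["x"])
```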
1,399,340,050
| 5,081
|
Bug loading `sentence-transformers/parallel-sentences`
|
open
| 2022-10-06T10:47:51
| 2022-10-11T10:00:48
| null |
https://github.com/huggingface/datasets/issues/5081
| null |
PhilipMay
| false
|
[
"tagging @nreimers ",
"The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.",
"Since the dataset is a bunch of TSVs we should not need a dataset script I think.\r\n\r\nBy default it tries to load all the TSVs at once, which fails here because they don't all have the same columns (pd.read_csv uses the first line as header by default). But those files have no header ! So, to properly load any TSV file in this repo, one has to pass `names=[...]` for pd.read_csv to know which column names to use.\r\n\r\nTo fix this situation, we can either do\r\n1. replace the TSVs by TSV with column names\r\n2. OR specify the pd.read_csv kwargs as YAML in the dataset card - and `datasets` would use that by default\r\n\r\nWDTY ?",
"There are more issues in the dataset.\r\nTo load OpenSubtitles I have to provide this (see `skiprows`):\r\n\r\n```python\r\ndf_os = pd.read_csv(\r\n \"./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz\", \r\n sep=\"\\t\", \r\n quoting=csv.QUOTE_NONE,\r\n header=None,\r\n names=[\"en\", \"de\"],\r\n skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],\r\n)\r\n```",
"What's wrong with those lines exactly ?\r\nMaybe passing `error_bad_lines=False` (and maybe `warn_bad_lines=True`) can be helpful",
"> What's wrong with those lines exactly ? \r\n\r\nStuff like this: `ParserError: Error tokenizing data. C error: Expected 2 fields in line 540345, saw 3`\r\n\r\n",
"> Maybe passing error_bad_lines=False (and maybe warn_bad_lines=True) can be helpful\r\n\r\nYes. That would hide the issue but not solve it.",
"@nreimers WDYT about the two options mentioned above ?"
] |
1,398,849,565
| 5,080
|
Use hfh for caching
|
open
| 2022-10-06T05:51:58
| 2022-10-06T14:26:05
| null |
https://github.com/huggingface/datasets/issues/5080
| null |
albertvillanova
| false
|
[
"There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)"
] |
1,398,609,305
| 5,079
|
refactor: replace AssertionError with more meaningful exceptions (#5074)
|
closed
| 2022-10-06T01:39:35
| 2022-10-07T14:35:43
| 2022-10-07T14:33:10
|
https://github.com/huggingface/datasets/pull/5079
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5079",
"html_url": "https://github.com/huggingface/datasets/pull/5079",
"diff_url": "https://github.com/huggingface/datasets/pull/5079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5079.patch",
"merged_at": "2022-10-07T14:33:10"
}
|
galbwe
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,398,335,148
| 5,078
|
Fix header level in Audio docs
|
closed
| 2022-10-05T20:22:44
| 2022-10-06T08:12:23
| 2022-10-06T08:09:41
|
https://github.com/huggingface/datasets/pull/5078
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5078",
"html_url": "https://github.com/huggingface/datasets/pull/5078",
"diff_url": "https://github.com/huggingface/datasets/pull/5078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5078.patch",
"merged_at": "2022-10-06T08:09:41"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,398,080,859
| 5,077
|
Fix passed download_config in HubDatasetModuleFactoryWithoutScript
|
closed
| 2022-10-05T16:42:36
| 2022-10-06T05:31:22
| 2022-10-06T05:29:06
|
https://github.com/huggingface/datasets/pull/5077
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5077",
"html_url": "https://github.com/huggingface/datasets/pull/5077",
"diff_url": "https://github.com/huggingface/datasets/pull/5077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5077.patch",
"merged_at": "2022-10-06T05:29:06"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,397,918,092
| 5,076
|
fix: update exception throw from OSError to EnvironmentError in `push…
|
closed
| 2022-10-05T14:46:29
| 2022-10-07T14:35:57
| 2022-10-07T14:33:27
|
https://github.com/huggingface/datasets/pull/5076
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5076",
"html_url": "https://github.com/huggingface/datasets/pull/5076",
"diff_url": "https://github.com/huggingface/datasets/pull/5076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5076.patch",
"merged_at": "2022-10-07T14:33:27"
}
|
rahulXs
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,397,865,501
| 5,075
|
Throw EnvironmentError when token is not present
|
closed
| 2022-10-05T14:14:18
| 2022-10-07T14:33:28
| 2022-10-07T14:33:28
|
https://github.com/huggingface/datasets/issues/5075
| null |
mariosasko
| false
|
[
"@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks."
] |
1,397,850,352
| 5,074
|
Replace AssertionErrors with more meaningful errors
|
closed
| 2022-10-05T14:03:55
| 2022-10-07T14:33:11
| 2022-10-07T14:33:11
|
https://github.com/huggingface/datasets/issues/5074
| null |
mariosasko
| false
|
[
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] |
1,397,832,183
| 5,073
|
Restore saved format state in `load_from_disk`
|
closed
| 2022-10-05T13:51:47
| 2022-10-11T16:55:07
| 2022-10-11T16:49:23
|
https://github.com/huggingface/datasets/pull/5073
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5073",
"html_url": "https://github.com/huggingface/datasets/pull/5073",
"diff_url": "https://github.com/huggingface/datasets/pull/5073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5073.patch",
"merged_at": "2022-10-11T16:49:23"
}
|
asofiaoliveira
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,397,765,531
| 5,072
|
Image & Audio formatting for numpy/torch/tf/jax
|
closed
| 2022-10-05T13:07:03
| 2022-10-10T13:24:10
| 2022-10-10T13:21:32
|
https://github.com/huggingface/datasets/pull/5072
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5072",
"html_url": "https://github.com/huggingface/datasets/pull/5072",
"diff_url": "https://github.com/huggingface/datasets/pull/5072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5072.patch",
"merged_at": "2022-10-10T13:21:32"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I just added a consolidation step so that numpy arrays or tensors of images are stacked together if the shapes match, instead of having lists of tensors\r\n\r\nFeel free to review @mariosasko :)",
"I added a few lines in the docs and reverted the ragged numpy array change :)\r\n\r\nready for another review @mariosasko !"
] |
1,397,301,270
| 5,071
|
Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS
|
closed
| 2022-10-05T06:28:39
| 2022-10-06T14:43:12
| 2022-10-06T14:40:26
|
https://github.com/huggingface/datasets/pull/5071
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5071",
"html_url": "https://github.com/huggingface/datasets/pull/5071",
"diff_url": "https://github.com/huggingface/datasets/pull/5071.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5071.patch",
"merged_at": "2022-10-06T14:40:25"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Super, thanks a lot for adding this support, Albert!"
] |
1,396,765,647
| 5,070
|
Support default config name when no builder configs
|
closed
| 2022-10-04T19:49:35
| 2022-10-06T14:40:26
| 2022-10-06T14:40:26
|
https://github.com/huggingface/datasets/issues/5070
| null |
albertvillanova
| false
|
[
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/discussions/2/files\r\n\r\nbut which I had to revert since providing no split breaks at run time.\r\n"
] |
1,396,361,768
| 5,067
|
Fix CONTRIBUTING once dataset scripts transferred to Hub
|
closed
| 2022-10-04T14:16:05
| 2022-10-06T06:14:43
| 2022-10-06T06:12:12
|
https://github.com/huggingface/datasets/pull/5067
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5067",
"html_url": "https://github.com/huggingface/datasets/pull/5067",
"diff_url": "https://github.com/huggingface/datasets/pull/5067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5067.patch",
"merged_at": "2022-10-06T06:12:12"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,396,086,745
| 5,066
|
Support streaming gzip.open
|
closed
| 2022-10-04T11:20:05
| 2022-10-06T15:13:51
| 2022-10-06T15:11:29
|
https://github.com/huggingface/datasets/pull/5066
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5066",
"html_url": "https://github.com/huggingface/datasets/pull/5066",
"diff_url": "https://github.com/huggingface/datasets/pull/5066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5066.patch",
"merged_at": "2022-10-06T15:11:29"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,396,003,362
| 5,065
|
Ci py3.10
|
closed
| 2022-10-04T10:13:51
| 2022-11-29T15:28:05
| 2022-11-29T15:25:26
|
https://github.com/huggingface/datasets/pull/5065
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5065",
"html_url": "https://github.com/huggingface/datasets/pull/5065",
"diff_url": "https://github.com/huggingface/datasets/pull/5065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5065.patch",
"merged_at": "2022-11-29T15:25:26"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Does it sound good to you @albertvillanova ?"
] |
1,395,978,143
| 5,064
|
Align signature of create/delete_repo with latest hfh
|
closed
| 2022-10-04T09:54:53
| 2022-10-07T17:02:11
| 2022-10-07T16:59:30
|
https://github.com/huggingface/datasets/pull/5064
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5064",
"html_url": "https://github.com/huggingface/datasets/pull/5064",
"diff_url": "https://github.com/huggingface/datasets/pull/5064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5064.patch",
"merged_at": "2022-10-07T16:59:30"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,395,895,463
| 5,063
|
Align signature of list_repo_files with latest hfh
|
closed
| 2022-10-04T08:51:46
| 2022-10-07T16:42:57
| 2022-10-07T16:40:16
|
https://github.com/huggingface/datasets/pull/5063
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5063",
"html_url": "https://github.com/huggingface/datasets/pull/5063",
"diff_url": "https://github.com/huggingface/datasets/pull/5063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5063.patch",
"merged_at": "2022-10-07T16:40:16"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,395,739,417
| 5,062
|
Fix CI hfh token warning
|
closed
| 2022-10-04T06:36:54
| 2022-10-04T08:58:15
| 2022-10-04T08:42:31
|
https://github.com/huggingface/datasets/pull/5062
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5062",
"html_url": "https://github.com/huggingface/datasets/pull/5062",
"diff_url": "https://github.com/huggingface/datasets/pull/5062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5062.patch",
"merged_at": "2022-10-04T08:42:31"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"good catch !"
] |
1,395,476,770
| 5,061
|
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`
|
closed
| 2022-10-03T23:51:38
| 2023-07-21T14:43:35
| 2023-07-21T14:43:34
|
https://github.com/huggingface/datasets/issues/5061
| null |
ZhaofengWu
| false
|
[
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess==0.70.12.2, dill==0.3.4`: works\r\n- `multiprocess==0.70.12.2, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.4`: can't test, `multiprocess==0.70.13` requires `dill>=0.3.5.1`\r\n\r\nI will pin their versions on my end. I don't have enough knowledge of how python multiprocessing works to debug this, but ideally there could be a fix. It's also possible that I'm doing something wrong in my code, but again the `.name` of the logger that failed to pickle is `datasets.fingerprint`, which I'm not using directly.",
"Do you know which logger fails at being pickled ?",
"I'm not 100% sure how to figure it out -- the stack trace above doesn't clearly give me a place where I can print out who owns the logger, etc. I only found out its `.name` is `datasets.fingerprint` by printing right before\r\n```\r\n File \".../logging/__init__.py\", line 1774, in __reduce__\r\n raise pickle.PicklingError('logger cannot be pickled')\r\n```\r\nIf you have any idea on how to find it out, please let me know.",
"Ok I see, not sure why it triggers this error though, in `logging.py` the code is\r\n\r\nhttps://github.com/python/cpython/blob/c9da063e32725a66495e4047b8a5ed13e72d9e8e/Lib/logging/__init__.py#L1769-L1775\r\n\r\nand on my side it works on 3.10 with dill 0.3.5.1 and multiprocess 0.70.13\r\n```python\r\n>>> datasets.fingerprint.logger.__reduce__() \r\n(<function logging.getLogger(name=None)>, ('datasets.fingerprint',))\r\n```\r\nCould you try to run this code ?\r\n\r\nAre you in an environment where the loggers are instantiated differently ? Can you check the source code of `logging.Logger.__reduce__` in `\".../logging/__init__.py\", line 1774` ?",
"Closing due to inactivity."
] |
1,395,382,940
| 5,060
|
Unable to Use Custom Dataset Locally
|
closed
| 2022-10-03T21:55:16
| 2022-10-06T14:29:18
| 2022-10-06T14:29:17
|
https://github.com/huggingface/datasets/issues/5060
| null |
zanussbaum
| false
|
[
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read from a URL? Any links to code/documentation would be greatly appreciated, I'd love to learn more",
"`datasets` extends `open` in dataset scripts to work with URLs. The builtin `open` from python only works with local files.\r\n\r\nYou can find the extension here: https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/download/streaming_download_manager.py#L435-L451\r\n\r\nI think we can create a docs section dedicated to streaming to explain how this works",
"Closing this one - feel free to reopen if you have more questions"
] |
1,395,050,876
| 5,059
|
Fix typo
|
closed
| 2022-10-03T17:05:25
| 2022-10-03T17:34:40
| 2022-10-03T17:32:27
|
https://github.com/huggingface/datasets/pull/5059
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5059",
"html_url": "https://github.com/huggingface/datasets/pull/5059",
"diff_url": "https://github.com/huggingface/datasets/pull/5059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5059.patch",
"merged_at": "2022-10-03T17:32:27"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,394,962,424
| 5,058
|
Mark CI tests as xfail when 502 error
|
closed
| 2022-10-03T15:53:55
| 2022-10-04T10:03:23
| 2022-10-04T10:01:23
|
https://github.com/huggingface/datasets/pull/5058
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5058",
"html_url": "https://github.com/huggingface/datasets/pull/5058",
"diff_url": "https://github.com/huggingface/datasets/pull/5058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5058.patch",
"merged_at": "2022-10-04T10:01:23"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,394,827,216
| 5,057
|
Support `converters` in `CsvBuilder`
|
closed
| 2022-10-03T14:23:21
| 2022-10-04T11:19:28
| 2022-10-04T11:17:32
|
https://github.com/huggingface/datasets/pull/5057
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5057",
"html_url": "https://github.com/huggingface/datasets/pull/5057",
"diff_url": "https://github.com/huggingface/datasets/pull/5057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5057.patch",
"merged_at": "2022-10-04T11:17:32"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,394,713,173
| 5,056
|
Fix broken URL's (GEM)
|
closed
| 2022-10-03T13:13:22
| 2022-10-04T13:49:00
| 2022-10-04T13:48:59
|
https://github.com/huggingface/datasets/pull/5056
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5056",
"html_url": "https://github.com/huggingface/datasets/pull/5056",
"diff_url": "https://github.com/huggingface/datasets/pull/5056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5056.patch",
"merged_at": null
}
|
manandey
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub."
] |
1,394,503,844
| 5,055
|
Fix backward compatibility for dataset_infos.json
|
closed
| 2022-10-03T10:30:14
| 2022-10-03T13:43:55
| 2022-10-03T13:41:32
|
https://github.com/huggingface/datasets/pull/5055
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5055",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"merged_at": "2022-10-03T13:41:32"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |