Columns: id (int64), number (int64), title (string), state (string), created_at / updated_at / closed_at (timestamp[s]), html_url (string), pull_request (dict), user_login (string), is_pull_request (bool), comments (list)

| id | number | title | state | created_at | updated_at | closed_at | html_url | pull_request | user_login | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1,217,115,691 | 4,236 | Replace data URL in big_patent dataset and support streaming | closed | 2022-04-27T10:01:13 | 2022-06-10T08:10:55 | 2022-05-02T18:21:15 | https://github.com/huggingface/datasets/pull/4236 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4236",
"html_url": "https://github.com/huggingface/datasets/pull/4236",
"diff_url": "https://github.com/huggingface/datasets/pull/4236.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4236.patch",
"merged_at": "2022-05-02T18:21:15"
}
| albertvillanova | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred to upload it it to the \"data\" org namespace, but it is already taken (although not used): might be possible to take it?\r\n\r\nAs an alternative (and to be consistent with previous datasets), I also uploaded the data files to our AWS bucket.\r\n\r\nWe should decide which to use (now and for future datasets) and set it here before merging. We should remove the data files for the non-chosen option.\r\n\r\nCC: @lhoestq @mariosasko @polinaeterna ",
"Would it make sense to make the dataset a community one (so, create an organization for it) and store the script and the data in a single repository? Just as it is for most of the datasets. That way we can also access the data using a relative path inside the repo (that's not the point though). The point is that to me it seems a bit more straightforward to store everything in one place. \r\n\r\nI guess the strong argument against this logic is that in this case the canonical version won't work... But maybe there is some redirecting mechanism I don't know about? :)\r\n\r\nAnyway, I'm in favor of hosting data on the Hub instead of AWS :) ",
"I also think storing everything in one place/single repository is the best option.\r\n\r\n@polinaeterna Canonical datasets also support data files (see the [`red_caps` repo](https://huggingface.co/datasets/red_caps/tree/main) for instance) ",
"Thanks @polinaeterna and @mariosasko for your comments.\r\n\r\nYes, definitely it is much better to have everything in the same repo. \r\n\r\nI'm transferring their data files to the Hub under \"big_patent\" and deleting them from the other repo and AWS."
] |
| 1,216,952,640 | 4,235 | How to load VERY LARGE dataset? | closed | 2022-04-27T07:50:13 | 2023-07-25T15:07:57 | 2023-07-25T15:07:57 | https://github.com/huggingface/datasets/issues/4235 | null | CaoYiqingT | false |
[
"The `Trainer` support `IterableDataset`, not just datasets."
] |
| 1,216,818,846 | 4,234 | Autoeval config | closed | 2022-04-27T05:32:10 | 2022-05-06T13:20:31 | 2022-05-05T18:20:58 | https://github.com/huggingface/datasets/pull/4234 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4234",
"html_url": "https://github.com/huggingface/datasets/pull/4234",
"diff_url": "https://github.com/huggingface/datasets/pull/4234.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4234.patch",
"merged_at": "2022-05-05T18:20:58"
}
| nazneenrajani | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Related to: https://github.com/huggingface/autonlp-backend/issues/414 and https://github.com/huggingface/autonlp-backend/issues/424",
"The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argument 'train-eval-index'\r\n```\r\n\r\nI think you can fix this by updating the `DatasetMetadata` class and implementing an appropriate `validate_train_eval_index()` function\r\n\r\n@lhoestq we are working with an arbitrary set of tags for `autoeval config`. See https://github.com/huggingface/autonlp-backend/issues/414\r\nI need to add a validator function though for the tests to pass. Our set is not well-defined as in the rest https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources. What's a workaround for this?",
"On the question of validating the `train-eval-index` metadata, I think the simplest approach would be to validate that the required fields exist and not worry about their values (which are open-ended).\r\n\r\nFor me, the required fields include:\r\n\r\n* `config`\r\n* `task`\r\n* `task_id`\r\n* `splits` (train / validation / eval)\r\n* `col_mapping`\r\n* `metrics` (checking that each one has `type`, `name`) \r\n\r\nHere I'm using the spec defined in https://github.com/huggingface/autonlp-backend/issues/414 as a guide.\r\n\r\nWDYT @lhoestq ?",
"Makes sense ! Currently the metadata type validator doesn't support subfields - let me open a PR to add it",
"I ended up improving the metadata validation in this PR x)\r\n\r\nIn particular:\r\n- I added support YAML keys with dashes instead of underscores for `train-eval-index`\r\n- I added `train-eval-index` validation with `validate_train_eval_index`. It does nothing fancy, it just checks that it is a list if it exists in the YAML, but feel free to improve it if you want\r\n\r\nLet me know if it sounds good to you ! I think we can improve `validate_train_eval_index` in another PR",
"Come on windows... I didn't do anything advanced...\r\n\r\nAnyway, will try to fix this when I get back home x)",
"> Come on windows... I didn't do anything advanced...\r\n> \r\n> Anyway, will try to fix this when I get back home x)\r\n\r\nHehe, thanks!",
"Thanks, @lhoestq this is great! ",
"Did I just fix it for windows and now it fails on linux ? xD",
"> Did I just fix it for windows and now it fails on linux ? xD\r\n\r\nLooks like the Heisenberg uncertainty principle is at play here - you cannot simultaneously have unit tests passing in both Linux and Windows 😅 ",
"The worst is that the tests pass locally both on my windows and my linux x)",
"Ok fixed it, the issue came from python 3.6 that doesn't return the right `__origin__` for Dict and List types",
"> Alright thanks for adding the first Autoeval config ! :D\r\n\r\nWoohoo! Thank you so much 🤗 ",
"This is cool!"
] |
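The "required fields only" validation proposed in this thread can be sketched in plain Python. The field list follows the comment above, and the helper name mirrors the `validate_train_eval_index` function it mentions; the sample metadata entry is illustrative, not a real dataset's.

```python
# Required top-level keys per train-eval-index entry, per the discussion above.
REQUIRED = {"config", "task", "task_id", "splits", "col_mapping", "metrics"}

def validate_train_eval_index(entries):
    """Check that each entry has the required keys and that every metric
    carries both a `type` and a `name`; values themselves are open-ended."""
    if not isinstance(entries, list):
        return False
    for entry in entries:
        if not REQUIRED <= set(entry):
            return False
        if not all({"type", "name"} <= set(m) for m in entry["metrics"]):
            return False
    return True

ok = validate_train_eval_index([{
    "config": "default",
    "task": "text-classification",
    "task_id": "binary_classification",
    "splits": {"train_split": "train", "eval_split": "test"},
    "col_mapping": {"text": "text", "label": "target"},
    "metrics": [{"type": "accuracy", "name": "Accuracy"}],
}])
print(ok)  # True
```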
| 1,216,665,044 | 4,233 | Autoeval | closed | 2022-04-27T01:32:09 | 2022-04-27T05:29:30 | 2022-04-27T01:32:23 | https://github.com/huggingface/datasets/pull/4233 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4233",
"html_url": "https://github.com/huggingface/datasets/pull/4233",
"diff_url": "https://github.com/huggingface/datasets/pull/4233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4233.patch",
"merged_at": null
}
| nazneenrajani | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4233). All of your documentation changes will be reflected on that endpoint."
] |
| 1,216,659,444 | 4,232 | adding new tag to tasks.json and modified for existing datasets | closed | 2022-04-27T01:21:09 | 2022-05-03T14:23:56 | 2022-05-03T14:16:39 | https://github.com/huggingface/datasets/pull/4232 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4232",
"html_url": "https://github.com/huggingface/datasets/pull/4232",
"diff_url": "https://github.com/huggingface/datasets/pull/4232.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4232.patch",
"merged_at": null
}
| nazneenrajani | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"closing in favor of https://github.com/huggingface/datasets/pull/4244"
] |
| 1,216,651,960 | 4,231 | Fix invalid url to CC-Aligned dataset | closed | 2022-04-27T01:07:01 | 2022-05-16T17:01:13 | 2022-05-16T16:53:12 | https://github.com/huggingface/datasets/pull/4231 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4231",
"html_url": "https://github.com/huggingface/datasets/pull/4231",
"diff_url": "https://github.com/huggingface/datasets/pull/4231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4231.patch",
"merged_at": "2022-05-16T16:53:12"
}
| juntang-zhuang | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,216,643,661 | 4,230 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data? | closed | 2022-04-27T00:53:52 | 2023-07-25T15:10:15 | 2023-07-25T15:10:15 | https://github.com/huggingface/datasets/issues/4230 | null | beyondguo | false |
[
"Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.",
"The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states \"The German data is a collection of articles from the Frankfurter Rundschau. The named entities have been annotated by people of the University of Antwerp. Only the annotations are available here. In order to build these data sets you need access to the ECI Multilingual Text Corpus. It can be ordered from the Linguistic Data Consortium (2003 non-member price: US$ 35.00).\"\r\n\r\nInflation since 2003 has also affected LDC's prices, and today the dataset [LDC94T5](https://catalog.ldc.upenn.edu/LDC94T5) is available under license for $75 a copy. The [license](https://catalog.ldc.upenn.edu/license/eci-slash-mci-user-agreement.pdf) includes a non-distribution condition, which is probably why the data has not turned up openly.\r\n\r\nThe ACL hold copyright of this data; I'll mail them and anyone I can find at ECI to see if they'll open this up now. After all, it worked with Microsoft 3DMM, why not here too, after 28 years? :)\r\n",
"Closing this issue as we are not allowed to share publicly the German subset."
] |
| 1,216,638,968 | 4,229 | new task tag | closed | 2022-04-27T00:47:08 | 2022-04-27T00:48:28 | 2022-04-27T00:48:17 | https://github.com/huggingface/datasets/pull/4229 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4229",
"html_url": "https://github.com/huggingface/datasets/pull/4229",
"diff_url": "https://github.com/huggingface/datasets/pull/4229.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4229.patch",
"merged_at": null
}
| nazneenrajani | true | [] |
| 1,216,523,043 | 4,228 | new task tag | closed | 2022-04-26T22:00:33 | 2022-04-27T00:48:31 | 2022-04-27T00:46:31 | https://github.com/huggingface/datasets/pull/4228 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4228",
"html_url": "https://github.com/huggingface/datasets/pull/4228",
"diff_url": "https://github.com/huggingface/datasets/pull/4228.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4228.patch",
"merged_at": null
}
| nazneenrajani | true | [] |
| 1,216,455,316 | 4,227 | Add f1 metric card, update docstring in py file | closed | 2022-04-26T20:41:03 | 2022-05-03T12:50:23 | 2022-05-03T12:43:33 | https://github.com/huggingface/datasets/pull/4227 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4227",
"html_url": "https://github.com/huggingface/datasets/pull/4227",
"diff_url": "https://github.com/huggingface/datasets/pull/4227.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4227.patch",
"merged_at": "2022-05-03T12:43:33"
}
| emibaylor | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,216,331,073 | 4,226 | Add pearsonr mc, update functionality to match the original docs | closed | 2022-04-26T18:30:46 | 2022-05-03T17:09:24 | 2022-05-03T17:02:28 | https://github.com/huggingface/datasets/pull/4226 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4226",
"html_url": "https://github.com/huggingface/datasets/pull/4226",
"diff_url": "https://github.com/huggingface/datasets/pull/4226.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4226.patch",
"merged_at": "2022-05-03T17:02:28"
}
| emibaylor | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you @lhoestq!! :hugs: "
] |
| 1,216,213,464 | 4,225 | autoeval config | closed | 2022-04-26T16:38:34 | 2022-04-27T00:48:31 | 2022-04-26T22:00:26 | https://github.com/huggingface/datasets/pull/4225 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4225",
"html_url": "https://github.com/huggingface/datasets/pull/4225",
"diff_url": "https://github.com/huggingface/datasets/pull/4225.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4225.patch",
"merged_at": null
}
| nazneenrajani | true | [] |
| 1,216,209,667 | 4,224 | autoeval config | closed | 2022-04-26T16:35:19 | 2022-04-26T16:36:45 | 2022-04-26T16:36:45 | https://github.com/huggingface/datasets/pull/4224 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4224",
"html_url": "https://github.com/huggingface/datasets/pull/4224",
"diff_url": "https://github.com/huggingface/datasets/pull/4224.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4224.patch",
"merged_at": null
}
| nazneenrajani | true | [] |
| 1,216,107,082 | 4,223 | Add Accuracy Metric Card | closed | 2022-04-26T15:10:46 | 2022-05-03T14:27:45 | 2022-05-03T14:20:47 | https://github.com/huggingface/datasets/pull/4223 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4223",
"html_url": "https://github.com/huggingface/datasets/pull/4223",
"diff_url": "https://github.com/huggingface/datasets/pull/4223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4223.patch",
"merged_at": "2022-05-03T14:20:47"
}
| emibaylor | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,216,056,439 | 4,222 | Fix description links in dataset cards | closed | 2022-04-26T14:36:25 | 2022-05-06T08:38:38 | 2022-04-26T16:52:29 | https://github.com/huggingface/datasets/pull/4222 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4222",
"html_url": "https://github.com/huggingface/datasets/pull/4222",
"diff_url": "https://github.com/huggingface/datasets/pull/4222.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4222.patch",
"merged_at": "2022-04-26T16:52:29"
}
| albertvillanova | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Non passing tests are due to other pre-existing errors in dataset cards: not related to this PR."
] |
| 1,215,911,182 | 4,221 | Dictionary Feature | closed | 2022-04-26T12:50:18 | 2022-04-29T14:52:19 | 2022-04-28T17:04:58 | https://github.com/huggingface/datasets/issues/4221 | null | jordiae | false |
[
"Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n],\r\n```\r\n\r\nFeel free to re-open this issue if that does not work for your use case.",
"> Hi @jordiae,\r\n> \r\n> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n> \r\n> ```python\r\n> \"list_of_dict_feature\": [\r\n> {\r\n> \"key1_in_dict\": datasets.Value(\"string\"),\r\n> \"key2_in_dict\": datasets.Value(\"int32\"),\r\n> ...\r\n> }\r\n> ],\r\n> ```\r\n> \r\n> Feel free to re-open this issue if that does not work for your use case.\r\n\r\nThank you"
] |
| 1,215,225,802 | 4,220 | Altered faiss installation comment | closed | 2022-04-26T01:20:43 | 2022-05-09T17:29:34 | 2022-05-09T17:22:09 | https://github.com/huggingface/datasets/pull/4220 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4220",
"html_url": "https://github.com/huggingface/datasets/pull/4220",
"diff_url": "https://github.com/huggingface/datasets/pull/4220.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4220.patch",
"merged_at": "2022-05-09T17:22:09"
}
| vishalsrao | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! Can you explain why this change is needed ?",
"Facebook recommends installing FAISS using conda (https://github.com/facebookresearch/faiss/blob/main/INSTALL.md). pip does not seem to have the latest version of FAISS. The latest version of faiss is 1.7.2 (https://anaconda.org/conda-forge/faiss), but the latest one available through pip is 1.5.3 (https://pypi.org/project/faiss/). "
] |
| 1,214,934,025 | 4,219 | Add F1 Metric Card | closed | 2022-04-25T19:14:56 | 2022-04-26T20:44:18 | 2022-04-26T20:37:46 | https://github.com/huggingface/datasets/pull/4219 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4219",
"html_url": "https://github.com/huggingface/datasets/pull/4219",
"diff_url": "https://github.com/huggingface/datasets/pull/4219.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4219.patch",
"merged_at": null
}
| emibaylor | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,214,748,226 | 4,218 | Make code for image downloading from image urls cacheable | closed | 2022-04-25T16:17:59 | 2022-04-26T17:00:24 | 2022-04-26T13:38:26 | https://github.com/huggingface/datasets/pull/4218 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4218",
"html_url": "https://github.com/huggingface/datasets/pull/4218",
"diff_url": "https://github.com/huggingface/datasets/pull/4218.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4218.patch",
"merged_at": "2022-04-26T13:38:26"
}
| mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,214,688,141 | 4,217 | Big_Patent dataset broken | closed | 2022-04-25T15:31:45 | 2022-05-26T06:29:43 | 2022-05-02T18:21:15 | https://github.com/huggingface/datasets/issues/4217 | null | Matthew-Larsen | false |
[
"Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://github.com/huggingface/datasets/issues/4075#issuecomment-1087362551):\r\n\r\n> PS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.\r\n\r\n",
"We should find out if the dataset license allows redistribution and contact the data owners to propose them to host their data on our Hub.",
"The data owners have agreed on hosting their data on the Hub."
] |
| 1,214,614,029 | 4,216 | Avoid recursion error in map if example is returned as dict value | closed | 2022-04-25T14:40:32 | 2022-05-04T17:20:06 | 2022-05-04T17:12:52 | https://github.com/huggingface/datasets/pull/4216 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4216",
"html_url": "https://github.com/huggingface/datasets/pull/4216",
"diff_url": "https://github.com/huggingface/datasets/pull/4216.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4216.patch",
"merged_at": "2022-05-04T17:12:52"
}
| mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,214,579,162 | 4,215 | Add `drop_last_batch` to `IterableDataset.map` | closed | 2022-04-25T14:15:19 | 2022-05-03T15:56:07 | 2022-05-03T15:48:54 | https://github.com/huggingface/datasets/pull/4215 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4215",
"html_url": "https://github.com/huggingface/datasets/pull/4215",
"diff_url": "https://github.com/huggingface/datasets/pull/4215.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4215.patch",
"merged_at": "2022-05-03T15:48:54"
}
| mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,214,572,430 | 4,214 | Skip checksum computation in Imagefolder by default | closed | 2022-04-25T14:10:41 | 2022-05-03T15:28:32 | 2022-05-03T15:21:29 | https://github.com/huggingface/datasets/pull/4214 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4214",
"html_url": "https://github.com/huggingface/datasets/pull/4214",
"diff_url": "https://github.com/huggingface/datasets/pull/4214.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4214.patch",
"merged_at": "2022-05-03T15:21:29"
}
| mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,214,510,010 | 4,213 | ETT time series dataset | closed | 2022-04-25T13:26:18 | 2022-05-05T12:19:21 | 2022-05-05T12:10:35 | https://github.com/huggingface/datasets/pull/4213 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4213",
"html_url": "https://github.com/huggingface/datasets/pull/4213",
"diff_url": "https://github.com/huggingface/datasets/pull/4213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4213.patch",
"merged_at": "2022-05-05T12:10:35"
}
| kashif | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you!\r\n"
] |
| 1,214,498,582 | 4,212 | [Common Voice] Make sure bytes are correctly deleted if `path` exists | closed | 2022-04-25T13:18:26 | 2022-04-26T22:54:28 | 2022-04-26T22:48:27 | https://github.com/huggingface/datasets/pull/4212 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4212",
"html_url": "https://github.com/huggingface/datasets/pull/4212",
"diff_url": "https://github.com/huggingface/datasets/pull/4212.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4212.patch",
"merged_at": "2022-04-26T22:48:27"
}
| patrickvonplaten | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cool that you noticed that we store unnecessary bytes again :D "
] |
| 1,214,361,837 | 4,211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | closed | 2022-04-25T11:22:54 | 2023-04-06T19:25:50 | 2022-05-20T15:15:30 | https://github.com/huggingface/datasets/issues/4211 | null | pietrolesci | false |
[
"Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nHowever, for the moment `push_to_hub` does not support specifying different configurations. IMHO, we should implement this.",
"Hi @albertvillanova,\r\n\r\nThanks a lot for your reply! I got it now. The strange thing for me was to have it correctly working (i.e., DatasetDict with different features in some datasets) locally and not on the Hub. It would be great to have configuration supported by `push_to_hub`. Personally, this latter functionality allowed me to iterate rather quickly on dataset curation.\r\n\r\nAgain, thanks for your time @albertvillanova!\r\n\r\nBest,\r\nPietro",
"Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get the name collision - `DatasetDict.data` vs. `UserDict.data`. This makes me think we should rename the `data` attribute of `DatasetDict`/`Dataset` for easier dict subclassing (would also simplify https://github.com/huggingface/datasets/pull/3997) and to follow good Python practices. Another option is to have a custom `UserDict` class in `py_utils`, but it can be hard to keep this class consistent with the built-in `UserDict`. \r\n\r\n@albertvillanova @lhoestq wdyt?",
"I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub` right ? It is the only function right now that requires the underlying datasets to be splits (e.g. train/test) and have the same features.\r\n\r\nNote that later you will be able to push datasets with different features as different dataset **configurations** (similarly to the [GLUE subsets](https://huggingface.co/datasets/glue) for example). We will work on this soon",
"Hi @lhoestq,\r\n\r\nReturning to this thread to ask whether the possibility to create `DatasetDict` with different configurations will be supported in the future.\r\n\r\nBest,\r\nPietro",
"DatasetDict is likely to always require the datasets to have the same columns and types, while different configurations may have different columns and types.\r\n\r\nWhy would you like to see that ?\r\nIf it's related to push_to_hub, we plan to allow pushing several configs, but not using DatasetDict",
"Hi @lhoestq and @pietrolesci,\r\n\r\nI have been curious about this question as well. I don't have experience working with different configurations, but I can give a bit more detail on the work flow that I have been using with `Dataset_dict`.\r\n\r\nAs @pietrolesci mentions, I have been using `push_to_hub` to quickly iterate on dataset curation for different ML experiments - locally I create a set of dataset splits e.g. `train/val/test/inference`, then convert them to `HF_Datasets` and finally a to `Dataset_Dict` to `push_to_hub`. Where I have run into issues is when I want to include different metadata for different splits. For example, I have situations where I only have meta-data for one of the splits (e.g. test) or situations where I am working with `inference` data that does not have labels. Currently I use a rather hacky work around by adding \"dummy\" columns for missing columns to avoid the error:\r\n\r\n```\r\nValueError: All datasets in `DatasetDict` should have the same features\r\n```\r\n\r\nI am curious why `DatasetDict` will likely not support this functionality? I don't know much about working with different configurations, but allowing for different columns between datasets / splits would be a very helpful use-case for me. Are there any docs for using different configuration OR a more info about incorporating it with `push_to_hub`.\r\n\r\nBest wishes,\r\nJonathan\r\n\r\n",
"+1",
"> I am curious why DatasetDict will likely not support this functionality?\r\n\r\nThere's a possibility we may merge the Dataset and DatasetDict classes. The DatasetDict purpose was to define a way to get the train/test splits of a dataset.\r\n\r\nsee the discussions at https://github.com/huggingface/datasets/issues/5189\r\n\r\n> Are there any docs for using different configuration OR a more info about incorporating it with push_to_hub.\r\n\r\nThere's a PR open to allow to upload a dataset with a certain configuration name. Then later you can reload this specific configuration using `load_dataset(ds_name, config_name)`\r\n\r\nsee the PR at https://github.com/huggingface/datasets/pull/5213",
"Hi, regarding the following information:\r\n\r\n> Please note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n> \r\n> To handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nAltough this is often implied (such as how else would `DatasetDict` be able to process multiple splits in the same way?), I would expect it to be written somewhere in the docs plainly and maybe even in bold. Also I would expect to see it in multiple places such as:\r\n\r\n- in docstring of `DatasetDict`\r\n- in nlp/image/audio guides on how to create a dataset\r\n- [in conceptual guide on how to create a loading script](https://huggingface.co/docs/datasets/main/en/about_dataset_load)\r\n\r\n\r\nI think this addition would benefit the docs, especially when you guide a newbie (such as me) through the process of creating a dataset. As I said, you somehow suspect that this is in fact the case, but without reading it in the docs you cannot be sure."
] |
| 1,214,089,130 | 4,210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | closed | 2022-04-25T07:28:42 | 2022-05-31T12:16:31 | 2022-05-31T12:16:31 | https://github.com/huggingface/datasets/issues/4210 | null | loretoparisi | false |
[
"Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\
"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n)\r\n# You can make this part faster with num_proc=<some int>\r\nsentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is 
not None else None, features=features)\r\n```\r\n\r\n@lhoestq IIRC, I suggested adding `cast_to_storage` to `ClassLabel` + `table_cast` to the packaged loaders if the `ClassLabel`/`Image`/`Audio` type is present in `features` to avoid this kind of error, but your concern was speed. IMO shouldn't be a problem if we do `table_cast` only when these features are present.",
"I agree packaged loaders should support `ClassLabel` feature without throwing an error.",
"@albertvillanova @mariosasko thank you, with that change now I get\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-9-eeb68eeb9bec>](https://localhost:8080/#) in <module>()\r\n 11 )\r\n 12 # You can make this part faster with num_proc=<some int>\r\n---> 13 sentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n 14 sentences = sentences.shuffle()\r\n\r\n8 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in validate_function_output(processed_inputs, indices)\r\n 2193 if processed_inputs is not None and not isinstance(processed_inputs, (Mapping, pa.Table)):\r\n 2194 raise TypeError(\r\n-> 2195 f\"Provided `function` which is applied to all elements of table returns a variable of type {type(processed_inputs)}. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\"\r\n 2196 )\r\n 2197 elif isinstance(indices, list) and isinstance(processed_inputs, Mapping):\r\n\r\nTypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'int'>. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\r\n```\r\n\r\nthe error is raised by [this](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L2221)\r\n\r\n```\r\n[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in validate_function_output(processed_inputs, indices)\r\n```",
"@mariosasko changed it like\r\n\r\n```python\r\nsentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n```\r\n\r\nto avoid the above errorr.",
"Any update on this? Is this correct ?\r\n> @mariosasko changed it like\r\n> \r\n> ```python\r\n> sentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n> ```\r\n> \r\n> to avoid the above errorr.\r\n\r\n"
] |
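The thread above hinges on two details of `Dataset.map` with a `ClassLabel` feature: the mapped function must return a dict (returning a bare int raised the quoted `TypeError`), and missing labels should pass through as `None`. Below is a library-free sketch of that pattern; `TinyClassLabel` is a hypothetical stand-in for `datasets.ClassLabel`, added only to keep the example self-contained and offline.

```python
# Library-free sketch of the pattern from the thread above: turning string
# labels into integer ids while letting missing labels pass through as None.
# TinyClassLabel is a hypothetical stand-in for datasets.ClassLabel.

class TinyClassLabel:
    def __init__(self, names):
        self.names = list(names)
        self._str2int = {name: i for i, name in enumerate(self.names)}

    def str2int(self, value):
        return self._str2int[value]

labels = TinyClassLabel(names=["deu", "eng", "fra"])

def encode(example):
    # Returning a dict mirrors what Dataset.map expects; returning a bare int
    # is exactly what triggered the TypeError quoted in the thread.
    label = example["label"]
    return {"label": labels.str2int(label) if label is not None else None,
            "text": example["text"]}

rows = [{"label": "eng", "text": "hello"}, {"label": None, "text": "???"}]
encoded = [encode(r) for r in rows]
```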
1,213,716,426
| 4,208
|
Add CMU MoCap Dataset
|
closed
| 2022-04-24T17:31:08
| 2022-10-03T09:38:24
| 2022-10-03T09:36:30
|
https://github.com/huggingface/datasets/pull/4208
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4208",
"html_url": "https://github.com/huggingface/datasets/pull/4208",
"diff_url": "https://github.com/huggingface/datasets/pull/4208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4208.patch",
"merged_at": null
}
|
dnaveenr
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple download links, can we combine and host these links on the Hub since the dataset is free to use ?",
"\"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\nCan we combine and host these links on the Hub since the dataset is free to use ?",
"> \"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\n\r\nWe store downloaded data under `~/.cache/huggingface/datasets/downloads` (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".",
 We store downloaded data">
"> We store downloaded data under ~/.cache/huggingface/datasets/downloads (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".\r\n\r\nYes, the filesystem won't be cluttered, but the problem is that processing the dataset becomes cumbersome. For example, the c3d format has 5 part-downloads, so the folders will be as follows: \r\n```\r\n['~/.cache/huggingface/datasets/downloads/extracted/0e6bf028f490bf18c23ce572d1437c4ef32a74f630e33c26a806250d35cfcdd1', '~/.cache/huggingface/datasets/downloads/extracted/1b44fc5c7a6e031c904545422d449fd964f8ee795b9d1dcb0b6a76d03b50ebe6', '~/.cache/huggingface/datasets/downloads/extracted/137595188e96187c24ce1aa5c78200c7f78816fbd9d6c62354c01b3e6ec550c7', '~/.cache/huggingface/datasets/downloads/extracted/6c0c893e435f36fd79aa0f199f58fe16f01985f039644a7cb094a8c43a15ffd4', '~/.cache/huggingface/datasets/downloads/extracted/45e4703354cbc975e6add66f1b17b716c882b56f44575b033c5926aa5fcfb17f']\r\n```\r\nEach of these folders has a given set of subjects, so we'll need to write extra code to fetch data from each of these folders, and the mpg format has 12 part-downloads, which will lead to 12 folders each having a certain set of subjects, so it is cumbersome to process them.",
"I have added all the changes that were suggested. We just need to handle the multi-part download for c3d and mpg formats. Easiest way would be to have just one zip for these formats.",
"But we can handle this with a simple mapping that stores the id ranges (for each config), no? And an actual file path is not important during processing.",
"I have added code to handle c3d, mpg formats as well. The data for the mpg format seems incomplete as it contains only 53 rows. I have added a note regarding this in the Data Splits section.",
"The real data test works fine and dummy_data test work fine. There were few missing files which was causing issues, I have fixed it now.\r\n",
"- Reduced the dummy_data size.\r\n- Added sample dataset preprocessing code, it is not complete though.\r\n- Added all changes suggested.\r\n\r\nLet me know if anything else is required. Thank you. :)",
"Thanks for your contribution, @dnaveenr.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
1,213,604,615
| 4,207
|
[Minor edit] Fix typo in class name
|
closed
| 2022-04-24T09:49:37
| 2022-05-05T13:17:47
| 2022-05-05T13:17:47
|
https://github.com/huggingface/datasets/pull/4207
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4207",
"html_url": "https://github.com/huggingface/datasets/pull/4207",
"diff_url": "https://github.com/huggingface/datasets/pull/4207.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4207.patch",
"merged_at": "2022-05-05T13:17:47"
}
|
cakiki
| true
|
[] |
1,212,715,581
| 4,206
|
Add Nerval Metric
|
closed
| 2022-04-22T19:45:00
| 2023-07-11T09:34:56
| 2023-07-11T09:34:55
|
https://github.com/huggingface/datasets/pull/4206
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4206",
"html_url": "https://github.com/huggingface/datasets/pull/4206",
"diff_url": "https://github.com/huggingface/datasets/pull/4206.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4206.patch",
"merged_at": null
}
|
maridda
| true
|
[
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] |
1,212,466,138
| 4,205
|
Fix `convert_file_size_to_int` for kilobits and megabits
|
closed
| 2022-04-22T14:56:21
| 2022-05-03T15:28:42
| 2022-05-03T15:21:48
|
https://github.com/huggingface/datasets/pull/4205
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4205",
"html_url": "https://github.com/huggingface/datasets/pull/4205",
"diff_url": "https://github.com/huggingface/datasets/pull/4205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4205.patch",
"merged_at": "2022-05-03T15:21:48"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,212,431,764
| 4,204
|
Add Recall Metric Card
|
closed
| 2022-04-22T14:24:26
| 2022-05-03T13:23:23
| 2022-05-03T13:16:24
|
https://github.com/huggingface/datasets/pull/4204
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4204",
"html_url": "https://github.com/huggingface/datasets/pull/4204",
"diff_url": "https://github.com/huggingface/datasets/pull/4204.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4204.patch",
"merged_at": "2022-05-03T13:16:24"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks good to me! "
] |
1,212,431,067
| 4,203
|
Add Precision Metric Card
|
closed
| 2022-04-22T14:23:48
| 2022-05-03T14:23:40
| 2022-05-03T14:16:46
|
https://github.com/huggingface/datasets/pull/4203
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4203",
"html_url": "https://github.com/huggingface/datasets/pull/4203",
"diff_url": "https://github.com/huggingface/datasets/pull/4203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4203.patch",
"merged_at": "2022-05-03T14:16:45"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,212,326,288
| 4,202
|
Fix some type annotation in doc
|
closed
| 2022-04-22T12:53:31
| 2022-04-22T15:03:00
| 2022-04-22T14:56:43
|
https://github.com/huggingface/datasets/pull/4202
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4202",
"html_url": "https://github.com/huggingface/datasets/pull/4202",
"diff_url": "https://github.com/huggingface/datasets/pull/4202.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4202.patch",
"merged_at": "2022-04-22T14:56:43"
}
|
thomasw21
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,212,086,420
| 4,201
|
Update GH template for dataset viewer issues
|
closed
| 2022-04-22T09:34:44
| 2022-05-06T08:38:43
| 2022-04-26T08:45:55
|
https://github.com/huggingface/datasets/pull/4201
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4201",
"html_url": "https://github.com/huggingface/datasets/pull/4201",
"diff_url": "https://github.com/huggingface/datasets/pull/4201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4201.patch",
"merged_at": "2022-04-26T08:45:55"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"You can see rendering at: https://github.com/huggingface/datasets/blob/6b48fedbdafe12a42c7b6edcecc32820af1a4822/.github/ISSUE_TEMPLATE/dataset-viewer.yml"
] |
1,211,980,110
| 4,200
|
Add to docs how to load from local script
|
closed
| 2022-04-22T08:08:25
| 2022-05-06T08:39:25
| 2022-04-23T05:47:25
|
https://github.com/huggingface/datasets/pull/4200
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4200",
"html_url": "https://github.com/huggingface/datasets/pull/4200",
"diff_url": "https://github.com/huggingface/datasets/pull/4200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4200.patch",
"merged_at": "2022-04-23T05:47:24"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,211,953,308
| 4,199
|
Cache miss during reload for datasets using image fetch utilities through map
|
closed
| 2022-04-22T07:47:08
| 2022-04-26T17:00:32
| 2022-04-26T13:38:26
|
https://github.com/huggingface/datasets/issues/4199
| null |
apsdehal
| false
|
[
"Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache",
"Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nwith \r\n```python\r\nUSER_AGENT = get_datasets_user_agent()\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": USER_AGENT},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nfixes the issue?",
"Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? If they already exists, that is also great, please point me to those. I saw that `http_get` does exist.",
"You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https://github.com/huggingface/datasets/pull/4100#issuecomment-1097994003.",
"Makes sense. But, I think as the number of image datasets as grow, more people are copying pasting original code from docs to work as it is while we make fixes to them later. I think we do need a central place for these to avoid that confusion as well as more easier access to image datasets. Should we restart that discussion, possible on slack?"
] |
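The fix discussed above works because the user-agent is computed once at module level, so the mapped function hashes identically across sessions instead of causing a cache miss. Below is a minimal, offline sketch of that retry/fetch pattern; the injectable `open_url` argument and the fake flaky opener are assumptions added here purely for testability (the real code calls `urllib.request.urlopen`).

```python
import io

# Sketch of the suggested fix: compute the user-agent once at module level so
# the mapped function is deterministic across sessions. The constant value
# below is a placeholder, not the real get_datasets_user_agent() output.
USER_AGENT = "datasets/example (hypothetical constant)"

def fetch_single_image(image_url, open_url, retries=0):
    image = None
    for _ in range(retries + 1):
        try:
            image = open_url(image_url, headers={"user-agent": USER_AGENT}).read()
            break
        except Exception:
            image = None
    return image

# Fake opener: fails once, then succeeds -- exercises the retry path offline.
calls = {"n": 0}

def flaky_open(url, headers):
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("transient network error")
    return io.BytesIO(b"fake-image-bytes")

result = fetch_single_image("https://example.com/img.jpg", flaky_open, retries=1)
```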
1,211,456,559
| 4,198
|
There is no dataset
|
closed
| 2022-04-21T19:19:26
| 2022-05-03T11:29:05
| 2022-04-22T06:12:25
|
https://github.com/huggingface/datasets/issues/4198
| null |
wilfoderek
| false
|
[] |
1,211,342,558
| 4,197
|
Add remove_columns=True
|
closed
| 2022-04-21T17:28:13
| 2023-09-24T10:02:32
| 2022-04-22T14:45:30
|
https://github.com/huggingface/datasets/pull/4197
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4197",
"html_url": "https://github.com/huggingface/datasets/pull/4197",
"diff_url": "https://github.com/huggingface/datasets/pull/4197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4197.patch",
"merged_at": null
}
|
thomasw21
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Any reason why we can't just do `[inputs.copy()]` in this line for in-place operations to not have effects anymore:\r\nhttps://github.com/huggingface/datasets/blob/bf432011ff9155a5bc16c03956bc63e514baf80d/src/datasets/arrow_dataset.py#L2232.\r\n\r\n(in the `batched` case, we can also copy the inputs' values (list objects) to ignore in-place modifications to the inputs' columns)\r\n\r\nI think `remove_columns=True` has no meaning, so I'm not a fan of this change.",
"@mariosasko copy does have a cost associated with it ... and plus you'll have to consider `deepcopy` Imagine columnds that are list of list of list of list .... Though I have to agree that `remove_columns=True` doesn't make sense (but, IMO, neither does it in its current use-case as it should refer to `input_columns`) ",
"Okay closing this PR for the following reasons:\r\n - `remove_columns=True` was expected to keep the `.update`-like operator for `.map`. I initially thought it would be a good way to ignore function side effects and only keep output of that function (cf. PR description).\r\n - expected `remove_columns=True` is a bad API according to @mariosasko and introduces unecessary changes for little gain (strictly equivalent to `remove_columns=dset.column_names`)"
] |
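The `inputs.copy()` idea raised in the thread above can be sketched offline: if a `map()`-style function is handed the original example dict, in-place mutations leak back into the stored rows; handing it a shallow copy keeps them intact. The two tiny map helpers below are illustrative stand-ins, not the real `datasets` internals (and, as noted above, the batched case would also need the column lists copied, since a shallow dict copy shares them).

```python
# Illustrative sketch: what changes when map() copies the inputs first.

def naive_map(rows, fn):
    return [fn(row) or row for row in rows]        # fn sees the real dict

def copying_map(rows, fn):
    return [fn(dict(row)) or row for row in rows]  # fn sees a shallow copy

def mutate(example):
    example["text"] = example["text"].upper()      # in-place side effect
    return None                                    # returns nothing useful

stored = [{"text": "hello"}]
naive_map(stored, mutate)
naive_mutated = stored[0]["text"]   # the stored row was changed in place

stored2 = [{"text": "hello"}]
copying_map(stored2, mutate)
safe = stored2[0]["text"]           # the stored row is untouched
```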
1,211,271,261
| 4,196
|
Embed image and audio files in `save_to_disk`
|
closed
| 2022-04-21T16:25:18
| 2022-12-14T18:22:59
| 2022-12-14T18:22:59
|
https://github.com/huggingface/datasets/issues/4196
| null |
lhoestq
| false
|
[] |
1,210,958,602
| 4,194
|
Support lists of multi-dimensional numpy arrays
|
closed
| 2022-04-21T12:22:26
| 2022-05-12T15:16:34
| 2022-05-12T15:08:40
|
https://github.com/huggingface/datasets/pull/4194
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4194",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"merged_at": "2022-05-12T15:08:40"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,210,734,701
| 4,193
|
Document save_to_disk and push_to_hub on images and audio files
|
closed
| 2022-04-21T09:04:36
| 2022-04-22T09:55:55
| 2022-04-22T09:49:31
|
https://github.com/huggingface/datasets/pull/4193
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4193",
"html_url": "https://github.com/huggingface/datasets/pull/4193",
"diff_url": "https://github.com/huggingface/datasets/pull/4193.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4193.patch",
"merged_at": "2022-04-22T09:49:31"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch, I updated the docstrings"
] |
1,210,692,554
| 4,192
|
load_dataset can't load local dataset,Unable to find ...
|
closed
| 2022-04-21T08:28:58
| 2022-04-25T16:51:57
| 2022-04-22T07:39:53
|
https://github.com/huggingface/datasets/issues/4192
| null |
ahf876828330
| false
|
[
"Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?",
"Hi @ahf876828330, \r\n\r\nAs @stevhliu pointed out, the proper way to load a dataset is not trying to load its metadata file.\r\n\r\nIn your case, as the dataset script is local, you should better point to your local loading script:\r\n```python\r\ndataset = load_dataset(\"dataset/opus_books.py\")\r\n```\r\n\r\nPlease, feel free to re-open this issue if the previous code snippet does not work for you.",
 Hi! :)\r\n>">
"> Hi! :)\r\n> \r\n> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?\r\n\r\nYes, you are right! So if I have a metadata file locally, how can I turn it into a dataset that can be used by the load_dataset() function? Are there some examples?",
"The metadata file isn't a dataset so you can't turn it into one. You should try @albertvillanova's code snippet above (now merged in the docs [here](https://huggingface.co/docs/datasets/master/en/loading#local-loading-script)), which uses your local loading script `opus_books.py` to:\r\n\r\n1. Download the actual dataset. \r\n2. Once the dataset is downloaded, `load_dataset` will load it for you."
] |
1,210,028,090
| 4,191
|
feat: create an `Array3D` column from a list of arrays of dimension 2
|
closed
| 2022-04-20T18:04:32
| 2022-05-12T15:08:40
| 2022-05-12T15:08:40
|
https://github.com/huggingface/datasets/issues/4191
| null |
SaulLu
| false
|
[
"Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `data_map` are arrays of dimension 2\r\n - the outer list in `prepare_dataset_2D` should not be taken into account in the dimension counting, as it is used because in `map` you pass `batched=True`\r\n\r\nNote that for the 3D alternatives you mention:\r\n- In `prepare_dataset_3D_ter`, you create an `Array3D` from arrays of dimension 3:\r\n - the array `data_map[index][np.newaxis, :, :]` has dimension 3\r\n - the outer list in `prepare_dataset_3D_ter` is the one used by `batched=True`\r\n- In `prepare_dataset_3D_bis`, you create an `Array3D` from a list of list of lists:\r\n - the value of `data_map[index].tolist()` is a list of lists\r\n - it is enclosed by another list `[data_map[index].tolist()]`, thus giving a list of list of lists\r\n - the outer list is the one used by `batched=True`\r\n\r\nTherefore, if I understand correctly, your request would be to be able to create an `Array3D` from a list of an array of dimension 2:\r\n- In `prepare_dataset_3D`, `data_map[index]` is an array of dimension 2\r\n- it is enclosed by a list `[data_map[index]]`, thus giving a list of an array of dimension 2\r\n- the outer list is the one used by `batched=True`\r\n\r\nPlease, feel free to tell me if I did not understand you correctly.",
"Hi @albertvillanova ,\r\n\r\nIndeed my message was confusing and you guessed right :smile: : I think would be interesting to be able to create an Array3D from a list of an array of dimension 2. \r\n\r\nFor the 2D case I should have given as a \"similar\" example:\r\n```python\r\n\r\ndata_map_1D = {\r\n 1: np.array([0.2, 0.4]),\r\n 2: np.array([0.1, 0.4]),\r\n}\r\n\r\ndef prepare_dataset_2D(batch):\r\n batch[\"pixel_values\"] = [[data_map_1D[index]] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_2D = ds.map(\r\n prepare_dataset_2D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(1, 2), dtype=\"float32\")})\r\n)\r\n```"
] |
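What the feature request above boils down to can be sketched with plain numpy: a one-element list wrapping an array of dimension 2 should be accepted as a 3-dimensional value, exactly as `np.array` treats it below. This is a shape illustration only, not the `Array3D` implementation itself.

```python
import numpy as np

# The value shapes from the thread, reproduced with numpy.
arr_2d = np.array([[0.2, 0.4], [0.1, 0.4]], dtype="float32")

# Desired input form: [arr_2d] -> shape (1, 2, 2), without calling .tolist()
# or adding an axis by hand (the two workarounds shown in the thread).
as_3d = np.array([arr_2d])

# The existing workarounds produce the same shape and values:
via_newaxis = arr_2d[np.newaxis, :, :]
via_tolist = np.array([arr_2d.tolist()], dtype="float32")
```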
1,209,901,677
| 4,190
|
Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
|
closed
| 2022-04-20T16:08:01
| 2022-04-22T13:58:25
| 2022-04-22T13:52:00
|
https://github.com/huggingface/datasets/pull/4190
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4190",
"html_url": "https://github.com/huggingface/datasets/pull/4190",
"diff_url": "https://github.com/huggingface/datasets/pull/4190.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4190.patch",
"merged_at": "2022-04-22T13:52:00"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,209,881,351
| 4,189
|
Document how to use FAISS index for special operations
|
closed
| 2022-04-20T15:51:56
| 2022-05-06T08:43:10
| 2022-05-06T08:35:52
|
https://github.com/huggingface/datasets/pull/4189
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4189",
"html_url": "https://github.com/huggingface/datasets/pull/4189",
"diff_url": "https://github.com/huggingface/datasets/pull/4189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4189.patch",
"merged_at": "2022-05-06T08:35:52"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,209,740,957
| 4,188
|
Support streaming cnn_dailymail dataset
|
closed
| 2022-04-20T14:04:36
| 2022-05-11T13:39:06
| 2022-04-20T15:52:49
|
https://github.com/huggingface/datasets/pull/4188
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4188",
"html_url": "https://github.com/huggingface/datasets/pull/4188",
"diff_url": "https://github.com/huggingface/datasets/pull/4188.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4188.patch",
"merged_at": "2022-04-20T15:52:49"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Did you run the `datasets-cli` command before merging to make sure you generate all the examples ?"
] |
1,209,721,532
| 4,187
|
Don't duplicate data when encoding audio or image
|
closed
| 2022-04-20T13:50:37
| 2022-04-21T09:17:00
| 2022-04-21T09:10:47
|
https://github.com/huggingface/datasets/pull/4187
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4187",
"html_url": "https://github.com/huggingface/datasets/pull/4187",
"diff_url": "https://github.com/huggingface/datasets/pull/4187.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4187.patch",
"merged_at": "2022-04-21T09:10:47"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for audio\" - why would that be?\r\n\r\nThe `path` would always point to some location in the `cache_dir`? I think this can be problematic. I would have expected that after I did `dataset.save_to_disk(...)` that I can remove the cache dir. But maybe just because I'm not familiar with HF. Or maybe the docs can be improved to clarify this.\r\n",
"We could always load every data file into `bytes` and save it this way the audio as bytes in `arrow` format, but the problem then would be that it makes the `file` column useless, *i.e.* people cannot inspect the audio file locally anymore or else they would need to first save bytes as a file which is not evident. This either breaks backwards compatibility or forces the user to stored 2x the required size locally. There was a longer discussion here: https://github.com/huggingface/datasets/issues/3663\r\n\r\nIt's a good argument though that `dataset.save_to_disk(...)` should save everything that is needed to the disk and should be independent of other folders, but I do think the arguments of #3663 to not break backwards compatibility and to allow people to inspect the downloaded audio files locally are a bit more important here. \r\n\r\nBut maybe, we could add a flag, `save_files_as_bytes` or `make_independent`, `make_self_contained` or a better name to `save_to_disk(...)` and `push_to_hub(...)` that would allow to make the resulting folder completely independent. ",
"What do you think @mariosasko @lhoestq @polinaeterna @anton-l ?\r\n",
"For context: you can either store the path to local images or audio files, or the bytes of those files.\r\n\r\nIf your images and audio files are local files, then the arrow file from `save_to_disk` will store paths to these files.\r\nIf you want to include the bytes or your images or audio files instead, you must `read()` those files first.\r\nThis can be done by storing the \"bytes\" instead of the \"path\" of the images or audio files.\r\n\r\nOn the other hand, the resulting Parquet files from `push_to_hub` are self-contained, so that anyone can reload the dataset from the Hub. If your dataset contains image or audio data, the Parquet files will store the bytes of your images or audio files.\r\n\r\nFor now I just updated the documentation: https://github.com/huggingface/datasets/pull/4193. Maybe we can also embed the image and audio bytes in `save_to_disk` when we implement sharding, so that is can be done as efficiently as `push_to_hub`.\r\n\r\nAnyway, merging this one :)"
] |
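The "path vs. bytes" trade-off discussed above can be sketched offline: a record can point at a local audio file (easy to inspect, not portable) or embed the file's bytes (self-contained, but roughly doubling local storage if the file is kept too). The `{"path", "bytes"}` record shape and field names below are illustrative, not the exact internal representation.

```python
import os
import tempfile

# A fake local "audio file" to stand in for a real flac/wav on disk.
payload = b"RIFF....fake-audio-bytes"
audio_path = os.path.join(tempfile.mkdtemp(), "sample.wav")
with open(audio_path, "wb") as f:
    f.write(payload)

embedded = {"path": None, "bytes": payload}    # self-contained record
by_path = {"path": audio_path, "bytes": None}  # record pointing at a local file

def read_audio(record):
    # Prefer embedded bytes; otherwise read the file lazily from its path.
    if record["bytes"] is not None:
        return record["bytes"]
    with open(record["path"], "rb") as f:
        return f.read()
```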
1,209,463,599
| 4,186
|
Fix outdated docstring about default dataset config
|
closed
| 2022-04-20T10:04:51
| 2022-04-22T12:54:44
| 2022-04-22T12:48:31
|
https://github.com/huggingface/datasets/pull/4186
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4186",
"html_url": "https://github.com/huggingface/datasets/pull/4186",
"diff_url": "https://github.com/huggingface/datasets/pull/4186.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4186.patch",
"merged_at": "2022-04-22T12:48:31"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,209,429,743
| 4,185
|
Librispeech documentation, clarification on format
|
open
| 2022-04-20T09:35:55
| 2022-04-21T11:00:53
| null |
https://github.com/huggingface/datasets/issues/4185
| null |
albertz
| false
|
[
"(@patrickvonplaten )",
"Also cc @lhoestq here",
"The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds[\"audio\"][\"array\"][0]` as this will decode all dataset samples, but instead `ds[0][\"audio\"][\"array\"]` see: https://huggingface.co/docs/datasets/audio_process#audio-datasets\r\n\r\n",
"So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?\r\n\r\nAnd is it simple to also store it re-encoded as ogg or mp3 instead?\r\n",
"Hey, \r\n\r\nSorry yeah I was just about to look into this! We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to a byte string, then was then decoded on-the-fly. This however is not very user-friendly so we recently decided to instead show the full path of the audio files with the `path` parameter.\r\n\r\nI'm currently changing this for Librispeech here: https://github.com/huggingface/datasets/pull/4184 .\r\nYou should be able to see the audio file in the original `flac` format under `path` then. I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ? ",
"> I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ?\r\n\r\nSure, I would expect that `load_dataset(\"librispeech_asr\")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding logic would be some separate generic function. So I could do sth like `dataset.reencode_as_ogg(**ogg_encode_opts).save_to_disk(...)` or so.\r\n",
"A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https://github.com/huggingface/datasets/pull/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.\r\n\r\nSo, instead of `save_to_disk`/`load_from_disk`, we would use `to_parquet`,`from_parquet`? Is there any downside? Are arrow files more efficient?\r\n\r\nRelated is also the doc update in #4193.\r\n",
"`save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.\r\n\r\nTherefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.\r\n\r\nParquet files are used for cold storage: to use memory mapping on a Parquet dataset, you first have to convert it to Arrow. We use Parquet to reduce the I/O when pushing/downloading data from the Hugging face Hub. When you load a Parquet file from the Hub, it is converted to Arrow on the fly during the download."
] |
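The indexing advice in the thread above (`ds[0]["audio"]["array"]` rather than `ds["audio"][0]["array"]`) follows from lazy, per-row decoding, which can be sketched without the library. The class, byte payloads, and `decode_count` instrumentation below are illustrative stand-ins, not the real `datasets` internals.

```python
# Library-free sketch: audio columns are decoded lazily, per accessed row, so
# indexing a row first decodes one sample while indexing the column decodes all.
decode_count = {"n": 0}

def decode(raw):
    decode_count["n"] += 1
    return [b / 255 for b in raw]        # stand-in for waveform decoding

class LazyAudioDataset:
    def __init__(self, raw_rows):
        self._raw = raw_rows

    def __getitem__(self, key):
        if isinstance(key, int):         # ds[0] -> decode one sample
            return {"audio": {"array": decode(self._raw[key])}}
        if key == "audio":               # ds["audio"] -> decode ALL samples
            return [{"array": decode(r)} for r in self._raw]
        raise KeyError(key)

ds = LazyAudioDataset([b"\x01\x02", b"\x03\x04", b"\x05\x06"])
one = ds[0]["audio"]["array"]            # decodes exactly one sample
after_one = decode_count["n"]
_ = ds["audio"]                          # decodes all three samples
```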
1,208,592,669
| 4,184
|
[Librispeech] Add 'all' config
|
closed
| 2022-04-19T16:27:56
| 2024-08-02T05:03:04
| 2022-04-22T09:45:17
|
https://github.com/huggingface/datasets/pull/4184
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4184",
"html_url": "https://github.com/huggingface/datasets/pull/4184",
"diff_url": "https://github.com/huggingface/datasets/pull/4184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4184.patch",
"merged_at": "2022-04-22T09:45:17"
}
|
patrickvonplaten
| true
|
[
"Fix https://github.com/huggingface/datasets/issues/4179",
"_The documentation is not available anymore as the PR was closed or merged._",
"Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do sth like:\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```\r\n?\r\n",
"> Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n> \r\n> And to get the subsets, I do sth like:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"librispeech_asr\")\r\n> train_ds = ds[\"train\"]\r\n> dev_clean_ds = ds[\"dev-clean\"]\r\n> dev_other_ds = ds[\"dev-other\"]\r\n> test_clean_ds = ds[\"test-clean\"]\r\n> test_other_ds = ds[\"test-other\"]\r\n> ```\r\n> \r\n> ?\r\n\r\nYou could do:\r\n\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\", \"all\") # <- note that we have to pass a config\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```",
"So, `load_dataset(\"librispeech_asr\")` is not possible, it must be `load_dataset(\"librispeech_asr\", \"all\")`?\r\n\r\nWhy is that?\r\n\r\nThe docs say:\r\n```\r\nname: `str` name, optional configuration for the dataset that affects the data generated on disk. Different\r\n `builder_config`s will have their own subdirectories and versions.\r\n If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n```\r\nhttps://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/src/datasets/builder.py#L228\r\n\r\nOr maybe you could just define `DEFAULT_CONFIG_NAME`?\r\n",
"> If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n\r\nOh crap this is outdated documentation. No it doesn't take the first config by default.\r\n\r\nEDIT: opened a PR to fix this: https://github.com/huggingface/datasets/pull/4186",
"> No it doesn't take the first config by default.\r\n\r\nBut defining `DEFAULT_CONFIG_NAME` would work?\r\n\r\nSo should we define `DEFAULT_CONFIG_NAME = \"all\"` here as well? I think this is a reasonable default config.\r\n\r\nDon't most datasets have some default config?\r\n",
"> But defining DEFAULT_CONFIG_NAME would work?\r\n>\r\n> So should we define DEFAULT_CONFIG_NAME = \"all\" here as well? I think this is a reasonable default config.\r\n\r\nYes that would work, and I also find it reasonable to do it :)\r\n\r\n> Don't most datasets have some default config?\r\n\r\nMost datasets only have one configuration, so the single configuration is the default one. Then other datasets have several configurations, and whether they have a default one is decided case-by-case.\r\n\r\ne.g. `glue` is a benchmark and doesn't have a default task, one must choose which task of `glue` they want to use explicitly.",
"Thanks a lot for the feedback! \r\n\r\nUsing `\"all\"` now as the default config. I changed the layout a bit so that there is not a single \"train\", but instead we have multiple \"train.clean.100\", \"train.clean.360\", \"train.other.500\". This way we don't even need to do filtering and it's also cleaner IMO.\r\n\r\n@albertz - you should now be able to do the following:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\") # <- run this once to download, prepare dataset and cache everything\r\n\r\n# The following operations will be very fast since all the downloading and processing is already cached\r\ntrain_1 = load_dataset(\"librispeech_asr\", split=\"train.clean.100\")\r\nprint(train_1)\r\ntrain_2 = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360\")\r\nprint(train_2)\r\ntrain_full = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360+train.other.500\")\r\nprint(train_full)\r\ndev_clean_ds = load_dataset(\"librispeech_asr\", split=\"validation.clean\")\r\nprint(dev_clean_ds)\r\ndev_other_ds = load_dataset(\"librispeech_asr\", split=\"validation.other\")\r\nprint(dev_other_ds)\r\ntest_clean_ds = load_dataset(\"librispeech_asr\", split=\"test.clean\")\r\nprint(test_clean_ds)\r\ntest_other_ds = load_dataset(\"librispeech_asr\", split=\"test.other\")\r\nprint(test_other_ds)\r\n```\r\n\r\n\r\n",
"Think this way we have the best of both worlds. Also @lhoestq, I think we could highlight better in the docs that it's possible to combine different splits. We do this actually quite a lot for speech. For Common Voice many people include \"validation\" in the training if the data is too small, e.g.: https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L147\r\n\r\nShould we maybe add a short section to the loading tutorial here: https://huggingface.co/docs/datasets/v2.1.0/en/loading#hugging-face-hub ? (Happy to do it)",
"Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n\r\nNote in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. (See here: https://github.com/rwth-i6/i6_core/pull/253)\r\n\r\nSo with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain = ds[\"train\"]\r\n```\r\nOr with your latest proposal, it would look like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain_ds = datasets.concatenate_datasets(\r\n [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n```\r\nright?\r\n",
"> Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n> \r\n> Note in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. (See here: [rwth-i6/i6_core#253](https://github.com/rwth-i6/i6_core/pull/253))\r\n> \r\n> So with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train = ds[\"train\"]\r\n> ```\r\n> \r\n> Or with your latest proposal, it would look like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train_ds = datasets.concatenate_datasets(\r\n> [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n> ```\r\n> \r\n> right?\r\n\r\nI see the use case! The only advantage by calling `datasets` multiple times is that one can easily \"merge\" splits with `\"+\"`, but yeah you can do the exact same with `concatenate`.\r\n\r\n@lhoestq what do you think is the best approach with `load_from_disk`? \r\n\r\n@albertz, you could also define the `cache_dir` when doing `load_dataset(...)` which will then put all the relevant `arrow` files int the cache dir that you defined, e.g.:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\", cache_dir=\"/easy/to/access/directory\")\r\n```",
"@albertz, I took a read through https://github.com/rwth-i6/i6_core/pull/253 . \r\n\r\nI think the best would be the following:\r\n\r\n1. Do `ds = load_dataset(..., cache_dir=\"/dir/that/is/easy/to/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n2. Do `ds.save_to_disk(\"local/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after https://github.com/huggingface/datasets/pull/4184#discussion_r854132740 is fixed and can be done for each person individually.\r\n3. `ds = datasets.load_from_disk(\"local/path\")` can then be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3`. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, load them and then save them on disk afterward.",
"@lhoestq - I think this one is good to go",
"> @albertz, I took a read through [rwth-i6/i6_core#253](https://github.com/rwth-i6/i6_core/pull/253) .\r\n> \r\n> I think the best would be the following:\r\n> \r\n> 1. Do `ds = load_dataset(..., cache_dir=\"/dir/that/is/easy/to/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n> 2. Do `ds.save_to_disk(\"local/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after [[Librispeech] Add 'all' config #4184 (comment)](https://github.com/huggingface/datasets/pull/4184#discussion_r854132740) is fixed and can be done for each person individually.\r\n> 3. `ds = datasets.load_from_disk(\"local/path\")` can the be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, load them and then save them on disk afterward.\r\n\r\nOh, so you say that our current implementation in https://github.com/rwth-i6/i6_core/pull/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of `save_to_disk`. I think it would be good to clarify that in the doc of `save_to_disk`, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nSo, you say we anyway need to share the cache dir among users? But we would want to make sure that after the initial download and preparation of the data, this is set to readonly, because we want to make sure that other people will not modify the data in any way. Right?\r\n\r\nBut then, we don't really need the `save_to_disk` and `load_from_disk` at all, right?\r\n",
"@albertz \r\n\r\n> Oh, so you say that our current implementation in https://github.com/rwth-i6/i6_core/pull/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of save_to_disk. I think it would be good to clarify that in the doc of save_to_disk, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nOh, I wasn't aware that audio files are handled this way. Then we should have the cache directory as an additional job output, so that we keep the audio files. \r\n\r\n> So, you say we anyway need to share the cache dir among users?\r\n\r\nNo, the cache dir can still be a directory in the job output folder. Then the audio paths in the corresponding dataset column correspond to the flac files in that directory. This way the \"output\" of the job is contained into the job directory and we don't write files to a global cache directory that is independent of the sisyphus graph.\r\n\r\nIf we want to share the audio data between different users, we can just link to a central instance of the job (similar to how we do it with the `DownloadLibriSpeechCorpusJob`).",
"@dthulke - that's a good point actually! So you can do both things:\r\n\r\n1. Convert all audio files to bytes. Bytes can be saved by `arrow` so in this case you can do `save_to_disk(...)`, but then you cannot really inspect the audio files locally as they'll just be saved within a large arrow file (this actually used to be the default case but we're changing this now). The problem of this is summarized here a bit: https://github.com/huggingface/datasets/issues/3663 . You can still do this if you'd like, e.g. you could do:\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\n\r\ndef read_file(batch):\r\n    with open(batch[\"file\"], \"rb\") as f:\r\n        batch[\"bytes\"] = f.read() \r\n    return batch\r\n\r\nds = ds.map(read_file)\r\nds.save_to_disk(\"/path\") <- the saved arrow object will now contain everything you need\r\n```\r\n\r\nhowever this is not recommended - it should be much easier to just save the path to the downloaded audio files.\r\n\r\n2. Not convert audio files to bytes, but just leave them in their original file format. Then only the path to the original files will be saved in arrow. This will be the default case. This means that when you do `load_dataset(...)` both the original audio data and the arrow file will be saved in the `cache_dir` (which can be saved locally for every user or in a shared cache - we actually use a shared cache quite a bit at Hugging Face). When you do `save_to_disk(...)` now only the `path` will be saved in `arrow` format (after this PR is merged, you'll see that the arrow files should be very lightweight), meaning that `save_to_disk(...)` can be done for every user, but has a dependency on the `cache_dir` (because the audio files live there).\r\n\r\n=> Now what you could do as well would be to simply move all the audio files to the folder you want (the `save_to_disk(...)` folder) and then change the path of every sample to this folder (maybe with `map(...)`) and then this folder would be self-contained.\r\n\r\nI do however think it's better to just specify a `cache_dir` and re-use `load_dataset(...)` every time instead of `load_from_disk` or `save_to_disk(...)`. Note that you can even pass the relevant cache files to `load_dataset(...)` here: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/loading_methods#datasets.load_dataset.data_files in which case you can be 100% sure that nothing is redownloaded. \r\n\r\nWe discussed storing audio files quite a bit, e.g. see: https://github.com/huggingface/datasets/issues/3663 and had (too many) changes around this topic recently, but we've come to the conclusion that the best is to leave the audio in the format it was originally (`.flac` for Librispeech) so that the user can easily inspect it / understand the data. Arrow cannot save data as `.flac` so we'll just save a path to the original data. Curious to hear you guys' opinion on this as well.",
"So what I would suggest here is to do the following:\r\n\r\n1. Do `load_dataset(..., cache_dir=/a/read-only/folder)`\r\n2. \r\n- Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading \r\n\r\nor \r\n\r\n- If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then do `ds.save_to_disk(/some/path)` which will save the correct paths to the MP3 files in the read-only folder, and then you can easily re-use the small arrow dataset that is saved in `/some/path`",
"> So what I would suggest here is to do the following:\r\n> \r\n> 1. Do `load_dataset(..., cache_dir=/a/read-only/folder)`\r\n> \r\n> * Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading\r\n> \r\n> or\r\n> \r\n> * If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then do `ds.save_to_disk(/some/path)` which will save the correct paths to the MP3 files in the read-only folder, and then you can easily re-use the small arrow dataset that is saved in `/some/path`\r\n\r\nAlso relevant here: https://github.com/huggingface/datasets/issues/3663",
"I also added some documentation about how `save_to_disk` handles audio files here: https://github.com/huggingface/datasets/pull/4193",
"> > So, you say we anyway need to share the cache dir among users?\r\n> \r\n> No, the cache dir can still be a directory in the job output folder.\r\n\r\n@dthulke But this is what I mean. When we share the job output folder, it means we share the cache dir among users.\r\n\r\nI wonder if `load_dataset(..., cache_dir=job_output_cache_dir)` is always safe to do then, i.e. that it really would not modify the `job_output_cache_dir`.\r\n\r\nWe could enforce that by making the `job_output_cache_dir` read-only afterwards. We currently don't do this.\r\n\r\n@patrickvonplaten @dthulke But in any case, we actually prefer the data content to be inside the dataset (the arrow files). Lots of small files would be very problematic for our cache manager. We have one main copy of the data on NFS, but accessing the NFS directly by all computing nodes is not feasible, so the cache manager will have copies of the files on the nodes. So it means, whenever we access some file, we query the cache manager DB whether the file is already cached somewhere (some other computing node) and if so, it copies it from the other computing node and not from NFS. This works very well when there are not too many files (but the files can be big). So, we want to have only a few but big files. Even for NFS access this is much better.\r\n\r\nI also commented in #3663.\r\n",
"Hey @albertz @dthulke,\r\n\r\nThanks a lot for your input! \r\n\r\nWe've discussed quite a bit with @lhoestq and we think the best approach is the following:\r\n\r\n\r\na)\r\n`load_dataset(...)` will not store both bytes and the files because this would mean that 3x the size of the dataset would often be needed (1. the compressed `tar.gz` file, 2. the extracted files, 3. the raw bytes in arrow format). \r\n\r\nFor canonical datasets like librispeech and common voice I think we want to keep the dataset filenames because of i) no breaking changes and ii) reasons explained in #3663\r\n\r\nHowever it's also trivial to write your own dataset downloading script for librispeech and just not extract the folder, e.g. this line: https://huggingface.co/datasets/common_voice/blob/main/common_voice.py#L671\r\n\r\nAnd then it'll be allowed to save the bytes and the dataset will be self-contained out-of-the-box when using `load_dataset(...)`\r\n\r\nb) Now, one major problem that you guys uncovered is that `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap.\r\n\r\nThis means that after we've corrected this, when you download the canonical librispeech dataset the following will work:\r\n\r\n```python\r\nds = load_dataset(\"....\") # <- here we have a dependency on the filepaths\r\nds[0][\"audio\"][\"bytes\"] # <- will not work\r\n\r\nds.save_to_disk(\"/local/path\") # <- now we want to have a self-contained dataset in arrow format, so we load the files into bytes and save it in arrow format\r\n\r\n# now you can delete everything besides \"/local/path\"\r\n\r\nds = load_from_disk(\"/local/path\") # <- this will work\r\n```\r\n\r\nSo either option a) where you define your own librispeech data downloading script (you guys could just sign up here: https://huggingface.co/join) and upload a dataset loading script in private mode so that no one can see it and you would always store the audio as bytes, or b) where you first load, then save to disk, then delete the cache, would work. \r\n\r\nHope that fits in your vision :-)\r\n\r\ncc @lhoestq @mariosasko ",
"@patrickvonplaten sounds like a good approach to me. For b) this could even be configurable with a parameter like `embed_external_files` as you have for `push_to_hub` (if people prefer to keep separate audio files).\r\n",
"> However it's also trivial to write your own dataset downloading script for librispeech and just not extract the folder\r\n\r\nI don't exactly understand. In all cases, we need to extract it to prepare the dataset, or not? No matter if we want to store the raw bytes inside the dataset or leave them as local files. Just in the first case, we can safely delete the extracted files after the dataset preparation.\r\n\r\n> `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap.\r\n\r\nFor us, this sounds exactly like what we want.\r\n\r\nBut regarding not introducing breaking changes, wouldn't this maybe also break some setups for users who don't expect this new behavior?\r\n",
"@albertz I would suggest to move the discussion on implementation details on our side to the following issue: rwth-i6/i6_core/issues/257",
"I like the idea of adding `embed_external_files` and setting it to True by default in `save_to_disk`.\r\nIt's indeed a kind of breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice:\r\n1. I like the idea of having it self-contained, in case you want to delete your cache\r\n2. users can also upload these Arrow files to cloud storage via the `fs` parameter, and in this case they would expect to upload a self-contained dataset\r\n3. consistency with `push_to_hub`\r\n\r\nIf it sounds good to you I'll open an issue to discuss this and track the advancements",
"Closed #4179.",
"> ```python\r\n> load_dataset(\"librispeech_asr\")\r\n> ```\r\n\r\nHi when I run:\r\n\r\nfrom datasets import load_dataset\r\nimport pandas as pd\r\nfrom multiprocessing import Pool\r\nimport os\r\ndata_dir = my own path\r\ndataset=load_dataset(\"librispeech_asr\", cache_dir=data_dir)\r\n\r\nafter downloading those files and the splitting process, there is an error like this:\r\n---------------------------------------------------------------------------\r\nExpectedMoreSplits Traceback (most recent call last)\r\n/tmp/ipykernel_815982/814767946.py in <module>\r\n 4 import os\r\n 5 data_dir = '/disk/scratch2/s1905792/librispeech'\r\n----> 6 dataset=load_dataset(\"librispeech_asr\", cache_dir=data_dir)\r\n 7 #test = load_dataset(\"librispeech_asr\", \"all\", \"train\", cache_dir=data_dir)\r\n\r\n/disk/scratch2/s1905792/anaconda3/envs/py3.7-gpu/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1813 try_from_hf_gcs=try_from_hf_gcs,\r\n 1814 num_proc=num_proc,\r\n-> 1815 storage_options=storage_options,\r\n 1816 )\r\n 1817 \r\n\r\n/disk/scratch2/s1905792/anaconda3/envs/py3.7-gpu/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)\r\n 911 verification_mode=verification_mode,\r\n 912 **prepare_split_kwargs,\r\n--> 913 **download_and_prepare_kwargs,\r\n 914 )\r\n 915 # Sync info\r\n\r\n/disk/scratch2/s1905792/anaconda3/envs/py3.7-gpu/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 1020 \r\n 1021 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:\r\n-> 1022 verify_splits(self.info.splits, split_dict)\r\n 1023 \r\n 1024 # Update the info object with the splits.\r\n\r\n/disk/scratch2/s1905792/anaconda3/envs/py3.7-gpu/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)\r\n 89 return\r\n 90 if len(set(expected_splits) - set(recorded_splits)) > 0:\r\n---> 91 raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))\r\n 92 if len(set(recorded_splits) - set(expected_splits)) > 0:\r\n 93 raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits)))\r\n\r\nExpectedMoreSplits: {'validation.other', 'test.other', 'test.clean', 'validation.clean', 'train.other.500', 'train.clean.360', 'train.clean.100'}\r\n\r\nCould you please tell me where I was wrong and did I miss any files?\r\nBest Regards,\r\nXiaoliang",
"Hi @wxlsummer,\r\n\r\nLet's continue the discussion in the Community tab of the dataset: https://huggingface.co/datasets/openslr/librispeech_asr/discussions/11"
] |
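The `embed_external_files`-style behaviour discussed in the record above — reading each referenced audio file into bytes at save time so the saved dataset no longer depends on the `cache_dir` — can be sketched in plain Python. The helper name and the `{"path": ...}` record layout are made up for illustration; this is not the actual `datasets` implementation:

```python
import os
import tempfile

def embed_external_files(records):
    """Replace each {'path': ...} file reference with the file's bytes,
    so the returned records are self-contained (hypothetical helper)."""
    embedded = []
    for rec in records:
        with open(rec["path"], "rb") as f:
            embedded.append({"path": os.path.basename(rec["path"]), "bytes": f.read()})
    return embedded

# Demo with throwaway files standing in for the .flac audio in the cache dir.
cache_dir = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(cache_dir, "utt%d.flac" % i)
    with open(p, "wb") as f:
        f.write(b"\x00" * 4)
    paths.append(p)

records = [{"path": p} for p in paths]
self_contained = embed_external_files(records)

# The embedded copies survive deletion of the original cache files.
for p in paths:
    os.remove(p)
assert all(r["bytes"] == b"\x00" * 4 for r in self_contained)
```

The trade-off discussed in the thread follows directly: embedding makes the saved copy portable, at the cost of duplicating the audio bytes inside the Arrow file.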
1,208,449,335
| 4,183
|
Document librispeech configs
|
closed
| 2022-04-19T14:26:59
| 2023-09-24T10:02:24
| 2022-04-19T15:15:20
|
https://github.com/huggingface/datasets/pull/4183
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4183",
"html_url": "https://github.com/huggingface/datasets/pull/4183",
"diff_url": "https://github.com/huggingface/datasets/pull/4183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4183.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"I think the main purpose of #4179 was how to be able to load both configs into one, so should we maybe add this part of the code: https://github.com/huggingface/datasets/issues/4179#issuecomment-1102383717 \r\n\r\nto the doc? \r\n\r\nActually @lhoestq would this work given that they have different split names: https://huggingface.co/datasets/librispeech_asr#data-splits ? ",
"This doc extension does not explain why I can't simply load the whole dataset. Or what workaround I need to get the whole dataset, which is what people usually want for Librispeech.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq, I can add an `\"all\"` config to Librispeech - I have the datasets already cached somewhere ",
"I'm closing this PR then, feel free to continue the discussion in https://github.com/huggingface/datasets/issues/4179\r\n"
] |
1,208,285,235
| 4,182
|
Zenodo.org download is not responding
|
closed
| 2022-04-19T12:26:57
| 2022-04-20T07:11:05
| 2022-04-20T07:11:05
|
https://github.com/huggingface/datasets/issues/4182
| null |
dkajtoch
| false
|
[
"[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]",
"Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners' decision to host their data at Zenodo. You can see this on their website: https://marcobaroni.org/composes/sick.html\r\n\r\nAnd yes, you are right: Zenodo is currently having some incidents and people are reporting problems with it.\r\n\r\nOn the other hand, we could contact the data owners and propose that they host their data on our Hugging Face Hub.\r\n\r\n@julien-c I guess so.\r\n",
"Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kinds of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy on the Hugging Face Hub would be a great solution. ",
"Definitely, @dkajtoch! But we have to ask permission to the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub.",
"Ahhh good point! License is the problem :("
] |
1,208,194,805
| 4,181
|
Support streaming FLEURS dataset
|
closed
| 2022-04-19T11:09:56
| 2022-07-25T11:44:02
| 2022-07-25T11:44:02
|
https://github.com/huggingface/datasets/issues/4181
| null |
patrickvonplaten
| false
|
[
"Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.",
"Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \r\nhttps://huggingface.co/datasets/google/fleurs/commit/dcf80160cd77977490a8d32b370c027107f2407b \r\n\r\nreal quick. \r\n\r\nI think the problem is that we cannot ensure that the metadata file is found before the audio. Or is this possible somehow @lhoestq ? ",
"@patrickvonplaten I think the metadata file should be found first because the audio files are contained in a folder next to the metadata files (just as in common voice), so the metadata files should be \"on top of the list\" as they are closer to the root in the directories hierarchy ",
"@patrickvonplaten but apparently it doesn't... I don't really know why.",
"Yeah! Any ideas what could be the reason here? cc @lhoestq ?",
"The order of the files is determined when the TAR archive is created, depending on the commands the creator ran.\r\nIf the metadata file is not at the beginning of the file, that makes streaming completely inefficient. In this case the TAR archive needs to be recreated in an appropriate order.",
"Actually we could maybe just host the metadata file ourselves and then stream the audio data only. Don't think that this would be a problem for the FLEURS authors (I can ask them :-)) ",
"I made a PR to their repo to support streaming (by uploading the metadata file to the Hub). See:\r\n- https://huggingface.co/datasets/google/fleurs/discussions/4",
"I'm closing this issue as the PR above has been merged."
] |
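The member-order point in the record above can be demonstrated with the stdlib: a TAR archive is a linear sequence of members, so a pure streaming reader — conceptually what `dl_manager.iter_archive` relies on — sees the files strictly in the order they were added when the archive was created. A small sketch with made-up file names:

```python
import io
import tarfile

# Build a TAR in memory with the metadata file added FIRST, so that a
# streaming reader encounters it before any audio member.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("metadata.tsv", b"id\tpath\n0\taudio/0.wav\n"),
                          ("audio/0.wav", b"\x00" * 8)]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Sequential pass over the archive: members come back in creation order.
buf.seek(0)
order = []
with tarfile.open(fileobj=buf, mode="r|") as tar:  # "r|" = pure stream, no seeking
    for member in tar:
        order.append(member.name)

assert order == ["metadata.tsv", "audio/0.wav"]
```

If the archive had instead been created with the audio members first, a streaming reader would have to pass over (or buffer) every audio file before reaching the metadata — the inefficiency described in the thread, fixable only by recreating the archive or hosting the metadata separately.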
1,208,042,320
| 4,180
|
Add some iteration method on a dataset column (specific for inference)
|
closed
| 2022-04-19T09:15:45
| 2025-06-17T13:08:50
| 2025-06-17T13:08:50
|
https://github.com/huggingface/datasets/issues/4180
| null |
Narsil
| false
|
[
"Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio\"]` ? Currently it returns a list with all the decoded audio data in memory.\r\n\r\nIt would be a breaking change though, since `isinstance(dataset[\"audio\"], list)` wouldn't work anymore, but we could implement a `Sequence` so that `dataset[\"audio\"][0]` still works and only loads one item in memory.\r\n\r\nYour alternative suggestion with `iterate` is also sensible, though maybe less satisfactory in terms of experience IMO",
"I agree that the current behavior (decoding all audio files in the dataset when accessing `dataset[\"audio\"]`) is not useful, IMHO. Indeed in our docs, we are constantly warning our collaborators not to do that.\r\n\r\nTherefore I upvote for a \"useful\" behavior of `dataset[\"audio\"]`. I don't think the breaking change is important in this case, as I guess not many people use it with its current behavior. Therefore, for me it seems reasonable to return a generator (instead of an in-memory list) for \"special\" features, like Audio/Image.\r\n\r\n@lhoestq on the other hand I don't understand your proposal about Pandas-like... ",
"I recall I had the same idea while working on the `Image` feature, so I agree implementing something similar to `pd.Series` that lazily brings elements in memory would be beneficial.",
"@lhoestq @mariosasko Could you please give a link to that new feature of `pandas.Series`? As far as I remember from working with pandas for more than 6 years, there was no lazy in-memory feature; everything was in-memory; that was the reason why other frameworks like Vaex or Dask were created. ",
"Yea pandas doesn't do lazy loading. I was referring to pandas.Series to say that they have a dedicated class to represent a column ;)"
] |
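The lazy column idea discussed in the record above — `dataset["audio"]` returning a `Sequence`-like view so that `dataset["audio"][0]` decodes a single item instead of materialising the whole column — can be sketched with `collections.abc.Sequence`. This is an illustrative toy, not the eventual `datasets` implementation:

```python
from collections.abc import Sequence

class LazyColumn(Sequence):
    """A pandas.Series-like view over one column: items are decoded
    one at a time on access instead of all up front."""

    def __init__(self, rows, decode):
        self._rows = rows
        self._decode = decode  # e.g. an audio decoder in the real use case

    def __len__(self):
        return len(self._rows)

    def __getitem__(self, i):
        return self._decode(self._rows[i])  # decoded lazily, per access

col = LazyColumn(list(range(5)), decode=lambda x: x * x)

assert isinstance(col, Sequence)  # indexable/iterable like a list, without being one
assert col[2] == 4                # only row 2 was "decoded"
assert list(col) == [0, 1, 4, 9, 16]
```

This also shows the breaking-change concern from the thread: `isinstance(col, list)` is False, even though indexing and iteration still behave as before.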
1,208,001,118
| 4,179
|
Dataset librispeech_asr fails to load
|
closed
| 2022-04-19T08:45:48
| 2022-07-27T16:10:00
| 2022-07-27T16:10:00
|
https://github.com/huggingface/datasets/issues/4179
| null |
albertz
| false
|
[
"@patrickvonplaten Hi! I saw that you prepared this? :)",
"Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https://github.com/tensorflow/datasets/issues/3885",
"Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ",
"Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n",
"If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https://huggingface.co/docs/datasets/v2.1.0/en/process#concatenate",
"Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n",
"cc @lhoestq FYI maybe the docs can be improved here",
"Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?",
"Pinging @lhoestq here, think this could make sense! Not sure however how the dictionary would then look like",
"Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183",
"> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n",
"+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs",
"Ok ! Adding the \"all\" configuration would do the job then, thanks ! In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)",
"I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n",
"Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/main_classes#datasets.Dataset.filter )",
"But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\" \"all\", \"train\") \r\n```",
"Hi @patrickvonplaten ,\r\n\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?",
"Hmm, I don't really see how that's possible: https://github.com/huggingface/datasets/blob/d22e39a0693d4be7410cf9a5d41fd5aac22be3cc/datasets/librispeech_asr/librispeech_asr.py#L51\r\n\r\nNote that all datasets related to `\"clean\"` are downloaded, but only `\"train.100\"` should be used. \r\n\r\ncc @lhoestq @albertvillanova @mariosasko can we do anything against downloading dataset links that are not related to the \"split\" that one actually needs? E.g. why should the split `\"train.360\"` be downloaded if the user executes the above command:\r\n\r\n```py\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\")\r\n```",
"@patrickvonplaten This problem is a bit harder than it may seem, and it has to do with how our scripts are structured - `_split_generators` downloads data for a split before its definition. There was an attempt to fix this in https://github.com/huggingface/datasets/pull/2249, but it wasn't flexible enough. Luckily, I have a plan of attack, and this issue is on our short-term roadmap, so I'll work on it soon.\r\n\r\nIn the meantime, one can use streaming or manually download a dataset script, remove unwanted splits and load a dataset via `load_dataset`.",
"> load_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?\r\n\r\nSince this bug is still there and google led me here when I was searching for a solution, I am writing down how to quickly fix it (as suggested by @mariosasko) for whoever else is not familiar with how the HF Hub works.\r\n\r\nDownload the [librispeech_asr.py](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py) script and remove the unwanted splits both from the [`_DL_URLS` dictionary](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L47-L68) and from the [`_split_generators` function](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L121-L241).\r\n[Here ](https://huggingface.co/datasets/andreagasparini/librispeech_test_only) I made an example with only the test sets.\r\n\r\nThen either save the script locally and load the dataset via \r\n```python\r\nload_dataset(\"${local_path}/librispeech_asr.py\")\r\n```\r\n\r\nor [create a new dataset repo on the hub](https://huggingface.co/new-dataset) named \"librispeech_asr\" and upload the script there, then you can just run\r\n```python\r\nload_dataset(\"${hugging_face_username}/librispeech_asr\")\r\n```",
"Fixed by https://github.com/huggingface/datasets/pull/4184"
] |
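The `+`-joined split syntax mentioned in this thread (e.g. `"train.100+train.360"` for curriculum-style training) can be illustrated with a small, hypothetical parser. This is not `datasets` internals, just a sketch of how such a request string maps onto individual split names before loading:

```python
def parse_split_request(request, available):
    """Split a '+'-joined request like 'train.100+train.360' into
    individual split names, validating each against `available`.

    Hypothetical helper for illustration; the real split-combining
    logic lives inside the `datasets` library.
    """
    parts = [p.strip() for p in request.split("+") if p.strip()]
    unknown = [p for p in parts if p not in available]
    if unknown:
        raise ValueError(
            f"Unknown split(s): {unknown}; available: {sorted(available)}"
        )
    return parts

# Splits of the librispeech_asr 'clean' config, per the discussion above.
LIBRISPEECH_CLEAN_SPLITS = {"train.100", "train.360", "validation", "test"}

print(parse_split_request("train.100+train.360", LIBRISPEECH_CLEAN_SPLITS))
# ['train.100', 'train.360']
```

Validating split names up front, before any download starts, gives a clearer error than discovering mid-download that a split does not exist.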
1,207,787,073
| 4,178
|
[feat] Add ImageNet dataset
|
closed
| 2022-04-19T06:01:35
| 2022-04-29T21:43:59
| 2022-04-29T21:37:08
|
https://github.com/huggingface/datasets/pull/4178
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4178",
"html_url": "https://github.com/huggingface/datasets/pull/4178",
"diff_url": "https://github.com/huggingface/datasets/pull/4178.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4178.patch",
"merged_at": "2022-04-29T21:37:08"
}
|
apsdehal
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the comments. I believe I have addressed all of them and also decreased the size of the dummy data file, so it should be ready for a re-review. I also made a change to allow adding synset mapping and valprep script in config in case we add ImageNet 21k some time later. ",
"@lhoestq I have updated the PR to address all of the review comments."
] |
1,207,535,920
| 4,177
|
Adding missing subsets to the `SemEval-2018 Task 1` dataset
|
open
| 2022-04-18T22:59:30
| 2022-10-05T10:38:16
| null |
https://github.com/huggingface/datasets/pull/4177
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4177",
"html_url": "https://github.com/huggingface/datasets/pull/4177",
"diff_url": "https://github.com/huggingface/datasets/pull/4177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4177.patch",
"merged_at": null
}
|
micahcarroll
| true
|
[
"Datasets are not tracked in this repository anymore. You should move this PR to the [discussions page of this dataset](https://huggingface.co/datasets/sem_eval_2018_task_1/discussions)"
] |
1,206,515,563
| 4,176
|
Very slow between two operations
|
closed
| 2022-04-17T23:52:29
| 2022-04-18T00:03:00
| 2022-04-18T00:03:00
|
https://github.com/huggingface/datasets/issues/4176
| null |
yanan1116
| false
|
[] |
1,205,589,842
| 4,175
|
Add WIT Dataset
|
closed
| 2022-04-15T13:42:32
| 2023-09-24T10:02:38
| 2022-05-02T14:26:41
|
https://github.com/huggingface/datasets/pull/4175
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4175",
"html_url": "https://github.com/huggingface/datasets/pull/4175",
"diff_url": "https://github.com/huggingface/datasets/pull/4175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4175.patch",
"merged_at": null
}
|
thomasw21
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Coming in late with some context.\r\n\r\nThere are two versions of the WIT dataset:\r\n1. The original source dataset managed by Wikimedia. It has more information, raw image representations, and each row corresponds to an image linked to all of its captions wherever it happens in Wikipedia (in multiple languages)\r\n2. The Google version, corresponding to the data script in this PR, which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several times. @thomasw21 using our download manager instead of `urllib` could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nThe Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the version themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (don't remember the model, we can keep it ). That's the data script currently in https://github.com/huggingface/datasets/pull/2981 . It's nearly ready to go, the one thing we should do is move the raw data from our HF google Cloud bucket to the Hub.\r\n\r\nHow do you want to move forward? IMO the best way would be to have a WIT dataset under the Wikimedia org with both configurations, but it depends on everyone's timelines",
"Okay after offline discussion. We'll improve this version and push it to the hub under `google` namespace. \r\n\r\n> which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several times. @thomasw21 using our download manager instead of urllib could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nAh interesting, wasn't aware of this duplication issue; concretely it'll just mean that our dataset is bigger than expected ... I think this should be handled after this loading script (though I have to figure out how to spawn a dl_manager).\r\n\r\nSimilarly a script will be written and pushed to `wikimedia` organisation.",
"@mariosasko can you make one last review concerning the text description changes? Then I'll handle putting it under `google` namespace and close this PR.",
"Looks all good now. Great job! ",
"Closing as this has been migrated to the hub under `google` namespace: https://huggingface.co/datasets/google/wit"
] |
1,205,575,941
| 4,174
|
Fix when map function modifies input in-place
|
closed
| 2022-04-15T13:23:15
| 2022-04-15T14:52:07
| 2022-04-15T14:45:58
|
https://github.com/huggingface/datasets/pull/4174
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4174",
"html_url": "https://github.com/huggingface/datasets/pull/4174",
"diff_url": "https://github.com/huggingface/datasets/pull/4174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4174.patch",
"merged_at": "2022-04-15T14:45:58"
}
|
thomasw21
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,204,657,114
| 4,173
|
Stream private zipped images
|
closed
| 2022-04-14T15:15:07
| 2022-05-05T14:05:54
| 2022-05-05T13:58:35
|
https://github.com/huggingface/datasets/pull/4173
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4173",
"html_url": "https://github.com/huggingface/datasets/pull/4173",
"diff_url": "https://github.com/huggingface/datasets/pull/4173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4173.patch",
"merged_at": "2022-05-05T13:58:35"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"oops looks like some tests are failing sorry, will fix them tomorrow\r\n\r\nEDIT: not today but asap hopefully",
"cc @mariosasko this is ready for review, let me know what you think !"
] |
1,204,433,160
| 4,172
|
Update assin2 dataset_infos.json
|
closed
| 2022-04-14T11:53:06
| 2022-04-15T14:47:42
| 2022-04-15T14:41:22
|
https://github.com/huggingface/datasets/pull/4172
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4172",
"html_url": "https://github.com/huggingface/datasets/pull/4172",
"diff_url": "https://github.com/huggingface/datasets/pull/4172.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4172.patch",
"merged_at": "2022-04-15T14:41:22"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,204,413,620
| 4,170
|
to_tf_dataset rewrite
|
closed
| 2022-04-14T11:30:58
| 2022-06-06T14:31:12
| 2022-06-06T14:22:09
|
https://github.com/huggingface/datasets/pull/4170
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4170",
"html_url": "https://github.com/huggingface/datasets/pull/4170",
"diff_url": "https://github.com/huggingface/datasets/pull/4170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4170.patch",
"merged_at": "2022-06-06T14:22:09"
}
|
Rocketknight1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"[Magic is now banned](https://www.youtube.com/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!",
"@gante I renamed the default collator to `minimal_tf_collate_fn`!",
"@lhoestq @sgugger @gante \r\n\r\nI think this should now be ready, it looks good in testing! I'll try a few more notebooks today and tomorrow to be sure before I merge. Key changes are:\r\n\r\n- No column autodetection magic (will make a separate PR to add this as a `transformers` function)\r\n- Drops non-numerical features automatically (this is more of a 'DataLoader' method, we'll have a separate method to expose 'raw' datasets to `tf.data`)\r\n- Better autodetection of numerical features.\r\n- Shouldn't randomly crash mid-function :skull: \r\n\r\nWe definitely have some questions still to resolve about how to handle making a 'DataLoader' dataset versus a 'raw' dataset - see [the Notion doc](https://www.notion.so/huggingface2/Splitting-to_tf_dataset-c2e0773c4bec484384064b30ed634383) if you're interested. Still, since this PR is just fixes/improvements to an existing method which never supported non-numerical features anyway, we can merge it before we've resolved those issues, and then think about how to name and split things afterwards.",
"P.S. I'll take out the region comments at the end before I merge, I promise! They're just helpful while I'm editing it",
"+1 for the tests\r\n\r\n> Drops non-numerical features automatically\r\n\r\nCan you give more details on how this work and the rationale as well ? This is not explained in the docs\r\n\r\nAlso why are you adding `error_on_missing` and `auto_fix_label_names ` ? The rationale is not clear to me. In particular I think it is sensible enough to expect users to not ask columns that don't exist, and to rename a label column when required.",
"@lhoestq I rewrote those parts - they were causing some other issues too! `error_on_missing` and `auto_fix_label_names` have been removed. The new logic is to simply drop (before batch collation) all columns the user doesn't ask for, but not to raise errors if the user asked for columns not in the dataset, as they may be added by the collator. Hopefully this cleans it up and matches the documentation better!",
"@lhoestq New tests are now in!",
"Seeing some other random tests failing that don't look to be associated with this PR.",
"@lhoestq I can't figure out these test failures! They don't seem related to this PR at all, but I rebased to the latest version and they keep happening, even though they're not visible on master.",
"Thanks for the ping, will take a look tomorrow :)\r\n\r\nMaybe the rebase didn't go well for the code recently merged about label alignment from https://github.com/huggingface/datasets/pull/4277 ?",
"It's very strange! The rebase looks fine to me. I might try to move my changes to a new branch from `master` and see if I can figure out which change causes this problem to appear.",
"@lhoestq Got it! It was caused by a name collision - I was importing `typing.Sequence`, but the code also needed `features.Sequence`. The tests from that PR were expecting the latter but got the former, and then crashed.",
"@lhoestq Thanks! Also, when you're ready, don't merge it immediately! I'd like to do a quick round of manual testing with the very final build once you're happy to make sure it still works in our notebooks and examples.",
"@lhoestq Tests look good to me, merging now!"
] |
1,203,995,869
| 4,169
|
Timit_asr dataset cannot be previewed recently
|
closed
| 2022-04-14T03:28:31
| 2023-02-03T04:54:57
| 2022-05-06T16:06:51
|
https://github.com/huggingface/datasets/issues/4169
| null |
YingLi001
| false
|
[
"Thanks for reporting. The bug has already been detected, and we hope to fix it soon.",
"TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it",
"> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might take a bit more time to fix it\r\n\r\nThank you for your quick response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in the list. So I am a little bit confused. If *'timit_asr'* needs to be manually downloaded, does that mean we can **not** automatically download it **any more** in the future?",
"Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset(\"timit_asr\", data_dir=\"path/to/extracted/data\")`\r\n\r\nThe URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the dataset owners (LDC) notified us about it.",
"I downloaded the timit_asr data and unzipped it. But I can't run my code. Could you resolve this problem for me? Thanks\r\n\r\n import soundfile as sf\r\n import torch\r\n from datasets import load_dataset\r\n dataset = load_dataset(\"timit_asr\", data_dir=\"/Users/nguyenvannham/Documents/test_case/data\")\r\n \r\n \r\n Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]Traceback (most recent call last):\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1571, in _prepare_split_single\r\n for key, record in generator:\r\n\r\n File \"/Users/nguyenvannham/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 138, in _generate_examples\r\n with txt_path.open(encoding=\"utf-8\") as op:\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/pathlib.py\", line 1252, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/pathlib.py\", line 1120, in _opener\r\n return self._accessor.open(self, flags, mode)\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/nguyenvannham/Documents/test_case/data/train/DR1/FCJF0/SA1.WAV.TXT'\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/var/folders/t9/l8d3rwpn1k33_gjtqs732lzc0000gn/T/ipykernel_3891/1203313828.py\", line 1, in <module>\r\n dataset = load_dataset(\"timit_asr\", data_dir=\"/Users/nguyenvannham/Documents/test_case/data\")\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/load.py\", line 1758, in load_dataset\r\n builder_instance.download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 860, in download_and_prepare\r\n self._download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1612, in _download_and_prepare\r\n super()._download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 953, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1450, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1607, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset"
] |
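A quick, hypothetical pre-flight check for the manual-download flow above: the traceback shows the loader looking for files like `<data_dir>/train/DR1/FCJF0/SA1.WAV.TXT`, so verifying the extracted tree before calling `load_dataset` surfaces path problems early (for example an extra nesting level left by unzipping). The expected glob pattern here is inferred from the `FileNotFoundError` in the thread, not from the official loader.

```python
from pathlib import Path
import tempfile

def check_timit_layout(data_dir):
    """Return the transcript files matching the layout the loader expects.

    The pattern train/DR*/<speaker>/*.WAV.TXT is inferred from the
    FileNotFoundError in the thread above; adjust if your copy differs.
    """
    root = Path(data_dir)
    hits = sorted(root.glob("train/DR*/*/*.WAV.TXT"))
    if not hits:
        raise FileNotFoundError(
            f"No train/DR*/<speaker>/*.WAV.TXT files under {root} - "
            "check that data_dir points at the extracted TIMIT root."
        )
    return hits

# Build a tiny fake tree to exercise the check.
with tempfile.TemporaryDirectory() as tmp:
    utt = Path(tmp) / "train" / "DR1" / "FCJF0"
    utt.mkdir(parents=True)
    (utt / "SA1.WAV.TXT").write_text("0 46797 She had your dark suit.")
    found = check_timit_layout(tmp)
    print([p.name for p in found])  # ['SA1.WAV.TXT']
```

If the check fails, the fix is usually pointing `data_dir` one directory deeper (or shallower) so that `train/` sits directly under it.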
1,203,867,540
| 4,168
|
Add code examples to API docs
|
closed
| 2022-04-13T23:03:38
| 2022-04-27T18:53:37
| 2022-04-27T18:48:34
|
https://github.com/huggingface/datasets/pull/4168
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4168",
"html_url": "https://github.com/huggingface/datasets/pull/4168",
"diff_url": "https://github.com/huggingface/datasets/pull/4168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4168.patch",
"merged_at": "2022-04-27T18:48:34"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r\n\r\nI think it's ok to be repetitive to get more clarity. Many users come from `transformers` and may have little experience with some processing methods (especially torch users).\r\n\r\n> Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n\r\nMaybe let's do it case by case, depending on whether there are parameters that are likely to be used often ?\r\n\r\n> For the class_encode_column function, let me know if there is a simpler dataset with fewer columns (currently using winograd_wsc) so it is easier for users to see what changed.\r\n\r\nYou can try with `boolq`, it has a boolean column that can be converted to labels\r\n\r\n> Where possible, I try to show the input before and the output after using a function like flatten for example. Do you think this is too much and just showing the usage (ie, >>> ds.flatten()) will be sufficient?\r\n\r\nNo I don't think it's too much, it's nice this way thanks :)",
"Updated each code example so they are fully reproducible (where applicable)! The next step will be to identify some functions where we can show off some parameters that are useful or commonly used. Some useful parameters can be:\r\n\r\n- use `map(batched=True)` to process batches of examples.\r\n- set a seed in `shuffle`.\r\n- set `shuffle` and `seed` in `train_test_split`.\r\n\r\nLet me know if you think of anything else related to the functions in `arrow_dataset.py`!",
"Cool thanks ! I think you can also do `num_proc` for `map`"
] |
1,203,761,614
| 4,167
|
Avoid rate limit in update hub repositories
|
closed
| 2022-04-13T20:32:17
| 2022-04-13T20:56:41
| 2022-04-13T20:50:32
|
https://github.com/huggingface/datasets/pull/4167
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4167",
"html_url": "https://github.com/huggingface/datasets/pull/4167",
"diff_url": "https://github.com/huggingface/datasets/pull/4167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4167.patch",
"merged_at": "2022-04-13T20:50:32"
}
|
lhoestq
| true
|
[
"I also set GIT_LFS_SKIP_SMUDGE=1 to speed up git clones",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,203,758,004
| 4,166
|
Fix exact match
|
closed
| 2022-04-13T20:28:06
| 2022-05-03T12:23:31
| 2022-05-03T12:16:27
|
https://github.com/huggingface/datasets/pull/4166
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4166",
"html_url": "https://github.com/huggingface/datasets/pull/4166",
"diff_url": "https://github.com/huggingface/datasets/pull/4166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4166.patch",
"merged_at": "2022-05-03T12:16:27"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,203,730,187
| 4,165
|
Fix google bleu typos, examples
|
closed
| 2022-04-13T19:59:54
| 2022-05-03T12:23:52
| 2022-05-03T12:16:44
|
https://github.com/huggingface/datasets/pull/4165
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4165",
"html_url": "https://github.com/huggingface/datasets/pull/4165",
"diff_url": "https://github.com/huggingface/datasets/pull/4165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4165.patch",
"merged_at": "2022-05-03T12:16:44"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,203,661,346
| 4,164
|
Fix duplicate key in multi_news
|
closed
| 2022-04-13T18:48:24
| 2022-04-13T21:04:16
| 2022-04-13T20:58:02
|
https://github.com/huggingface/datasets/pull/4164
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4164",
"html_url": "https://github.com/huggingface/datasets/pull/4164",
"diff_url": "https://github.com/huggingface/datasets/pull/4164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4164.patch",
"merged_at": "2022-04-13T20:58:02"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,203,539,268
| 4,163
|
Optional Content Warning for Datasets
|
open
| 2022-04-13T16:38:01
| 2022-06-09T20:39:02
| null |
https://github.com/huggingface/datasets/issues/4163
| null |
TristanThrush
| false
|
[
"Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. ",
"Hi @mariosasko, thanks for explaining how to add this feature. \r\n\r\nIf the current dataset yaml is:\r\n```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\n---\r\n```\r\n\r\nCan you provide a minimal working example of how to add the gated prompt?\r\n\r\nThanks!",
"```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\nextra_gated_prompt: \"This repository contains harmful content.\"\r\n---\r\n```\r\n\\+ enable `User Access requests` under the Settings pane.\r\n\r\nThere's a brief guide here https://discuss.huggingface.co/t/how-to-customize-the-user-access-requests-message/13953 , and you can see the field in action here, https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/README.md (you need to agree the terms in the Dataset Card pane to be able to access the files pane, so this comes up 403 at first).\r\n\r\nAnd a working example here! https://huggingface.co/datasets/DDSC/dkhate :) Great to be able to mitigate harms in text.",
"-- is there a way to gate content anonymously, i.e. without registering which users access it?",
"+1 to @leondz's question. One scenario is if you don't want the dataset to be indexed by search engines or viewed in browser b/c of upstream conditions on data, but don't want to collect emails. Some ability to turn off the dataset viewer or add a gating mechanism without emails would be fantastic."
] |
1,203,421,909
| 4,162
|
Add Conceptual 12M
|
closed
| 2022-04-13T14:57:23
| 2022-04-15T08:13:01
| 2022-04-15T08:06:25
|
https://github.com/huggingface/datasets/pull/4162
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4162",
"html_url": "https://github.com/huggingface/datasets/pull/4162",
"diff_url": "https://github.com/huggingface/datasets/pull/4162.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4162.patch",
"merged_at": "2022-04-15T08:06:25"
}
|
thomasw21
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like your dummy_data.zip file is not in the right location ;)\r\ndatasets/datasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip\r\n->\r\ndatasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip"
] |
1,203,230,485
| 4,161
|
Add Visual Genome
|
closed
| 2022-04-13T12:25:24
| 2022-04-21T15:42:49
| 2022-04-21T13:08:52
|
https://github.com/huggingface/datasets/pull/4161
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4161",
"html_url": "https://github.com/huggingface/datasets/pull/4161",
"diff_url": "https://github.com/huggingface/datasets/pull/4161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4161.patch",
"merged_at": "2022-04-21T13:08:52"
}
|
thomasw21
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my `master` is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n \r\n cc @mariosasko @lhoestq ",
"> some tasks don't fit anything in tasks.json. Do I remove them in task_categories?\r\n\r\nYou can keep them, but add `other-` as a prefix to those tasks to make the CI ignore it\r\n\r\n> some tasks should exist, typically visual-question-answering (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my master is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n\r\nFeel free to merge upstream/master into your branch ;)\r\n\r\nEDIT: actually I just noticed you've already done this, thanks !",
"After offline discussions: will keep that image essentially it's necessary as I have a mapping that creates a mapping between url and local path (images are downloaded via a zip file) and dummy data needs to store that dummy image. The issue is when I read an annotation, I get a url, compute the local path, and basically I assume the local path exists since I've extracted all the images ... This isn't true if dummy data doesn't have all the images, so instead I've added a script that \"fixes\" the dummy data after using the CLI, it essentially adds the dummy image in the zip corresponding to the url."
] |
1,202,845,874
| 4,160
|
RGBA images not showing
|
closed
| 2022-04-13T06:59:23
| 2022-06-21T16:43:11
| 2022-06-21T16:43:11
|
https://github.com/huggingface/datasets/issues/4160
| null |
cceyda
| false
|
[
"Thanks for reporting. It's a known issue, and we hope to fix it soon.",
"Fixed, thanks!"
] |
1,202,522,153
| 4,159
|
Add `TruthfulQA` dataset
|
closed
| 2022-04-12T23:19:04
| 2022-06-08T15:51:33
| 2022-06-08T14:43:34
|
https://github.com/huggingface/datasets/pull/4159
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4159",
"html_url": "https://github.com/huggingface/datasets/pull/4159",
"diff_url": "https://github.com/huggingface/datasets/pull/4159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4159.patch",
"merged_at": "2022-06-08T14:43:34"
}
|
jon-tow
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful 🤗 )"
] |
1,202,376,843
| 4,158
|
Add AUC ROC Metric
|
closed
| 2022-04-12T20:53:28
| 2022-04-26T19:41:50
| 2022-04-26T19:35:22
|
https://github.com/huggingface/datasets/pull/4158
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4158",
"html_url": "https://github.com/huggingface/datasets/pull/4158",
"diff_url": "https://github.com/huggingface/datasets/pull/4158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4158.patch",
"merged_at": "2022-04-26T19:35:22"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,202,239,622
| 4,157
|
Fix formatting in BLEU metric card
|
closed
| 2022-04-12T18:29:51
| 2022-04-13T14:30:25
| 2022-04-13T14:16:34
|
https://github.com/huggingface/datasets/pull/4157
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4157",
"html_url": "https://github.com/huggingface/datasets/pull/4157",
"diff_url": "https://github.com/huggingface/datasets/pull/4157.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4157.patch",
"merged_at": "2022-04-13T14:16:34"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,202,220,531
| 4,156
|
Adding STSb-TR dataset
|
closed
| 2022-04-12T18:10:05
| 2022-10-03T09:36:25
| 2022-10-03T09:36:25
|
https://github.com/huggingface/datasets/pull/4156
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4156",
"html_url": "https://github.com/huggingface/datasets/pull/4156",
"diff_url": "https://github.com/huggingface/datasets/pull/4156.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4156.patch",
"merged_at": null
}
|
figenfikri
| true
|
[
"Thanks for your contribution, @figenfikri.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
1,202,183,608
| 4,155
|
Make HANS dataset streamable
|
closed
| 2022-04-12T17:34:13
| 2022-04-13T12:03:46
| 2022-04-13T11:57:35
|
https://github.com/huggingface/datasets/pull/4155
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4155",
"html_url": "https://github.com/huggingface/datasets/pull/4155",
"diff_url": "https://github.com/huggingface/datasets/pull/4155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4155.patch",
"merged_at": "2022-04-13T11:57:34"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,202,145,721
| 4,154
|
Generate tasks.json taxonomy from `huggingface_hub`
|
closed
| 2022-04-12T17:12:46
| 2022-04-14T10:32:32
| 2022-04-14T10:26:13
|
https://github.com/huggingface/datasets/pull/4154
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4154",
"html_url": "https://github.com/huggingface/datasets/pull/4154",
"diff_url": "https://github.com/huggingface/datasets/pull/4154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4154.patch",
"merged_at": "2022-04-14T10:26:13"
}
|
julien-c
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok recomputed the json file, this should be ready to review now! @lhoestq ",
"Note: the generated JSON from `hf/hub-docs` can be found in the output of a GitHub Action run on that repo, for instance in https://github.com/huggingface/hub-docs/runs/6006686983?check_suite_focus=true\r\n\r\n(click on \"Run export-tasks script\")",
"Should we not add the tasks with hideInDatasets?",
"yes, probably true – i'll change that in a PR in `hub-docs`",
"Yes that's good :) feel free to merge",
"thanks to the both of you!"
] |
1,202,040,506
| 4,153
|
Adding Text-based NP Enrichment (TNE) dataset
|
closed
| 2022-04-12T15:47:03
| 2022-05-03T14:05:48
| 2022-05-03T14:05:48
|
https://github.com/huggingface/datasets/pull/4153
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4153",
"html_url": "https://github.com/huggingface/datasets/pull/4153",
"diff_url": "https://github.com/huggingface/datasets/pull/4153.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4153.patch",
"merged_at": "2022-05-03T14:05:48"
}
|
yanaiela
| true
|
[
"Hey @lhoestq, can you please have a look? 🙏",
"Great, thanks again @lhoestq! I think we're good to go now",
"Done"
] |
1,202,034,115
| 4,152
|
ArrayND error in pyarrow 5
|
closed
| 2022-04-12T15:41:40
| 2022-05-04T09:29:46
| 2022-05-04T09:29:46
|
https://github.com/huggingface/datasets/issues/4152
| null |
lhoestq
| false
|
[
"Where do we bump the required pyarrow version? Any inputs on how I fix this issue? ",
"We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml`"
] |
1,201,837,999
| 4,151
|
Add missing label for emotion description
|
closed
| 2022-04-12T13:17:37
| 2022-04-12T13:58:50
| 2022-04-12T13:58:50
|
https://github.com/huggingface/datasets/pull/4151
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4151",
"html_url": "https://github.com/huggingface/datasets/pull/4151",
"diff_url": "https://github.com/huggingface/datasets/pull/4151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4151.patch",
"merged_at": "2022-04-12T13:58:50"
}
|
lijiazheng99
| true
|
[] |
1,201,689,730
| 4,150
|
Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split)
|
closed
| 2022-04-12T11:15:55
| 2022-04-28T21:02:44
| 2022-04-28T21:02:44
|
https://github.com/huggingface/datasets/issues/4150
| null |
polinaeterna
| false
|
[] |
1,201,389,221
| 4,149
|
load_dataset for winoground returning decoding error
|
closed
| 2022-04-12T08:16:16
| 2022-05-04T23:40:38
| 2022-05-04T23:40:38
|
https://github.com/huggingface/datasets/issues/4149
| null |
odellus
| false
|
[
"I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n```\r\nbut I found out that wasn't the case\r\n```python\r\n[x for x in dataset]\r\n...\r\nClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```",
"Hi ! This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in in #4069 \r\n\r\nThe following structure will be supported soon:\r\n```\r\nmetadata.json\r\nimages/\r\n image0.png\r\n image1.png\r\n ...\r\n```\r\nWhere `metadata.json` is a JSON Lines file with labels or other metadata, and each line must have a \"file_name\" field with the name of the image file.\r\n\r\nFor the moment are only supported:\r\n- JSON files only\r\n- image files only\r\n\r\nSince this dataset is a mix of the two, at the moment it fails trying to read the images as JSON.\r\n\r\nTherefore to be able to load this dataset we need to wait for the new structure to be supported (very soon ^^), or add a dataset script in the repository that reads both the JSON and the images cc @TristanThrush \r\n",
"We'll also investigate the issue with the streaming download manager in https://github.com/huggingface/datasets/issues/4139 ;) thanks for reporting",
"Are there any updates on this?",
"In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that.",
"I mirrored the files at https://huggingface.co/datasets/facebook/winoground in a folder on my local machine `winground`\r\nand when I tried\r\n```python\r\nimport datasets\r\nds = datasets.load_from_disk('./winoground')\r\n```\r\nI get the following error\r\n```python\r\n--------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 ds = datasets.load_from_disk('./winoground')\r\n\r\nFile ~/.local/lib/python3.8/site-packages/datasets/load.py:1759, in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1757 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1758 else:\r\n-> 1759 raise FileNotFoundError(\r\n 1760 f\"Directory {dataset_path} is neither a dataset directory nor a dataset dict directory.\"\r\n 1761 )\r\n\r\nFileNotFoundError: Directory ./winoground is neither a dataset directory nor a dataset dict directory.\r\n```\r\nso still some work to be done on the backend imo.",
"Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.\r\n\r\nOnce we do support images with metadata you'll be able to use `load_dataset(\"facebook/winoground\")` directly (or `load_dataset(\"./winoground\")` of you've cloned the winoground repository locally).",
"Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:\r\n\r\n`examples = load_dataset('facebook/winoground', use_auth_token=<your auth token>)`\r\n\r\nLet me know if there are any issues",
"Adding the dataset loading script definitely didn't take as long as I thought it would 😅",
"killer"
] |
1,201,169,242
| 4,148
|
fix confusing bleu metric example
|
closed
| 2022-04-12T06:18:26
| 2022-04-13T14:16:34
| 2022-04-13T14:16:34
|
https://github.com/huggingface/datasets/issues/4148
| null |
aizawa-naoki
| false
|
[] |
1,200,756,008
| 4,147
|
Adjust path to datasets tutorial in How-To
|
closed
| 2022-04-12T01:20:34
| 2022-04-12T08:32:24
| 2022-04-12T08:26:02
|
https://github.com/huggingface/datasets/pull/4147
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4147",
"html_url": "https://github.com/huggingface/datasets/pull/4147",
"diff_url": "https://github.com/huggingface/datasets/pull/4147.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4147.patch",
"merged_at": "2022-04-12T08:26:02"
}
|
NimaBoscarino
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,200,215,789
| 4,146
|
SAMSum dataset viewer not working
|
closed
| 2022-04-11T16:22:57
| 2022-04-29T16:26:09
| 2022-04-29T16:26:09
|
https://github.com/huggingface/datasets/issues/4146
| null |
aakashnegi10
| false
|
[
"https://huggingface.co/datasets/samsum\r\n\r\n```\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n```",
"Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details about why the dataset cannot be streamed.",
"It looks like the host (https://arxiv.org) doesn't allow HTTP Range requests, which is what we use to stream data.\r\n\r\nThis can be fix if we host the data ourselves, which is ok since the dataset is under CC BY-NC-ND 4.0"
] |
1,200,209,781
| 4,145
|
Redirect TIMIT download from LDC
|
closed
| 2022-04-11T16:17:55
| 2022-04-13T15:39:31
| 2022-04-13T15:33:04
|
https://github.com/huggingface/datasets/pull/4145
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4145",
"html_url": "https://github.com/huggingface/datasets/pull/4145",
"diff_url": "https://github.com/huggingface/datasets/pull/4145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4145.patch",
"merged_at": "2022-04-13T15:33:03"
}
|
lhoestq
| true
|
[
"CI is failing because some tags are outdated, but they're fixed in #4067 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"We may do a release pretty soon (today ?), let me know if it's fine to include it in the new release",
"Fine to include this change!"
] |
1,200,016,983
| 4,144
|
Fix splits in local packaged modules, local datasets without script and hub datasets without script
|
closed
| 2022-04-11T13:57:33
| 2022-04-29T09:12:14
| 2022-04-28T21:02:45
|
https://github.com/huggingface/datasets/pull/4144
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4144",
"html_url": "https://github.com/huggingface/datasets/pull/4144",
"diff_url": "https://github.com/huggingface/datasets/pull/4144.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4144.patch",
"merged_at": "2022-04-28T21:02:44"
}
|
polinaeterna
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks !\r\nI'm in favor of this change, even though it's a breaking change:\r\n\r\nif you had a dataset\r\n```\r\ndata/\r\n train.csv\r\n test.csv\r\n```\r\n\r\nthen running this code would now return both train and test splits:\r\n```python\r\nload_dataset(\"csv\", data_dir=\"data/\")\r\n```\r\nwhereas right now it returns only a train split with the data from both CSV files.\r\n\r\nIn my opinion it's ok do do this breaking change because:\r\n- it makes this behavior consistent with `load_dataset(\"path/to/data\")` that also returns both splits: data_files resolution must be the same\r\n- I don't expect too many affected users (unless people really wanted to group train and test images in the train split on purpose ?) compared to the many new users to come (especially with #4069 )\r\n- this usage will become more and more common as we add packaged builder and imagefolder/audiofolder usage grows, so it may be better to do this change early\r\n\r\nLet me know if you think this is acceptable @mariosasko @albertvillanova or not, and if you think we need to first have a warning for some time before switching to this new behavior",
"Also, if people really want to put train and test, say, images in a single train split they could do \r\n`load_dataset(\"imagefolder\", data_files={\"train\": \"/path/to/data/**})`. Probably (arguably :)), if this is a more counterintuitive case, then it should require manual files specification, not a default one (in which we expect that users do want to infer splits from filenames / dir structure but currently they have to pass smth like `{\"train\": \"/path/to/data/train*\", \"test\": \"/path/to/data/test*\"}` explicitly as `data_files`) ",
"I also like this change, and I don't think we even need a warning during the transition period, considering I've been asked several times since the release of `imagefolder` why splits are not correctly inferred if the directory structure is as follows:\r\n```\r\ndata_dir\r\n train\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n test\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n```",
"Cool ! Feel free to add a test (maybe something similar to `test_PackagedDatasetModuleFactory_with_data_dir` but with a data_dir that contains several splits) and mark this PR as ready for review then @polinaeterna :)",
"@lhoestq @mariosasko do you think it's a good idea to do the same with `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` (see the latest change). If we agree on the current change, doing \r\n```python\r\nds = load_dataset(\"polinaeterna/jsonl_test\", data_dir=\"data/\")\r\n```\r\non dataset with the following structure:\r\n```\r\ntrain.jsonl\r\ntest.jsonl\r\ndata/\r\n train.jsonl\r\n test.jsonl\r\n```\r\nwill result in having two splits from files under `data/` dir in specified repo, while master version returns a single train split. \r\nThe same would be for local dataset without script if doing smth like:\r\n```python\r\nds = load_dataset(\"/home/polina/workspace/repos/jsonl_test\", data_dir=\"/home/polina/workspace/repos/jsonl_test/data\")\r\n```\r\n(though I'm not sure I understand this use case :D)\r\nLet me know if you think we should preserve the same logic for all factories or if I should roll back this change.",
"@lhoestq to test passing subdirectory (`base_path`) to data_files functions and methods, I extended the temporary test directory with data so that it contains subdirectory. Because of that the number of files in this directory increased, so I had to change some numbers and patterns to account for this change - [907ddf0](https://github.com/huggingface/datasets/pull/4144/commits/907ddf09d3afece5afbae18675c859d6e453f2bf)\r\n\r\nDo you think it's ok? Another option is to create another tmp dir and do all the checks inside it. "
] |
1,199,937,961
| 4,143
|
Unable to download `Wikepedia` 20220301.en version
|
closed
| 2022-04-11T13:00:14
| 2022-08-17T00:37:55
| 2022-04-21T17:04:14
|
https://github.com/huggingface/datasets/issues/4143
| null |
beyondguo
| false
|
[
"Hi! We've recently updated the Wikipedia script, so these changes are only available on master and can be fetched as follows:\r\n```python\r\ndataset_wikipedia = load_dataset(\"wikipedia\", \"20220301.en\", revision=\"master\")\r\n```",
"Hi, how can I load the previous \"20200501.en\" version of wikipedia which had been downloaded to the default path? Thanks!",
"@JiaQiSJTU just reinstall the previous verision of the package, e.g. `!pip install -q datasets==1.0.0`"
] |
1,199,794,750
| 4,142
|
Add ObjectFolder 2.0 dataset
|
open
| 2022-04-11T10:57:51
| 2022-10-05T10:30:49
| null |
https://github.com/huggingface/datasets/issues/4142
| null |
osanseviero
| false
|
[
"Datasets are not tracked in this repository anymore."
] |
1,199,610,885
| 4,141
|
Why is the dataset not visible under the dataset preview section?
|
closed
| 2022-04-11T08:36:42
| 2022-04-11T18:55:32
| 2022-04-11T17:09:49
|
https://github.com/huggingface/datasets/issues/4141
| null |
Nid989
| false
|
[] |
1,199,492,356
| 4,140
|
Error loading arxiv data set
|
closed
| 2022-04-11T07:06:34
| 2022-04-12T16:24:08
| 2022-04-12T16:24:08
|
https://github.com/huggingface/datasets/issues/4140
| null |
yjqiu
| false
|
[
"Hi! I think this error may be related to using an older version of the library. I was able to load the dataset without any issues using the latest version of `datasets`. Can you upgrade to the latest version of `datasets` and try again? :)",
"Hi! As @stevhliu suggested, to fix the issue, update the lib to the newest version with:\r\n```\r\npip install -U datasets\r\n```\r\nand download the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset('scientific_papers', 'arxiv', download_mode=\"force_redownload\")\r\n```",
"Thanks for the quick response! It works now. The problem is that I used nlp. load_dataset instead of datasets. load_dataset."
] |
1,199,443,822
| 4,139
|
Dataset viewer issue for Winoground
|
closed
| 2022-04-11T06:11:41
| 2022-06-21T16:43:58
| 2022-06-21T16:43:58
|
https://github.com/huggingface/datasets/issues/4139
| null |
alcinos
| false
|
[
"related (same dataset): https://github.com/huggingface/datasets/issues/4149. But the issue is different. Looking at it",
"I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset viewer isn't passing through the identity of the user who has signed the licensing agreement when making the request to GET [examples.jsonl](https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl).",
"Pinging @SBrandeis, as it seems related to gated datasets and access tokens.",
"To replicate:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset= datasets.load_dataset('facebook/winoground', name='facebook--winoground', split='train', use_auth_token=\"hf_app_...\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 439, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 85, in _generate_tables\r\n for file_idx, file in enumerate(files):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 679, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 731, in _iter_from_urlpaths\r\n for dirpath, _, filenames in xwalk(urlpath, use_auth_token=use_auth_token):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 623, in xwalk\r\n for dirpath, dirnames, filenames in fs.walk(main_hop):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 372, in walk\r\n listing = self.ls(path, detail=True, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 85, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 65, in sync\r\n raise return_result\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 196, in _ls\r\n out = await self._ls_real(url, detail=detail, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 150, in _ls_real\r\n self._raise_not_found_for_status(r, url)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 208, in _raise_not_found_for_status\r\n response.raise_for_status()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py\", line 1004, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```\r\n\r\n*edited to fix `use_token` -> `use_auth_token`, thx @odellus*",
"~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~\r\nNevermind it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.",
"After investigation with @severo , we found a potential culprit: https://github.com/huggingface/datasets/blob/3cd0a009a43f9f174056d70bfa2ca32216181926/src/datasets/utils/streaming_download_manager.py#L610-L624\r\n\r\nThe streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating content of a zip archive\r\n\r\ncc @albertvillanova @lhoestq ",
"I was able to reproduce it on a private dataset, let me work on a fix",
"Hey @lhoestq, Thanks for working on a fix! Any plans to merge #4173 into master? ",
"Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;)",
"The fix has been merged, we'll do a new release soon, and update the dataset viewer",
"Fixed, thanks!\r\n<img width=\"1119\" alt=\"Capture d’écran 2022-06-21 à 18 41 09\" src=\"https://user-images.githubusercontent.com/1676121/174853571-afb0749c-4178-4c89-ab40-bb162a449788.png\">\r\n"
] |
1,199,291,730
| 4,138
|
Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
|
closed
| 2022-04-11T02:07:13
| 2022-04-19T03:15:46
| 2022-04-16T15:46:29
|
https://github.com/huggingface/datasets/issues/4138
| null |
iluvvatar
| false
|
[
"To reproduce:\r\n\r\n```python\r\n>>> import datasets\r\n>>> datasets.get_dataset_split_names('MalakhovIlya/RuREBus', config_name='raw_txt')\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 101, in _split_generators\r\n decode_file_names(folder)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 26, in decode_file_names\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py\", line 66, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\nTypeError: xwalk() got an unexpected keyword argument 'topdown'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nIt's not related to the dataset viewer. Maybe @albertvillanova or @lhoestq could help more on this issue.",
"Hi! This issue stems from the fact that `xwalk`, which is a streamable version of `os.walk`, doesn't support the `topdown` param due to `fsspec`'s `walk` also not supporting it, so fixing this issue could be tricky. \r\n\r\n@MalakhovIlyaPavlovich You can avoid the error by tweaking your data processing and not using this param. (and `Path.rename`, which also cannot be streamed) ",
"@mariosasko thank you for your reply. I couldn't reproduce error showed by @severo either on Ubuntu 20.04.3 LTS, Windows 10 and Google Colab environments. But trying to avoid using os.walk(topdown=False) and Path.rename(), In _split_generators I replaced\r\n```\r\ndef decode_file_names(folder):\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n root = Path(root)\r\n for file in files:\r\n old_name = root / Path(file)\r\n new_name = root / Path(\r\n file.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n for dir in dirs:\r\n old_name = root / Path(dir)\r\n new_name = root / Path(dir.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\ndecode_file_names(folder)\r\n```\r\nby\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nif not is_url(zip_file):\r\n folder = extract(zip_file)\r\nelse:\r\n folder = None\r\n```\r\nand now everything works well except data viewer for \"raw_txt\" subset: dataset preview on hub shows \"No data.\". As far as I understand dl_manager.download returns original URL when we are calling datasets.get_dataset_split_names and my suspicions are that dataset viewer can do smth similar. I couldn't find information about how it works. I would be very grateful, if you could tell me how to fix this)",
"This is what I get when I try to stream the `raw_txt` subset:\r\n```python\r\n>>> dset = load_dataset(\"MalakhovIlya/RuREBus\", \"raw_txt\", split=\"raw_txt\", streaming=True)\r\n>>> next(iter(dset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nStopIteration\r\n```\r\nSo there is a bug in your script.",
"streaming=True helped me to find solution. I fixed\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nfolder = extract(zip_file)\r\n```\r\nby \r\n```\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\npath = os.path.join(folder, 'MED_txt/unparsed_txt')\r\nfor root, dirs, files in os.walk(path):\r\n decoded_root_name = Path(root).name.encode('cp437').decode('cp866')\r\n```\r\n@mariosasko thank you for your help :)"
] |
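The cp437/cp866 round-trip discussed in the comments above can be demonstrated in isolation. This is a minimal sketch with a made-up in-memory archive and member name (not the actual RuREBus `raw_txt.zip`): Python's `zipfile` decodes member names that lack the UTF-8 flag as cp437, so names that were originally cp866 bytes come out as mojibake and must be re-encoded to recover the Russian text.

```python
import io
import zipfile

# Hypothetical member name: Russian text as a legacy tool would store it
# (cp866 bytes), pre-decoded with cp437 to produce the mojibake that
# zipfile's cp437 fallback decoding would yield on read.
mojibake = "тексты.txt".encode("cp866").decode("cp437")

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(mojibake, b"data")

with zipfile.ZipFile(buf) as zf:
    raw_name = zf.infolist()[0].filename
    # Reverse the wrong decoding: recover the original cp866 bytes,
    # then decode them with the correct codec.
    fixed = raw_name.encode("cp437").decode("cp866")

print(fixed)  # → тексты.txt
```

This is exactly the `file.encode('cp437').decode('cp866')` transformation applied per file/directory name in the quoted script.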
1,199,000,453
| 4,137
|
Add single dataset citations for TweetEval
|
closed
| 2022-04-10T11:51:54
| 2022-04-12T07:57:22
| 2022-04-12T07:51:15
|
https://github.com/huggingface/datasets/pull/4137
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4137",
"html_url": "https://github.com/huggingface/datasets/pull/4137",
"diff_url": "https://github.com/huggingface/datasets/pull/4137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4137.patch",
"merged_at": "2022-04-12T07:51:15"
}
|
gchhablani
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The `test_dataset_cards` method is failing with the error:\r\n\r\n```\r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE The following typing errors are found: {'annotations_creators': \"(Expected `typing.List` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\\nOR\\n(Expected `typing.Dict` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\"}\r\n```\r\n\r\nAdding `found` as annotation creators."
] |
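The typing error quoted in the comment above boils down to a non-empty-list check on a required YAML tag field. A minimal sketch of that kind of validation (a hypothetical helper, not the actual `datasets` card validator), showing why adding `found` to `annotations_creators` makes it pass:

```python
def check_nonempty(tags, key):
    """Reject empty lists/dicts for a required YAML tag field."""
    value = tags.get(key)
    if not isinstance(value, (list, dict)) or len(value) == 0:
        raise ValueError(f"{key}: expected non-empty list or dict, got {value!r}")

yaml_tags = {"annotations_creators": []}  # the state that failed CI
try:
    check_nonempty(yaml_tags, "annotations_creators")
    failed = False
except ValueError:
    failed = True

yaml_tags["annotations_creators"] = ["found"]  # the fix described in the comment
check_nonempty(yaml_tags, "annotations_creators")  # now passes

print(failed)  # → True
```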
1,198,307,610
| 4,135
|
Support streaming xtreme dataset for PAN-X config
|
closed
| 2022-04-09T06:19:48
| 2022-05-06T08:39:40
| 2022-04-11T06:54:14
|
https://github.com/huggingface/datasets/pull/4135
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4135",
"html_url": "https://github.com/huggingface/datasets/pull/4135",
"diff_url": "https://github.com/huggingface/datasets/pull/4135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4135.patch",
"merged_at": "2022-04-11T06:54:14"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,197,937,146
| 4,134
|
ELI5 supporting documents
|
open
| 2022-04-08T23:36:27
| 2022-04-13T13:52:46
| null |
https://github.com/huggingface/datasets/issues/4134
| null |
saurabh-0077
| false
|
[
"Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;)"
] |
1,197,830,623
| 4,133
|
HANS dataset preview broken
|
closed
| 2022-04-08T21:06:15
| 2022-04-13T11:57:34
| 2022-04-13T11:57:34
|
https://github.com/huggingface/datasets/issues/4133
| null |
pietrolesci
| false
|
[
"The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to /home/slesage/.cache/huggingface/datasets/hans/plain_text/1.0.0/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n",
"Hi! I've opened a PR that should make this dataset stremable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?",
"Thanks for this. It works well, thanks! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉"
] |
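The `Cannot seek streaming HTTP file` traceback in the comments above comes from fsspec's `readuntil` calling `seek()` on a non-seekable stream; plain buffered line iteration needs no seeking, which is what makes the streamable version of the script work. A minimal local sketch, with a made-up non-seekable stream standing in for the HTTP file:

```python
import io

class NonSeekable(io.RawIOBase):
    """Minimal stand-in for a streaming HTTP file: readable, not seekable."""
    def __init__(self, data):
        self._buf = io.BytesIO(data)
    def readable(self):
        return True
    def seekable(self):
        return False
    def readinto(self, b):
        chunk = self._buf.read(len(b))
        b[: len(chunk)] = chunk
        return len(chunk)

stream = io.BufferedReader(NonSeekable(b"premise\thypothesis\tentailment\nrow2\n"))
lines = [line for line in stream]  # line iteration uses readinto only, never seek()

print(len(lines))  # → 2
```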