| id (int64, 953M–3.35B) | number (int64, 2.72k–7.75k) | title (string, 1–290 chars) | state (string, 2 classes) | created_at (timestamp[s], 2021-07-26 12:21:17 – 2025-08-23 00:18:43) | updated_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-23 12:34:39) | closed_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-20 16:35:55, nullable ⌀) | html_url (string, 49–51 chars) | pull_request (dict) | user_login (string, 3–26 chars) | is_pull_request (bool, 2 classes) | comments (list, 0–30 items) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1,295,852,650 | 4,643 | Rename master to main | closed | 2022-07-06T13:34:30 | 2022-07-06T15:36:46 | 2022-07-06T15:25:08 | https://github.com/huggingface/datasets/pull/4643 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4643", "html_url": "https://github.com/huggingface/datasets/pull/4643", "diff_url": "https://github.com/huggingface/datasets/pull/4643.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4643.patch", "merged_at": "2022-07-06T15:25:08"} | lhoestq | true | ["_The documentation is not available anymore as the PR was closed or merged._", "All the mentions I found on google were simple URLs that will be redirected, so it's fine. I also checked the spaces and we should be good:\r\n- dalle-mini used to install the master branch but [it's no longer the case](https://huggingface.co/spaces/flax-community/dalle-mini/commit/b78c972afd5c2d2bed087be6479fe5c9c6cfa741)\r\n- same for [logo generator](https://huggingface.co/spaces/tom-doerr/logo_generator/commit/a9ea330e518870d0ca8f65abb56f71d86750d8e4)\r\n- I opened a PR to fix [vision-datasets-viewer](https://huggingface.co/spaces/nateraw/vision-datasets-viewer/discussions/1)\r\n", "Ok let's rename the branch, and then we can merge this PR"] |
| 1,295,748,083 | 4,642 | Streaming issue for ccdv/pubmed-summarization | closed | 2022-07-06T12:13:07 | 2022-07-06T14:17:34 | 2022-07-06T14:17:34 | https://github.com/huggingface/datasets/issues/4642 | null | lewtun | false | ["Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. ", "Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.", "I've opened a PR on their Hub dataset to support streaming: https://huggingface.co/datasets/ccdv/pubmed-summarization/discussions/2"] |
| 1,295,633,250 | 4,641 | Dataset Viewer issue for kmfoda/booksum | closed | 2022-07-06T10:38:16 | 2022-07-06T13:25:28 | 2022-07-06T11:58:06 | https://github.com/huggingface/datasets/issues/4641 | null | lewtun | false | ["Thanks for reporting, @lewtun.\r\n\r\nIt works locally in streaming mode:\r\n```\r\n{'bid': 27681,\r\n 'is_aggregate': True,\r\n 'source': 'cliffnotes',\r\n 'chapter_path': 'all_chapterized_books/27681-chapters/chapters_1_to_2.txt',\r\n 'summary_path': 'finished_summaries/cliffnotes/The Last of the Mohicans/section_1_part_0.txt',\r\n 'book_id': 'The Last of the Mohicans.chapters 1-2',\r\n 'summary_id': 'chapters 1-2',\r\n 'content': None,\r\n 'summary': '{\"name\": \"Chapters 1-2\", \"url\": \"https://web.archive.org/web/20201101053205/https://www.cliffsnotes.com/literature/l/the-last-of-the-mohicans/summary-and-analysis/chapters-12\", \"summary\": \"Before any characters appear, the time and geography are made clear. Though it is the last war that England and France waged for a country that neither would retain, the wilderness between the forces still has to be...\r\n```\r\n\r\nI'm forcing the refresh of the preview. ", "The preview appears as expected once the refresh forced.", "Thank you @albertvillanova 🤗 !"] |
| 1,295,495,699 | 4,640 | Support all split in streaming mode | open | 2022-07-06T08:56:38 | 2022-07-06T15:19:55 | null | https://github.com/huggingface/datasets/pull/4640 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4640", "html_url": "https://github.com/huggingface/datasets/pull/4640", "diff_url": "https://github.com/huggingface/datasets/pull/4640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4640.patch", "merged_at": null} | albertvillanova | true | ["The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4640). All of your documentation changes will be reflected on that endpoint."] |
| 1,295,367,322 | 4,639 | Add HaGRID -- HAnd Gesture Recognition Image Dataset | open | 2022-07-06T07:41:32 | 2022-07-06T07:41:32 | null | https://github.com/huggingface/datasets/issues/4639 | null | osanseviero | false | [] |
| 1,295,233,315 | 4,638 | The speechocean762 dataset | closed | 2022-07-06T06:17:30 | 2022-10-03T09:34:36 | 2022-10-03T09:34:36 | https://github.com/huggingface/datasets/pull/4638 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4638", "html_url": "https://github.com/huggingface/datasets/pull/4638", "diff_url": "https://github.com/huggingface/datasets/pull/4638.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4638.patch", "merged_at": null} | jimbozhang | true | ["CircleCL reported two errors, but I didn't find the reason. The error message:\r\n```\r\n_________________ ERROR collecting tests/test_dataset_cards.py _________________\r\ntests/test_dataset_cards.py:53: in <module>\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\ntests/test_dataset_cards.py:35: in get_changed_datasets\r\n diff_output = check_output([\"git\", \"diff\", \"--name-only\", \"origin/master...HEAD\"], cwd=repo_path)\r\n../.pyenv/versions/3.6.15/lib/python3.6/subprocess.py:356: in check_output\r\n **kwargs).stdout\r\n../.pyenv/versions/3.6.15/lib/python3.6/subprocess.py:438: in run\r\n output=stdout, stderr=stderr)\r\nE subprocess.CalledProcessError: Command '['git', 'diff', '--name-only', 'origin/master...HEAD']' returned non-zero exit status 128.\r\n\r\n=========================== short test summary info ============================\r\nERROR tests/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\nERROR tests/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\n= 4011 passed, 2357 skipped, 2 xfailed, 1 xpassed, 116 warnings, 2 errors in 284.32s (0:04:44) =\r\n\r\nExited with code exit status 1\r\n```\r\nI'm not sure if it was caused by this PR ...\r\n\r\nI ran `tests/test_dataset_cards.py` in my local environment, and it passed:\r\n```\r\n(venv)$ pytest tests/test_dataset_cards.py\r\n============================== test session starts ==============================\r\nplatform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /home/zhangjunbo/src/datasets\r\nplugins: forked-1.4.0, datadir-1.3.1, xdist-2.5.0\r\ncollected 1531 items\r\n\r\ntests/test_dataset_cards.py ..... [100%]\r\n======================= 766 passed, 765 skipped in 2.55s ========================\r\n```\r\n", "@sanchit-gandhi could you also maybe take a quick look? :-)", "Thanks for your contribution, @jimbozhang. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help.", "> Thanks for your contribution, @jimbozhang. Are you still interested in adding this dataset?\r\n> \r\n> We are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n> \r\n> We would suggest you create this dataset there. Please, feel free to tell us if you need some help.\r\n\r\nYes, I just planned to finish this dataset these days, and this suggestion is just in time! Thanks a lot!\r\nI will create this dataset to Hugging Face Hub soon, maybe this week."] |
| 1,294,818,236 | 4,637 | The "all" split breaks streaming | open | 2022-07-05T21:56:49 | 2022-07-15T13:59:30 | null | https://github.com/huggingface/datasets/issues/4637 | null | cakiki | false | ["Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.", "@albertvillanova Nice! Let me know if it's something I can fix my self; would love to contribtue!", "@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I just pushed the test (to see if it impacts other tests).", "It impacted the test `test_generator_based_download_and_prepare` and I have fixed this.\r\n\r\nSo that you can copy the test I implemented in my PR and then implement a fix for this issue that passes the test `tests/test_builder.py::test_builder_as_streaming_dataset`.", "Hi @cakiki are you still interested in working on this? Are you planning to open a PR?", "Hi @albertvillanova ! Sorry it took so long; I wanted to spend this weekend working on it."] |
| 1,294,547,836 | 4,636 | Add info in docs about behavior of download_config.num_proc | closed | 2022-07-05T17:01:00 | 2022-07-28T10:40:32 | 2022-07-28T10:40:32 | https://github.com/huggingface/datasets/issues/4636 | null | nateraw | false | [] |
| 1,294,475,931 | 4,635 | Dataset Viewer issue for vadis/sv-ident | closed | 2022-07-05T15:48:13 | 2022-07-06T07:13:33 | 2022-07-06T07:12:14 | https://github.com/huggingface/datasets/issues/4635 | null | e-tornike | false | ["Thanks for reporting, @e-tornike \r\n\r\nSome context:\r\n- #4527 \r\n\r\nThe dataset loads locally in streaming mode:\r\n```python\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"vadis/sv-ident\", split=\"validation\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configuration default\r\nOut[2]: \r\n{'sentence': 'Im Falle von Umweltbelastungen kann selten eindeutig entschieden werden, ob Unbedenklichkeitswerte bereits erreicht oder überschritten sind, die die menschliche Gesundheit oder andere Wohlfahrts»güter« beeinträchtigen.',\r\n 'is_variable': 0,\r\n 'variable': [],\r\n 'research_data': [],\r\n 'doc_id': '51971',\r\n 'uuid': 'ee3d7f88-1a3e-4a59-997f-e986b544a604',\r\n 'lang': 'de'}\r\n```", "~~I have forced the refresh of the split in the preview without success.~~\r\n\r\nI have forced the refresh of the split in the preview, and now it works.", "Preview seems to work now. \r\n\r\nhttps://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation", "OK, thank you @e-tornike.\r\n\r\nApparently, after forcing the refresh, we just had to wait a little until it is effectively refreshed. ", "I'm closing this issue as it was solved after forcing the refresh of the split in the preview.", "Thanks a lot! :)"] |
| 1,294,405,251 | 4,634 | Can't load the Hausa audio dataset | closed | 2022-07-05T14:47:36 | 2022-09-13T14:07:32 | 2022-09-13T14:07:32 | https://github.com/huggingface/datasets/issues/4634 | null | moro23 | false | ["Could you provide the error details. It is difficult to debug otherwise. Also try other config. `ha` is not a valid."] |
| 1,294,367,783 | 4,633 | [data_files] Only match separated split names | closed | 2022-07-05T14:18:11 | 2022-07-18T13:20:29 | 2022-07-18T13:07:33 | https://github.com/huggingface/datasets/pull/4633 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4633", "html_url": "https://github.com/huggingface/datasets/pull/4633", "diff_url": "https://github.com/huggingface/datasets/pull/4633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4633.patch", "merged_at": "2022-07-18T13:07:33"} | lhoestq | true | ["_The documentation is not available anymore as the PR was closed or merged._", "I ran a script to find affected datasets (just did it on non-private non-gated). Adding \"testing\" and \"evaluation\" fixes all of of them except one:\r\n- projecte-aina/cat_manynames:\thuman_annotated_testset.tsv\r\n\r\nLet me open a PR on their repository to fix it\r\nEDIT: pr [here](https://huggingface.co/datasets/projecte-aina/cat_manynames/discussions/2)", "Feel free to merge @albertvillanova if it's all good to you :)", "Thanks for the feedback @albertvillanova I took your comments into account :)\r\n- added numbers as supported delimiters\r\n- used list comprehension to create the patterns list\r\n- updated the docs and the tests according to your comments\r\n\r\nLet me know what you think !", "I ended up removing the patching and the context manager :) merging"] |
| 1,294,166,880 | 4,632 | 'sort' method sorts one column only | closed | 2022-07-05T11:25:26 | 2023-07-25T15:04:27 | 2023-07-25T15:04:27 | https://github.com/huggingface/datasets/issues/4632 | null | shachardon | false | ["Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(d.sort(\"foo\").to_pandas()\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made you think it was not the case ? Did you experience a situation where it was only sorting one column ?", "Hi! thank you for your quick reply!\r\nI wanted to sort the `cnn_dailymail` dataset by the length of the labels (num of characters). I added a new column to the dataset (`ds.add_column`) with the lengths and then sorted by this new column. Only the new length column was sorted, the reset left in their original order. ", "That's unexpected, can you share the code you used to get this ?"] |
| 1,293,545,900 | 4,631 | Update WinoBias README | closed | 2022-07-04T20:24:40 | 2022-07-07T13:23:32 | 2022-07-07T13:11:47 | https://github.com/huggingface/datasets/pull/4631 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4631", "html_url": "https://github.com/huggingface/datasets/pull/4631", "diff_url": "https://github.com/huggingface/datasets/pull/4631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4631.patch", "merged_at": "2022-07-07T13:11:46"} | sashavor | true | ["_The documentation is not available anymore as the PR was closed or merged._"] |
| 1,293,470,728 | 4,630 | fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py. | closed | 2022-07-04T18:26:55 | 2022-07-05T15:19:52 | 2022-07-05T15:08:21 | https://github.com/huggingface/datasets/pull/4630 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4630", "html_url": "https://github.com/huggingface/datasets/pull/4630", "diff_url": "https://github.com/huggingface/datasets/pull/4630.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4630.patch", "merged_at": "2022-07-05T15:08:21"} | gugarosa | true | ["_The documentation is not available anymore as the PR was closed or merged._"] |
| 1,293,418,800 | 4,629 | Rename repo default branch to main | closed | 2022-07-04T17:16:10 | 2022-07-06T15:49:57 | 2022-07-06T15:49:57 | https://github.com/huggingface/datasets/issues/4629 | null | albertvillanova | false | [] |
| 1,293,361,308 | 4,628 | Fix time type `_arrow_to_datasets_dtype` conversion | closed | 2022-07-04T16:20:15 | 2022-07-07T14:08:38 | 2022-07-07T13:57:12 | https://github.com/huggingface/datasets/pull/4628 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4628", "html_url": "https://github.com/huggingface/datasets/pull/4628", "diff_url": "https://github.com/huggingface/datasets/pull/4628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4628.patch", "merged_at": "2022-07-07T13:57:11"} | mariosasko | true | ["_The documentation is not available anymore as the PR was closed or merged._"] |
| 1,293,287,798 | 4,627 | fixed duplicate calculation of spearmanr function in metrics wrapper. | closed | 2022-07-04T15:02:01 | 2022-07-07T12:41:09 | 2022-07-07T12:41:09 | https://github.com/huggingface/datasets/pull/4627 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4627", "html_url": "https://github.com/huggingface/datasets/pull/4627", "diff_url": "https://github.com/huggingface/datasets/pull/4627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4627.patch", "merged_at": "2022-07-07T12:41:09"} | benlipkin | true | ["Great, can open a PR in `evaluate` as well to optimize this.\r\n\r\nRelatedly, I wanted to add a new metric, Kendall Tau (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kendalltau.html). If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more sense to do that in the `datasets` or `evaluate` repo (or both)?\r\n\r\nThanks!", "PR opened in`evaluate` library with same minor adjustment: https://github.com/huggingface/evaluate/pull/176 ", "> If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more sense to do that in the datasets or evaluate repo (or both)?\r\n\r\nI think you could just add it to `evaluate`, we're not adding new metrics in this repo anymore"] |
| 1,293,256,269 | 4,626 | Add non-commercial licensing info for datasets for which we removed tags | open | 2022-07-04T14:32:43 | 2022-07-08T14:27:29 | null | https://github.com/huggingface/datasets/issues/4626 | null | lhoestq | false | ["yep plus `license_details` also makes sense for this IMO"] |
| 1,293,163,744 | 4,625 | Unpack `dl_manager.iter_files` to allow parallization | closed | 2022-07-04T13:16:58 | 2022-07-05T11:11:54 | 2022-07-05T11:00:48 | https://github.com/huggingface/datasets/pull/4625 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4625", "html_url": "https://github.com/huggingface/datasets/pull/4625", "diff_url": "https://github.com/huggingface/datasets/pull/4625.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4625.patch", "merged_at": "2022-07-05T11:00:48"} | mariosasko | true | ["_The documentation is not available anymore as the PR was closed or merged._", "Cool thanks ! Yup it sounds like the right solution.\r\n\r\nIt looks like `_generate_tables` needs to be updated as well to fix the CI"] |
| 1,293,085,058 | 4,624 | Remove all paperswithcode_id: null | closed | 2022-07-04T12:11:32 | 2023-09-24T10:05:19 | 2022-07-04T13:10:38 | https://github.com/huggingface/datasets/pull/4624 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4624", "html_url": "https://github.com/huggingface/datasets/pull/4624", "diff_url": "https://github.com/huggingface/datasets/pull/4624.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4624.patch", "merged_at": null} | lhoestq | true | ["_The documentation is not available anymore as the PR was closed or merged._", "> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side", "Yup it's maybe better to support it on the Hub side then indeed, thanks ! Closing this one"] |
| 1,293,042,894 | 4,623 | Loading MNIST as Pytorch Dataset | open | 2022-07-04T11:33:10 | 2022-07-04T14:40:50 | null | https://github.com/huggingface/datasets/issues/4623 | null | jameschapman19 | false | ["Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ", "So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ", "This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```", "Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!"] |
| 1,293,031,939 | 4,622 | Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present) | closed | 2022-07-04T11:23:20 | 2022-07-15T14:37:23 | 2022-07-15T14:24:24 | https://github.com/huggingface/datasets/pull/4622 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4622", "html_url": "https://github.com/huggingface/datasets/pull/4622", "diff_url": "https://github.com/huggingface/datasets/pull/4622.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4622.patch", "merged_at": "2022-07-15T14:24:24"} | polinaeterna | true | ["_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq @mariosasko pls take a look at https://github.com/huggingface/datasets/pull/4622/commits/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metadata_files` are not empty for the case when `self.config.drop_metadata=True` because I think we should be aligned with the config and preserve labels if `self.config.drop_labels=False` (the default value) and `self.config.drop_metadata=True` but `metadata_files` are passed. This is an extremely unlikely use case (when `self.config.drop_metadata=True`, but `metadata_files` are passed to `_generate_examples()`) since users usually do not use `_generate_examples()` alone but I believe it would be consistent to have the same behavior as in `_splits_generators()`. This change requires change in tests too if we suppose that we want to preserve labels (default value of `self.config.drop_labels` is False) when `self.config.drop_metadata=True`, even if `metadata_files` are for some reason provided (as it is done in tests). \r\n\r\nwdyt about this change?\r\n", "@lhoestq it wouldn't raise an error if we check `example.keys() == {\"image\", \"label\"}` as test checks only `_generate_examples`, not `encode_example`. and in the implementation of this PR `_generate_examples` would return both `image` and `label` key in the case when `drop_metadata=True` and `drop_labels=False` (default) as it seems that we agreed on that :)", "and on the other hand it would raise an error if `label` column is missing in _generate_examples when `drop_metadata=True` and `drop_labels=False`\r\n\r\nby \"it\" i mean tests :D (`test_generate_examples_with_metadata_that_misses_one_image`, `test_generate_examples_with_metadata_in_wrong_location` and `test_generate_examples_drop_metadata`)", "Perhaps we could make `self.config.drop_metadata = None` and `self.config.drop_labels = None` the defaults to see explicitly what the user wants. This would then turn into `self.config.drop_metadata = False` and `self.config.drop_labels = True` if metadata files are present and `self.config.drop_metadata = True` and `self.config.drop_labels = False` if not. And if the user wants to have the `label` column alongside metadata columns, it can do so by passing `drop_labels = False` explicitely (in that scenario we have to check that the `label` column is not already present in metadata files). And maybe we can also improve the logging messages.\r\n\r\nI find it problematic that the current implementation drops labels in some scenarios even if `self.config.drop_labels = False`, and the user doesn't have control over this behavior.\r\n\r\nLet me know what you think."] |
| 1,293,030,128 | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | closed | 2022-07-04T11:21:44 | 2022-07-15T14:24:24 | 2022-07-15T14:24:24 | https://github.com/huggingface/datasets/issues/4621 | null | polinaeterna | false | [] |
| 1,292,797,878 | 4,620 | Data type is not recognized when using datetime.time | closed | 2022-07-04T08:13:38 | 2022-07-07T13:57:11 | 2022-07-07T13:57:11 | https://github.com/huggingface/datasets/issues/4620 | null | severo | false | ["cc @mariosasko ", "Hi, thanks for reporting! I'm investigating the issue."] |
| 1,292,107,275 | 4,619 | np arrays get turned into native lists | open | 2022-07-02T17:54:57 | 2022-07-03T20:27:07 | null | https://github.com/huggingface/datasets/issues/4619 | null | ZhaofengWu | false | ["If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```", "I see, thanks! Any idea if the default numpy → list conversion might cause precision loss?", "I'm not super familiar with our datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a python array. In which case, I think the precision should be preserved."] |
| 1,292,078,225 | 4,618 | contribute data loading for object detection datasets with yolo data format | open | 2022-07-02T15:21:59 | 2022-07-21T14:10:44 | null | https://github.com/huggingface/datasets/issues/4618 | null | faizankshaikh | false | ["Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?", "@mariosasko sounds good to me!\r\n", "Thank you for the suggestion @mariosasko . I agree with the point, but I have a few doubts\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script", "1. Like this: `load_dataset(\"hf-loaders/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n4. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader."] |
| 1,291,307,428 | 4,615 | Fix `embed_storage` on features inside lists/sequences | closed | 2022-07-01T11:52:08 | 2022-07-08T12:13:10 | 2022-07-08T12:01:36 | https://github.com/huggingface/datasets/pull/4615 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4615", "html_url": "https://github.com/huggingface/datasets/pull/4615", "diff_url": "https://github.com/huggingface/datasets/pull/4615.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4615.patch", "merged_at": "2022-07-08T12:01:35"} | mariosasko | true | ["_The documentation is not available anymore as the PR was closed or merged._"] |
| 1,291,218,020 | 4,614 | Ensure ConcatenationTable.cast uses target_schema metadata | closed | 2022-07-01T10:22:08 | 2022-07-19T13:48:45 | 2022-07-19T13:36:24 | https://github.com/huggingface/datasets/pull/4614 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4614", "html_url": "https://github.com/huggingface/datasets/pull/4614", "diff_url": "https://github.com/huggingface/datasets/pull/4614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4614.patch", "merged_at": "2022-07-19T13:36:24"} | dtuit | true | ["Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.", "_The documentation is not available anymore as the PR was closed or merged._"] |
| 1,291,181,193 | 4,613 | Align/fix license metadata info | closed | 2022-07-01T09:50:50 | 2022-07-01T12:53:57 | 2022-07-01T12:42:47 | https://github.com/huggingface/datasets/pull/4613 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4613", "html_url": "https://github.com/huggingface/datasets/pull/4613", "diff_url": "https://github.com/huggingface/datasets/pull/4613.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4613.patch", "merged_at": "2022-07-01T12:42:46"} | julien-c | true | ["_The documentation is not available anymore as the PR was closed or merged._", "Thank you thank you! Let's merge and pray? 😱 ", "I just need to add `license_details` to the validator and yup we can merge"] |
| 1,290,984,660 | 4,612 | Release 2.3.0 broke custom iterable datasets | closed | 2022-07-01T06:46:07 | 2022-07-05T15:08:21 | 2022-07-05T15:08:21 | https://github.com/huggingface/datasets/issues/4612 | null | aapot | false | ["Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.", "Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. @gugarosa Are you interested in submitting a PR?", "Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!"] |
| 1,290,940,874 | 4,611 | Preserve member order by MockDownloadManager.iter_archive | closed | 2022-07-01T05:48:20 | 2022-07-01T16:59:11 | 2022-07-01T16:48:28 | https://github.com/huggingface/datasets/pull/4611 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4611", "html_url": "https://github.com/huggingface/datasets/pull/4611", "diff_url": "https://github.com/huggingface/datasets/pull/4611.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4611.patch", "merged_at": "2022-07-01T16:48:28"} | albertvillanova | true | ["_The documentation is not available anymore as the PR was closed or merged._"] |
| 1,290,603,827 | 4,610 | codeparrot/github-code failing to load | closed | 2022-06-30T20:24:48 | 2022-07-05T14:24:13 | 2022-07-05T09:19:56 | https://github.com/huggingface/datasets/issues/4610 | null | PyDataBlog | false | ["I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_files.py#L547\r\n\r\n@mariosasko could you please confirm my finding? And are there any changes that need to be done from my side?", "Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it", "> Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it\n\nI can't wait for that releasee. Broke my application", "This simple workaround should fix: https://huggingface.co/datasets/codeparrot/github-code/discussions/2\r\n\r\n`get_patterns_in_dataset_repository` can treat whether `base_path=None`, so we just need to make sure that codeparrot/github-code `_split_generators` calls with such an argument.", "I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ? \r\n@lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?", "Actually I think it's just simpler to fix it in the dataset itself, let me open a PR\r\n\r\nEDIT: PR opened here: https://huggingface.co/datasets/codeparrot/github-code/discussions/3", "PR is merged, it's working now ! Closing this one :)", "> I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ?\r\n> @lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?\r\n\r\nYou are definitely right, sorry about it. I always keep forgetting that we need to keep in mind users from past versions, my bad."] |
1,290,392,083
| 4,609
|
librispeech dataset has to download whole subset when specifing the split to use
|
closed
| 2022-06-30T16:38:24
| 2022-07-12T21:44:32
| 2022-07-12T21:44:32
|
https://github.com/huggingface/datasets/issues/4609
| null |
sunhaozhepy
| false
|
[
"Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.",
"Hi,\r\n\r\nThat's a great help. Thank you very much."
] |
1,290,298,002
| 4,608
|
Fix xisfile, xgetsize, xisdir, xlistdir in private repo
|
closed
| 2022-06-30T15:23:21
| 2022-07-06T12:45:59
| 2022-07-06T12:34:19
|
https://github.com/huggingface/datasets/pull/4608
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4608",
"html_url": "https://github.com/huggingface/datasets/pull/4608",
"diff_url": "https://github.com/huggingface/datasets/pull/4608.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4608.patch",
"merged_at": "2022-07-06T12:34:19"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Added tests for xisfile, xgetsize, xlistdir and xglob for private repos, and also tests for xwalk that was untested"
] |
1,290,171,941
| 4,607
|
Align more metadata with other repo types (models,spaces)
|
closed
| 2022-06-30T13:52:12
| 2022-07-01T12:00:37
| 2022-07-01T11:49:14
|
https://github.com/huggingface/datasets/pull/4607
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4607",
"html_url": "https://github.com/huggingface/datasets/pull/4607",
"diff_url": "https://github.com/huggingface/datasets/pull/4607.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4607.patch",
"merged_at": "2022-07-01T11:49:14"
}
|
julien-c
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I just set a default value (None) for the deprecated licenses and languages fields, which should fix most of the CI failures.\r\n\r\nNote that the CI should still be red because you edited many dataset cards and they're still missing some content - but this is unrelated to this PR so we can ignore these failures",
"thanks so much @lhoestq !!",
"There's also a follow-up PR to this one, in #4613 – I would suggest to merge all of them at the same time and hope not too many things are broken 🙀 🙀 ",
"Alright merging this one now, let's see how broken things get"
] |
1,290,083,534
| 4,606
|
evaluation result changes after `datasets` version change
|
closed
| 2022-06-30T12:43:26
| 2023-07-25T15:05:26
| 2023-07-25T15:05:26
|
https://github.com/huggingface/datasets/issues/4606
| null |
thnkinbtfly
| false
|
[
"Hi! The GH/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. `revision=\"2.2.0\"`) to `load_dataset.`\r\n"
] |
1,290,058,970
| 4,605
|
Dataset Viewer issue for boris/gis_filtered
|
closed
| 2022-06-30T12:23:34
| 2022-07-06T12:34:19
| 2022-07-06T12:34:19
|
https://github.com/huggingface/datasets/issues/4605
| null |
WaterKnight1998
| false
|
[
"Yes, this dataset is \"gated\": you first have to go to https://huggingface.co/datasets/boris/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).",
"I already did that, it returns error when using streaming",
"Oh, sorry, I misread. Looking at it. Maybe @huggingface/datasets or @SBrandeis ",
"I could reproduce the error, even though I provided my token and accepted the gate form. It looks like an error from `datasets`",
"This is indeed a bug in `datasets`. Parquet datasets in gated/private repositories can't be streamed properly, which caused the viewer to fail. I opened a PR at https://github.com/huggingface/datasets/pull/4608"
] |
1,289,963,962
| 4,604
|
Update CI Windows orb
|
closed
| 2022-06-30T11:00:31
| 2022-06-30T13:33:11
| 2022-06-30T13:22:26
|
https://github.com/huggingface/datasets/pull/4604
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4604",
"html_url": "https://github.com/huggingface/datasets/pull/4604",
"diff_url": "https://github.com/huggingface/datasets/pull/4604.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4604.patch",
"merged_at": "2022-06-30T13:22:25"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,289,963,331
| 4,603
|
CI fails recurrently and randomly on Windows
|
closed
| 2022-06-30T10:59:58
| 2022-06-30T13:22:25
| 2022-06-30T13:22:25
|
https://github.com/huggingface/datasets/issues/4603
| null |
albertvillanova
| false
|
[] |
1,289,950,379
| 4,602
|
Upgrade setuptools in windows CI
|
closed
| 2022-06-30T10:48:41
| 2023-09-24T10:05:10
| 2022-06-30T12:46:17
|
https://github.com/huggingface/datasets/pull/4602
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4602",
"html_url": "https://github.com/huggingface/datasets/pull/4602",
"diff_url": "https://github.com/huggingface/datasets/pull/4602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4602.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,289,924,715
| 4,601
|
Upgrade pip in WIN CI
|
closed
| 2022-06-30T10:25:42
| 2023-09-24T10:04:25
| 2022-06-30T10:43:38
|
https://github.com/huggingface/datasets/pull/4601
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4601",
"html_url": "https://github.com/huggingface/datasets/pull/4601",
"diff_url": "https://github.com/huggingface/datasets/pull/4601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4601.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It failed terribly"
] |
1,289,177,042
| 4,600
|
Remove multiple config section
|
closed
| 2022-06-29T19:09:21
| 2022-07-04T17:41:20
| 2022-07-04T17:29:41
|
https://github.com/huggingface/datasets/pull/4600
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4600",
"html_url": "https://github.com/huggingface/datasets/pull/4600",
"diff_url": "https://github.com/huggingface/datasets/pull/4600.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4600.patch",
"merged_at": "2022-07-04T17:29:41"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,288,849,933
| 4,599
|
Smooth-BLEU bug fixed
|
closed
| 2022-06-29T14:51:42
| 2022-09-23T07:42:40
| 2022-09-23T07:42:40
|
https://github.com/huggingface/datasets/pull/4599
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4599",
"html_url": "https://github.com/huggingface/datasets/pull/4599",
"diff_url": "https://github.com/huggingface/datasets/pull/4599.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4599.patch",
"merged_at": null
}
|
Aktsvigun
| true
|
[
"Thanks @Aktsvigun for your fix.\r\n\r\nHowever, metrics in `datasets` are in deprecation mode:\r\n- #4739\r\n\r\nYou should transfer this PR to the `evaluate` library: https://github.com/huggingface/evaluate\r\n\r\nJust for context, here the link to the PR by @Aktsvigun on tensorflow/nmt:\r\n- https://github.com/tensorflow/nmt/pull/488"
] |
1,288,774,514
| 4,598
|
Host financial_phrasebank data on the Hub
|
closed
| 2022-06-29T13:59:31
| 2022-07-01T09:41:14
| 2022-07-01T09:29:36
|
https://github.com/huggingface/datasets/pull/4598
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4598",
"html_url": "https://github.com/huggingface/datasets/pull/4598",
"diff_url": "https://github.com/huggingface/datasets/pull/4598.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4598.patch",
"merged_at": "2022-07-01T09:29:36"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,288,672,007
| 4,597
|
Streaming issue for financial_phrasebank
|
closed
| 2022-06-29T12:45:43
| 2022-07-01T09:29:36
| 2022-07-01T09:29:36
|
https://github.com/huggingface/datasets/issues/4597
| null |
lewtun
| false
|
[
"cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)",
"Let's see if their license allows hosting their data on the Hub.",
"License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub."
] |
1,288,381,735
| 4,596
|
Dataset Viewer issue for universal_dependencies
|
closed
| 2022-06-29T08:50:29
| 2022-09-07T11:29:28
| 2022-09-07T11:29:27
|
https://github.com/huggingface/datasets/issues/4596
| null |
Jordy-VL
| false
|
[
"Thanks, looking at it!",
"Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/188867795-4f7dd438-d4f2-46cd-8a92-20a37fb2d6bc.png\">\r\n"
] |
1,288,275,976
| 4,595
|
Dataset Viewer issue with False positive PII redaction
|
closed
| 2022-06-29T07:15:57
| 2022-06-29T08:29:41
| 2022-06-29T08:27:49
|
https://github.com/huggingface/datasets/issues/4595
| null |
cakiki
| false
|
[
"The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/cakiki/rosetta-code/discussions\r\n",
"This was indeed a scraping issue which I assumed was a display issue; sorry about that!"
] |
1,288,070,023
| 4,594
|
load_from_disk suggests incorrect fix when used to load DatasetDict
|
closed
| 2022-06-29T01:40:01
| 2022-06-29T04:03:44
| 2022-06-29T04:03:44
|
https://github.com/huggingface/datasets/issues/4594
| null |
dvsth
| false
|
[] |
1,288,067,699
| 4,593
|
Fix error message when using load_from_disk to load DatasetDict
|
closed
| 2022-06-29T01:34:27
| 2022-06-29T04:01:59
| 2022-06-29T04:01:39
|
https://github.com/huggingface/datasets/pull/4593
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4593",
"html_url": "https://github.com/huggingface/datasets/pull/4593",
"diff_url": "https://github.com/huggingface/datasets/pull/4593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4593.patch",
"merged_at": null
}
|
dvsth
| true
|
[] |
1,288,029,377
| 4,592
|
Issue with jalFaizy/detect_chess_pieces when running datasets-cli test
|
closed
| 2022-06-29T00:15:54
| 2022-06-29T10:30:03
| 2022-06-29T07:49:27
|
https://github.com/huggingface/datasets/issues/4592
| null |
faizankshaikh
| false
|
[
"Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/repositories-pull-requests-discussions\r\n\r\nThe Discussion tab for your \"jalFaizy/detect_chess_pieces\" dataset is here: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions\r\nYou can use it to ask for help by pinging the Datasets maintainers: see our docs here: https://huggingface.co/docs/datasets/master/en/share#ask-for-a-help-and-reviews\r\n\r\nI'm transferring this discussion to your Discussion tab and trying to address it: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/1",
"Thank you @albertvillanova , I will keep that in mind.\r\n\r\nJust a quick note - I posted the issue on Github because the dataset viewer suggested me to \"open an issue for direct support\". Maybe it can be updated with your suggestion\r\n\r\n\r\n\r\n\r\n",
"Thank you pointing this out: yes, definitely, we should fix the error message. We are working on this."
] |
1,288,021,332
| 4,591
|
Can't push Images to hub with manual Dataset
|
closed
| 2022-06-29T00:01:23
| 2022-07-08T12:01:36
| 2022-07-08T12:01:35
|
https://github.com/huggingface/datasets/issues/4591
| null |
cceyda
| false
|
[
"Hi, thanks for reporting! This issue stems from the changes introduced in https://github.com/huggingface/datasets/pull/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but doesn't change the types, hence the failure."
] |
1,287,941,058
| 4,590
|
Generalize meta_path json file creation in load.py [#4540]
|
closed
| 2022-06-28T21:48:06
| 2022-07-08T14:55:13
| 2022-07-07T13:17:45
|
https://github.com/huggingface/datasets/pull/4590
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4590",
"html_url": "https://github.com/huggingface/datasets/pull/4590",
"diff_url": "https://github.com/huggingface/datasets/pull/4590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4590.patch",
"merged_at": "2022-07-07T13:17:44"
}
|
VijayKalmath
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova, Can you please review this PR for Issue #4540 ",
"@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.",
"Hi ! Sure feel free to join our discord ^^ \r\nhttps://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 so that we can discuss together mor eeasily. Otherwise everything happens on github ;)"
] |
1,287,600,029
| 4,589
|
Permission denied: '/home/.cache' when load_dataset with local script
|
closed
| 2022-06-28T16:26:03
| 2022-06-29T06:26:28
| 2022-06-29T06:25:08
|
https://github.com/huggingface/datasets/issues/4589
| null |
jiangh0
| false
|
[] |
1,287,368,751
| 4,588
|
Host head_qa data on the Hub and fix NonMatchingChecksumError
|
closed
| 2022-06-28T13:39:28
| 2022-07-05T16:01:15
| 2022-07-05T15:49:52
|
https://github.com/huggingface/datasets/pull/4588
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4588",
"html_url": "https://github.com/huggingface/datasets/pull/4588",
"diff_url": "https://github.com/huggingface/datasets/pull/4588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4588.patch",
"merged_at": "2022-07-05T15:49:52"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks 🙏 ",
"@younesbelkada we have just merged this PR."
] |
1,287,291,494
| 4,587
|
Validate new_fingerprint passed by user
|
closed
| 2022-06-28T12:46:21
| 2022-06-28T14:11:57
| 2022-06-28T14:00:44
|
https://github.com/huggingface/datasets/pull/4587
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4587",
"html_url": "https://github.com/huggingface/datasets/pull/4587",
"diff_url": "https://github.com/huggingface/datasets/pull/4587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4587.patch",
"merged_at": "2022-06-28T14:00:44"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,287,105,636
| 4,586
|
Host pn_summary data on the Hub instead of Google Drive
|
closed
| 2022-06-28T10:05:05
| 2022-06-28T14:52:56
| 2022-06-28T14:42:03
|
https://github.com/huggingface/datasets/pull/4586
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4586",
"html_url": "https://github.com/huggingface/datasets/pull/4586",
"diff_url": "https://github.com/huggingface/datasets/pull/4586.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4586.patch",
"merged_at": "2022-06-28T14:42:03"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,287,064,929
| 4,585
|
Host multi_news data on the Hub instead of Google Drive
|
closed
| 2022-06-28T09:32:06
| 2022-06-28T14:19:35
| 2022-06-28T14:08:48
|
https://github.com/huggingface/datasets/pull/4585
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4585",
"html_url": "https://github.com/huggingface/datasets/pull/4585",
"diff_url": "https://github.com/huggingface/datasets/pull/4585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4585.patch",
"merged_at": "2022-06-28T14:08:48"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,286,911,993
| 4,584
|
Add binary classification task IDs
|
closed
| 2022-06-28T07:30:39
| 2023-09-24T10:04:04
| 2023-01-26T09:27:52
|
https://github.com/huggingface/datasets/pull/4584
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4584",
"html_url": "https://github.com/huggingface/datasets/pull/4584",
"diff_url": "https://github.com/huggingface/datasets/pull/4584.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4584.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.",
"> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https://github.com/huggingface/hub-docs/pull/217",
"I don't think we need to update this file anymore. We should remove it IMO, and simply update the dataset [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging)",
"I'm closing this PR."
] |
1,286,790,871
| 4,583
|
<code> implementation of FLAC support using torchaudio
|
closed
| 2022-06-28T05:24:21
| 2022-06-28T05:47:02
| 2022-06-28T05:47:02
|
https://github.com/huggingface/datasets/pull/4583
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4583",
"html_url": "https://github.com/huggingface/datasets/pull/4583",
"diff_url": "https://github.com/huggingface/datasets/pull/4583.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4583.patch",
"merged_at": null
}
|
rafael-ariascalles
| true
|
[] |
1,286,517,060
| 4,582
|
add_column should preserve _indexes
|
open
| 2022-06-27T22:35:47
| 2022-07-06T15:19:54
| null |
https://github.com/huggingface/datasets/pull/4582
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4582",
"html_url": "https://github.com/huggingface/datasets/pull/4582",
"diff_url": "https://github.com/huggingface/datasets/pull/4582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4582.patch",
"merged_at": null
}
|
cceyda
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4582). All of your documentation changes will be reflected on that endpoint."
] |
1,286,362,907
| 4,581
|
Dataset Viewer issue for pn_summary
|
closed
| 2022-06-27T20:56:12
| 2022-06-28T14:42:03
| 2022-06-28T14:42:03
|
https://github.com/huggingface/datasets/issues/4581
| null |
lewtun
| false
|
[
"linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?",
"Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n",
"Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else."
] |
1,286,312,912
| 4,580
|
Dataset Viewer issue for multi_news
|
closed
| 2022-06-27T20:25:25
| 2022-06-28T14:08:48
| 2022-06-28T14:08:48
|
https://github.com/huggingface/datasets/issues/4580
| null |
lewtun
| false
|
[
"Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.",
"I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt"
] |
1,286,106,285
| 4,579
|
Support streaming cfq dataset
|
closed
| 2022-06-27T17:11:23
| 2022-07-04T19:35:01
| 2022-07-04T19:23:57
|
https://github.com/huggingface/datasets/pull/4579
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4579",
"html_url": "https://github.com/huggingface/datasets/pull/4579",
"diff_url": "https://github.com/huggingface/datasets/pull/4579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4579.patch",
"merged_at": "2022-07-04T19:23:57"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either yield from buffer\r\n - or iterate over samples and either yield or buffer the sample\r\n \r\n The speed gain obviously depends on how the indexes are sorted in the split file:\r\n - Best case: indices are [1, 2, 3]\r\n - Worst case (no speed gain): indices are [3, 1, 2] or [3, 2, 1]\r\n\r\nLet me know what you think.",
"I have to update the dummy data so that it aligns with the real data (inside the archive, the samples file `dataset.json` is the last member).",
"There is an issue when testing `test_load_dataset_cfq` with dummy data:\r\n- `MockDownloadManager.iter_archive` yields FIRST `'cfq/dataset.json'`\r\n- [`Streaming`]`DownloadManager.iter_archive` yields LAST `'cfq/dataset.json'` when using real data tar.gz archive\r\n\r\nNote that this issue arises only with dummy data: loading the real dataset works smoothly for all configurations: I recreated the `dataset_infos.json` file to check it (it generated the same file).",
"This PR should be merged first:\r\n- #4611",
"Impressive, thank you ! :o \r\n\r\nfeel free to merge master into this branch, now that the files order is respected. You can merge if the CI is green :)"
] |
1,286,086,400
| 4,578
|
[Multi Configs] Use directories to differentiate between subsets/configurations
|
open
| 2022-06-27T16:55:11
| 2023-06-14T15:43:05
| null |
https://github.com/huggingface/datasets/issues/4578
| null |
lhoestq
| false
|
[
"I want to be able to create folders in a model.",
"How to set new split names, instead of train/test/validation? For example, I have a local dataset, consists of several subsets, named \"A\", \"B\", and \"C\". How can I create a huggingface dataset, with splits A/B/C ?\r\n\r\nThe document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?",
"> The document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?\r\n\r\nIt works the same - you just need to use local paths instead of URLs"
] |
1,285,703,775
| 4,577
|
Add authentication tip to `load_dataset`
|
closed
| 2022-06-27T12:05:34
| 2022-07-04T13:13:15
| 2022-07-04T13:01:30
|
https://github.com/huggingface/datasets/pull/4577
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4577",
"html_url": "https://github.com/huggingface/datasets/pull/4577",
"diff_url": "https://github.com/huggingface/datasets/pull/4577.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4577.patch",
"merged_at": "2022-07-04T13:01:30"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,285,698,576
| 4,576
|
Include `metadata.jsonl` in resolved data files
|
closed
| 2022-06-27T12:01:29
| 2022-07-01T12:44:55
| 2022-06-30T10:15:32
|
https://github.com/huggingface/datasets/pull/4576
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4576",
"html_url": "https://github.com/huggingface/datasets/pull/4576",
"diff_url": "https://github.com/huggingface/datasets/pull/4576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4576.patch",
"merged_at": "2022-06-30T10:15:31"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?",
"Yes, that's indeed the problem. My solution in https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https://github.com/huggingface/datasets/blob/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95/src/datasets/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?",
"@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? This could be an additional pattern [\"metadata.jsonl\", \"**/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n",
"The CI still struggles but you can merge since at least one of the two WIN CI succeeded"
] |
1,285,446,700
| 4,575
|
Problem about wmt17 zh-en dataset
|
closed
| 2022-06-27T08:35:42
| 2022-08-23T10:01:02
| 2022-08-23T10:00:21
|
https://github.com/huggingface/datasets/issues/4575
| null |
winterfell2021
| false
|
[
"Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`.",
"@albertvillanova @lhoestq Could you take a look at this issue?",
"@winterfell2021 Hi, I wonder where the code you provided should be added. I tried to add them in the `datasets/table.py` in `array_cast` function, however, the 'zh' item is none.",
"I found some 'zh' item is none while 'c[hn]' is not.\r\nSo the code may change to:\r\n```python\r\nif 'c[hn]' in str(array.type):\r\n py_array = array.to_pylist()\r\n data_list = []\r\n for vo in py_array:\r\n tmp = {\r\n 'en': vo['en'],\r\n }\r\n if vo.get('zh'):\r\n tmp['zh'] = vo['zh']\r\n else:\r\n tmp['zh'] = vo['c[hn]']\r\n data_list.append(tmp)\r\n array = pa.array(data_list, type=pa.struct([\r\n pa.field('en', pa.string()),\r\n pa.field('zh', pa.string()),\r\n ]))\r\n```",
"I just pushed a fix, we'll do a new release of `datasets` soon to include this fix. In the meantime you can use the fixed dataset by passing `revision=\"main\"` to `load_dataset`"
] |
1,285,380,616
| 4,574
|
Support streaming mlsum dataset
|
closed
| 2022-06-27T07:37:03
| 2022-07-21T13:37:30
| 2022-07-21T12:40:00
|
https://github.com/huggingface/datasets/pull/4574
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4574",
"html_url": "https://github.com/huggingface/datasets/pull/4574",
"diff_url": "https://github.com/huggingface/datasets/pull/4574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4574.patch",
"merged_at": "2022-07-21T12:40:00"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/datasets/tests/conftest.py'.\r\ntests/conftest.py:13: in <module>\r\n import datasets\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>\r\n from .arrow_dataset import Dataset\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_dataset.py:62: in <module>\r\n from .arrow_reader import ArrowReader\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_reader.py:29: in <module>\r\n from .download.download_config import DownloadConfig\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/__init__.py:10: in <module>\r\n from .streaming_download_manager import StreamingDownloadManager\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/streaming_download_manager.py:20: in <module>\r\n from ..filesystems import COMPRESSION_FILESYSTEMS\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/__init__.py:13: in <module>\r\n from .s3filesystem import S3FileSystem # noqa: F401\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py:1: in <module>\r\n import s3fs\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/__init__.py:1: in <module>\r\n from .core import S3FileSystem, S3File\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/core.py:12: in <module>\r\n from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync\r\nE ImportError: cannot import name 'maybe_sync'\r\n```\r\n\r\nThe installed `s3fs` version is too old. What about pinning a min version?",
"Maybe you can try setting the same minimum version as fsspec ? `s3fs>=2021.11.1`",
"Yes, I have checked that they both require to have the same version. \r\n\r\nThe issue then was coming from aiobotocore, boto3, botocore. I have changed them from strict to min version requirements.\r\n> s3fs 2021.11.1 depends on aiobotocore~=2.0.1",
"I have updated all min versions so that they are compatible one with each other. I'm pushing again...",
"Thanks !",
"Nice!"
] |
1,285,023,629
| 4,573
|
Fix evaluation metadata for ncbi_disease
|
closed
| 2022-06-26T20:29:32
| 2023-09-24T09:35:07
| 2022-09-23T09:38:02
|
https://github.com/huggingface/datasets/pull/4573
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4573",
"html_url": "https://github.com/huggingface/datasets/pull/4573",
"diff_url": "https://github.com/huggingface/datasets/pull/4573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4573.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] |
1,285,022,499
| 4,572
|
Dataset Viewer issue for mlsum
|
closed
| 2022-06-26T20:24:17
| 2022-07-21T12:40:01
| 2022-07-21T12:40:01
|
https://github.com/huggingface/datasets/issues/4572
| null |
lewtun
| false
|
[
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] |
1,284,883,289
| 4,571
|
move under the facebook org?
|
open
| 2022-06-26T11:19:09
| 2023-09-25T12:05:18
| null |
https://github.com/huggingface/datasets/issues/4571
| null |
lewtun
| false
|
[
"Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ",
"I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?",
"fwiw: the dataset viewer is working. Renaming the issue"
] |
1,284,846,168
| 4,570
|
Dataset sharding non-contiguous?
|
closed
| 2022-06-26T08:34:05
| 2022-06-30T11:00:47
| 2022-06-26T14:36:20
|
https://github.com/huggingface/datasets/issues/4570
| null |
cakiki
| false
|
[
"This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.",
"Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread 😄 ",
"Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ",
"@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ",
"This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)."
] |
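The `contiguous` flag discussed in the thread above changes which example indices land in each shard. A minimal pure-Python sketch of the two indexing schemes (mirroring the behavior described in the `Dataset.shard` docs; the helper name `shard_indices` is ours, not the library's):

```python
def shard_indices(n, num_shards, index, contiguous=False):
    """Return the example indices assigned to shard `index` out of `num_shards`.

    contiguous=True  -> each shard is one contiguous block of the dataset
    contiguous=False -> round-robin assignment: index, index+num_shards, ...
    """
    if contiguous:
        # Spread the remainder over the first `mod` shards, one extra example each.
        div, mod = divmod(n, num_shards)
        start = div * index + min(index, mod)
        end = start + div + (1 if index < mod else 0)
        return list(range(start, end))
    return list(range(index, n, num_shards))

# 10 examples split into 3 shards
print(shard_indices(10, 3, 0, contiguous=True))   # [0, 1, 2, 3]
print(shard_indices(10, 3, 0, contiguous=False))  # [0, 3, 6, 9]
```

Either way, the shards together cover every index exactly once; only the grouping differs.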
1,284,833,694
| 4,569
|
Dataset Viewer issue for sst2
|
closed
| 2022-06-26T07:32:54
| 2022-06-27T06:37:48
| 2022-06-27T06:37:48
|
https://github.com/huggingface/datasets/issues/4569
| null |
lewtun
| false
|
[
"Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ",
"Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). Closing this :)"
] |
1,284,655,624
| 4,568
|
XNLI cache reload is very slow
|
closed
| 2022-06-25T16:43:56
| 2022-07-04T14:29:40
| 2022-07-04T14:29:40
|
https://github.com/huggingface/datasets/issues/4568
| null |
Muennighoff
| false
|
[
"Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ",
"Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.",
"Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)."
] |
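The `HF_DATASETS_OFFLINE` variable mentioned in the last comment has to be set before `datasets` is imported, since the flag is read at import time. A minimal sketch (the env-var name comes from the linked docs; the commented-out usage lines are illustrative):

```python
import os

# Must happen before `import datasets`: the offline flag is read when the
# library is imported, so setting it afterwards has no effect.
os.environ["HF_DATASETS_OFFLINE"] = "1"

# import datasets
# ds = datasets.load_dataset("xnli", "en")  # now resolves from the local cache only
```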
1,284,528,474
| 4,567
|
Add evaluation data for amazon_reviews_multi
|
closed
| 2022-06-25T09:40:52
| 2023-09-24T09:35:22
| 2022-09-23T09:37:23
|
https://github.com/huggingface/datasets/pull/4567
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4567",
"html_url": "https://github.com/huggingface/datasets/pull/4567",
"diff_url": "https://github.com/huggingface/datasets/pull/4567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4567.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] |
1,284,397,594
| 4,566
|
Document link #load_dataset_enhancing_performance points to nowhere
|
closed
| 2022-06-25T01:18:19
| 2023-01-24T16:33:40
| 2023-01-24T16:33:40
|
https://github.com/huggingface/datasets/issues/4566
| null |
subercui
| false
|
[
"Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?",
"https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works."
] |
1,284,141,666
| 4,565
|
Add UFSC OCPap dataset
|
closed
| 2022-06-24T20:07:54
| 2022-07-06T19:03:02
| 2022-07-06T19:03:02
|
https://github.com/huggingface/datasets/issues/4565
| null |
johnnv1
| false
|
[
"I will add this directly on the hub (same as #4486)—in https://huggingface.co/lapix"
] |
1,283,932,333
| 4,564
|
Support streaming bookcorpus dataset
|
closed
| 2022-06-24T16:13:39
| 2022-07-06T09:34:48
| 2022-07-06T09:23:04
|
https://github.com/huggingface/datasets/pull/4564
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4564",
"html_url": "https://github.com/huggingface/datasets/pull/4564",
"diff_url": "https://github.com/huggingface/datasets/pull/4564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4564.patch",
"merged_at": "2022-07-06T09:23:04"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,283,914,383
| 4,563
|
Support streaming allocine dataset
|
closed
| 2022-06-24T15:55:03
| 2022-06-24T16:54:57
| 2022-06-24T16:44:41
|
https://github.com/huggingface/datasets/pull/4563
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4563",
"html_url": "https://github.com/huggingface/datasets/pull/4563",
"diff_url": "https://github.com/huggingface/datasets/pull/4563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4563.patch",
"merged_at": "2022-06-24T16:44:41"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,283,779,557
| 4,562
|
Dataset Viewer issue for allocine
|
closed
| 2022-06-24T13:50:38
| 2022-06-27T06:39:32
| 2022-06-24T16:44:41
|
https://github.com/huggingface/datasets/issues/4562
| null |
lewtun
| false
|
[
"I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n",
"Let me have a look...",
"Thanks for the quick fix @albertvillanova ",
"Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).",
"> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! I'll look out for this next time and implement the fix myself :)"
] |
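The sequential-access constraint described above is easy to see with plain `tarfile`: a TAR archive can only be walked member by member, which is exactly the `(path, file_bytes)` interface that `dl_manager.iter_archive` exposes. A self-contained sketch with a synthetic in-memory archive (file names and contents are made up; this illustrates the access pattern, not the library's internals):

```python
import io
import tarfile

def make_tar(files):
    """Build an in-memory TAR archive from a {name: bytes} mapping."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    buf.seek(0)
    return buf

def iter_archive(fileobj):
    """Yield (path, file_bytes) pairs sequentially, like dl_manager.iter_archive."""
    # "r|*" opens the tar as a non-seekable stream: members can only be
    # visited in order, which is why random access into TARs is slow.
    with tarfile.open(fileobj=fileobj, mode="r|*") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()

archive = make_tar({"train/pos/0.txt": b"great movie", "train/neg/0.txt": b"terrible"})
for path, data in iter_archive(archive):
    print(path, len(data))
```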
1,283,624,242
| 4,561
|
Add evaluation data to acronym_identification
|
closed
| 2022-06-24T11:17:33
| 2022-06-27T09:37:55
| 2022-06-27T08:49:22
|
https://github.com/huggingface/datasets/pull/4561
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4561",
"html_url": "https://github.com/huggingface/datasets/pull/4561",
"diff_url": "https://github.com/huggingface/datasets/pull/4561.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4561.patch",
"merged_at": "2022-06-27T08:49:22"
}
|
lewtun
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,283,558,873
| 4,560
|
Add evaluation metadata to imagenet-1k
|
closed
| 2022-06-24T10:12:41
| 2023-09-24T09:35:32
| 2022-09-23T09:37:03
|
https://github.com/huggingface/datasets/pull/4560
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4560",
"html_url": "https://github.com/huggingface/datasets/pull/4560",
"diff_url": "https://github.com/huggingface/datasets/pull/4560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4560.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] |
1,283,544,937
| 4,559
|
Add action names in schema_guided_dstc8 dataset card
|
closed
| 2022-06-24T10:00:01
| 2022-06-24T10:54:28
| 2022-06-24T10:43:47
|
https://github.com/huggingface/datasets/pull/4559
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4559",
"html_url": "https://github.com/huggingface/datasets/pull/4559",
"diff_url": "https://github.com/huggingface/datasets/pull/4559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4559.patch",
"merged_at": "2022-06-24T10:43:47"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,283,479,650
| 4,558
|
Add evaluation metadata to wmt14
|
closed
| 2022-06-24T09:08:54
| 2023-09-24T09:35:39
| 2022-09-23T09:36:50
|
https://github.com/huggingface/datasets/pull/4558
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4558",
"html_url": "https://github.com/huggingface/datasets/pull/4558",
"diff_url": "https://github.com/huggingface/datasets/pull/4558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4558.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4558). All of your documentation changes will be reflected on that endpoint.",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] |
1,283,473,889
| 4,557
|
Add evaluation metadata to wmt16
|
closed
| 2022-06-24T09:04:23
| 2023-09-24T09:35:49
| 2022-09-23T09:36:32
|
https://github.com/huggingface/datasets/pull/4557
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4557",
"html_url": "https://github.com/huggingface/datasets/pull/4557",
"diff_url": "https://github.com/huggingface/datasets/pull/4557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4557.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4557). All of your documentation changes will be reflected on that endpoint.",
"> Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?\r\n\r\nyes :)",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] |
1,283,462,881
| 4,556
|
Dataset Viewer issue for conll2003
|
closed
| 2022-06-24T08:55:18
| 2022-06-24T09:50:39
| 2022-06-24T09:50:39
|
https://github.com/huggingface/datasets/issues/4556
| null |
lewtun
| false
|
[
"Fixed, thanks."
] |
1,283,451,651
| 4,555
|
Dataset Viewer issue for xtreme
|
closed
| 2022-06-24T08:46:08
| 2022-06-24T09:50:45
| 2022-06-24T09:50:45
|
https://github.com/huggingface/datasets/issues/4555
| null |
lewtun
| false
|
[
"Fixed, thanks."
] |
1,283,369,453
| 4,554
|
Fix WMT dataset loading issue and docs update (Re-opened)
|
closed
| 2022-06-24T07:26:16
| 2022-07-08T15:39:20
| 2022-07-08T15:27:44
|
https://github.com/huggingface/datasets/pull/4554
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4554",
"html_url": "https://github.com/huggingface/datasets/pull/4554",
"diff_url": "https://github.com/huggingface/datasets/pull/4554.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4554.patch",
"merged_at": "2022-07-08T15:27:44"
}
|
khushmeeet
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,282,779,560
| 4,553
|
Stop dropping columns in to_tf_dataset() before we load batches
|
closed
| 2022-06-23T18:21:05
| 2022-07-04T19:00:13
| 2022-07-04T18:49:01
|
https://github.com/huggingface/datasets/pull/4553
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4553",
"html_url": "https://github.com/huggingface/datasets/pull/4553",
"diff_url": "https://github.com/huggingface/datasets/pull/4553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4553.patch",
"merged_at": "2022-07-04T18:49:01"
}
|
Rocketknight1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.",
"Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from the dataset rather than with `.remove_columns()` to make sure we don't break transforms), and tests are green so we're ready for review!",
"@lhoestq Test is in!"
] |
1,282,615,646
| 4,552
|
Tell users to upload on the hub directly
|
closed
| 2022-06-23T15:47:52
| 2022-06-26T15:49:46
| 2022-06-26T15:39:11
|
https://github.com/huggingface/datasets/pull/4552
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4552",
"html_url": "https://github.com/huggingface/datasets/pull/4552",
"diff_url": "https://github.com/huggingface/datasets/pull/4552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4552.patch",
"merged_at": "2022-06-26T15:39:11"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! I updated the two remaining files"
] |
1,282,534,807
| 4,551
|
Perform hidden file check on relative data file path
|
closed
| 2022-06-23T14:49:11
| 2022-06-30T14:49:20
| 2022-06-30T14:38:18
|
https://github.com/huggingface/datasets/pull/4551
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4551",
"html_url": "https://github.com/huggingface/datasets/pull/4551",
"diff_url": "https://github.com/huggingface/datasets/pull/4551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4551.patch",
"merged_at": "2022-06-30T14:38:18"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https://github.com/huggingface/datasets/issues/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they seem to work (will push them on Monday; btw they don't break any of fsspec's tests, so maybe we can contribute this as an enhancement to them). Also, perhaps we should include the files starting with `__` in the results again (we hadn't had issues with this pattern before). WDYT?",
"I see. Feel free to merge this one if it's good for you btw :)\r\n\r\n> Also, perhaps we should include the files starting with __ in the results again (we hadn't had issues with this pattern before)\r\n\r\nThe point was mainly to ignore `__pycache__` directories for example. Also also for consistency with the iter_files/iter_archive which are already ignoring them",
"Very elegant solution! Feel free to merge if the CI is green after adding the tests.",
"CI failure is unrelated to this PR"
] |
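The hidden-file handling discussed in this thread boils down to excluding any path with a component starting with `.` or `__` (e.g. `__pycache__`) during data-file resolution. A minimal sketch of that check in pure Python (this is our illustration of the idea, not `datasets`' or fsspec's actual pattern logic):

```python
from pathlib import PurePosixPath

def is_hidden(path):
    """True if any path component starts with '.' or '__' (e.g. '__pycache__/x.csv')."""
    return any(part.startswith((".", "__")) for part in PurePosixPath(path).parts)

files = [
    "data/train.csv",
    ".git/config",
    "__pycache__/train.csv",
    "data/.hidden/test.csv",
]
visible = [f for f in files if not is_hidden(f)]
print(visible)  # only 'data/train.csv' survives
```

Checking every component (not just the basename) is what makes a file inside a hidden directory count as hidden too, which is the case reported in issue #4549.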
1,282,374,441
| 4,550
|
imdb source error
|
closed
| 2022-06-23T13:02:52
| 2022-06-23T13:47:05
| 2022-06-23T13:47:04
|
https://github.com/huggingface/datasets/issues/4550
| null |
Muhtasham
| false
|
[
"Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaroud, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", revision=\"tmp-fix-imdb\")\r\n```\r\n"
] |
1,282,312,975
| 4,549
|
FileNotFoundError when passing a data_file inside a directory starting with double underscores
|
closed
| 2022-06-23T12:19:24
| 2022-06-30T14:38:18
| 2022-06-30T14:38:18
|
https://github.com/huggingface/datasets/issues/4549
| null |
lhoestq
| false
|
[
"I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`",
"We're working on a fix ;)"
] |
1,282,218,096
| 4,548
|
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix
|
closed
| 2022-06-23T10:58:57
| 2022-06-30T10:15:32
| 2022-06-30T10:15:32
|
https://github.com/huggingface/datasets/issues/4548
| null |
polinaeterna
| false
|
[
"I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)"
] |
1,282,160,517
| 4,547
|
[CI] Fix some warnings
|
closed
| 2022-06-23T10:10:49
| 2022-06-28T14:10:57
| 2022-06-28T13:59:54
|
https://github.com/huggingface/datasets/pull/4547
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4547",
"html_url": "https://github.com/huggingface/datasets/pull/4547",
"diff_url": "https://github.com/huggingface/datasets/pull/4547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4547.patch",
"merged_at": "2022-06-28T13:59:54"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR",
"good catch, I thought I resolved them all sorry",
"Alright it should be good now"
] |
1,282,093,288
| 4,546
|
[CI] fixing seqeval install in ci by pinning setuptools-scm
|
closed
| 2022-06-23T09:24:37
| 2022-06-23T10:24:16
| 2022-06-23T10:13:44
|
https://github.com/huggingface/datasets/pull/4546
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4546",
"html_url": "https://github.com/huggingface/datasets/pull/4546",
"diff_url": "https://github.com/huggingface/datasets/pull/4546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4546.patch",
"merged_at": "2022-06-23T10:13:44"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,280,899,028
| 4,545
|
Make DuplicateKeysError more user friendly [For Issue #2556]
|
closed
| 2022-06-22T21:01:34
| 2022-06-28T09:37:06
| 2022-06-28T09:26:04
|
https://github.com/huggingface/datasets/pull/4545
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4545",
"html_url": "https://github.com/huggingface/datasets/pull/4545",
"diff_url": "https://github.com/huggingface/datasets/pull/4545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4545.patch",
"merged_at": "2022-06-28T09:26:04"
}
|
VijayKalmath
| true
|
[
"> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,280,500,340
| 4,544
|
[CI] seqeval installation fails sometimes on python 3.6
|
closed
| 2022-06-22T16:35:23
| 2022-06-23T10:13:44
| 2022-06-23T10:13:44
|
https://github.com/huggingface/datasets/issues/4544
| null |
lhoestq
| false
|
[] |
1,280,379,781
| 4,543
|
[CI] Fix upstream hub test url
|
closed
| 2022-06-22T15:34:27
| 2022-06-22T16:37:40
| 2022-06-22T16:27:37
|
https://github.com/huggingface/datasets/pull/4543
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4543",
"html_url": "https://github.com/huggingface/datasets/pull/4543",
"diff_url": "https://github.com/huggingface/datasets/pull/4543.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4543.patch",
"merged_at": "2022-06-22T16:27:37"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Remaining CI failures are unrelated to this fix, merging"
] |
1,280,269,445
| 4,542
|
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
|
open
| 2022-06-22T14:42:00
| 2022-10-11T08:45:45
| null |
https://github.com/huggingface/datasets/issues/4542
| null |
lhoestq
| false
|
[
"This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ",
"cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!",
"Noted and I will look into the thread in detail tomorrow once I log back in. ",
"@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ",
"> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok",
"So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ",
"> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)",
"Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ",
"@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ",
"Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? Users can use tf.image to do so for example",
"@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile 
~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def raise_from_not_ok_status(e, name):\r\n 7163 e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown 
earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```",
"@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarraow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ",
"Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types",
"If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.",
"> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? Any other alternative in mind ?",
"> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ",
"> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^",
"Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).",
"Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ",
"@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?",
"> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.",
"If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?",
"@lhoestq why would one convert to TFRecords after unbatching? ",
"> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ",
"Someone would like to try to dive into tfio to fix this ? Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)",
"> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ",
"I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ",
"Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ",
"Thanks! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffle (compared to TFDS, which does buffered shuffling): it slows down query time by 2x to 10x since it has to read data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330µs/image to 30ms/image)",
"Fair enough. I can do one without shuffling too, but shuffling is an important factor to consider, I guess. "
] |
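Editor's note: the base64 round-trip discussed in the thread can be sketched with the Python standard library alone. This is a minimal illustration, not the author's notebook code; the TensorFlow calls (`tf.io.decode_base64`, `tf.image.decode_png`) are omitted, but note that `tf.io.decode_base64` expects the URL-safe base64 alphabet (`-`/`_` instead of `+`/`/`), which is why `urlsafe_b64encode` is used here.

```python
import base64

def encode_image_bytes(png_bytes: bytes) -> str:
    # Encode raw PNG bytes as a URL-safe base64 string, the variant
    # that tf.io.decode_base64 expects on the TensorFlow side.
    return base64.urlsafe_b64encode(png_bytes).decode("ascii")

def decode_image_string(encoded: str) -> bytes:
    # Reverse step: recover the original PNG bytes. In the TF pipeline
    # this would be tf.io.decode_base64 followed by tf.image.decode_png.
    return base64.urlsafe_b64decode(encoded.encode("ascii"))

# Placeholder payload standing in for a real encoded image.
png_header = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
assert decode_image_string(encode_image_bytes(png_header)) == png_header
```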
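Editor's note: the Arrow image type quoted in the thread can be built and written to Feather directly with `pyarrow`; the limitation discussed above is on the reader side (`ArrowFeatherDataset` in tfio does not support binary or struct columns), not on the Arrow side. A minimal sketch, assuming `pyarrow` is installed; the file name and image entries are hypothetical.

```python
import os
import tempfile

import pyarrow as pa
import pyarrow.feather as feather

# The Arrow type `datasets` uses for images, as quoted in the thread.
image_type = pa.struct({"path": pa.string(), "bytes": pa.binary()})

# A tiny table with two hypothetical image entries.
images = pa.array(
    [
        {"path": "img_0.png", "bytes": b"\x89PNG..."},
        {"path": "img_1.png", "bytes": b"\x89PNG..."},
    ],
    type=image_type,
)
table = pa.table({"image": images})

# Writing and reading this struct/binary column works fine in Arrow.
path = os.path.join(tempfile.mkdtemp(), "images.feather")
feather.write_feather(table, path)
roundtrip = feather.read_table(path)
assert roundtrip.column("image").type == image_type
```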
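Editor's note: the "unbatched samples" idea raised for `to_tf_dataset()` can be illustrated in plain Python, independent of TensorFlow. The batched source below is a stand-in for what a batched pipeline yields (column dicts of equal length); a sample-level consumer such as a TFRecord writer would iterate over the unbatched stream.

```python
from typing import Dict, Iterable, Iterator, List

def batched_source() -> Iterable[Dict[str, List[int]]]:
    # Stand-in for a batched pipeline: each element is a dict of
    # equal-length columns, like a batch from to_tf_dataset().
    yield {"pixel_values": [1, 2], "label": [0, 1]}
    yield {"pixel_values": [3, 4], "label": [1, 0]}

def unbatch(batches: Iterable[Dict[str, List[int]]]) -> Iterator[Dict[str, int]]:
    # Flatten batches back into per-sample dicts.
    for batch in batches:
        keys = list(batch)
        n = len(batch[keys[0]])
        for i in range(n):
            yield {k: batch[k][i] for k in keys}

samples = list(unbatch(batched_source()))
assert samples[0] == {"pixel_values": 1, "label": 0}
assert len(samples) == 4
```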
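Editor's note: the distinction drawn at the end of the thread between exact shuffling (`Dataset.shuffle` in `datasets`, which permutes indices and forces non-contiguous reads) and buffered shuffling (TFDS-style, sequential reads with only approximate shuffling) can be sketched in plain Python. The buffer size and data are illustrative.

```python
import random
from typing import Iterable, Iterator

def buffered_shuffle(iterable: Iterable[int], buffer_size: int, seed: int = 0) -> Iterator[int]:
    # TFDS-style shuffle: keep a small buffer, emit a random element
    # and refill. Reads stay sequential, so it is fast, but the output
    # is only approximately shuffled (early items come out early).
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

data = list(range(10))
approx = list(buffered_shuffle(data, buffer_size=3))
assert sorted(approx) == data  # same elements, approximate order
assert approx[0] in (0, 1, 2)  # first output must come from the first buffer fill
```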