| id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1,280,161,436 | 4,541 | Fix timestamp conversion from Pandas to Python datetime in streaming mode | closed | 2022-06-22T13:40:01 | 2022-06-22T16:39:27 | 2022-06-22T16:29:09 | https://github.com/huggingface/datasets/pull/4541 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4541", "html_url": "https://github.com/huggingface/datasets/pull/4541", "diff_url": "https://github.com/huggingface/datasets/pull/4541.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4541.patch", "merged_at": "2022-06-22T16:29:09"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"CI failures are unrelated to this PR, merging"
] |
| 1,280,142,942 | 4,540 | Avoid splitting by` .py` for the file. | closed | 2022-06-22T13:26:55 | 2022-07-07T13:17:44 | 2022-07-07T13:17:44 | https://github.com/huggingface/datasets/issues/4540 | null | espoirMur | false |
[
"Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)",
"I will have a look.. \r\n\r\nThis weekend .. ",
"@albertvillanova , Can you have a look at #4590. \r\n\r\nThanks ",
"#self-assign"
] |
| 1,279,779,829 | 4,539 | Replace deprecated logging.warn with logging.warning | closed | 2022-06-22T08:32:29 | 2022-06-22T13:43:23 | 2022-06-22T12:51:51 | https://github.com/huggingface/datasets/pull/4539 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4539", "html_url": "https://github.com/huggingface/datasets/pull/4539", "diff_url": "https://github.com/huggingface/datasets/pull/4539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4539.patch", "merged_at": "2022-06-22T12:51:51"} | hugovk | true | [] |
| 1,279,409,786 | 4,538 | Dataset Viewer issue for Pile of Law | closed | 2022-06-22T02:48:40 | 2022-06-27T07:30:23 | 2022-06-26T22:26:22 | https://github.com/huggingface/datasets/issues/4538 | null | Breakend | false |
[
"Hi @Breakend, yes – we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!",
"Awesome! Thanks for confirming. cc @severo ",
"Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n"
] |
| 1,279,144,310 | 4,537 | Fix WMT dataset loading issue and docs update | closed | 2022-06-21T21:48:02 | 2022-06-24T07:05:43 | 2022-06-24T07:05:10 | https://github.com/huggingface/datasets/pull/4537 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4537", "html_url": "https://github.com/huggingface/datasets/pull/4537", "diff_url": "https://github.com/huggingface/datasets/pull/4537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4537.patch", "merged_at": null} | khushmeeet | true |
[
"The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream git@github.com:huggingface/datasets.git\r\ngit pull --ff-only upstream master\r\ngit checkout -b wmt-datasets-fix2\r\ngit cherry-pick f2d6c995d5153131168f64fc60fe33a7813739a4 a9fdead5f435aeb88c237600be28eb8d4fde4c55\r\n```",
"Closing this PR due to unwanted commit changes. Will be opening new PR for the same issue."
] |
| 1,278,734,727 | 4,536 | Properly raise FileNotFound even if the dataset is private | closed | 2022-06-21T17:05:50 | 2022-06-28T10:46:51 | 2022-06-28T10:36:10 | https://github.com/huggingface/datasets/pull/4536 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4536", "html_url": "https://github.com/huggingface/datasets/pull/4536", "diff_url": "https://github.com/huggingface/datasets/pull/4536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4536.patch", "merged_at": "2022-06-28T10:36:10"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,278,365,039 | 4,535 | Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays` | closed | 2022-06-21T12:18:49 | 2022-06-27T16:25:09 | 2022-06-27T16:14:36 | https://github.com/huggingface/datasets/pull/4535 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4535", "html_url": "https://github.com/huggingface/datasets/pull/4535", "diff_url": "https://github.com/huggingface/datasets/pull/4535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4535.patch", "merged_at": "2022-06-27T16:14:36"} | alvarobartt | true |
[
"Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/config.py#L183\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. I also edited the following code comment which I found misleading as it's not actually storing the indices.\r\n\r\nhttps://github.com/huggingface/datasets/blob/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640/src/datasets/arrow_dataset.py#L1122",
"_The documentation is not available anymore as the PR was closed or merged._",
"> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)",
"Ok cool thanks a lot for the explanation @lhoestq I was not sure about that :+1: I'll also add it there as you suggested!",
"CI failures are unrelated to this PR and fixed on master, merging"
] |
| 1,277,897,197 | 4,534 | Add `tldr_news` dataset | closed | 2022-06-21T05:02:43 | 2022-06-23T14:33:54 | 2022-06-21T14:21:11 | https://github.com/huggingface/datasets/pull/4534 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4534", "html_url": "https://github.com/huggingface/datasets/pull/4534", "diff_url": "https://github.com/huggingface/datasets/pull/4534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4534.patch", "merged_at": null} | JulesBelveze | true |
[
"Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent 😃 ",
"Thanks, we will update the guide ;)"
] |
| 1,277,211,490 | 4,533 | Timestamp not returned as datetime objects in streaming mode | closed | 2022-06-20T17:28:47 | 2022-06-22T16:29:09 | 2022-06-22T16:29:09 | https://github.com/huggingface/datasets/issues/4533 | null | lhoestq | false | [] |
| 1,277,167,129 | 4,532 | Add Video feature | closed | 2022-06-20T16:36:41 | 2022-11-10T16:59:51 | 2022-11-10T16:59:51 | https://github.com/huggingface/datasets/pull/4532 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4532", "html_url": "https://github.com/huggingface/datasets/pull/4532", "diff_url": "https://github.com/huggingface/datasets/pull/4532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4532.patch", "merged_at": null} | nateraw | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4532). All of your documentation changes will be reflected on that endpoint.",
"@nateraw do you have any plans to continue this pr? Or should I write a custom loader script to use my video dataset in the hub?",
"@fcakyon I think we still want this feature in here, but my solution here isn't the right one, I'm afraid. Using my (very hacky) library is not the right move. Let's move to an issue to discuss the feature/workarounds for now. "
] |
| 1,277,054,172 | 4,531 | Dataset Viewer issue for CSV datasets | closed | 2022-06-20T14:56:24 | 2022-06-21T08:28:46 | 2022-06-21T08:28:27 | https://github.com/huggingface/datasets/issues/4531 | null | merveenoyan | false |
[
"this should now be fixed",
"Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d’écran 2022-06-21 à 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n"
] |
| 1,276,884,962 | 4,530 | Add AudioFolder packaged loader | closed | 2022-06-20T12:54:02 | 2022-08-22T14:36:49 | 2022-08-22T14:20:40 | https://github.com/huggingface/datasets/pull/4530 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4530", "html_url": "https://github.com/huggingface/datasets/pull/4530", "diff_url": "https://github.com/huggingface/datasets/pull/4530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4530.patch", "merged_at": "2022-08-22T14:20:40"} | polinaeterna | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq @mariosasko I don't know what to do with the test, do you have any ideas? :)",
"also it's passed in `pyarrow_latest_WIN`",
"If the error only happens on 3.6, maybe #4460 can help ^^' It seems to work in 3.7 on the windows CI\r\n\r\n> inferring labels is not the default behavior (drop_labels is set to True in config)\r\n\r\nI think it a missed opportunity to have a consistent API between imagefolder and audiofolder, since they do everything the same way. Can you give more details why you think we should drop the labels by default ?",
"Considering audio classification in audio is not as common as image classification in image, I'm ok with having different config defaults as long as they are properly documented (check [Papers With Code](https://paperswithcode.com/datasets) for stats and compare the classification numbers to the other tasks, do this for both modalities)\r\n\r\nAlso, WDYT about creating a generic folder loader that ImageFolder and AudioFolder then subclass to avoid having to update both of them when there is something to update/fix?",
"@lhoestq I think it doesn't change the API itself, it just doesn't infer labels by default, but you can **still** set `drop_labels=False` to `load_dataset` and the labels will be inferred. \r\nSuppose that one has data structured as follows:\r\n```\r\ndata/\r\n train/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n test/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n```\r\nIf users load this dataset with `load_dataset(\"audiofolder\", data_dir=\"data\")` (the most native way), they will get a `label` feature that will always be equal to 0 (= \"audio\"). To mitigate this, they will have to always specify `load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=True)` explicitly and I believe it's not convenient. \r\n\r\nAt the same time, `label` column can be added just as easy as adding one argument:` load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=False)`. As classification task is not as common, I think it should require more symbols to be added to the code :D \r\n\r\nBut this is definitely should be explained in the docs, which I've forgotten to update... I'll add this section soon.\r\n\r\nAlso +to the generic loader, will work on it. \r\n\r\n",
"If a metadata.jsonl file is present, then it doesn't have to infer the labels I agree. Note that this is already the case for imagefolder ;) in your case `load_dataset(\"audiofolder\", data_dir=\"data\")` won't return labels !\r\n\r\nLabels are only inferred if there are no metadata.jsonl",
"Feel free to merge the `main` branch into yours after updating your fork of `datasets`: https://github.com/huggingface/datasets/issues/4629\r\n\r\nThis should fix some errors in the CI",
"@mariosasko could you please review this PR again? :)\r\n\r\nmost of the tests for AutoFolder (base class for AudioFolder and ImageFolder) are now basically copied from Image/AudioFolder (their tests are also almost identical too) and adapted to test other methods. it should be refactored but i think this is not that important for now and might be done in the future PR, wdyt?",
"@mariosasko thank you for the review! I'm sorry I accidentally asked for the review again, ignore it."
] |
| 1,276,729,303 | 4,529 | Ecoset | closed | 2022-06-20T10:39:34 | 2023-10-26T09:12:32 | 2023-10-04T18:19:52 | https://github.com/huggingface/datasets/issues/4529 | null | DiGyt | false |
[
"Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it.",
"The dataset lives on the Hub [here](https://huggingface.co/datasets/kietzmannlab/ecoset), so I'm closing this issue.",
"Hey There, thanks for closing 🤗 \r\n\r\nForgot the issue existed, so I didn't close it after implementing the downloader :)"
] |
| 1,276,679,155 | 4,528 | Memory leak when iterating a Dataset | closed | 2022-06-20T10:03:14 | 2022-09-12T08:51:39 | 2022-09-12T08:51:39 | https://github.com/huggingface/datasets/issues/4528 | null | NouamaneTazi | false |
[
"Is someone assigned to this issue?",
"The same issue is being debugged here: https://github.com/huggingface/datasets/issues/4883\r\n",
"Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimport os, psutil\r\n\r\nprocess = psutil.Process(os.getpid())\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\ncorpus = load_dataset(\"BeIR/msmarco\", 'corpus', keep_in_memory=False, streaming=False)['corpus']\r\ncorpus = corpus.select(range(200000))\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\nbatch = None\r\n\r\nmem_before_start = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n batch = corpus[i:i+step]\r\n import objgraph\r\n #objgraph.show_refs([batch])\r\n #objgraph.show_refs([corpus])\r\n #sys.exit()\r\n gc.collect()\r\n\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n print(f\"{i:6d} {mem_after - mem_before:12.4f} {mem_after - mem_before_start:12.4f}\")\r\n\r\n```\r\n\r\nLet's run:\r\n\r\n```\r\n$ python ds2.py\r\n 0 36.5391 36.5391\r\n 20000 10.4609 47.0000\r\n 40000 5.9766 52.9766\r\n 60000 7.8906 60.8672\r\n 80000 6.0586 66.9258\r\n100000 8.4453 75.3711\r\n120000 6.7422 82.1133\r\n140000 8.5664 90.6797\r\n160000 5.7344 96.4141\r\n180000 8.3398 104.7539\r\n```\r\n\r\nYou can see the last column of total RSS memory keeps on growing in MBs. The mid column is by how much it was grown during a single iteration of the repro script (20000 items)",
"@NouamaneTazi, please check my analysis here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242599722 so if you agree with my research this Issue can be closed as well.\r\n\r\nI also made a suggestion at how to proceed to hunt for a real leak here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242600626\r\n\r\nyou may find this one to be useful as well https://github.com/huggingface/datasets/issues/4883#issuecomment-1242597966",
"Amazing job! Thanks for taking time to debug this 🤗\r\n\r\nFor my side, I tried to do some more research as well, but to no avail. https://github.com/huggingface/datasets/issues/4883#issuecomment-1243415957"
] |
| 1,276,583,536 | 4,527 | Dataset Viewer issue for vadis/sv-ident | closed | 2022-06-20T08:47:42 | 2022-06-21T16:42:46 | 2022-06-21T16:42:45 | https://github.com/huggingface/datasets/issues/4527 | null | albertvillanova | false |
[
"Fixed, thanks!\r\n![Uploading Capture d’écran 2022-06-21 à 18.42.40.png…]()\r\n\r\n"
] |
| 1,276,580,185 | 4,526 | split cache used when processing different split | open | 2022-06-20T08:44:58 | 2022-06-28T14:04:58 | null | https://github.com/huggingface/datasets/issues/4526 | null | gpucce | false |
[
"I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)",
"Hi, I think the issue happened because I was loading datasets under an `if` ... `else` statement and the condition would change the dataset I would need to load but instead the cached one was always returned. However, I believe that is expected behaviour, if so I'll close the issue.\r\n\r\nOtherwise I will try to provide a MWE"
] |
| 1,276,491,386 | 4,525 | Out of memory error on workers while running Beam+Dataflow | closed | 2022-06-20T07:28:12 | 2024-10-09T16:09:50 | 2024-10-09T16:09:50 | https://github.com/huggingface/datasets/issues/4525 | null | albertvillanova | false |
[
"Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?",
"@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.",
"Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink/Nemo/Samza/Spark, Google Dataflow...? Thank you.",
"I asked my colleague who ran the code and he said apache beam.",
"@albertvillanova Since we have already processed the NQ dataset on our machines can we upload it to datasets so the NQ PR can be merged?",
"Maybe @lhoestq can give a more accurate answer as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway I propose to continue this discussion on the dedicated PR for Natural questions dataset:\r\n- #4368",
"> I asked my colleague who ran the code and he said apache beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ",
"OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out of memory error: this only appears when the processing is distributed (on workers memory) and DirectRunner does not distribute the processing (all is done in a single machine). ",
"@albertvillanova Doesn't DirectRunner offer distributed processing through?\r\n\r\nhttps://beam.apache.org/documentation/runners/direct/\r\n\r\n```\r\nSetting parallelism\r\n\r\nNumber of threads or subprocesses is defined by setting the direct_num_workers pipeline option. From 2.22.0, direct_num_workers = 0 is supported. When direct_num_workers is set to 0, it will set the number of threads/subprocess to the number of cores of the machine where the pipeline is running.\r\n\r\nSetting running mode\r\n\r\nIn Beam 2.19.0 and newer, you can use the direct_running_mode pipeline option to set the running mode. direct_running_mode can be one of ['in_memory', 'multi_threading', 'multi_processing'].\r\n\r\nin_memory: Runner and workers’ communication happens in memory (not through gRPC). This is a default mode.\r\n\r\nmulti_threading: Runner and workers communicate through gRPC and each worker runs in a thread.\r\n\r\nmulti_processing: Runner and workers communicate through gRPC and each worker runs in a subprocess.\r\n```",
"Unrelated to the OOM issue, but we deprecated datasets with Beam scripts in #6474. I think we can close this issue"
] |
| 1,275,909,186 | 4,524 | Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException) | open | 2022-06-18T23:36:45 | 2022-06-21T00:38:20 | null | https://github.com/huggingface/datasets/issues/4524 | null | ddegenaro | false |
[
"Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n\r\nIn the case of the issue with Beam+Dataflow, the logs from the workers report an out of memory issue.",
"As I continued working on this today, I came to suspect that it is in fact an out of memory issue - I have a few more notebooks that I've left running, and if they produce the same error, I will try to get the logs. In the meantime, if there's any chance that there is a repo out there with those three languages already as .arrow files, or if you know about how much memory would be needed to actually download those sets, please let me know!"
] |
| 1,275,002,639 | 4,523 | Update download url and improve card of `cats_vs_dogs` dataset | closed | 2022-06-17T12:59:44 | 2022-06-21T14:23:26 | 2022-06-21T14:13:08 | https://github.com/huggingface/datasets/pull/4523 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4523", "html_url": "https://github.com/huggingface/datasets/pull/4523", "diff_url": "https://github.com/huggingface/datasets/pull/4523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4523.patch", "merged_at": "2022-06-21T14:13:08"} | mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,274,929,328 | 4,522 | Try to reduce the number of datasets that require manual download | open | 2022-06-17T11:42:03 | 2022-06-17T11:52:48 | null | https://github.com/huggingface/datasets/issues/4522 | null | severo | false | [] |
| 1,274,919,437 | 4,521 | Datasets method `.map` not hashing | closed | 2022-06-17T11:31:10 | 2022-08-04T12:08:16 | 2022-06-28T13:23:05 | https://github.com/huggingface/datasets/issues/4521 | null | sanchit-gandhi | false |
[
"Fix posted: https://github.com/huggingface/datasets/issues/4506#issuecomment-1157417219",
"Didn't realize it's a bug when I asked the question yesterday! Free free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps://stackoverflow.com/questions/72664827/can-pickle-dill-foo-but-not-lambda-x-foox",
"Thank @nalzok . That works for me:\r\n\r\n`pip install \"dill<0.3.5\"`"
] |
| 1,274,879,180 | 4,520 | Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map` | closed | 2022-06-17T10:47:17 | 2022-06-28T14:47:17 | 2022-06-28T14:04:29 | https://github.com/huggingface/datasets/issues/4520 | null | sanchit-gandhi | false |
[
"I think this has been fixed by #4516, let me know if you encounter this again :)\r\n\r\nI re-ran your code in 3.7 and 3.9 and it works fine",
"Thank you!"
] |
| 1,274,110,623 | 4,519 | Create new sections for audio and vision in guides | closed | 2022-06-16T21:38:24 | 2022-07-07T15:36:37 | 2022-07-07T15:24:58 | https://github.com/huggingface/datasets/pull/4519 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4519", "html_url": "https://github.com/huggingface/datasets/pull/4519", "diff_url": "https://github.com/huggingface/datasets/pull/4519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4519.patch", "merged_at": "2022-07-07T15:24:58"} | stevhliu | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ready for review!\r\n\r\nThe `toctree` is a bit longer now with the sections. I think if we keep the audio/vision/text/dataset repository sections collapsed by default, and keep the general usage expanded, it may look a little cleaner and not as overwhelming. Let me know what you think! 😄 "
] |
| 1,274,010,628 | 4,518 | Patch tests for hfh v0.8.0 | closed | 2022-06-16T19:45:32 | 2022-06-17T16:15:57 | 2022-06-17T16:06:07 | https://github.com/huggingface/datasets/pull/4518 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4518", "html_url": "https://github.com/huggingface/datasets/pull/4518", "diff_url": "https://github.com/huggingface/datasets/pull/4518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4518.patch", "merged_at": "2022-06-17T16:06:07"} | LysandreJik | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,273,960,476 | 4,517 | Add tags for task_ids:summarization-* and task_categories:summarization* | closed | 2022-06-16T18:52:25 | 2022-07-08T15:14:23 | 2022-07-08T15:02:31 | https://github.com/huggingface/datasets/pull/4517 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4517", "html_url": "https://github.com/huggingface/datasets/pull/4517", "diff_url": "https://github.com/huggingface/datasets/pull/4517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4517.patch", "merged_at": "2022-07-08T15:02:31"} | hobson | true |
[
"Associated community discussion is [here](https://huggingface.co/datasets/aeslc/discussions/1).\r\nPaper referenced in the `dataset_infos.json` is [here](https://arxiv.org/pdf/1906.03497.pdf). It mentions the _email-subject-generation_ task, which is not a tag mentioned in any other dataset so it was not added in this pull request. The _summarization_ task is mentioned as a related task.",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,273,825,640 | 4,516 | Fix hashing for python 3.9 | closed | 2022-06-16T16:42:31 | 2022-06-28T13:33:46 | 2022-06-28T13:23:06 | https://github.com/huggingface/datasets/pull/4516 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4516", "html_url": "https://github.com/huggingface/datasets/pull/4516", "diff_url": "https://github.com/huggingface/datasets/pull/4516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4516.patch", "merged_at": "2022-06-28T13:23:05"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"What do you think @albertvillanova ?"
] |
| 1,273,626,131 | 4,515 | Add uppercased versions of image file extensions for automatic module inference | closed | 2022-06-16T14:14:49 | 2022-06-16T17:21:53 | 2022-06-16T17:11:41 | https://github.com/huggingface/datasets/pull/4515 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4515", "html_url": "https://github.com/huggingface/datasets/pull/4515", "diff_url": "https://github.com/huggingface/datasets/pull/4515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4515.patch", "merged_at": "2022-06-16T17:11:40"} | mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,273,505,230 | 4,514 | Allow .JPEG as a file extension | closed | 2022-06-16T12:36:20 | 2022-06-20T08:18:46 | 2022-06-16T17:11:40 | https://github.com/huggingface/datasets/issues/4514 | null | DiGyt | false |
[
"Hi, thanks for reporting! I've opened a PR with the fix.",
"Wow, that was quick! Thank you very much 🙏 "
] |
| 1,273,450,338 | 4,513 | Update Google Cloud Storage documentation and add Azure Blob Storage example | closed | 2022-06-16T11:46:09 | 2022-06-23T17:05:11 | 2022-06-23T16:54:59 | https://github.com/huggingface/datasets/pull/4513 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4513", "html_url": "https://github.com/huggingface/datasets/pull/4513", "diff_url": "https://github.com/huggingface/datasets/pull/4513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4513.patch", "merged_at": "2022-06-23T16:54:59"} | alvarobartt | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should remove the `>>>` in the Python blocks before the in-line code comments or keep them.\r\n\r\n\r\n",
"Comments are ignored by doctest, so I think we can remove the `>>>` :)",
"Cool I'll remove those now 👍🏻",
"Sure @lhoestq, I just kept that structure as that was the more similar one to the one that was already there, but we can go with that approach, just let me know whether I should change the headers so as to leave all those providers in the same level (`h2`). Thanks!"
] |
| 1,273,378,129 | 4,512 | Add links to vision tasks scripts in ADD_NEW_DATASET template | closed | 2022-06-16T10:35:35 | 2022-07-08T14:07:50 | 2022-07-08T13:56:23 | https://github.com/huggingface/datasets/pull/4512 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4512", "html_url": "https://github.com/huggingface/datasets/pull/4512", "diff_url": "https://github.com/huggingface/datasets/pull/4512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4512.patch", "merged_at": "2022-07-08T13:56:23"} | mariosasko | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to the PR's changes. Merging."
] |
| 1,273,336,874 | 4,511 | Support all negative values in ClassLabel | closed | 2022-06-16T09:59:39 | 2025-07-23T18:38:15 | 2022-06-16T13:54:07 | https://github.com/huggingface/datasets/pull/4511 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4511", "html_url": "https://github.com/huggingface/datasets/pull/4511", "diff_url": "https://github.com/huggingface/datasets/pull/4511.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4511.patch", "merged_at": "2022-06-16T13:54:07"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for this fix! I'm not sure what the release timeline is, but FYI #4508 is a breaking issue for transformer token classification using Trainer and PyTorch. PyTorch defaults to -100 as the ignored label for [negative log loss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html?highlight=nllloss#torch.nn.NLLLoss), so switching labels to -1 leads to index errors using Trainer defaults.\r\n\r\nAs a workaround, I'm using master branch directly (`pip install git+https://github.com/huggingface/datasets.git@master` for anyone who needs to do the same) until this gets released.",
"The new release `2.4` fixes the issue, feel free to update `datasets` :) \r\n```\r\npip install -U datasets\r\n```",
"@lhoestq I hope it's OK to ping you here. I've noticed that `encode_example` does only work with -1. I already created #7645 to fix the documentation, but then I stumbled across your original changes to the docs text in this PR.\r\n\r\nI am talking about this part in `ClassLabel -> encode_example`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129"
] |
| 1,273,260,396 | 4,510 | Add regression test for `ArrowWriter.write_batch` when batch is empty | closed | 2022-06-16T08:53:51 | 2022-06-16T12:38:02 | 2022-06-16T12:28:19 | https://github.com/huggingface/datasets/pull/4510 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4510", "html_url": "https://github.com/huggingface/datasets/pull/4510", "diff_url": "https://github.com/huggingface/datasets/pull/4510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4510.patch", "merged_at": "2022-06-16T12:28:19"} | alvarobartt | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value."
] |
| 1,273,227,760 | 4,509 | Support skipping Parquet to Arrow conversion when using Beam | closed | 2022-06-16T08:25:38 | 2022-11-07T16:22:41 | 2022-11-07T16:22:41 | https://github.com/huggingface/datasets/pull/4509 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4509", "html_url": "https://github.com/huggingface/datasets/pull/4509", "diff_url": "https://github.com/huggingface/datasets/pull/4509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4509.patch", "merged_at": null} | albertvillanova | true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4509). All of your documentation changes will be reflected on that endpoint.",
"When #4724 is merged, we can just pass `file_format=\"parquet\"` to `download_and_prepare` and it will output parquet fiels without converting to arrow",
"I think we can close this one"
] |
| 1,272,718,921 | 4,508 | cast_storage method from datasets.features | closed | 2022-06-15T20:47:22 | 2022-06-16T13:54:07 | 2022-06-16T13:54:07 | https://github.com/huggingface/datasets/issues/4508 | null | romainremyb | false |
[
"Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by calling `dataset = dataset.cast_column(\"labels\", Sequence(Value(\"int\")))` beforehand. The token-classification examples in Transformers introduce a new `labels` column, so their type is also `Sequence(Value(\"int\"))`, which doesn't lead to an error as this type unbounded. ",
"I'm fine with re-adding support for all negative values for unknown/missing labels @mariosasko, wdyt ?"
] |
| 1,272,615,932 | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | closed | 2022-06-15T18:56:34 | 2022-06-16T10:40:08 | 2022-06-16T10:40:08 | https://github.com/huggingface/datasets/issues/4507 | null | liyucheng09 | false |
[
"Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.",
"@albertvillanova Thanks! I can't believe I didn't know this feature till now."
] |
| 1,272,516,895 | 4,506 | Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results | closed | 2022-06-15T17:11:31 | 2023-02-16T03:14:32 | 2022-06-28T13:23:05 | https://github.com/huggingface/datasets/issues/4506 | null | DrMatters | false |
[
"Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`",
"@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake",
"Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```",
"installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment",
"This has been fixed in https://github.com/huggingface/datasets/pull/4516, we will do a new release soon to include the fix :)"
] |
| 1,272,477,226 | 4,505 | Fix double dots in data files | closed | 2022-06-15T16:31:04 | 2022-06-15T17:15:58 | 2022-06-15T17:05:53 | https://github.com/huggingface/datasets/pull/4505 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4505", "html_url": "https://github.com/huggingface/datasets/pull/4505", "diff_url": "https://github.com/huggingface/datasets/pull/4505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4505.patch", "merged_at": "2022-06-15T17:05:53"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)"
] |
| 1,272,418,480 | 4,504 | Can you please add the Stanford dog dataset? | closed | 2022-06-15T15:39:35 | 2024-12-09T15:44:11 | 2023-10-18T18:55:30 | https://github.com/huggingface/datasets/issues/4504 | null | dgrnd4 | false |
[
"would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)",
"@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n",
"Hi! The [ADD NEW DATASET](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) instructions are indeed the best place to start. It's also perfectly fine to add a dataset if it's public, even if it's not yours. Let me know if you need some additional pointers.",
"If no one is working on this, I could take this up!",
"@khushmeeet this is the [link](https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset) where I added the dataset already. If you can I would ask you to do this:\r\n1) The dataset it's all in TRAINING SET: can you please divide it in Training,Test and Validation Set? If you can for each class, take the 80% for the Training set and the 10% for Test and 10% Validation\r\n2) The images has different size, can you please resize all the images in 224,224,3? Look even at the last dimension \"3\" because some images has dimension 4!\r\n\r\nThank you!!",
"Hi @khushmeeet! Thanks for the interest. You can self-assign the issue by commenting `#self-assign` on it. \r\n\r\nAlso, I think we can skip @dgrnd4's steps as we try to avoid any custom processing on top of raw data. One can later copy the script and override `_post_process` in it to perform such processing on the generated dataset.",
"Thanks @mariosasko \r\n\r\n@dgrnd4 As dataset is there on Hub, and preprocessing is not recommended. I am not sure if there is any other task to do. However, I can't seem to find relevant `.py` files for this dataset in GitHub repo.",
"@khushmeeet @mariosasko The point is that the images must be processed and must have the same size in order to can be used for things for example \"Training\". ",
"@dgrnd4 Yes, but this can be done after loading (`map` to resize images and `train_test_split` to create extra splits)\r\n\r\n@khushmeeet The linked version is implemented as a no-code dataset and is generated directly from the ZIP archive, but our \"GitHub\" datasets (these are datasets without a user/org namespace on the Hub) need a generation script, and you can find one [here](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/image_classification/stanford_dogs.py). `datasets` started as a fork of TFDS, so we share similar script structure, which makes it trivial to adapt it.",
"@mariosasko The point is that if I use something like this:\r\nx_train, x_test = train_test_split(dataset, test_size=0.1) \r\n\r\nto get Train 90% and Test 10%, and then to get the Validation Set (10% of the whole 100%):\r\n\r\n```\r\ntrain_ratio = 0.80\r\nvalidation_ratio = 0.10\r\ntest_ratio = 0.10\r\n\r\nx_train, x_test, y_train, y_test = train_test_split(dataX, dataY, test_size=1 - train_ratio)\r\nx_val, x_test, y_val, y_test = train_test_split(x_test, y_test, test_size=test_ratio/(test_ratio + validation_ratio)) \r\n\r\n```\r\n\r\nThe point is that the structure of the data is:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 20580\r\n })\r\n})\r\n\r\n```\r\n\r\nSo how to extract images and labels?\r\n\r\nEDIT --> Split of the dataset in Train-Test-Validation:\r\n```\r\nimport datasets\r\nfrom datasets.dataset_dict import DatasetDict\r\nfrom datasets import Dataset\r\n\r\npercentage_divison_test = int(len(dataset['train'])/100 *10) # 10% --> 2058 \r\npercentage_divison_validation = int(len(dataset['train'])/100 *20) # 20% --> 4116\r\n\r\ndataset_ = datasets.DatasetDict({\"train\": Dataset.from_dict({\r\n\r\n 'image': dataset['train'][0 : len(dataset['train']) ]['image'], \r\n 'labels': dataset['train'][0 : len(dataset['train']) ]['label'] }), \r\n \r\n \"test\": Dataset.from_dict({ #20580-4116 (validation) ,20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_validation : len(dataset['train']) - percentage_divison_test]['label'] }), \r\n \r\n \"validation\": Dataset.from_dict({ # 20580-2058 (test)\r\n 'image': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['image'], \r\n 'labels': dataset['train'][len(dataset['train']) - percentage_divison_test : len(dataset['train'])]['label'] }), \r\n })\r\n```",
"@mariosasko in order to resize images I'm trying this method: \r\n```\r\nfor i in range(0,len(dataset['train'])): #len(dataset['train'])\r\n\r\n ex = dataset['train'][i] #i\r\n image = ex['image']\r\n image = image.convert(\"RGB\") # <class 'PIL.Image.Image'> <PIL.Image.Image image mode=RGB size=500x333 at 0x7F84F1948150>\r\n image_resized = image.resize(size_to_resize) # <PIL.Image.Image image mode=RGB size=224x224 at 0x7F84F17885D0>\r\n\r\n dataset['train'][i]['image'] = image_resized \r\n```\r\n\r\nBecause the DatasetDict is formed by arrows that are immutable, the changing assignment in the last line of code, doesn't work!\r\nDo you have any idea in order to get a valid result?",
"#self-assign",
"I have raised PR for adding stanford-dog dataset. I have not added any data preprocessing code. Only dataset generation script is there. Let me know any changes required, or anything to add to README.",
"Is this issue still open, i am new to open source thus want to take this one as my start.",
"@zutarich This issue should have been closed since the dataset in question is available on the Hub [here](https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset).",
"I didn't know about this issue until now but i added my version of the dataset on the hub **with the bboxes** :\r\nhttps://huggingface.co/datasets/Alanox/stanford-dogs\r\n\r\nAlthough I could have made it cleaner and built the splits from the .txt files + put into the coco format.\r\nThere is a [stanford-dogs.py](https://huggingface.co/datasets/Alanox/stanford-dogs/blob/main/stanford-dogs.py) file if you want to help adding these missing metadatas.\r\nHope this helps"
] |
| 1,272,367,055 | 4,503 | Refactor and add metadata to fever dataset | closed | 2022-06-15T14:59:47 | 2022-07-06T11:54:15 | 2022-07-06T11:41:30 | https://github.com/huggingface/datasets/pull/4503 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4503", "html_url": "https://github.com/huggingface/datasets/pull/4503", "diff_url": "https://github.com/huggingface/datasets/pull/4503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4503.patch", "merged_at": "2022-07-06T11:41:30"} | albertvillanova | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"But this is somehow fever v3 dataset (see this link https://fever.ai/ under the dropdown menu called Datasets). Our fever dataset already contains v1 and v2 configs. Then, I added this as if v3 config (but named feverous instead of v3 to align with the original naming by data owners).",
"In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever/feverous\".",
"> In any case, if you really think this should be a new dataset, then I would propose to create it on the Hub instead, as \"fever/feverous\".\r\n\r\nYea makes sense ! thanks :) let's push more datasets on the hub rather than on github from now on",
"I have added \"feverous\" dataset to the Hub: https://huggingface.co/datasets/fever/feverous\r\n\r\nI change the name of this PR accordingly, as now it only:\r\n- Refactors code and include for both Fever v1.0 and v2.0 specific:\r\n - Descriptions\r\n - Citations\r\n - Homepages\r\n- Updates documentation card aligned with above:\r\n - It was missing v2.0 description and citation.\r\n- Update metadata JSON"
] |
| 1,272,353,700 | 4,502 | Logic bug in arrow_writer? | closed | 2022-06-15T14:50:00 | 2022-06-18T15:15:51 | 2022-06-18T15:15:51 | https://github.com/huggingface/datasets/issues/4502 | null | changjonathanc | false |
[
"Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.",
"Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.",
"> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.",
"Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.",
"Hi ! We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```",
"Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.",
"> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`",
"Great thanks for the response! So I'll just add that regression test and remove the current if-statement.",
"Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages/datasets/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```",
"> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema"
] |
| 1,272,300,646 | 4,501 | Corrected broken links in doc | closed | 2022-06-15T14:12:17 | 2022-06-15T15:11:05 | 2022-06-15T15:00:56 | https://github.com/huggingface/datasets/pull/4501 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4501", "html_url": "https://github.com/huggingface/datasets/pull/4501", "diff_url": "https://github.com/huggingface/datasets/pull/4501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4501.patch", "merged_at": "2022-06-15T15:00:56"} | clefourrier | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
| 1,272,281,992 | 4,500 | Add `concatenate_datasets` for iterable datasets | closed | 2022-06-15T13:58:50 | 2022-06-28T21:25:39 | 2022-06-28T21:15:04 | https://github.com/huggingface/datasets/pull/4500 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4500", "html_url": "https://github.com/huggingface/datasets/pull/4500", "diff_url": "https://github.com/huggingface/datasets/pull/4500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4500.patch", "merged_at": "2022-06-28T21:15:04"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! I addressed your comments :)\r\n\r\n> There is a slight difference in concatenate_datasets between the version for map-style datasets and the one for iterable datasets\r\n\r\nIndeed, here is what I did to fix this:\r\n\r\n- axis 0: fill missing columns with None.\r\n(I first iterate over the input datasets to infer their columns from the first examples, then I set the features of the resulting dataset to be the merged features)\r\nThis is consistent with non-streaming concatenation\r\n\r\n- axis 1: **fill the missing rows with None**, for consistency with axis 0\r\n(but let me know what you think, I can still revert this behavior and raise an error when one of the dataset runs out of examples)\r\nWe might have to align the non-streaming concatenation with this behavior though, for consistency. What do you think ?",
"Added more comments as suggested, and some typing\r\n\r\nWhile factorizing _apply_features_types for both IterableDataset and TypedExamplesIterable, I fixed a missing `token_per_repo_id` that was not passed to TypedExamplesIteable\r\n\r\nLet me know what you think now @mariosasko "
] |
| 1,272,118,162 | 4,499 | fix ETT m1/m2 test/val dataset | closed | 2022-06-15T11:51:02 | 2022-06-15T14:55:56 | 2022-06-15T14:45:13 | https://github.com/huggingface/datasets/pull/4499 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4499", "html_url": "https://github.com/huggingface/datasets/pull/4499", "diff_url": "https://github.com/huggingface/datasets/pull/4499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4499.patch", "merged_at": "2022-06-15T14:45:12"} | kashif | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits",
"ah yes!"
] |
| 1,272,100,549 | 4,498 | WER and CER > 1 | closed | 2022-06-15T11:35:12 | 2022-06-15T16:38:05 | 2022-06-15T16:38:05 | https://github.com/huggingface/datasets/issues/4498 | null | sadrasabouri | false |
[
"WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0"
] |
| 1,271,964,338 | 4,497 | Re-add download_manager module in utils | closed | 2022-06-15T09:44:33 | 2022-06-15T10:33:28 | 2022-06-15T10:23:44 | https://github.com/huggingface/datasets/pull/4497 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4497", "html_url": "https://github.com/huggingface/datasets/pull/4497", "diff_url": "https://github.com/huggingface/datasets/pull/4497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4497.patch", "merged_at": "2022-06-15T10:23:44"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.",
"It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```",
"As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it",
"Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later"
] |
| 1,271,945,704 | 4,496 | Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity | closed | 2022-06-15T09:29:16 | 2022-07-07T17:06:51 | 2022-07-07T16:55:48 | https://github.com/huggingface/datasets/pull/4496 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4496", "html_url": "https://github.com/huggingface/datasets/pull/4496", "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "merged_at": "2022-07-07T16:55:48"} | alvarobartt | true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!"
] |
| 1,271,851,025 | 4,495 | Fix patching module that doesn't exist | closed | 2022-06-15T08:17:50 | 2022-06-15T16:40:49 | 2022-06-15T08:54:09 | https://github.com/huggingface/datasets/pull/4495 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/4495", "html_url": "https://github.com/huggingface/datasets/pull/4495", "diff_url": "https://github.com/huggingface/datasets/pull/4495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4495.patch", "merged_at": "2022-06-15T08:54:09"} | lhoestq | true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,271,850,599
| 4,494
|
Patching fails for modules that are not installed or don't exist
|
closed
| 2022-06-15T08:17:29
| 2022-06-15T08:54:09
| 2022-06-15T08:54:09
|
https://github.com/huggingface/datasets/issues/4494
| null |
lhoestq
| false
|
[] |
1,271,306,385
| 4,493
|
Add `@transmit_format` in `flatten`
|
closed
| 2022-06-14T20:09:09
| 2022-09-27T11:37:25
| 2022-09-27T10:48:54
|
https://github.com/huggingface/datasets/pull/4493
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4493",
"html_url": "https://github.com/huggingface/datasets/pull/4493",
"diff_url": "https://github.com/huggingface/datasets/pull/4493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4493.patch",
"merged_at": null
}
|
alvarobartt
| true
|
[
"@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! 🤗 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this! Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this.",
"Oops, I thought this PR was already merged and deleted from the source repository, I'll be creating a new branch out of `main` so as to re-create this PR... My bad :weary:"
] |
1,271,112,497
| 4,492
|
Pin the revision in imagenet download links
|
closed
| 2022-06-14T17:15:17
| 2022-06-14T17:35:13
| 2022-06-14T17:25:45
|
https://github.com/huggingface/datasets/pull/4492
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4492",
"html_url": "https://github.com/huggingface/datasets/pull/4492",
"diff_url": "https://github.com/huggingface/datasets/pull/4492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4492.patch",
"merged_at": "2022-06-14T17:25:45"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,270,803,822
| 4,491
|
Dataset Viewer issue for Pavithree/test
|
closed
| 2022-06-14T13:23:10
| 2022-06-14T14:37:21
| 2022-06-14T14:34:33
|
https://github.com/huggingface/datasets/issues/4491
| null |
Pavithree
| false
|
[
"This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset."
] |
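One way to avoid the schema being inferred as null from the first records, as a minimal sketch (the feature names and file name are hypothetical):

```python
from datasets import Features, Value, load_dataset

# Declaring the schema up front stops pyarrow from inferring a null type
# when the fields of the first record happen to be empty.
features = Features({"id": Value("string"), "text": Value("string")})
dataset = load_dataset("json", data_files={"train": "train.json"}, features=features)
```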
1,270,719,074
| 4,490
|
Use `torch.nested_tensor` for arrays of varying length in torch formatter
|
open
| 2022-06-14T12:19:40
| 2023-07-07T13:02:58
| null |
https://github.com/huggingface/datasets/issues/4490
| null |
mariosasko
| false
|
[
"What's the current behavior?",
"Currently, we return a list of Torch tensors if their shapes don't match. If they do, we consolidate them into a single Torch tensor."
] |
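What the proposal would look like from the user side, as a minimal sketch (nested tensors were a PyTorch prototype feature named `torch.nested_tensor` at the time; newer releases expose the constructor as `torch.nested.nested_tensor`):

```python
import torch

# Two sequences of different lengths, which today come back as a plain Python list
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5])

# A nested tensor packs them into a single ragged tensor object instead
nt = torch.nested.nested_tensor([a, b])
print(nt.is_nested)  # True
```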
1,270,706,195
| 4,489
|
Add SV-Ident dataset
|
closed
| 2022-06-14T12:09:00
| 2022-06-20T08:48:26
| 2022-06-20T08:37:27
|
https://github.com/huggingface/datasets/pull/4489
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4489",
"html_url": "https://github.com/huggingface/datasets/pull/4489",
"diff_url": "https://github.com/huggingface/datasets/pull/4489.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4489.patch",
"merged_at": null
}
|
e-tornike
| true
|
[
"Hi @e-tornike, thanks a lot for adding this interesting dataset.\r\n\r\nRecently at Hugging Face, we have decided to give priority to adding datasets directly on the Hub. Would you mind to transfer your loading script to the Hub? You could create a dedicated org namespace, so that your dataset would be accessible using `vadis/sv_ident` or `sdproc/sv_ident` or `coling/sv_ident` (as you prefer).\r\n\r\nYou have an example here: https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus",
"Additionally, please feel free to ping us if you need assistance/help in creating this dataset.\r\n\r\nYou could put the link to your Hub dataset here in this Issue discussion page, so that we can follow the progress. :)",
"Hi @albertvillanova, thanks for the feedback! Uploading via the Hub is a lot easier! \r\n\r\nI've uploaded the dataset here: https://huggingface.co/datasets/vadis/sv-ident, but it's reporting a \"Status400Error\". Is there any way to see the logs of the dataset script and what is causing the error?",
"Hi @e-tornike, good job at https://huggingface.co/datasets/vadis/sv-ident.\r\n\r\nNormally, you can run locally the loading of the dataset by passing `streaming=True` (as the previewer does):\r\n```python\r\nds = load_dataset(\"path/to/sv_ident.py, split=\"train\", streaming=True)\r\nitem = next(iter(ds))\r\nitem\r\n```\r\n\r\nLet me have a look and open a discussion on your Hub repo! ;)",
"I've opened an Issue: \r\n- #4527 "
] |
1,270,613,857
| 4,488
|
Update PASS dataset version
|
closed
| 2022-06-14T10:47:14
| 2022-06-14T16:41:55
| 2022-06-14T16:32:28
|
https://github.com/huggingface/datasets/pull/4488
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4488",
"html_url": "https://github.com/huggingface/datasets/pull/4488",
"diff_url": "https://github.com/huggingface/datasets/pull/4488.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4488.patch",
"merged_at": "2022-06-14T16:32:28"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,270,525,163
| 4,487
|
Support streaming UDHR dataset
|
closed
| 2022-06-14T09:33:33
| 2022-06-15T05:09:22
| 2022-06-15T04:59:49
|
https://github.com/huggingface/datasets/pull/4487
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4487",
"html_url": "https://github.com/huggingface/datasets/pull/4487",
"diff_url": "https://github.com/huggingface/datasets/pull/4487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4487.patch",
"merged_at": "2022-06-15T04:59:49"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,269,518,084
| 4,486
|
Add CCAgT dataset
|
closed
| 2022-06-13T14:20:19
| 2022-07-04T14:37:03
| 2022-07-04T14:25:45
|
https://github.com/huggingface/datasets/pull/4486
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4486",
"html_url": "https://github.com/huggingface/datasets/pull/4486",
"diff_url": "https://github.com/huggingface/datasets/pull/4486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4486.patch",
"merged_at": null
}
|
johnnv1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Excellent job @johnnv1! There were typos/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?",
"I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?",
"Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation",
"I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)",
"Ok, I can do that in the next few days. I will create a `lapix` organization, and I will add this dataset and also #4565",
"So, I think we can close this PR? I have already moved these files there.\r\n\r\nThe link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT\r\n\r\n🤗 ",
"Woohoo awesome !\r\n\r\nclosing this PR :)"
] |
1,269,463,054
| 4,485
|
Fix cast to null
|
closed
| 2022-06-13T13:44:32
| 2022-06-14T13:43:54
| 2022-06-14T13:34:14
|
https://github.com/huggingface/datasets/pull/4485
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4485",
"html_url": "https://github.com/huggingface/datasets/pull/4485",
"diff_url": "https://github.com/huggingface/datasets/pull/4485.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4485.patch",
"merged_at": "2022-06-14T13:34:14"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,269,383,811
| 4,484
|
Better ImportError message when a dataset script dependency is missing
|
closed
| 2022-06-13T12:44:37
| 2022-07-08T14:30:44
| 2022-06-13T13:50:47
|
https://github.com/huggingface/datasets/pull/4484
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4484",
"html_url": "https://github.com/huggingface/datasets/pull/4484",
"diff_url": "https://github.com/huggingface/datasets/pull/4484.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4484.patch",
"merged_at": "2022-06-13T13:50:47"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Discussed offline with @mariosasko, merging :)",
"Fwiw, i think this same issue is occurring on the datasets website page, where preview isn't available due to the `bigbench` import error",
"For the preview of BigBench datasets, we're just waiting for bigbench to have a stable version on PyPI, instead of the one hosted on GCS ;)"
] |
1,269,253,840
| 4,483
|
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
|
closed
| 2022-06-13T10:47:52
| 2022-06-14T13:34:14
| 2022-06-14T13:34:14
|
https://github.com/huggingface/datasets/issues/4483
| null |
sanderland
| false
|
[
"Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. We'll do a new release soon\r\n\r\nIn the meantime you can fix it by specifying in advance that the \"label\" are integers:\r\n```python\r\nimport numpy as np\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\n# explicitly say that the \"label\" type is int64, even though it contains only null values\r\nds = ds.cast_column(\"label\", Sequence(Value(\"int64\")))\r\n\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\n\r\nds_mapped = ds.map(mapper,batched=True)\r\n```"
] |
1,269,237,447
| 4,482
|
Test that TensorFlow is not imported on startup
|
closed
| 2022-06-13T10:33:49
| 2023-10-12T06:31:39
| 2023-10-11T09:11:56
|
https://github.com/huggingface/datasets/pull/4482
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4482",
"html_url": "https://github.com/huggingface/datasets/pull/4482",
"diff_url": "https://github.com/huggingface/datasets/pull/4482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4482.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Should we close this PR?",
"I'm closing this PR. Feel free to reopen it if necessary."
] |
1,269,187,792
| 4,481
|
Fix iwslt2017
|
closed
| 2022-06-13T09:51:21
| 2022-10-26T09:09:31
| 2022-06-13T10:40:18
|
https://github.com/huggingface/datasets/pull/4481
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4481",
"html_url": "https://github.com/huggingface/datasets/pull/4481",
"diff_url": "https://github.com/huggingface/datasets/pull/4481.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4481.patch",
"merged_at": "2022-06-13T10:40:18"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"CI fails are just abut missing tags in the dataset card, merging !",
"FYI, \r\n\r\nThe checksums have not been edited from the changes of .tgz to .zip files, and as a result a `ExpectedMoreDownloadedFiles` error occurs. Updating them in the `dataset_infos.json` should fix the error. ",
"Thanks for reporting and sorry for the delay, I opened https://huggingface.co/datasets/iwslt2017/discussions/2 to fix this"
] |
1,268,921,567
| 4,480
|
Bigbench tensorflow GPU dependency
|
closed
| 2022-06-13T05:24:06
| 2022-06-14T19:45:24
| 2022-06-14T19:45:23
|
https://github.com/huggingface/datasets/issues/4480
| null |
cceyda
| false
|
[
"Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https://github.com/google/BIG-bench) as well regarding the `AttributeError`",
"I'm on vacation for the next week, so won't be able to do much debugging at the moment. Sorry for the inconvenience.\r\nBut I did quickly take a look:\r\n\r\n**pypi**:\r\nI managed to reproduce the above error with the pypi version begin out of date. \r\nThe version on `https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` should be up to date, but it was my understanding that there was some issue with the pypi upload, so I don't even understand why there is a version [on pypi from April 1](https://pypi.org/project/bigbench/0.0.1/). Perhaps @ethansdyer, who's handling the pypi upload, knows the answer to that?\r\n\r\n**OOM error**:\r\nBut, I'm unable to reproduce the OOM error in a google colab with GPU enabled.\r\nThis is what I ran:\r\n```\r\n!pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n!pip install datasets\r\n\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"bigbench\",\"swedish_to_german_proverbs\")\r\n``` \r\nThe `swedish_to_german_proverbs`task is only 72 examples, so I don't understand what could be causing the OOM error. Loading the task has no effect on the RAM for me. @cceyda Can you confirm that this does not occur in a [colab](https://colab.research.google.com/)?\r\nIf the GPU is somehow causing issues on your system, disabling the GPU from TF might be an option too\r\n```\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Solved.\r\nYes it works on colab, and somehow magically on my machine too now. hmm not sure what was wrong before I had used a fresh venv both times with just the dataloading code, and tried multiple times. (maybe just a wrong tensorflow version got mixed up somehow) The tensorflow call seems to come from the bigbench side anyway.\r\n\r\nabout bigbench pypi version update, I opened an issue over there https://github.com/google/BIG-bench/issues/846\r\n\r\nanyway closing this now. If anyone else has the same problem can re-open."
] |
1,268,558,237
| 4,479
|
Include entity positions as feature in ReCoRD
|
closed
| 2022-06-12T11:56:28
| 2022-08-19T23:23:02
| 2022-08-19T13:23:48
|
https://github.com/huggingface/datasets/pull/4479
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4479",
"html_url": "https://github.com/huggingface/datasets/pull/4479",
"diff_url": "https://github.com/huggingface/datasets/pull/4479.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4479.patch",
"merged_at": "2022-08-19T13:23:48"
}
|
richarddwang
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the reply @lhoestq !\r\n\r\nI have sucessed on `datasets-cli test ./datasets/super_glue --name record --save_infos`,\r\nBut as you can see, the check ran into `FAILED tests/test_dataset_cards.py::test_changed_dataset_card[super_glue] - V...`.\r\nHow can we solve it?",
"That would be neat! Let me implement it."
] |
1,268,358,213
| 4,478
|
Dataset slow during model training
|
open
| 2022-06-11T19:40:19
| 2022-06-14T12:04:31
| null |
https://github.com/huggingface/datasets/issues/4478
| null |
lehrig
| false
|
[
"Hi ! cc @Rocketknight1 maybe you know better ?\r\n\r\nI'm not too familiar with `tf.data.experimental.save`. Note that `datasets` uses memory mapping, so depending on your hardware and the disk you are using you can expect performance differences with a dataset loaded in RAM",
"Hi @lehrig, I suspect what's happening here is that our `to_tf_dataset()` method has some performance issues when streaming samples. This is usually not a problem, but they become apparent when streaming a vision dataset into a very small vision model, which will need a lot of sample throughput to saturate the GPU.\r\n\r\nWhen you save a `tf.data.Dataset` with `tf.data.experimental.save`, all of the samples from the dataset (which are, in this case, batches of images), are saved to disk. When you load this saved dataset, you're effectively bypassing `to_tf_dataset()` entirely, which alleviates this performance bottleneck.\r\n\r\n`to_tf_dataset()` is something we're actively working on overhauling right now - particularly for image datasets, we want to make it possible to access the underlying images with `tf.data` without going through the current layer of indirection with `Arrow`, which should massively improve simplicity and performance. \r\n\r\nHowever, if you just want this to work quickly but without needing your save/load hack, my advice would be to simply load the dataset into memory if it's small enough to fit. Since all your samples have the same dimensions, you can do this simply with:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ndataset = dataset.with_format(\"numpy\")\r\ndata_in_memory = dataset[:]\r\n```\r\n\r\nThen you can simply do something like:\r\n\r\n```\r\nmodel.fit(data_in_memory[\"pixel_values\"], data_in_memory[\"labels\"])\r\n```",
"Thanks for the information! \r\n\r\nI have now updated the training code like so:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ntrain_dataset = dataset[\"train\"][:]\r\nvalidation_dataset = dataset[\"dev\"][:]\r\n\r\n...\r\n\r\nmodel.fit(\r\n train_dataset[\"pixel_values\"],\r\n train_dataset[\"label\"],\r\n epochs=epochs,\r\n validation_data=(\r\n validation_dataset[\"pixel_values\"],\r\n validation_dataset[\"label\"]\r\n ),\r\n callbacks=[earlyStopping, mcp_save, reduce_lr_loss]\r\n)\r\n```\r\n\r\n- Creating the in-memory dataset is quite quick\r\n- But: There is now a long wait (~4-5 Minutes) before the training starts (why?)\r\n- And: Training times have improved but the very first epoch leaves me wondering why it takes so long (why?)\r\n\r\n**Epoch Breakdown:**\r\n- Epoch 1/10\r\n78s 12s/step - loss: 3.1307 - accuracy: 0.0737 - val_loss: 2.2827 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 2/10\r\n1s 168ms/step - loss: 2.3616 - accuracy: 0.2350 - val_loss: 2.2679 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 3/10\r\n1s 189ms/step - loss: 2.0221 - accuracy: 0.3180 - val_loss: 2.2670 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 4/10\r\n0s 67ms/step - loss: 1.8895 - accuracy: 0.3548 - val_loss: 2.2771 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 5/10\r\n0s 67ms/step - loss: 1.7846 - accuracy: 0.3963 - val_loss: 2.2860 - val_accuracy: 0.1455 - lr: 0.0010\r\n- Epoch 6/10\r\n0s 65ms/step - loss: 1.5946 - accuracy: 0.4516 - val_loss: 2.2938 - val_accuracy: 0.1636 - lr: 0.0010\r\n- Epoch 7/10\r\n0s 63ms/step - loss: 1.4217 - accuracy: 0.5115 - val_loss: 2.2968 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 8/10\r\n0s 67ms/step - loss: 1.3089 - accuracy: 0.5438 - val_loss: 2.2842 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 9/10\r\n1s 184ms/step - loss: 1.2480 - accuracy: 0.5806 - val_loss: 2.2652 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 10/10\r\n0s 65ms/step - loss: 1.2699 - accuracy: 0.5622 - val_loss: 2.2670 - val_accuracy: 0.2000 - lr: 0.0010\r\n\r\n",
"Regarding the new long ~5 min. wait introduced by the in-memory dataset update: this might be causing it? https://datascience.stackexchange.com/questions/33364/why-model-fit-generator-in-keras-is-taking-so-much-time-even-before-picking-the\r\n\r\nFor now, my save/load hack is still more performant, even though having more boiler-plate code :/ ",
"That 5 minute wait is quite surprising! I don't have a good explanation for why it's happening, but it can't be an issue with `datasets` or `tf.data` because you're just fitting directly on Numpy arrays at this point. All I can suggest is seeing if you can isolate the issue - for example, does fitting on a smaller dataset containing only 10% of the original data reduce the wait? This might indicate the delay is caused by your data being copied or converted somehow. Alternatively, you could try removing things like callbacks and seeing if you could isolate the issue there."
] |
1,268,308,986
| 4,477
|
Dataset Viewer issue for fgrezes/WIESP2022-NER
|
closed
| 2022-06-11T15:49:17
| 2022-07-18T13:07:33
| 2022-07-18T13:07:33
|
https://github.com/huggingface/datasets/issues/4477
| null |
AshTayade
| false
|
[
"https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ",
"Apparently it finds `scoring-scripts/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc."
] |
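Until the split-detection patterns are improved, a repository can sidestep the autodetection by declaring its data files explicitly; a minimal sketch (the file names here are hypothetical):

```python
from datasets import load_dataset

# Explicit data_files bypass the '**test*' / '**eval*' glob heuristics,
# so unrelated files such as scoring scripts are never picked up as splits.
dataset = load_dataset(
    "fgrezes/WIESP2022-NER",
    data_files={"train": "train.jsonl", "validation": "dev.jsonl", "test": "test.jsonl"},
)
```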
1,267,987,499
| 4,476
|
`to_pandas` doesn't take into account format.
|
closed
| 2022-06-10T20:25:31
| 2022-06-15T17:41:41
| 2022-06-15T17:41:41
|
https://github.com/huggingface/datasets/issues/4476
| null |
Dref360
| false
|
[
"Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`",
"Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.",
"Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```",
"Ahhhh Thank you!\r\n\r\nclosing then :)"
] |
1,267,798,451
| 4,475
|
Improve error message for missing packages from inside dataset script
|
closed
| 2022-06-10T16:59:36
| 2022-10-06T13:46:26
| 2022-06-13T13:16:43
|
https://github.com/huggingface/datasets/pull/4475
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4475",
"html_url": "https://github.com/huggingface/datasets/pull/4475",
"diff_url": "https://github.com/huggingface/datasets/pull/4475.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4475.patch",
"merged_at": null
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer",
"Closing in favor of #4484. "
] |
1,267,767,541
| 4,474
|
[Docs] How to use with PyTorch page
|
closed
| 2022-06-10T16:25:49
| 2022-06-14T14:40:32
| 2022-06-14T14:04:33
|
https://github.com/huggingface/datasets/pull/4474
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4474",
"html_url": "https://github.com/huggingface/datasets/pull/4474",
"diff_url": "https://github.com/huggingface/datasets/pull/4474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4474.patch",
"merged_at": "2022-06-14T14:04:32"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,267,555,994
| 4,473
|
Add SST-2 dataset
|
closed
| 2022-06-10T13:37:26
| 2022-06-13T14:11:34
| 2022-06-13T14:01:09
|
https://github.com/huggingface/datasets/pull/4473
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4473",
"html_url": "https://github.com/huggingface/datasets/pull/4473",
"diff_url": "https://github.com/huggingface/datasets/pull/4473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4473.patch",
"merged_at": "2022-06-13T14:01:09"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"on the hub this dataset is referenced as `sst-2` not `sst2` – is there a canonical orthography? If not, could we name it `sst-2`?",
"@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2",
"Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already – but they're less popular: https://huggingface.co/models?datasets=sst2)",
"OK, I'm taking care of the subsequent PRs on models to align with this dataset name."
] |
1,267,488,523
| 4,472
|
Fix 401 error for unauthenticated requests to non-existing repos
|
closed
| 2022-06-10T12:38:11
| 2022-06-10T13:05:11
| 2022-06-10T12:55:57
|
https://github.com/huggingface/datasets/pull/4472
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4472",
"html_url": "https://github.com/huggingface/datasets/pull/4472",
"diff_url": "https://github.com/huggingface/datasets/pull/4472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4472.patch",
"merged_at": "2022-06-10T12:55:56"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,267,475,268
| 4,471
|
CI error with repo lhoestq/_dummy
|
closed
| 2022-06-10T12:26:06
| 2022-06-10T13:24:53
| 2022-06-10T13:24:53
|
https://github.com/huggingface/datasets/issues/4471
| null |
albertvillanova
| false
|
[
"fixed by https://github.com/huggingface/datasets/pull/4472"
] |
1,267,470,051
| 4,470
|
Reorder returned validation/test splits in script template
|
closed
| 2022-06-10T12:21:13
| 2022-06-10T18:04:10
| 2022-06-10T17:54:50
|
https://github.com/huggingface/datasets/pull/4470
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4470",
"html_url": "https://github.com/huggingface/datasets/pull/4470",
"diff_url": "https://github.com/huggingface/datasets/pull/4470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4470.patch",
"merged_at": "2022-06-10T17:54:50"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,267,213,849
| 4,469
|
Replace data URLs in wider_face dataset once hosted on the Hub
|
closed
| 2022-06-10T08:13:25
| 2022-06-10T16:42:08
| 2022-06-10T16:32:46
|
https://github.com/huggingface/datasets/pull/4469
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4469",
"html_url": "https://github.com/huggingface/datasets/pull/4469",
"diff_url": "https://github.com/huggingface/datasets/pull/4469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4469.patch",
"merged_at": "2022-06-10T16:32:46"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,266,715,742
| 4,468
|
Generalize tutorials for audio and vision
|
closed
| 2022-06-09T22:00:44
| 2022-06-14T16:22:02
| 2022-06-14T16:12:00
|
https://github.com/huggingface/datasets/pull/4468
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4468",
"html_url": "https://github.com/huggingface/datasets/pull/4468",
"diff_url": "https://github.com/huggingface/datasets/pull/4468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4468.patch",
"merged_at": "2022-06-14T16:12:00"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,266,218,358
| 4,467
|
Transcript string 'null' converted to [None] by load_dataset()
|
closed
| 2022-06-09T14:26:00
| 2023-07-04T02:18:39
| 2022-06-09T16:29:02
|
https://github.com/huggingface/datasets/issues/4467
| null |
mbarnig
| false
|
[
"Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\n‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.\r\n```\r\n(see \"null\" in the last position in the above list).\r\n\r\nIn order to prevent `pandas` from performing that automatic conversion from the string \"null\" to a NaN value, you should pass the `pandas` parameter `keep_default_na=False`:\r\n```python\r\nIn [2]: dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}, keep_default_na=False)\r\nIn [3]: dataset[\"train\"][0][\"transcript\"]\r\nOut[3]: 'null'\r\n```",
"Thanks for the quick answer.",
"@albertvillanova I also ran into this issue, it had me scratching my head for a while! In my case it was tripped by a literal \"NA\" comment collected from a user-facing form (e.g., this question does not apply to me). Thankfully this answer was here, but I feel it is such a common trap that it deserves to be noted in the official docs, maybe [here](https://huggingface.co/docs/datasets/loading#csv)? \r\n\r\nI'm happy to submit a PR if you agree!"
] |
1,266,159,920
| 4,466
|
Optimize contiguous shard and select
|
closed
| 2022-06-09T13:45:39
| 2022-06-14T16:04:30
| 2022-06-14T15:54:45
|
https://github.com/huggingface/datasets/pull/4466
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4466",
"html_url": "https://github.com/huggingface/datasets/pull/4466",
"diff_url": "https://github.com/huggingface/datasets/pull/4466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4466.patch",
"merged_at": "2022-06-14T15:54:45"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n size = len(ds) // num_shards\r\n shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt is 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n # upload_to_gcs(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt was 2min31s. \r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n<img width=\"574\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/172864881-f1db489a-2305-47f2-a07f-7d3df610b1b8.png\">\r\n",
"I addressed your comments @albertvillanova , let me know what you think :)"
] |
1,265,754,479
| 4,465
|
Fix bigbench config names
|
closed
| 2022-06-09T08:06:19
| 2022-06-09T14:38:36
| 2022-06-09T14:29:19
|
https://github.com/huggingface/datasets/pull/4465
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4465",
"html_url": "https://github.com/huggingface/datasets/pull/4465",
"diff_url": "https://github.com/huggingface/datasets/pull/4465.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4465.patch",
"merged_at": "2022-06-09T14:29:18"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,265,682,931
| 4,464
|
Extend support for streaming datasets that use xml.dom.minidom.parse
|
closed
| 2022-06-09T06:58:25
| 2022-06-09T08:43:24
| 2022-06-09T08:34:16
|
https://github.com/huggingface/datasets/pull/4464
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4464",
"html_url": "https://github.com/huggingface/datasets/pull/4464",
"diff_url": "https://github.com/huggingface/datasets/pull/4464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4464.patch",
"merged_at": "2022-06-09T08:34:15"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,265,093,211
| 4,463
|
Use config_id to check split sizes instead of config name
|
closed
| 2022-06-08T17:45:24
| 2023-09-24T10:03:00
| 2022-06-09T08:06:37
|
https://github.com/huggingface/datasets/pull/4463
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4463",
"html_url": "https://github.com/huggingface/datasets/pull/4463",
"diff_url": "https://github.com/huggingface/datasets/pull/4463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4463.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"closing in favor of https://github.com/huggingface/datasets/pull/4465"
] |
1,265,079,347
| 4,462
|
BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter
|
open
| 2022-06-08T17:31:24
| 2022-07-05T07:39:55
| null |
https://github.com/huggingface/datasets/issues/4462
| null |
lhoestq
| false
|
[
"Why not adding `max_examples` as part of the config name?",
"Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463",
"Hi @lhoestq,\r\n\r\nThank you for taking a look at this issue, and proposing a solution. \r\nUnfortunately, after trying the fix in #4465 I still see the same issue.\r\n\r\nI think there is some subtlety where the config name gets overwritten somewhere when `BUILDER_CONFIGS`[(link)](https://github.com/huggingface/datasets/blob/master/datasets/bigbench/bigbench.py#L126) is defined. \r\n\r\nIf I print out the `self.config.name` in the current version (with the fix in #4465), I see just the task name, but if I comment out `BUILDER_CONFIGS`, the `num_shots` and `max_examples` gets appended as was meant by #4465.\r\n\r\nI haven't managed to track down where this happens, but I thought you might know? \r\n\r\n(Another comment on your fix: the `name` variable is used to fetch the task from the bigbench API, so modifying it causes an error if it's actually called. This can easily be fixed by having `config_name` variable in addition to the `task_name`)\r\n\r\n\r\n"
] |
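The option suggested in the first comment above, as a minimal sketch: encode the parameters into the config name used for split-size verification while keeping the raw task name for the bigbench API (the class, defaults, and name format are illustrative, not the actual script):

```python
import datasets


class BigBenchConfig(datasets.BuilderConfig):
    def __init__(self, task_name, num_shots=0, max_examples=None, **kwargs):
        # The config name (used to look up expected split sizes) encodes the
        # parameters, so "task" and "task_3shots_100ex" are verified separately.
        name = task_name
        if num_shots:
            name += f"_{num_shots}shots"
        if max_examples:
            name += f"_{max_examples}ex"
        super().__init__(name=name, **kwargs)
        # Keep the unmodified task name: it is what the bigbench API expects.
        self.task_name = task_name
        self.num_shots = num_shots
        self.max_examples = max_examples
```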
1,264,800,451
| 4,461
|
AttributeError: module 'datasets' has no attribute 'load_dataset'
|
closed
| 2022-06-08T13:59:20
| 2024-03-25T12:58:29
| 2022-06-08T14:41:00
|
https://github.com/huggingface/datasets/issues/4461
| null |
AlexNLP
| false
|
[
"I'm having the same issue,Can you tell me how to solve it?",
"I have the same issue, can you tell me how to solve it? Thanks",
"I had a folder named 'datasets' so this is why it can't find the import, it's looking in the wrong place",
"@briandw your comment saved my day 👍 "
] |
1,264,644,205
| 4,460
|
Drop Python 3.6 support
|
closed
| 2022-06-08T12:10:18
| 2022-07-26T19:16:39
| 2022-07-26T19:04:21
|
https://github.com/huggingface/datasets/pull/4460
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4460",
"html_url": "https://github.com/huggingface/datasets/pull/4460",
"diff_url": "https://github.com/huggingface/datasets/pull/4460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4460.patch",
"merged_at": "2022-07-26T19:04:21"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I've disabled the `test_dummy_dataset_serialize_s3` tests in the Linux CI to avoid the failures (these tests only fail on Windows in 3.6). These failures are unrelated to this PR's changes, and I would like to address this in a new PR.",
"[This comment](https://github.com/pytorch/audio/issues/2363#issuecomment-1179089175) explains the issue with MP3 decoding in `torchaudio` in the latest release (supports Python 3.7+). I fixed CI by pinning `torchaudio` to `<0.12.0`. Another way to fix this issue is by installing `ffmpeg` with conda or using the unofficial GH action. But I don't think it's worth making CI more complex, considering we can wait for the soundfile release, which should bring MP3 decoding, and drop the `torchaudio` dependency then.",
"Yay for dropping Python 3.6!",
"I think we can merge in this state. Also, if an env has Python version < 3.7 installed, we raise a warning, so I don't think we even need to create (and pin) an issue to notify the contributors of this change."
] |
1,264,636,481
| 4,459
|
Add and fix language tags for udhr dataset
|
closed
| 2022-06-08T12:03:42
| 2022-06-08T12:36:24
| 2022-06-08T12:27:13
|
https://github.com/huggingface/datasets/pull/4459
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4459",
"html_url": "https://github.com/huggingface/datasets/pull/4459",
"diff_url": "https://github.com/huggingface/datasets/pull/4459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4459.patch",
"merged_at": "2022-06-08T12:27:13"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,263,531,911
| 4,457
|
First draft of the docs for TF + Datasets
|
closed
| 2022-06-07T16:06:48
| 2022-06-14T16:08:41
| 2022-06-14T15:59:08
|
https://github.com/huggingface/datasets/pull/4457
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4457",
"html_url": "https://github.com/huggingface/datasets/pull/4457",
"diff_url": "https://github.com/huggingface/datasets/pull/4457.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4457.patch",
"merged_at": "2022-06-14T15:59:08"
}
|
Rocketknight1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Some links are still missing I think :)",
"This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestions feel free.",
"Thanks ! We plan to make a new release later today for `to_tf_dataset` FYI, so I think we can merge it soon and include this documentation in the new release"
] |
1,263,241,449
| 4,456
|
Workflow for Tabular data
|
open
| 2022-06-07T12:48:22
| 2023-03-06T08:53:55
| null |
https://github.com/huggingface/datasets/issues/4456
| null |
lhoestq
| false
|
[
"I use below to load a dataset:\r\n```\r\ndataset = datasets.load_dataset(\"scikit-learn/auto-mpg\")\r\ndf = pd.DataFrame(dataset[\"train\"])\r\n```\r\nTBH as said, tabular folk split their own dataset, they sometimes have two splits, sometimes three. Maybe somehow avoiding it for tabular datasets might be good for later. (it's just UX improvement) ",
"is very slow batch access of a dataset (tabular, csv) with many columns to be expected?",
"Define \"many\" ? x)",
"~20k! I was surprised batch loading with as few as 32 samples was really slow. I was speculating the columnar format was the cause -- or do you see good performance with this approx size of tabular data?",
"20k can be a lot for a columnar format but maybe we can optimize a few things.\r\n\r\nIt would be cool to profile the code to see if there's an unoptimized part of the code that slows everything down.\r\n\r\n(it's also possible to kill the job when it accesses the batch, it often gives you the traceback at the location where the code was running)",
"FWIW I've worked with tabular data with 540k columns.",
"thats awesome, whats your secret? would love to see an example!",
"@wconnell I'm not sure what you mean by my secret, I load them into a numpy array 😁 \r\n\r\nAn example dataset is [here](https://portal.gdc.cancer.gov/repository?facetTab=files&filters=%7B%22content%22%3A%5B%7B%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-CESC%22%5D%7D%2C%22op%22%3A%22in%22%7D%2C%7B%22content%22%3A%7B%22field%22%3A%22files.data_category%22%2C%22value%22%3A%5B%22DNA%20Methylation%22%5D%7D%2C%22op%22%3A%22in%22%7D%5D%2C%22op%22%3A%22and%22%7D&searchTableTab=files) which is a dataset of DNA methylation reads. This dataset is about 950 rows and 450k columns. "
] |
1,263,089,067
| 4,455
|
Update data URLs in fever dataset
|
closed
| 2022-06-07T10:40:54
| 2022-06-08T07:24:54
| 2022-06-08T07:16:17
|
https://github.com/huggingface/datasets/pull/4455
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4455",
"html_url": "https://github.com/huggingface/datasets/pull/4455",
"diff_url": "https://github.com/huggingface/datasets/pull/4455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4455.patch",
"merged_at": "2022-06-08T07:16:16"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,262,674,973
| 4,454
|
Dataset Viewer issue for Yaxin/SemEval2015
|
closed
| 2022-06-07T03:31:46
| 2022-06-07T11:53:11
| 2022-06-07T11:53:11
|
https://github.com/huggingface/datasets/issues/4454
| null |
WithYouTo
| false
|
[
"Closing since it's a duplicate of https://github.com/huggingface/datasets/issues/4453"
] |
1,262,674,105
| 4,453
|
Dataset Viewer issue for Yaxin/SemEval2015
|
closed
| 2022-06-07T03:30:08
| 2022-06-09T08:34:16
| 2022-06-09T08:34:16
|
https://github.com/huggingface/datasets/issues/4453
| null |
WithYouTo
| false
|
[
"I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```",
"`xml.dom.minidom.parse` is not supported in streaming mode. I opened a PR here to fix it:\r\nhttps://huggingface.co/datasets/Yaxin/SemEval2015/discussions/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works !",
"Additionally, I'm also patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`."
] |
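For reference, `xml.dom.minidom.parse` accepts an open file object as well as a path, which is the property a streaming patch can rely on. A minimal sketch of parsing the remote file directly via `fsspec` (the library `datasets` streaming is built on):

```python
import xml.dom.minidom

import fsspec

url = (
    "https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/"
    "SemEval2015Task12Corrected/train/restaurants_train.xml"
)

# Open the remote file as a stream and hand the file object to minidom,
# instead of passing the URL string as if it were a local path.
with fsspec.open(url) as f:
    dom = xml.dom.minidom.parse(f)
print(dom.documentElement.tagName)
```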
1,262,529,654
| 4,452
|
Trying to load FEVER dataset results in NonMatchingChecksumError
|
closed
| 2022-06-06T23:13:15
| 2022-12-15T13:36:40
| 2022-06-08T07:16:16
|
https://github.com/huggingface/datasets/issues/4452
| null |
santhnm2
| false
|
[
"Thanks for reporting @santhnm2. We are fixing it.\r\n\r\nData owners updated their URLs recently. We have to align with them, otherwise you do not download anything (that is why ignore_verifications does not work).",
"Hello! Is there any update on this? I am having the same issue 6 months later."
] |
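Once the URLs are fixed upstream, a cache created with the old URLs can still reproduce the error; a minimal sketch of forcing a fresh download (the config name "v1.0" is assumed here):

```python
from datasets import load_dataset

# Re-download instead of reusing a cache built from the old, broken URLs.
dataset = load_dataset("fever", "v1.0", download_mode="force_redownload")
```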
1,262,103,323
| 4,451
|
Use newer version of multi-news with fixes
|
closed
| 2022-06-06T16:57:08
| 2022-06-07T17:40:01
| 2022-06-07T17:14:44
|
https://github.com/huggingface/datasets/pull/4451
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4451",
"html_url": "https://github.com/huggingface/datasets/pull/4451",
"diff_url": "https://github.com/huggingface/datasets/pull/4451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4451.patch",
"merged_at": "2022-06-07T17:14:44"
}
|
JohnGiorgi
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome thanks @mariosasko!"
] |
1,261,878,324
| 4,450
|
Update README.md of fquad
|
closed
| 2022-06-06T13:52:41
| 2022-06-06T14:51:49
| 2022-06-06T14:43:03
|
https://github.com/huggingface/datasets/pull/4450
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4450",
"html_url": "https://github.com/huggingface/datasets/pull/4450",
"diff_url": "https://github.com/huggingface/datasets/pull/4450.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4450.patch",
"merged_at": "2022-06-06T14:43:03"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,261,262,326
| 4,449
|
Rj
|
closed
| 2022-06-06T02:24:32
| 2022-06-06T15:44:50
| 2022-06-06T15:44:50
|
https://github.com/huggingface/datasets/issues/4449
| null |
Aeckard45
| false
|
[] |
1,260,966,129
| 4,448
|
New Preprocessing Feature - Deduplication [Request]
|
open
| 2022-06-05T05:32:56
| 2023-12-12T07:52:40
| null |
https://github.com/huggingface/datasets/issues/4448
| null |
yuvalkirstain
| false
|
[
"Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the native PyArrow ops.\r\n\r\n(Btw, this is a duplicate of https://github.com/huggingface/datasets/issues/2514)",
"Here is an example using the [datasets_sql](https://github.com/mariosasko/datasets_sql) mentioned \r\n\r\n```python \r\nfrom datasets_sql import query\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\n\r\n# If you dont have an id column just add one by enumerating\r\ndataset=dataset.add_column(\"id\", range(len(dataset)))\r\n\r\nid_column='id'\r\nunique_column='text'\r\n\r\n# always selects min id\r\nunique_dataset = query(f\"SELECT dataset.* FROM dataset JOIN (SELECT MIN({id_column}) as unique_id FROM dataset group by {unique_column}) ON unique_id=dataset.{id_column}\")\r\n```\r\nNot ideal for large datasets but good enough for basic cases.\r\nSure would be nice to have in the library 🤗 "
] |
1,260,041,805
| 4,447
|
Minor fixes/improvements in `scene_parse_150` card
|
closed
| 2022-06-03T15:22:34
| 2022-06-06T15:50:25
| 2022-06-06T15:41:37
|
https://github.com/huggingface/datasets/pull/4447
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4447",
"html_url": "https://github.com/huggingface/datasets/pull/4447",
"diff_url": "https://github.com/huggingface/datasets/pull/4447.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4447.patch",
"merged_at": "2022-06-06T15:41:37"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,260,028,995
| 4,446
|
Add missing kwargs to docstrings
|
closed
| 2022-06-03T15:10:27
| 2022-06-03T16:10:09
| 2022-06-03T16:01:29
|
https://github.com/huggingface/datasets/pull/4446
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4446",
"html_url": "https://github.com/huggingface/datasets/pull/4446",
"diff_url": "https://github.com/huggingface/datasets/pull/4446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4446.patch",
"merged_at": "2022-06-03T16:01:29"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,259,947,568
| 4,445
|
Fix missing args in docstring of load_dataset_builder
|
closed
| 2022-06-03T13:55:50
| 2022-06-03T14:35:32
| 2022-06-03T14:27:09
|
https://github.com/huggingface/datasets/pull/4445
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4445",
"html_url": "https://github.com/huggingface/datasets/pull/4445",
"diff_url": "https://github.com/huggingface/datasets/pull/4445.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4445.patch",
"merged_at": "2022-06-03T14:27:09"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,259,738,209
| 4,444
|
Fix kwargs in docstrings
|
closed
| 2022-06-03T10:29:02
| 2022-06-03T11:01:28
| 2022-06-03T10:52:46
|
https://github.com/huggingface/datasets/pull/4444
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4444",
"html_url": "https://github.com/huggingface/datasets/pull/4444",
"diff_url": "https://github.com/huggingface/datasets/pull/4444.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4444.patch",
"merged_at": "2022-06-03T10:52:46"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,259,606,334
| 4,443
|
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
|
open
| 2022-06-03T08:17:16
| 2023-09-25T12:15:08
| null |
https://github.com/huggingface/datasets/issues/4443
| null |
ZYMXIXI
| false
|
[
"If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.",
"I'm having a look.",
"Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_ULR` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L260\r\n - \"test key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```",
"Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km",
"Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff awhile ago, which is what broke it. ",
"I've opened a Discussion page, so that we can ask/answer and propose fixes until the script works properly: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/discussions/1\r\n\r\nCC: @julien-c @jacobbieker ",
"can we close this issue and followup in the discussion?"
] |
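The shape of the fix described in the discussion above, as a minimal sketch of the loading-script pattern (the URLs are placeholders and the builder's other methods are omitted):

```python
import datasets

# One download entry per split, so every key looked up later actually exists.
_URLS = {
    "train": "https://example.com/train.tar.gz",
    "valid": "https://example.com/valid.tar.gz",
    "test": "https://example.com/test.tar.gz",
}


class NimrodUK1KM(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        paths = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"path": paths["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"path": paths["valid"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"path": paths["test"]}),
        ]
```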
1,258,589,276
| 4,442
|
Dataset Viewer issue for amazon_polarity
|
closed
| 2022-06-02T19:18:38
| 2022-06-07T18:50:37
| 2022-06-07T18:50:37
|
https://github.com/huggingface/datasets/issues/4442
| null |
lewtun
| false
|
[
"Thanks, looking at it",
"Not sure what happened 😬, but it's fixed"
] |
1,258,568,656
| 4,441
|
Dataset Viewer issue for aeslc
|
closed
| 2022-06-02T18:57:12
| 2022-06-07T18:50:55
| 2022-06-07T18:50:55
|
https://github.com/huggingface/datasets/issues/4441
| null |
lewtun
| false
|
[
"Not sure what happened 😬, but it's fixed"
] |