Dataset schema (column statistics from the preview):

| Column | Type | Range / values |
|---|---|---|
| id | int64 | 953M to 3.35B |
| number | int64 | 2.72k to 7.75k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| created_at | timestamp[s] | 2021-07-26 12:21:17 to 2025-08-23 00:18:43 |
| updated_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-23 12:34:39 |
| closed_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-20 16:35:55 |
| html_url | string | lengths 49 to 51 |
| pull_request | dict | - |
| user_login | string | lengths 3 to 26 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 to 30 |
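As a minimal sketch (not part of the original preview), a dataset with this schema could be loaded and filtered with the `datasets` library, assuming the data is published on the Hugging Face Hub; the repository name below is hypothetical.

```python
from datasets import load_dataset

# Hypothetical repository name: substitute the actual path of the issues dataset.
ds = load_dataset("my-org/github-issues", split="train")

# Keep only the rows that correspond to pull requests (boolean column from the schema).
pulls = ds.filter(lambda row: row["is_pull_request"])

# Inspect one record: title, state, and how many comments it has.
example = pulls[0]
print(example["title"], example["state"], len(example["comments"]))
```

The sample rows that follow list each record's field values in the same order as the schema above, one field per line, with `comments` kept as the raw list of comment strings.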
1,234,496,289
4,339
Dataset loader for the MSLR2022 shared task
closed
2022-05-12T21:23:41
2022-07-18T17:19:27
2022-07-18T16:58:34
https://github.com/huggingface/datasets/pull/4339
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4339", "html_url": "https://github.com/huggingface/datasets/pull/4339", "diff_url": "https://github.com/huggingface/datasets/pull/4339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4339.patch", "merged_at": null }
JohnGiorgi
true
[ "I think the underlying issue is in https://github.com/huggingface/datasets/blob/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec/src/datasets/commands/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines within the rows of a file. I'm happy to make a PR to change how this handling works, or make the change within this PR. \r\n\r\nWe should figure out:\r\n1. Does this dummy data need to be generated more than once? (It looks like no)\r\n2. Should this be fixed generally? (needs a HF person to weigh in here)\r\n3. What is the right way for such a fix to exist permanently here; the [Contributing document](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) doesn't provide guidance on any tests. Writing a test is several times more effort than fixing the underlying issue. (again needs a HF person)", "Would someone from HF mind taking a look at this PR? (@lhoestq)", "Hi ! Sorry for the delay in responding :)\r\n\r\nI don't think there's a big need to fix this in the general case for now, feel free to just generate the dummy data for this specific dataset :)\r\n\r\nThe `datasets-cli dummy_data datasets/mslr2022` command should tell you what dummy files to generate. In each dummy file you just need to include enough data to generate 4 or 5 examples", "_The documentation is not available anymore as the PR was closed or merged._", "Awesome! Generated the dummy data and the tests now pass. @jayded thanks for your help! If you and @lucylw are happy with this I think it's ready to be merged. @lhoestq this is ready for another look :)", "Hi @lhoestq, is there anything blocking this from being merged that I can address?", "Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n\r\nI think this dataset can be under the AllenAI page here: https://huggingface.co/allenai What do you think ?\r\nFeel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n\r\nOnce the dataset is under the AllenAI org, we can close this PR\r\n", "> Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n> \r\n> I think this dataset can be under the AllenAI page here: https://huggingface.co/allenai What do you think ? Feel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n> \r\n> Once the dataset is under the AllenAI org, we can close this PR\r\n\r\nSweet! It is uploaded here: https://huggingface.co/datasets/allenai/mslr2022", "Nice ! Thanks :)\r\n\r\nI think we can close this PR then.\r\n\r\nI noticed that the dataset preview is not available on this dataset, this is because we require datasets to work in streaming mode to show a preview. However TAR archives don't work well in streaming mode (you can't know in advance what files are inside a TAR archive without reading it completely). This can be fixed by using a ZIP archive instead.\r\n\r\nLet me know if you have questions or if I can help." ]
1,234,478,851
4,338
Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full
closed
2022-05-12T21:02:08
2022-05-16T15:51:02
2022-05-16T15:42:59
https://github.com/huggingface/datasets/pull/4338
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4338", "html_url": "https://github.com/huggingface/datasets/pull/4338", "diff_url": "https://github.com/huggingface/datasets/pull/4338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4338.patch", "merged_at": "2022-05-16T15:42:59" }
sashavor
true
[ "Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,234,470,083
4,337
Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR
closed
2022-05-12T20:52:02
2022-05-16T16:26:19
2022-05-16T16:18:30
https://github.com/huggingface/datasets/pull/4337
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4337", "html_url": "https://github.com/huggingface/datasets/pull/4337", "diff_url": "https://github.com/huggingface/datasets/pull/4337.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4337.patch", "merged_at": "2022-05-16T16:18:30" }
sashavor
true
[ "Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors, I don't really understand the source though :confused: ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,234,446,174
4,336
Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment
closed
2022-05-12T20:24:45
2022-05-16T16:25:00
2022-05-16T16:24:59
https://github.com/huggingface/datasets/pull/4336
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4336", "html_url": "https://github.com/huggingface/datasets/pull/4336", "diff_url": "https://github.com/huggingface/datasets/pull/4336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4336.patch", "merged_at": "2022-05-16T16:24:59" }
sashavor
true
[ "Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n", "The CI errors about missing content in the dataset cards can be ignored in this PR btw", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4336). All of your documentation changes will be reflected on that endpoint." ]
1,234,157,123
4,335
Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech
closed
2022-05-12T15:28:16
2022-05-16T16:31:10
2022-05-16T16:23:09
https://github.com/huggingface/datasets/pull/4335
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4335", "html_url": "https://github.com/huggingface/datasets/pull/4335", "diff_url": "https://github.com/huggingface/datasets/pull/4335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4335.patch", "merged_at": "2022-05-16T16:23:08" }
sashavor
true
[ "Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it is empty.\r\n- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets' :['unknown'] are not registered tags\r\n- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'\r\n- **Hate_speech18:** Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data Splits` but it is empty", "And yes we can ignore all the CI errors related to missing content in the dataset cards, these issues can be fixed in other PRs", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,234,103,477
4,334
Adding eval metadata for billsum
closed
2022-05-12T14:49:08
2023-09-24T10:02:46
2022-05-12T14:49:24
https://github.com/huggingface/datasets/pull/4334
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4334", "html_url": "https://github.com/huggingface/datasets/pull/4334", "diff_url": "https://github.com/huggingface/datasets/pull/4334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4334.patch", "merged_at": null }
sashavor
true
[]
1,234,038,705
4,333
Adding eval metadata for Banking 77
closed
2022-05-12T14:05:05
2022-05-12T21:03:32
2022-05-12T21:03:31
https://github.com/huggingface/datasets/pull/4333
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4333", "html_url": "https://github.com/huggingface/datasets/pull/4333", "diff_url": "https://github.com/huggingface/datasets/pull/4333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4333.patch", "merged_at": "2022-05-12T21:03:31" }
sashavor
true
[ "@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)" ]
1,234,021,188
4,332
Adding eval metadata for arabic speech corpus
closed
2022-05-12T13:51:38
2022-05-12T21:03:21
2022-05-12T21:03:20
https://github.com/huggingface/datasets/pull/4332
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4332", "html_url": "https://github.com/huggingface/datasets/pull/4332", "diff_url": "https://github.com/huggingface/datasets/pull/4332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4332.patch", "merged_at": "2022-05-12T21:03:20" }
sashavor
true
[]
1,234,016,110
4,331
Adding eval metadata to Amazon Polarity
closed
2022-05-12T13:47:59
2022-05-12T21:03:14
2022-05-12T21:03:13
https://github.com/huggingface/datasets/pull/4331
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4331", "html_url": "https://github.com/huggingface/datasets/pull/4331", "diff_url": "https://github.com/huggingface/datasets/pull/4331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4331.patch", "merged_at": "2022-05-12T21:03:13" }
sashavor
true
[]
1,233,992,681
4,330
Adding eval metadata to Allociné dataset
closed
2022-05-12T13:31:39
2022-05-12T21:03:05
2022-05-12T21:03:05
https://github.com/huggingface/datasets/pull/4330
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4330", "html_url": "https://github.com/huggingface/datasets/pull/4330", "diff_url": "https://github.com/huggingface/datasets/pull/4330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4330.patch", "merged_at": "2022-05-12T21:03:05" }
sashavor
true
[]
1,233,991,207
4,329
Adding eval metadata for AG News
closed
2022-05-12T13:30:32
2022-05-12T21:02:41
2022-05-12T21:02:40
https://github.com/huggingface/datasets/pull/4329
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4329", "html_url": "https://github.com/huggingface/datasets/pull/4329", "diff_url": "https://github.com/huggingface/datasets/pull/4329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4329.patch", "merged_at": "2022-05-12T21:02:40" }
sashavor
true
[]
1,233,856,690
4,328
Fix and clean Apache Beam functionality
closed
2022-05-12T11:41:07
2022-05-24T13:43:11
2022-05-24T13:34:32
https://github.com/huggingface/datasets/pull/4328
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4328", "html_url": "https://github.com/huggingface/datasets/pull/4328", "diff_url": "https://github.com/huggingface/datasets/pull/4328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4328.patch", "merged_at": "2022-05-24T13:34:32" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,233,840,020
4,327
`wikipedia` pre-processed datasets
closed
2022-05-12T11:25:42
2022-08-31T08:26:57
2022-08-31T08:26:57
https://github.com/huggingface/datasets/issues/4327
null
vpj
false
[ "Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 228.58 MiB, generated: 224.18 MiB, post-processed: Unknown size, total: 452.76 MiB) to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.66k/1.66k [00:00<00:00, 1.02MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 235M/235M [00:02<00:00, 82.8MB/s]\r\nDataset wikipedia downloaded and prepared to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 290.75it/s]\r\n\r\nreal\t0m9.693s\r\nuser\t0m6.002s\r\nsys\t0m3.260s\r\n```\r\n\r\nCould you please check your environment info, as requested when opening this issue?\r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nMaybe you are using an old version of `datasets`...", "Downloading and processing `wikipedia simple` dataset completed in under 11sec on M1 Mac. Could you please check `dataset` version as mentioned by @albertvillanova? Also check system specs, if system is under load processing could take some time I guess." ]
1,233,818,489
4,326
Fix type hint and documentation for `new_fingerprint`
closed
2022-05-12T11:05:08
2022-06-01T13:04:45
2022-06-01T12:56:18
https://github.com/huggingface/datasets/pull/4326
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4326", "html_url": "https://github.com/huggingface/datasets/pull/4326", "diff_url": "https://github.com/huggingface/datasets/pull/4326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4326.patch", "merged_at": "2022-06-01T12:56:18" }
fxmarty
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,233,812,191
4,325
Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance
closed
2022-05-12T10:59:08
2022-05-13T10:57:15
2022-05-13T10:57:02
https://github.com/huggingface/datasets/issues/4325
null
leondz
false
[ "Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n", "Yes, it's related. The backend behind the dataset viewer is currently under too much load, and these datasets are still in the jobs queue. We're actively working on this issue, and we expect to fix the issue permanently soon. Thanks for your patience 🙏  ", "Thanks @severo and no worries! - a suggestion for a UI usability thing maybe is to indicate that the dataset processing is in the job queue (rather than no data?)", "Thanks, these are working great now (including @domenicrosati 's, afaics!)" ]
1,233,780,870
4,324
Support >1 PWC dataset per dataset card
open
2022-05-12T10:29:07
2022-05-13T11:25:29
null
https://github.com/huggingface/datasets/issues/4324
null
leondz
false
[ "Hi @leondz, I agree it would be nice. We'll see what we can do ;)" ]
1,233,634,928
4,323
Audio can not find value["bytes"]
closed
2022-05-12T08:31:58
2022-07-07T13:16:08
2022-07-07T13:16:08
https://github.com/huggingface/datasets/issues/4323
null
YooSungHyun
false
[ "![image](https://user-images.githubusercontent.com/34292279/168063684-fff5c12a-8b1e-4c65-b18b-36100ab8a1af.png)\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecause we have path and bytes already", "> but i have some confused why path prior is higher than bytes?\r\n\r\nIf the audio file is already available locally, we don't need to store the bytes again.\r\n\r\nIf you don't specify a \"path\" to a local file, then the bytes are stored. You can set \"path\" to None for example.\r\n\r\n> if you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\n> because we have path and bytes already\r\n\r\nIt's useful to pass both \"path\" and \"bytes\" in `_generate_examples`:\r\n- when the dataset has been downloaded, then the \"path\" to the audio files are stored and we can ignore \"bytes\" in order to save disk space.\r\n- when the dataset is loaded in streaming mode, the audio files are not available on your disk and therefore we use the \"bytes\" ", "@lhoestq \r\nFirst of all, thx for reply\r\n\r\nbut, if i put in \"bytes\" and \"path\"\r\nex) {\"bytes\":\"blah blah~\", \"path\":\"blah blah~\"}\r\n\r\nthat source working that my bytes to empty first,\r\nand then, re-calculate my bytes!\r\n![image](https://user-images.githubusercontent.com/34292279/168534687-1fb60d8c-d369-47d2-a4bb-db68f95194b4.png)\r\n\r\nif you have some pcm file, pcm is can read bytes.\r\nso, i put in bytes and paths.\r\nbut bytes is been None why encode_example func make None\r\nand then, on decode_example func, we no have bytes. so, calculate bytes to path.\r\npcm is not support librosa or soundfile, error occured!\r\n\r\nthe most important thing is not announced anywhere this situation can be reproduced\r\n\r\nis that truly right process flow?", "I don't think we support PCM files, feel free to convert your data to WAV for now.\r\n\r\nIt would be awesome to support PCM files though, let me know if you'd like to contribute this feature, I'd be happy to help", "@lhoestq oh, how can i contribute?", "You can clone the repository (see the guide on [how to contribute](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-create-a-pull-request)) and see how we can make the `Image.encode_example` method work with PCM data.\r\n\r\nThere might be other ways to approach this problem, but here is what I think is a reasonable one:\r\n\r\nI think `Image.encode_example` should be able to take PCM bytes as input and the sampling rate, and return the WAV bytes (built by combining the PCM bytes and the sampling rate info), so that `Image.decode_example` can read it.\r\n\r\nTo check if the input bytes are PCM data, you can just check if the extension of the `path` is \".pcm\".\r\n", "maybe i can start to contribute on this sunday!\r\n@lhoestq ", "@lhoestq plz check my pr #4409 \r\n\r\nam i wrong somting?", "Thanks, I reviewed your PR :)" ]
1,233,596,947
4,322
Added stratify option to train_test_split function.
closed
2022-05-12T08:00:31
2022-11-22T14:53:55
2022-05-25T20:43:51
https://github.com/huggingface/datasets/pull/4322
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4322", "html_url": "https://github.com/huggingface/datasets/pull/4322", "diff_url": "https://github.com/huggingface/datasets/pull/4322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4322.patch", "merged_at": "2022-05-25T20:43:51" }
nandwalritik
true
[ "> Nice thank you ! This will be super useful :)\r\n> \r\n> Could you also add some tests in test_arrow_dataset.py and add an example of usage in the `Example:` section of the `train_test_split` docstring ?\r\n\r\nI will try to do it, is there any documentation for adding test cases? I have never done it before.", "Thanks for the changes !\r\n\r\n> I will try to do it, is there any documentation for adding test cases? I have never done it before.\r\n\r\nYou can just add a function `test_train_test_split_startify` in `test_arrow_dataset.py`.\r\n\r\nIn this function you can define a dataset and make sure that `train_test_split` with the `stratify` argument works as expected.\r\n\r\nYou can do `pytest tests/test_arrow_dataset.py::test_train_test_split_startify` to run your test.\r\n\r\nFeel free to get some inspiration from other tests like `test_interleave_datasets` for example", "I have added tests for stratified train_test_split in `test_arrow_dataset.py` file inside `test_train_test_split_startify` function. I have also added example usage with `stratify` arg in `Example:` section of the `train_test_split` docstring.\r\nResults of tests:\r\n```\r\n(data) nandwalritik@hp:~/datasets$ pytest tests/test_arrow_dataset.py::test_train_test_split_startify -W ignore\r\n============================================================================ test session starts ============================================================================\r\nplatform linux -- Python 3.9.5, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /home/nandwalritik/datasets\r\nplugins: datadir-1.3.1, forked-1.4.0, xdist-2.5.0\r\ncollected 1 item \r\n\r\ntests/test_arrow_dataset.py . [100%]\r\n\r\n============================================================================= 1 passed in 0.12s =============================================================================\r\n\r\n```", "Thanks a lot !\r\n\r\n`utils/stratify.py` sounds good yes :)\r\n\r\nAlso feel free to merge `master` into your branch to fix the CI ;)", "Added all the changes as were suggested and rebased with `main`.", "_The documentation is not available anymore as the PR was closed or merged._", "Hi, I encounter an error when I try to specify the stratify_by_column. However, I have a columns which specific the label of the row as a string. But an error showed when I try to do it. \"ValueError: Stratifying by column is only supported for ClassLabel column, and column code is Value.\".", "Hi @Damon03 , you can change the type of your column to ClassLabel using\r\n```python\r\nds = ds.class_encode_column(column_name)\r\n```\r\nthen you'll be free to use `stratify` :)", "Thank you so much. It worked." ]
1,233,273,351
4,321
Adding dataset enwik8
closed
2022-05-11T23:25:02
2022-06-01T14:27:30
2022-06-01T14:04:06
https://github.com/huggingface/datasets/pull/4321
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4321", "html_url": "https://github.com/huggingface/datasets/pull/4321", "diff_url": "https://github.com/huggingface/datasets/pull/4321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4321.patch", "merged_at": "2022-06-01T14:04:06" }
HallerPatrick
true
[ "@lhoestq Thank you for the great feedback! Looks like all tests are passing now :)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,233,208,864
4,320
Multi-news dataset loader attempts to strip wrong character from beginning of summaries
closed
2022-05-11T21:36:41
2022-05-16T13:52:10
2022-05-16T13:52:10
https://github.com/huggingface/datasets/issues/4320
null
JohnGiorgi
false
[ "Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character", "Cool! I made a PR." ]
1,232,982,023
4,319
Adding eval metadata for ade v2
closed
2022-05-11T17:36:20
2022-05-12T13:29:51
2022-05-12T13:22:19
https://github.com/huggingface/datasets/pull/4319
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4319", "html_url": "https://github.com/huggingface/datasets/pull/4319", "diff_url": "https://github.com/huggingface/datasets/pull/4319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4319.patch", "merged_at": "2022-05-12T13:22:19" }
sashavor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,232,905,488
4,318
Don't check f.loc in _get_extraction_protocol_with_magic_number
closed
2022-05-11T16:27:09
2022-05-11T16:57:02
2022-05-11T16:46:31
https://github.com/huggingface/datasets/pull/4318
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4318", "html_url": "https://github.com/huggingface/datasets/pull/4318", "diff_url": "https://github.com/huggingface/datasets/pull/4318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4318.patch", "merged_at": "2022-05-11T16:46:31" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,232,737,401
4,317
Fix cnn_dailymail (dm stories were ignored)
closed
2022-05-11T14:25:25
2022-05-11T16:00:09
2022-05-11T15:52:37
https://github.com/huggingface/datasets/pull/4317
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4317", "html_url": "https://github.com/huggingface/datasets/pull/4317", "diff_url": "https://github.com/huggingface/datasets/pull/4317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4317.patch", "merged_at": "2022-05-11T15:52:37" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,232,681,207
4,316
Support passing config_kwargs to CLI run_beam
closed
2022-05-11T13:53:37
2022-05-11T14:36:49
2022-05-11T14:28:31
https://github.com/huggingface/datasets/pull/4316
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4316", "html_url": "https://github.com/huggingface/datasets/pull/4316", "diff_url": "https://github.com/huggingface/datasets/pull/4316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4316.patch", "merged_at": "2022-05-11T14:28:31" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,232,549,330
4,315
Fix CLI run_beam namespace
closed
2022-05-11T12:21:00
2022-05-11T13:13:00
2022-05-11T13:05:08
https://github.com/huggingface/datasets/pull/4315
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4315", "html_url": "https://github.com/huggingface/datasets/pull/4315", "diff_url": "https://github.com/huggingface/datasets/pull/4315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4315.patch", "merged_at": "2022-05-11T13:05:08" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,232,326,726
4,314
Catch pull error when mirroring
closed
2022-05-11T09:38:35
2022-05-11T12:54:07
2022-05-11T12:46:42
https://github.com/huggingface/datasets/pull/4314
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4314", "html_url": "https://github.com/huggingface/datasets/pull/4314", "diff_url": "https://github.com/huggingface/datasets/pull/4314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4314.patch", "merged_at": "2022-05-11T12:46:42" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,231,764,100
4,313
Add API code examples for Builder classes
closed
2022-05-10T22:22:32
2022-05-12T17:02:43
2022-05-12T12:36:57
https://github.com/huggingface/datasets/pull/4313
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4313", "html_url": "https://github.com/huggingface/datasets/pull/4313", "diff_url": "https://github.com/huggingface/datasets/pull/4313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4313.patch", "merged_at": "2022-05-12T12:36:57" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,231,662,775
4,312
added TR-News dataset
closed
2022-05-10T20:33:00
2022-10-03T09:36:45
2022-10-03T09:36:45
https://github.com/huggingface/datasets/pull/4312
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4312", "html_url": "https://github.com/huggingface/datasets/pull/4312", "diff_url": "https://github.com/huggingface/datasets/pull/4312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4312.patch", "merged_at": null }
batubayk
true
[ "Thanks for your contribution, @batubayk.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nI would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
1,231,369,438
4,311
[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly
closed
2022-05-10T15:52:15
2022-05-10T17:19:42
2022-05-10T17:11:47
https://github.com/huggingface/datasets/pull/4311
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4311", "html_url": "https://github.com/huggingface/datasets/pull/4311", "diff_url": "https://github.com/huggingface/datasets/pull/4311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4311.patch", "merged_at": "2022-05-10T17:11:47" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it" ]
1,231,319,815
4,310
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
closed
2022-05-10T15:12:53
2022-05-11T16:46:31
2022-05-11T16:46:31
https://github.com/huggingface/datasets/issues/4310
null
milmin
false
[]
1,231,232,935
4,309
[WIP] Add TEDLIUM dataset
closed
2022-05-10T14:12:47
2022-06-17T12:54:40
2022-06-17T11:44:01
https://github.com/huggingface/datasets/pull/4309
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4309", "html_url": "https://github.com/huggingface/datasets/pull/4309", "diff_url": "https://github.com/huggingface/datasets/pull/4309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4309.patch", "merged_at": null }
sanchit-gandhi
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium/release1 to /home/sanchitgandhi/cache/tedlium/release1/1.0.1/5a9fcb97b4b52d5a1c9dc7bde4b1d5994cd89c4a3425ea36c789bf6096fee4f0...\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/load.py\", line 1703, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 1240, in _download_and_prepare\r\n raise MissingBeamOptions(\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n `load_dataset('tedlium', 'release1', beam_runner='DirectRunner')`\r\n```\r\nSpecifying the `beam_runner='DirectRunner'` works:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache', beam_runner='DirectRunner')\r\n```", "Extra Python imports/Linux packages:\r\n```\r\npip install pydub\r\nsudo apt install ffmpeg\r\n```", "Script heavily inspired by the TF datasets script at: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/tedlium.py\r\n\r\nThe TF datasets script uses the module AudioSegment from the package `pydub` (https://github.com/jiaaro/pydub), which is used to to open the audio files (stored in .sph format):\r\nhttps://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L167-L170\r\nThis package requires the pip install of `pydub` and the system installation of `ffmpeg`: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nThe TF datasets script also uses `_build_pcollection`:\r\nhttps://github.com/huggingface/datasets/blob/8afbbb6fe66b40d05574e2e72e65e974c72ae769/datasets/tedlium/tedlium.py#L200-L206\r\nHowever, I was advised against using `beam` logic. Thus, I have reverted to generating the examples file-by-file: https://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L112-L138\r\n\r\nI am now able to generate examples by running the `load_dataset` command:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\nHere, generating examples is **extremely** slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). 
Is there a way of paralleling this to make it faster?", "> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nIt's ok, windows users will have have a bad time but I'm not sure we can do much about it.\r\n\r\n> Here, generating examples is extremely slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of paralleling this to make it faster?\r\n\r\nNot at the moment. For such cases we advise hosting the dataset ourselves in a processed format. The license doesn't allow this since the license is \"NoDerivatives\". Currently the only way to parallelize it is by keeping is as a beam dataset and let users pay Google Dataflow to process it (or use spark or whatever).", "Thanks for your super speedy reply @lhoestq!\r\n\r\nI’ve uploaded the script and README.md to the org here: https://huggingface.co/datasets/LIUM/tedlium\r\nIs any modification of the script required to be able to use it from the Hub? When I run:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntedlium = load_dataset(\"LIUM/tedlium\", \"release1\") # for Release 1\r\n```\r\nI get the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 load_dataset(\"LIUM/tedlium\", \"release1\")\r\n\r\nFile ~/datasets/src/datasets/load.py:1676, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1673 ignore_verifications = ignore_verifications or save_infos\r\n 1675 # Create a dataset builder\r\n-> 1676 builder_instance = load_dataset_builder(\r\n 1677 path=path,\r\n 1678 name=name,\r\n 1679 data_dir=data_dir,\r\n 1680 data_files=data_files,\r\n 1681 cache_dir=cache_dir,\r\n 1682 features=features,\r\n 1683 download_config=download_config,\r\n 1684 download_mode=download_mode,\r\n 1685 revision=revision,\r\n 1686 use_auth_token=use_auth_token,\r\n 1687 **config_kwargs,\r\n 1688 )\r\n 1690 # Return iterable dataset in case of streaming\r\n 1691 if streaming:\r\n\r\nFile ~/datasets/src/datasets/load.py:1502, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1500 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1501 download_config.use_auth_token = use_auth_token\r\n-> 1502 dataset_module = dataset_module_factory(\r\n 1503 path,\r\n 1504 revision=revision,\r\n 1505 download_config=download_config,\r\n 1506 download_mode=download_mode,\r\n 1507 data_dir=data_dir,\r\n 1508 data_files=data_files,\r\n 1509 )\r\n 1511 # Get dataset builder class from the processing script\r\n 1512 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:1254, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1249 if isinstance(e1, FileNotFoundError):\r\n 1250 raise FileNotFoundError(\r\n 1251 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. 
\"\r\n 1252 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1253 ) from None\r\n-> 1254 raise e1 from None\r\n 1255 else:\r\n 1256 raise FileNotFoundError(\r\n 1257 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory.\"\r\n 1258 )\r\n\r\nFile ~/datasets/src/datasets/load.py:1227, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1225 raise e\r\n 1226 if filename in [sibling.rfilename for sibling in dataset_info.siblings]:\r\n-> 1227 return HubDatasetModuleFactoryWithScript(\r\n 1228 path,\r\n 1229 revision=revision,\r\n 1230 download_config=download_config,\r\n 1231 download_mode=download_mode,\r\n 1232 dynamic_modules_path=dynamic_modules_path,\r\n 1233 ).get_module()\r\n 1234 else:\r\n 1235 return HubDatasetModuleFactoryWithoutScript(\r\n 1236 path,\r\n 1237 revision=revision,\r\n (...)\r\n 1241 download_mode=download_mode,\r\n 1242 ).get_module()\r\n\r\nFile ~/datasets/src/datasets/load.py:940, in HubDatasetModuleFactoryWithScript.get_module(self)\r\n 938 def get_module(self) -> DatasetModule:\r\n 939 # get script and other files\r\n--> 940 local_path = self.download_loading_script()\r\n 941 dataset_infos_path = self.download_dataset_infos_file()\r\n 942 imports = get_imports(local_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:918, in HubDatasetModuleFactoryWithScript.download_loading_script(self)\r\n 917 def download_loading_script(self) -> str:\r\n--> 918 file_path = hf_hub_url(path=self.name, name=self.name.split(\"/\")[1] + \".py\", revision=self.revision)\r\n 919 download_config = self.download_config.copy()\r\n 920 if download_config.download_desc is None:\r\n\r\nTypeError: hf_hub_url() got an unexpected keyword argument 'name'\r\n```\r\n\r\nNote that I am able to load the dataset from the `datasets` repo with the following lines of code:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```", "What version of `datasets` do you have ?\r\nUpdating `datasets` should fix the error ;)\r\n", "> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\n`soundfile`, which is a required audio dependency, should also work with `.sph` files, no?", "> `soundfile`, which is a required audio dependency, should also work with `.sph` files, no?\r\n\r\nAwesome, thanks for the pointer @mariosasko! Switched `pydub` to `soundfile`, and having specifying the `dtype` argument in `soundfile.read` as `np.int16`, the arrays match with those from `pydub` ✅\r\n\r\nI also did some heavy optimising of the script with the processing of the `.stm` and `.sph` files - it now runs 2000x faster than before, so there probably isn't a need to upload the data to the Hub @lhoestq. The total processing time is just ~2mins now 🚀\r\n", "TEDLIUM completed and uploaded to the HF Hub: https://huggingface.co/datasets/LIUM/tedlium", "Awesome !" ]
1,231,217,783
4,308
Remove unused multiprocessing args from test CLI
closed
2022-05-10T14:02:15
2022-05-11T12:58:25
2022-05-11T12:50:43
https://github.com/huggingface/datasets/pull/4308
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4308", "html_url": "https://github.com/huggingface/datasets/pull/4308", "diff_url": "https://github.com/huggingface/datasets/pull/4308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4308.patch", "merged_at": "2022-05-11T12:50:42" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,231,175,639
4,307
Add packaged builder configs to the documentation
closed
2022-05-10T13:34:19
2022-05-10T14:03:50
2022-05-10T13:55:54
https://github.com/huggingface/datasets/pull/4307
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4307", "html_url": "https://github.com/huggingface/datasets/pull/4307", "diff_url": "https://github.com/huggingface/datasets/pull/4307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4307.patch", "merged_at": "2022-05-10T13:55:54" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,231,137,204
4,306
`load_dataset` does not work with certain filename.
closed
2022-05-10T13:14:04
2022-05-10T18:58:36
2022-05-10T18:58:09
https://github.com/huggingface/datasets/issues/4306
null
whatever60
false
[ "Never mind. It is because of the caching of datasets..." ]
1,231,099,934
4,305
Fixes FrugalScore
open
2022-05-10T12:44:06
2022-09-22T16:42:06
null
https://github.com/huggingface/datasets/pull/4305
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4305", "html_url": "https://github.com/huggingface/datasets/pull/4305", "diff_url": "https://github.com/huggingface/datasets/pull/4305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4305.patch", "merged_at": null }
moussaKam
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4305). All of your documentation changes will be reflected on that endpoint.", "> predictions and references are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results as reported in the paper.\r\n\r\nWhat is the order of magnitude of the difference ? Do you know what causes this ?\r\n\r\n> I switched to dynamic padding that was was used in the training, forcing the padding to max_length introduces errors for some reason that I ignore.\r\n\r\nWhat error ?" ]
1,231,047,051
4,304
Language code search does direct matches
open
2022-05-10T11:59:16
2022-05-10T12:38:42
null
https://github.com/huggingface/datasets/issues/4304
null
leondz
false
[ "Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now." ]
1,230,867,728
4,303
Fix: Add missing comma
closed
2022-05-10T09:21:38
2022-05-11T08:50:15
2022-05-11T08:50:14
https://github.com/huggingface/datasets/pull/4303
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4303", "html_url": "https://github.com/huggingface/datasets/pull/4303", "diff_url": "https://github.com/huggingface/datasets/pull/4303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4303.patch", "merged_at": "2022-05-11T08:50:14" }
mrm8488
true
[ "The CI failure is unrelated to this PR and fixed on master, merging :)" ]
1,230,651,117
4,302
Remove hacking license tags when mirroring datasets on the Hub
closed
2022-05-10T05:52:46
2022-05-20T09:48:30
2022-05-20T09:40:20
https://github.com/huggingface/datasets/pull/4302
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4302", "html_url": "https://github.com/huggingface/datasets/pull/4302", "diff_url": "https://github.com/huggingface/datasets/pull/4302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4302.patch", "merged_at": null }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters.", "Ok, let me rename the bad config names :) I think I can also keep backward compatibility with a warning", "Almost done with it btw, will submit a PR that shows all the configuration name changes (from a bit more than 20 datasets)", "Please, let me know when the renaming of configs is done. If not enough bandwidth, I can take care of it...", "Will focus on this this afternoon ;)", "I realized when renaming all the configurations with dots in https://github.com/huggingface/datasets/pull/4365 that it's not ideal for certain cases. For example:\r\n- many configurations have a version like \"1.0.0\" in their names\r\n- to avoid breaking changes we need to replace dots with underscores in the user input and show a warning, which hurts the experience\r\n- our second most downloaded dataset at the moment is affected: `newsgroup`\r\n- if we disallow dots, then we'll never be able to make the [allenai/c4](https://huggingface.co/datasets/allenai/c4) work with its different configurations since they contain dots, and we can't rename them because they are the official download links\r\n\r\nI was thinking of other alternatives:\r\n1. just stop separating tags per config name completely, and have a single flat YAML for all configurations. Dataset search doesn't use this info anyway\r\n2. use another YAML structure to avoid having config names as keys, such as\r\n```yaml\r\nlanguages:\r\n- config: 20220301_en\r\n values:\r\n - en\r\n```\r\n\r\nI'm down for 1, to keep things simple", "@lhoestq I agree:\r\n- better not changing config names (so that we do not introduce any braking change)\r\n- therefore, we should not use them as keys\r\n\r\nIn relation with the proposed solutions, I have no strong opinion:\r\n- option 1 is simpler and aligns better with current usage on the Hub (configs are ignored)\r\n- however:\r\n - we will lose all the information per config we already have (for those datasets containing config keys; contributors made an effort to put that information per config)\r\n - and this information might be useful on the Hub in the future, in case we would like to enrich the search feature with more granularity; this is only applicable if this feature could eventually make sense\r\n\r\nSo, no strong opinion...", "Closing in favor of https://github.com/huggingface/datasets/pull/4367" ]
1,230,401,256
4,301
Add ImageNet-Sketch dataset
closed
2022-05-09T23:38:45
2022-05-23T18:14:14
2022-05-23T18:05:29
https://github.com/huggingface/datasets/pull/4301
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4301", "html_url": "https://github.com/huggingface/datasets/pull/4301", "diff_url": "https://github.com/huggingface/datasets/pull/4301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4301.patch", "merged_at": "2022-05-23T18:05:29" }
nateraw
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data.\r\n\r\nI think it's fine to upload the dataset as soon as we mention explicitly that the images may be subject to copyright." ]
1,230,272,761
4,300
Add API code examples for loading methods
closed
2022-05-09T21:30:26
2022-05-25T16:23:15
2022-05-25T09:20:13
https://github.com/huggingface/datasets/pull/4300
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4300", "html_url": "https://github.com/huggingface/datasets/pull/4300", "diff_url": "https://github.com/huggingface/datasets/pull/4300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4300.patch", "merged_at": "2022-05-25T09:20:12" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,230,236,782
4,299
Remove manual download from imagenet-1k
closed
2022-05-09T20:49:18
2022-05-25T14:54:59
2022-05-25T14:46:16
https://github.com/huggingface/datasets/pull/4299
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4299", "html_url": "https://github.com/huggingface/datasets/pull/4299", "diff_url": "https://github.com/huggingface/datasets/pull/4299.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4299.patch", "merged_at": "2022-05-25T14:46:16" }
mariosasko
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the reviews @apsdehal and @lhoestq! As suggested by @lhoestq, I'll separate the train/val/test splits, apply the validation split fixes and shuffle the images files to simplify the script and make streaming faster.", "@apsdehal I dismissed your review as it's no longer relevant after the data files changes suggested by @lhoestq. " ]
1,229,748,006
4,298
Normalise license names
closed
2022-05-09T13:51:32
2022-05-20T09:51:50
2022-05-20T09:51:50
https://github.com/huggingface/datasets/issues/4298
null
leondz
false
[ "we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")", "Fixed by #4367." ]
1,229,735,498
4,297
Datasets YAML tagging space is down
closed
2022-05-09T13:45:05
2022-05-09T14:44:25
2022-05-09T14:44:25
https://github.com/huggingface/datasets/issues/4297
null
leondz
false
[ "@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess", "Thanks for reporting, fixing it now", "It's up again :)" ]
1,229,554,645
4,296
Fix URL query parameters in compression hop path when streaming
open
2022-05-09T11:18:22
2022-07-06T15:19:53
null
https://github.com/huggingface/datasets/pull/4296
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4296", "html_url": "https://github.com/huggingface/datasets/pull/4296", "diff_url": "https://github.com/huggingface/datasets/pull/4296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4296.patch", "merged_at": null }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4296). All of your documentation changes will be reflected on that endpoint." ]
1,229,527,283
4,295
Fix missing lz4 dependency for tests
closed
2022-05-09T10:53:20
2022-05-09T11:21:22
2022-05-09T11:13:44
https://github.com/huggingface/datasets/pull/4295
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4295", "html_url": "https://github.com/huggingface/datasets/pull/4295", "diff_url": "https://github.com/huggingface/datasets/pull/4295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4295.patch", "merged_at": "2022-05-09T11:13:44" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,229,455,582
4,294
Fix CLI run_beam save_infos
closed
2022-05-09T09:47:43
2022-05-10T07:04:04
2022-05-10T06:56:10
https://github.com/huggingface/datasets/pull/4294
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4294", "html_url": "https://github.com/huggingface/datasets/pull/4294", "diff_url": "https://github.com/huggingface/datasets/pull/4294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4294.patch", "merged_at": "2022-05-10T06:56:10" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,228,815,477
4,293
Fix wrong map parameter name in cache docs
closed
2022-05-08T07:27:46
2022-06-14T16:49:00
2022-06-14T16:07:00
https://github.com/huggingface/datasets/pull/4293
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4293", "html_url": "https://github.com/huggingface/datasets/pull/4293", "diff_url": "https://github.com/huggingface/datasets/pull/4293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4293.patch", "merged_at": "2022-06-14T16:07:00" }
h4iku
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,228,216,788
4,292
Add API code examples for remaining main classes
closed
2022-05-06T18:15:31
2022-05-25T18:05:13
2022-05-25T17:56:36
https://github.com/huggingface/datasets/pull/4292
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4292", "html_url": "https://github.com/huggingface/datasets/pull/4292", "diff_url": "https://github.com/huggingface/datasets/pull/4292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4292.patch", "merged_at": "2022-05-25T17:56:36" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,227,777,500
4,291
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
closed
2022-05-06T12:03:27
2022-05-09T08:25:58
2022-05-09T08:25:58
https://github.com/huggingface/datasets/issues/4291
null
leondz
false
[ "Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being TAR. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.", "Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)" ]
1,227,592,826
4,290
Update paper link in medmcqa dataset card
closed
2022-05-06T08:52:51
2022-09-30T11:51:28
2022-09-30T11:49:07
https://github.com/huggingface/datasets/pull/4290
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4290", "html_url": "https://github.com/huggingface/datasets/pull/4290", "diff_url": "https://github.com/huggingface/datasets/pull/4290.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4290.patch", "merged_at": "2022-09-30T11:49:07" }
monk1337
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova Kindly check :)" ]
1,226,821,732
4,288
Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287
closed
2022-05-05T15:21:49
2022-05-10T12:55:06
2022-05-10T12:09:48
https://github.com/huggingface/datasets/pull/4288
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4288", "html_url": "https://github.com/huggingface/datasets/pull/4288", "diff_url": "https://github.com/huggingface/datasets/pull/4288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4288.patch", "merged_at": "2022-05-10T12:09:48" }
alvarobartt
true
[]
1,226,806,652
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
closed
2022-05-05T15:09:45
2022-05-10T13:53:19
2022-05-10T13:53:19
https://github.com/huggingface/datasets/issues/4287
null
alvarobartt
false
[ "So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L249 when trying to `ds_with_embeddings.add_faiss_index(column='embeddings', device=0)` with the code above.\r\n\r\nAs it seems that the `@staticmethod` doesn't recognize the `import faiss` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L261, so whenever the value of `device` is not None in https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L438, that exception is triggered.\r\n\r\nSo on, adding `import faiss` inside https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L305 right after the check of `device`'s value, solves the issue and lets you calculate the indices in GPU.\r\n\r\nI'll add the code in a PR linked to this issue in case you want to merge it!", "Adding here the complete error traceback!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/alvarobartt/lol.py\", line 12, in <module>\r\n ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3656, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 478, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=True)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index = self._faiss_index_to_device(index, self.device)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 327, in _faiss_index_to_device\r\n faiss_res = faiss.StandardGpuResources()\r\nNameError: name 'faiss' is not defined\r\n```", "Closed as https://github.com/huggingface/datasets/pull/4288 already merged! :hugs:" ]
1,226,758,621
4,286
Add Lahnda language tag
closed
2022-05-05T14:34:20
2022-05-10T12:10:04
2022-05-10T12:02:38
https://github.com/huggingface/datasets/pull/4286
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4286", "html_url": "https://github.com/huggingface/datasets/pull/4286", "diff_url": "https://github.com/huggingface/datasets/pull/4286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4286.patch", "merged_at": "2022-05-10T12:02:37" }
mariosasko
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,226,374,831
4,285
Update LexGLUE README.md
closed
2022-05-05T08:36:50
2022-05-05T13:39:04
2022-05-05T13:33:35
https://github.com/huggingface/datasets/pull/4285
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4285", "html_url": "https://github.com/huggingface/datasets/pull/4285", "diff_url": "https://github.com/huggingface/datasets/pull/4285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4285.patch", "merged_at": "2022-05-05T13:33:35" }
iliaschalkidis
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,226,200,727
4,284
Issues in processing very large datasets
closed
2022-05-05T05:01:09
2023-07-25T15:12:38
2023-07-25T15:12:38
https://github.com/huggingface/datasets/issues/4284
null
sajastu
false
[ "Hi ! `datasets` doesn't load the dataset in memory. Instead it uses memory mapping to load your dataset from your disk (it is stored as arrow files). Do you know at what point you have RAM issues exactly ?\r\n\r\nHow big are your graph_data_train dictionaries btw ?", "Closing this issue due to inactivity." ]
1,225,686,988
4,283
Fix filesystem docstring
closed
2022-05-04T17:42:42
2022-05-06T16:32:02
2022-05-06T06:22:17
https://github.com/huggingface/datasets/pull/4283
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4283", "html_url": "https://github.com/huggingface/datasets/pull/4283", "diff_url": "https://github.com/huggingface/datasets/pull/4283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4283.patch", "merged_at": "2022-05-06T06:22:17" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,225,616,545
4,282
Don't do unnecessary list type casting to avoid replacing None values by empty lists
closed
2022-05-04T16:37:01
2022-05-06T10:43:58
2022-05-06T10:37:00
https://github.com/huggingface/datasets/pull/4282
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4282", "html_url": "https://github.com/huggingface/datasets/pull/4282", "diff_url": "https://github.com/huggingface/datasets/pull/4282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4282.patch", "merged_at": "2022-05-06T10:37:00" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?", "Right ! Good catch, thanks, I updated the message to say \"will raise an error in a future major version\"" ]
1,225,556,939
4,281
Remove a copy-paste sentence in dataset cards
closed
2022-05-04T15:41:55
2022-05-06T08:38:03
2022-05-04T18:33:16
https://github.com/huggingface/datasets/pull/4281
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4281", "html_url": "https://github.com/huggingface/datasets/pull/4281", "diff_url": "https://github.com/huggingface/datasets/pull/4281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4281.patch", "merged_at": "2022-05-04T18:33:16" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests have nothing to do with this PR." ]
1,225,446,844
4,280
Add missing features to commonsense_qa dataset
closed
2022-05-04T14:24:26
2022-05-06T14:23:57
2022-05-06T14:16:46
https://github.com/huggingface/datasets/pull/4280
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4280", "html_url": "https://github.com/huggingface/datasets/pull/4280", "diff_url": "https://github.com/huggingface/datasets/pull/4280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4280.patch", "merged_at": "2022-05-06T14:16:46" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova it adds question_concept and id which is great. I suppose we'll talk about staying true to the format on another PR. ", "Yes, let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the dataset feature structure." ]
1,225,300,273
4,279
Update minimal PyArrow version warning
closed
2022-05-04T12:26:09
2022-05-05T08:50:58
2022-05-05T08:43:47
https://github.com/huggingface/datasets/pull/4279
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4279", "html_url": "https://github.com/huggingface/datasets/pull/4279", "diff_url": "https://github.com/huggingface/datasets/pull/4279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4279.patch", "merged_at": "2022-05-05T08:43:47" }
mariosasko
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,225,122,123
4,278
Add missing features to openbookqa dataset for additional config
closed
2022-05-04T09:22:50
2022-05-06T13:13:20
2022-05-06T13:06:01
https://github.com/huggingface/datasets/pull/4278
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4278", "html_url": "https://github.com/huggingface/datasets/pull/4278", "diff_url": "https://github.com/huggingface/datasets/pull/4278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4278.patch", "merged_at": "2022-05-06T13:06:01" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the data feature structure." ]
1,225,002,286
4,277
Enable label alignment for token classification datasets
closed
2022-05-04T07:15:16
2022-05-06T15:42:15
2022-05-06T15:36:31
https://github.com/huggingface/datasets/pull/4277
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4277", "html_url": "https://github.com/huggingface/datasets/pull/4277", "diff_url": "https://github.com/huggingface/datasets/pull/4277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4277.patch", "merged_at": "2022-05-06T15:36:31" }
lewtun
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm, not sure why the Windows tests are failing with:\r\n\r\n```\r\nDid not find path entry C:\\tools\\miniconda3\\bin\r\nC:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n```\r\n\r\nEdit: running the CI again fixed the problem 🙃 ", "> One last nit and we can merge then\r\n\r\nThanks, done!" ]
1,224,949,252
4,276
OpenBookQA has missing and inconsistent field names
closed
2022-05-04T05:51:52
2022-10-11T17:11:53
2022-10-05T13:50:03
https://github.com/huggingface/datasets/issues/4276
null
vblagoje
false
[ "Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ", "Ok, awesome @albertvillanova How about #4275 ?", "On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.\r\n\r\nFor example, other datasets also flatten \"question.stem\" into \"question\":\r\n- ai2_arc:\r\n ```python\r\n question = data[\"question\"][\"stem\"]\r\n choices = data[\"question\"][\"choices\"]\r\n text_choices = [choice[\"text\"] for choice in choices]\r\n label_choices = [choice[\"label\"] for choice in choices]\r\n yield id_, {\r\n \"id\": id_,\r\n \"answerKey\": answerkey,\r\n \"question\": question,\r\n \"choices\": {\"text\": text_choices, \"label\": label_choices},\r\n }\r\n ```\r\n- commonsense_qa:\r\n ```python\r\n question = data[\"question\"]\r\n stem = question[\"stem\"]\r\n yield id_, {\r\n \"answerKey\": answerkey,\r\n \"question\": stem,\r\n \"choices\": {\"label\": labels, \"text\": texts},\r\n }\r\n ```\r\n- cos_e:\r\n ```python\r\n \"question\": cqa[\"question\"][\"stem\"],\r\n ```\r\n- qasc\r\n- quartz\r\n- wiqa\r\n\r\nExceptions:\r\n- exams\r\n\r\nI think we should agree on a CONVENIENT format for QA and use always CONSISTENTLY the same.", "@albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just because we think something makes more sense. I am in that position now (downloading original data rather than using HF Datasets) and undoubtedly it hinders HF Datasets' widespread use and adoption. Missing fields like in the case of #4275 is definitely bad and not even up for a discussion IMHO! cc @lhoestq ", "I'm opening a PR that adds the missing fields.\r\n\r\nLet's agree on the feature structure: @lhoestq @mariosasko @polinaeterna ", "IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case).", "I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibility. Users who relied on the old format will update their code with either the util method for a quick fix or slightly more elaborate for the new. ", "I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.\r\n\r\nThere is always the tension between:\r\n- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),\r\n- and on the other hand performing some kind of standardization/harmonization depending on the task (this has the advantage that once learnt, the same structure applies to all datasets; this has been done for e.g. 
POS tagging: all datasets have been adapted to a certain \"standard\" structure).\r\n - Another advantage: datasets can easily be interchanged (or joined) to be used by the same model\r\n\r\nRecently, in the BigScience BioMedical hackathon, they adopted a different approach:\r\n- they implement a \"source\" config, respecting the original structure as much as possible\r\n- they implement additional config for each task, with a \"standard\" nested structure per task, which is most useful for users.", "@albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once all the data is there, and users can create lambda functions to create whatever structure serves them best. ", "Datasets are not tracked in this repository anymore. I think we must move this thread to the [discussions tab of the dataset](https://huggingface.co/datasets/openbookqa/discussions)", "Indeed @osbm thanks. I'm closing this issue if it's fine for you all then" ]
1,224,943,414
4,275
CommonSenseQA has missing and inconsistent field names
open
2022-05-04T05:38:59
2022-05-04T11:41:18
null
https://github.com/huggingface/datasets/issues/4275
null
vblagoje
false
[ "Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. " ]
1,224,740,303
4,274
Add API code examples for IterableDataset
closed
2022-05-03T22:44:17
2022-05-04T16:29:32
2022-05-04T16:22:04
https://github.com/huggingface/datasets/pull/4274
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4274", "html_url": "https://github.com/huggingface/datasets/pull/4274", "diff_url": "https://github.com/huggingface/datasets/pull/4274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4274.patch", "merged_at": "2022-05-04T16:22:04" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,224,681,036
4,273
leadboard info added for TNE
closed
2022-05-03T21:35:41
2022-05-05T13:25:24
2022-05-05T13:18:13
https://github.com/huggingface/datasets/pull/4273
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4273", "html_url": "https://github.com/huggingface/datasets/pull/4273", "diff_url": "https://github.com/huggingface/datasets/pull/4273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4273.patch", "merged_at": "2022-05-05T13:18:13" }
yanaiela
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,224,635,660
4,272
Fix typo in logging docs
closed
2022-05-03T20:47:57
2022-05-04T15:42:27
2022-05-04T06:58:36
https://github.com/huggingface/datasets/pull/4272
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4272", "html_url": "https://github.com/huggingface/datasets/pull/4272", "diff_url": "https://github.com/huggingface/datasets/pull/4272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4272.patch", "merged_at": "2022-05-04T06:58:35" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "> This PR fixes #4271.\r\n\r\nThings have not changed when searching \"tqdm\" in the Dataset document. The second result still performs as \"Enable\".", "Hi @jiangwy99, the fix will appear on the `main` version of the docs:\r\n\r\n![Screen Shot 2022-05-04 at 8 38 29 AM](https://user-images.githubusercontent.com/59462357/166718225-6848ab91-87d1-4572-9912-40a909af6cb9.png)\r\n", "Fixed now, thanks." ]
1,224,404,403
4,271
A typo in docs of datasets.disable_progress_bar
closed
2022-05-03T17:44:56
2022-05-04T06:58:35
2022-05-04T06:58:35
https://github.com/huggingface/datasets/issues/4271
null
jiangwangyi
false
[ "Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)" ]
1,224,244,460
4,270
Fix style in openbookqa dataset
closed
2022-05-03T15:21:34
2022-05-06T08:38:06
2022-05-03T16:20:52
https://github.com/huggingface/datasets/pull/4270
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4270", "html_url": "https://github.com/huggingface/datasets/pull/4270", "diff_url": "https://github.com/huggingface/datasets/pull/4270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4270.patch", "merged_at": "2022-05-03T16:20:52" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,223,865,145
4,269
Add license and point of contact to big_patent dataset
closed
2022-05-03T09:24:07
2022-05-06T08:38:09
2022-05-03T11:16:19
https://github.com/huggingface/datasets/pull/4269
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4269", "html_url": "https://github.com/huggingface/datasets/pull/4269", "diff_url": "https://github.com/huggingface/datasets/pull/4269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4269.patch", "merged_at": "2022-05-03T11:16:19" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,223,331,964
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
closed
2022-05-02T20:34:25
2022-05-06T15:53:30
2022-05-03T11:23:48
https://github.com/huggingface/datasets/issues/4268
null
i-am-neo
false
[ "It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɜːd/\r\n([General American](https://en.wikipedia.org/wiki/General_American)) [enPR](https://en.wiktionary.org/wiki/Appendix:English_pronunciation): wûrd, [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɝd/", "Hi @i-am-neo, thanks for reporting.\r\n\r\nNormally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why is it public? I see many other Wikimedia datasets are also public.\r\n\r\nAlso note that last commit \"Add metadata\" (https://huggingface.co/datasets/bigscience-catalogue-lm-data/lm_en_wiktionary_filtered/commit/dc2f458dab50e00f35c94efb3cd4009996858609) introduced buggy data files (`data/file-01.jsonl.gz.lock`, `data/file-01.jsonl.gz.lock.lock`). The same bug appears in other datasets as well.\r\n\r\n@i-am-neo, please note that in the near future we are planning to make public all datasets used for the BigScience project (at least all of them whose license allows to do that). Once public, they will be accessible for all the NLP community.", "Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that!", "All datasets are private now. \r\n\r\nRe:that bug I think we're currently avoiding it by avoiding verifications. (i.e. `ignore_verifications=True`)", "Thanks a lot, @cakiki.\r\n\r\n@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. ", "Thanks for letting me know, @albertvillanova @cakiki.\r\nAny chance of having a subset alpha version in the meantime? \r\nI only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.\r\n\r\nWould like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issues/13162#issuecomment-1096881290) decoding, cc @patrickvonplaten. \r\n\r\n(Patrick, possible to email you so as not to litter github with comments? I have some observations after experiments training hubert on some YT AMI-like data (11.44% wer). Also wonder if a robust ASR is on your/HG's roadmap). Thanks!", "Hey @i-am-neo,\r\n\r\nCool to hear that you're working on Robust ASR! Feel free to drop me a mail :-)", "@i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)\r\nYou're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-content.json.gz) file", "thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it!", "thanks @patrickvonplaten. will do - getting my observations together." ]
1,223,214,275
4,267
Replace data URL in SAMSum dataset within the same repository
closed
2022-05-02T18:38:08
2022-05-06T08:38:13
2022-05-02T19:03:49
https://github.com/huggingface/datasets/pull/4267
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4267", "html_url": "https://github.com/huggingface/datasets/pull/4267", "diff_url": "https://github.com/huggingface/datasets/pull/4267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4267.patch", "merged_at": "2022-05-02T19:03:49" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,223,116,436
4,266
Add HF Speech Bench to Librispeech Dataset Card
closed
2022-05-02T16:59:31
2022-05-05T08:47:20
2022-05-05T08:40:09
https://github.com/huggingface/datasets/pull/4266
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4266", "html_url": "https://github.com/huggingface/datasets/pull/4266", "diff_url": "https://github.com/huggingface/datasets/pull/4266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4266.patch", "merged_at": "2022-05-05T08:40:09" }
sanchit-gandhi
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,222,723,083
4,263
Rename imagenet2012 -> imagenet-1k
closed
2022-05-02T10:26:21
2022-05-02T17:50:46
2022-05-02T16:32:57
https://github.com/huggingface/datasets/pull/4263
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4263", "html_url": "https://github.com/huggingface/datasets/pull/4263", "diff_url": "https://github.com/huggingface/datasets/pull/4263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4263.patch", "merged_at": "2022-05-02T16:32:57" }
lhoestq
true
[ "> Later we can add imagenet-21k as a new dataset if we want.\r\n\r\nisn't it what models refer to as `imagenet` already?", "> isn't it what models refer to as imagenet already?\r\n\r\nI wasn't sure, but it looks like it indeed. Therefore having a dataset `imagenet` for ImageNet 21k makes sense actually.\r\n\r\nEDIT: actually not all `imagenet` tag refer to ImageNet 21k - we will need to correct some of them", "_The documentation is not available anymore as the PR was closed or merged._", "should we remove the repo mirror on the hub side or will you do it?" ]
1,222,130,749
4,262
Add YAML tags to Dataset Card rotten tomatoes
closed
2022-05-01T11:59:08
2022-05-03T14:27:33
2022-05-03T14:20:35
https://github.com/huggingface/datasets/pull/4262
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4262", "html_url": "https://github.com/huggingface/datasets/pull/4262", "diff_url": "https://github.com/huggingface/datasets/pull/4262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4262.patch", "merged_at": "2022-05-03T14:20:35" }
mo6zes
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,221,883,779
4,261
data leakage in `webis/conclugen` dataset
closed
2022-04-30T17:43:37
2022-05-03T06:04:26
2022-05-03T06:04:26
https://github.com/huggingface/datasets/issues/4261
null
xflashxx
false
[ "Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply.", "i'd suggest just pinging the authors here in the issue if possible?", "Thanks for reporting this @xflashxx. I'll have a look and get back to you on this.", "Hi @xflashxx and @albertvillanova,\r\n\r\nI have updated the files with de-duplicated splits. Apparently the debate portals from which part of the examples were sourced had unique timestamps for some examples (up to 6%; updated counts in the README) without any actual content updated that lead to \"new\" items. The length of `ids_validation` and `ids_testing` is zero.\r\n\r\nRegarding impact on scores:\r\n1. We employed automatic evaluation (on a separate set of 1000 examples) only to justify the exclusion of the smaller models for manual evaluation (due to budget constraints). I am confident the ranking still stands (unsurprisingly, the bigger models doing better than those trained on the smaller splits). We also highlight this in the paper. \r\n\r\n2. The examples used for manual evaluation have no overlap with any splits (also because they do not have any ground truth as we applied the trained models on an unlabeled sample to test its practical usage). I've added these two files to the dataset repository.\r\n\r\nHope this helps!", "Thanks @shahbazsyed for your fast fix.\r\n\r\nAs a side note:\r\n- Your email appearing as Point of Contact in the dataset README has a typo: @uni.leipzig.de instead of @uni-leipzig.de\r\n- Your commits on the Hub are not linked to your profile on the Hub: this is because we use the email address to make this link; the email address used in your commit author and the email address set on your Hub account settings." ]
1,221,830,292
4,260
Add mr_polarity movie review sentiment classification
closed
2022-04-30T13:19:33
2022-04-30T14:16:25
2022-04-30T14:16:25
https://github.com/huggingface/datasets/pull/4260
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4260", "html_url": "https://github.com/huggingface/datasets/pull/4260", "diff_url": "https://github.com/huggingface/datasets/pull/4260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4260.patch", "merged_at": null }
mo6zes
true
[ "whoops just found https://huggingface.co/datasets/rotten_tomatoes" ]
1,221,768,025
4,259
Fix bug in choices labels in openbookqa dataset
closed
2022-04-30T07:41:39
2022-05-04T06:31:31
2022-05-03T15:14:21
https://github.com/huggingface/datasets/pull/4259
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4259", "html_url": "https://github.com/huggingface/datasets/pull/4259", "diff_url": "https://github.com/huggingface/datasets/pull/4259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4259.patch", "merged_at": "2022-05-03T15:14:21" }
manandey
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,221,637,727
4,258
Fix/start token mask issue and update documentation
closed
2022-04-29T22:42:44
2022-05-02T16:33:20
2022-05-02T16:26:12
https://github.com/huggingface/datasets/pull/4258
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4258", "html_url": "https://github.com/huggingface/datasets/pull/4258", "diff_url": "https://github.com/huggingface/datasets/pull/4258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4258.patch", "merged_at": "2022-05-02T16:26:12" }
TristanThrush
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Good catch ! Thanks :)\r\n> \r\n> Next time can you describe your fix in the Pull Request description please ?\r\n\r\nThanks. Also whoops, sorry about not being very descriptive. I updated the pull request description, and will keep this in mind for future PRs." ]
1,221,393,137
4,257
Create metric card for Mahalanobis Distance
closed
2022-04-29T18:37:27
2022-05-02T14:50:18
2022-05-02T14:43:24
https://github.com/huggingface/datasets/pull/4257
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4257", "html_url": "https://github.com/huggingface/datasets/pull/4257", "diff_url": "https://github.com/huggingface/datasets/pull/4257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4257.patch", "merged_at": "2022-05-02T14:43:24" }
sashavor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,221,379,625
4,256
Create metric card for MSE
closed
2022-04-29T18:21:22
2022-05-02T14:55:42
2022-05-02T14:48:47
https://github.com/huggingface/datasets/pull/4256
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4256", "html_url": "https://github.com/huggingface/datasets/pull/4256", "diff_url": "https://github.com/huggingface/datasets/pull/4256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4256.patch", "merged_at": "2022-05-02T14:48:47" }
sashavor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,221,142,899
4,255
No google drive URL for pubmed_qa
closed
2022-04-29T15:55:46
2022-04-29T16:24:55
2022-04-29T16:18:56
https://github.com/huggingface/datasets/pull/4255
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4255", "html_url": "https://github.com/huggingface/datasets/pull/4255", "diff_url": "https://github.com/huggingface/datasets/pull/4255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4255.patch", "merged_at": "2022-04-29T16:18:56" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI is failing because some sections are missing in the dataset card, but this is unrelated to this PR - Merging !" ]
1,220,204,395
4,254
Replace data URL in SAMSum dataset and support streaming
closed
2022-04-29T08:21:43
2022-05-06T08:38:16
2022-04-29T16:26:09
https://github.com/huggingface/datasets/pull/4254
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4254", "html_url": "https://github.com/huggingface/datasets/pull/4254", "diff_url": "https://github.com/huggingface/datasets/pull/4254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4254.patch", "merged_at": "2022-04-29T16:26:08" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,219,286,408
4,253
Create metric cards for mean IOU
closed
2022-04-28T20:58:27
2022-04-29T17:44:47
2022-04-29T17:38:06
https://github.com/huggingface/datasets/pull/4253
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4253", "html_url": "https://github.com/huggingface/datasets/pull/4253", "diff_url": "https://github.com/huggingface/datasets/pull/4253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4253.patch", "merged_at": "2022-04-29T17:38:06" }
sashavor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,219,151,100
4,252
Creating metric card for MAE
closed
2022-04-28T19:04:33
2022-04-29T16:59:11
2022-04-29T16:52:30
https://github.com/huggingface/datasets/pull/4252
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4252", "html_url": "https://github.com/huggingface/datasets/pull/4252", "diff_url": "https://github.com/huggingface/datasets/pull/4252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4252.patch", "merged_at": "2022-04-29T16:52:30" }
sashavor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,219,116,354
4,251
Metric card for the XTREME-S dataset
closed
2022-04-28T18:32:19
2022-04-29T16:46:11
2022-04-29T16:38:46
https://github.com/huggingface/datasets/pull/4251
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4251", "html_url": "https://github.com/huggingface/datasets/pull/4251", "diff_url": "https://github.com/huggingface/datasets/pull/4251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4251.patch", "merged_at": "2022-04-29T16:38:46" }
sashavor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,219,093,830
4,250
Bump PyArrow Version to 6
closed
2022-04-28T18:10:50
2022-05-04T09:36:52
2022-05-04T09:29:46
https://github.com/huggingface/datasets/pull/4250
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4250", "html_url": "https://github.com/huggingface/datasets/pull/4250", "diff_url": "https://github.com/huggingface/datasets/pull/4250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4250.patch", "merged_at": "2022-05-04T09:29:46" }
dnaveenr
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Updated meta.yaml as well. Thanks.", "I'm OK with bumping PyArrow to version 6 to match the version in Colab, but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.", "> but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.\r\n\r\nI agree, not much attention has been payed to extension arrays in the latest developments of Arrow anyway.\r\n\r\nLet's not use them more that what we do right now, and try to remove them at one point" ]
1,218,524,424
4,249
Support streaming XGLUE dataset
closed
2022-04-28T10:27:23
2022-05-06T08:38:21
2022-04-28T16:08:03
https://github.com/huggingface/datasets/pull/4249
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4249", "html_url": "https://github.com/huggingface/datasets/pull/4249", "diff_url": "https://github.com/huggingface/datasets/pull/4249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4249.patch", "merged_at": "2022-04-28T16:08:03" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,218,460,444
4,248
conll2003 dataset loads original data.
closed
2022-04-28T09:33:31
2022-07-18T07:15:48
2022-07-18T07:15:48
https://github.com/huggingface/datasets/issues/4248
null
sue991
false
[ "Thanks for reporting @sue99.\r\n\r\nUnfortunately. I'm not able to reproduce your problem:\r\n```python\r\nIn [1]: import datasets\r\n ...: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"conll2003\")\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 14042\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3251\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 3454\r\n })\r\n})\r\n\r\nIn [3]: dataset[\"train\"][0]\r\nOut[3]: \r\n{'id': '0',\r\n 'tokens': ['EU',\r\n 'rejects',\r\n 'German',\r\n 'call',\r\n 'to',\r\n 'boycott',\r\n 'British',\r\n 'lamb',\r\n '.'],\r\n 'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],\r\n 'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],\r\n 'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0]}\r\n```\r\n\r\nJust guessing: might be the case that you are calling `load_dataset` from a working directory that contains a local folder named `conll2003` (containing the raw data files)? If that is the case, `datasets` library gives precedence to the local folder over the dataset on the Hub. " ]
1,218,320,882
4,247
The data preview of XGLUE
closed
2022-04-28T07:30:50
2022-04-29T08:23:28
2022-04-28T16:08:03
https://github.com/huggingface/datasets/issues/4247
null
czq1999
false
[ "![image](https://user-images.githubusercontent.com/49108847/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n", "Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for XGLUE dataset (as the error message points out): this must be refactored to support streaming. ", "Fixed, thanks @albertvillanova !\r\n\r\nhttps://huggingface.co/datasets/xglue\r\n\r\n<img width=\"824\" alt=\"Capture d’écran 2022-04-29 à 10 23 14\" src=\"https://user-images.githubusercontent.com/1676121/165909391-9f98d98a-665a-4e57-822d-8baa2dc9b7c9.png\">\r\n" ]
1,218,320,293
4,246
Support to load dataset with TSV files by passing only dataset name
closed
2022-04-28T07:30:15
2022-05-06T08:38:28
2022-05-06T08:14:07
https://github.com/huggingface/datasets/pull/4246
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4246", "html_url": "https://github.com/huggingface/datasets/pull/4246", "diff_url": "https://github.com/huggingface/datasets/pull/4246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4246.patch", "merged_at": "2022-05-06T08:14:07" }
albertvillanova
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,217,959,400
4,245
Add code examples for DatasetDict
closed
2022-04-27T22:52:22
2022-04-29T18:19:34
2022-04-29T18:13:03
https://github.com/huggingface/datasets/pull/4245
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4245", "html_url": "https://github.com/huggingface/datasets/pull/4245", "diff_url": "https://github.com/huggingface/datasets/pull/4245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4245.patch", "merged_at": "2022-04-29T18:13:03" }
stevhliu
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,217,732,221
4,244
task id update
closed
2022-04-27T18:28:14
2022-05-04T10:43:53
2022-05-04T10:36:37
https://github.com/huggingface/datasets/pull/4244
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4244", "html_url": "https://github.com/huggingface/datasets/pull/4244", "diff_url": "https://github.com/huggingface/datasets/pull/4244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4244.patch", "merged_at": "2022-05-04T10:36:37" }
nazneenrajani
true
[ "Reverted the multi-input-text-classification tag from task_categories and added it as task_ids @lhoestq ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,217,689,909
4,243
WIP: Initial shades loading script and readme
closed
2022-04-27T17:45:43
2022-10-03T09:36:35
2022-10-03T09:36:35
https://github.com/huggingface/datasets/pull/4243
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4243", "html_url": "https://github.com/huggingface/datasets/pull/4243", "diff_url": "https://github.com/huggingface/datasets/pull/4243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4243.patch", "merged_at": null }
shayne-longpre
true
[ "Thanks for your contribution, @shayne-longpre.\r\n\r\nAre you still interested in adding this dataset? As we are transferring the dataset scripts from this GitHub repo, we would recommend you to add this to the Hugging Face Hub: https://huggingface.co/datasets" ]
1,217,665,960
4,242
Update auth when mirroring datasets on the hub
closed
2022-04-27T17:22:31
2022-04-27T17:37:04
2022-04-27T17:30:42
https://github.com/huggingface/datasets/pull/4242
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4242", "html_url": "https://github.com/huggingface/datasets/pull/4242", "diff_url": "https://github.com/huggingface/datasets/pull/4242.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4242.patch", "merged_at": "2022-04-27T17:30:42" }
lhoestq
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,217,423,686
4,241
NonMatchingChecksumError when attempting to download GLUE
closed
2022-04-27T14:14:21
2022-04-28T07:45:27
2022-04-28T07:45:27
https://github.com/huggingface/datasets/issues/4241
null
drussellmrichie
false
[ "Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"glue\", \"rte\")\r\n```", "This appears to work. Thank you!\n\nOn Wed, Apr 27, 2022, 1:18 PM Steven Liu ***@***.***> wrote:\n\n> Hi :)\n>\n> I think your issue may be related to the older nlp library. I was able to\n> download glue with the latest version of datasets. Can you try updating\n> with:\n>\n> pip install -U datasets\n>\n> Then you can download:\n>\n> from datasets import load_datasetds = load_dataset(\"glue\", \"rte\")\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/4241#issuecomment-1111267650>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACJUEKLUP2EL7ES3RRWJRPTVHFZHBANCNFSM5UPJBYXA>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n" ]
1,217,287,594
4,240
Fix yield for crd3
closed
2022-04-27T12:31:36
2022-04-29T12:41:41
2022-04-29T12:41:41
https://github.com/huggingface/datasets/pull/4240
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4240", "html_url": "https://github.com/huggingface/datasets/pull/4240", "diff_url": "https://github.com/huggingface/datasets/pull/4240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4240.patch", "merged_at": "2022-04-29T12:41:41" }
shanyas10
true
[ "I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str\r\n```\r\n\r\nDo you know what could cause this ? If I understand correctly, `turn` is supposed to be a list of dictionaries right ?", "> ``` \r\n> \r\n> Do you know what could cause this ? If I understand correctly, turn is supposed to be a list of dictionaries right ?\r\n> ```\r\n\r\nThis is strange. Let me look into this. As per https://github.com/RevanthRameshkumar/CRD3/blob/master/data/aligned%20data/c%3D2/C1E001_2_0.json TURNS is a list of dictionaries. So when we iterate over `row[\"TURNS]\"` each `turn` is essentially a dictionary. Not sure why it's being considered a tuple here." ]
1,217,269,689
4,239
Small fixes in ROC AUC docs
closed
2022-04-27T12:15:50
2022-05-02T13:28:57
2022-05-02T13:22:03
https://github.com/huggingface/datasets/pull/4239
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4239", "html_url": "https://github.com/huggingface/datasets/pull/4239", "diff_url": "https://github.com/huggingface/datasets/pull/4239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4239.patch", "merged_at": "2022-05-02T13:22:03" }
wschella
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,217,168,123
4,238
Dataset caching policy
closed
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
https://github.com/huggingface/datasets/issues/4238
null
loretoparisi
false
[ "Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that your dataset loads without any problem for me:\r\n```python\r\nIn [2]: ds = load_dataset(\"loretoparisi/tatoeba-sentences\", data_files={\"train\": \"train.csv\", \"test\": \"test.csv\"}, delimiter=\"\\t\", column_names=['label', 'text'])\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 8256449\r\n })\r\n test: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 2061204\r\n })\r\n})\r\n``` ", "@albertvillanova thank you, it seems it still does not work using:\r\n\r\n```python\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\"\r\n)\r\n```\r\n[This](https://colab.research.google.com/drive/1EA6FWo5pHxU8rPHHRn24NlHqRPiOlPTr?usp=sharing) is my notebook!\r\n\r\nThe problem is that the download file's revision for `test.csv` is not correctly parsed\r\n\r\n![Schermata 2022-04-27 alle 18 09 41](https://user-images.githubusercontent.com/163333/165563507-0be53eb6-8f61-49b0-b959-306e59281de3.png)\r\n\r\nIf you download that file `test.csv` from the repo, the line `\\\\N` is not there anymore (it was there at the first file upload).\r\n\r\nMy impression is that the Apache Arrow file is still cached - so server side, despite of enabling a forced download. For what I can see I get those two arrow files, but I cannot grep the bad line (`\\\\N`) since are binary files:\r\n\r\n```\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/csv-test.arrow\r\n!head /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/dataset_info.json\r\n```\r\n", "SOLVED! The problem was the with the file itself, using caching parameter helped indeed.\r\nThanks for helping!" ]
1,217,121,044
4,237
Common Voice 8 doesn't show datasets viewer
closed
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
https://github.com/huggingface/datasets/issues/4237
null
patrickvonplaten
false
[ "Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.9k/10.9k [00:00<00:00, 10.9MB/s]\r\nDownloading extra modules: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 3.36MB/s]\r\nDownloading extra modules: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 53.1k/53.1k [00:00<00:00, 650kB/s]\r\nNo config specified, defaulting to: common_voice/en\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 153, in _split_generators\r\n self._log_download(self.config.name, bundle_version, hf_auth_token)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 139, in _log_download\r\n email = HfApi().whoami(auth_token)[\"email\"]\r\nKeyError: 'email'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```", "Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.\r\n\r\nUnfortunately I'm not able to reproduce the error.\r\n\r\nI think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/blob/main/common_voice_8_0.py#L137-L139\r\n```python\r\nfrom huggingface_hub import HfApi, HfFolder\r\n\r\nif isinstance(auth_token, bool):\r\n email = HfApi().whoami(auth_token)\r\nemail = HfApi().whoami(auth_token)[\"email\"]\r\n```\r\n\r\nCould you please verify the previous code with the `auth_token` you pass to `load_dataset(..., use_auth_token=auth_token,...`?", "OK, thanks for digging a bit into it. 
Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!\r\n\r\n```python\r\n>>> from huggingface_hub import HfApi, HfFolder\r\n>>> auth_token = \"hf_app_******\"\r\n>>> t = HfApi().whoami(auth_token)\r\n>>> t\r\n{'type': 'app', 'name': 'dataset-preview-backend'}\r\n>>> t[\"email\"]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nKeyError: 'email'\r\n```\r\n\r\nNote also that the doc (https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.whoami) does not state that `whoami` should return an `email` key.\r\n\r\n@SBrandeis @julien-c: do you think the app token should have an email associated, like the users?", "We can workaround this with\r\n```python\r\nemail = HfApi().whoami(auth_token).get(\"email\", \"system@huggingface.co\")\r\n```\r\nin the common voice scripts", "Hmmm, does this mean that any person who downloads the common voice dataset will be logged as \"system@huggingface.co\"? If so, it would defeat the purpose of sending the user's email to the commonvoice API, right?", "I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to by-pass the Common Voice usage policy.\r\n\r\nAdditionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nCC: @patrickvonplaten @lhoestq @SBrandeis @julien-c ", "Hmm I don't agree here. \r\n\r\nAnybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script it's trivial to tweak the code to not send the \"correct\" email but to just whatever and it would work.\r\n\r\nNote that someone only has visibility on the code after having \"signed\" the access-mechanism so I think we can expect the users to have agreed to not do anything malicious. \r\n\r\nI'm fine with both @lhoestq's solution or we find a way that forces the user to be logged in + being able to load the data for the datasets viewer. Wdyt @lhoestq @severo @albertvillanova ?", "> Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nYes, I agree we can forget about this @patrickvonplaten. After having had a look at Common Voice website, I've seen they only require sending an email (no auth is inplace on their side, contrary to what I had previously thought). Therefore, currently we impose stronger requirements than them: we require the user having logged in and accepted the access mechanism.\r\n\r\nCurrently the script as it is already requires the user being logged in:\r\n```python\r\nHfApi().whoami(auth_token)\r\n```\r\nthrows an exception if None/invalid auth_token is passed.\r\n\r\nOn the other hand, we should agree on the way to allow the viewer to stream the data.", "The preview is back now, thanks !" ]