| Column | Type | Range / Values |
| --- | --- | --- |
| id | int64 | 953M – 3.35B |
| number | int64 | 2.72k – 7.75k |
| title | stringlengths | 1 – 290 |
| state | stringclasses | 2 values |
| created_at | timestamp[s]date | 2021-07-26 12:21:17 – 2025-08-23 00:18:43 |
| updated_at | timestamp[s]date | 2021-07-26 13:27:59 – 2025-08-23 12:34:39 |
| closed_at | timestamp[s]date | 2021-07-26 13:27:59 – 2025-08-20 16:35:55 |
| html_url | stringlengths | 49 – 51 |
| pull_request | dict | n/a |
| user_login | stringlengths | 3 – 26 |
| is_pull_request | bool | 2 classes |
| comments | listlengths | 0 – 30 |
1,031,673,115
3,121
Use huggingface_hub.HfApi to list datasets/metrics
closed
2021-10-20T17:48:29
2021-11-05T11:45:08
2021-11-05T09:48:36
https://github.com/huggingface/datasets/pull/3121
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3121", "html_url": "https://github.com/huggingface/datasets/pull/3121", "diff_url": "https://github.com/huggingface/datasets/pull/3121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3121.patch", "merged_at": "2021-11-05T09:48:35" }
mariosasko
true
[]
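A minimal sketch of the Hub-listing approach this PR adopts, assuming a recent `huggingface_hub` release; the `author` and `limit` filters are illustrative, not taken from the PR:

```python
# Hedged sketch: enumerate Hub datasets through HfApi, the interface this
# PR switches to. Assumes a recent huggingface_hub; filters are illustrative.
from huggingface_hub import HfApi

api = HfApi()
for info in api.list_datasets(author="huggingface", limit=5):
    print(info.id)
```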
1,031,574,511
3,120
Correctly update metadata to preserve features when concatenating datasets with axis=1
closed
2021-10-20T15:54:58
2021-10-22T08:28:51
2021-10-21T14:50:21
https://github.com/huggingface/datasets/pull/3120
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3120", "html_url": "https://github.com/huggingface/datasets/pull/3120", "diff_url": "https://github.com/huggingface/datasets/pull/3120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3120.patch", "merged_at": "2021-10-21T14:50:21" }
mariosasko
true
[]
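A hedged toy illustration of the behavior this fix targets: column-wise concatenation with `axis=1` should keep feature types such as `ClassLabel` (see issue 3111 below) instead of collapsing them to plain integer columns:

```python
# Toy illustration with made-up data: after the fix, axis=1 concatenation
# preserves the ClassLabel feature on the "label" column.
from datasets import ClassLabel, Dataset, concatenate_datasets

ds_text = Dataset.from_dict({"text": ["good", "bad"]})
ds_label = Dataset.from_dict({"label": [1, 0]}).cast_column(
    "label", ClassLabel(names=["neg", "pos"])
)

combined = concatenate_datasets([ds_text, ds_label], axis=1)
print(combined.features["label"])  # expected: ClassLabel(names=['neg', 'pos'])
```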
1,031,328,044
3,119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
closed
2021-10-20T12:05:07
2021-10-22T19:00:52
2021-10-22T08:30:22
https://github.com/huggingface/datasets/issues/3119
null
tyrius02
false
[ "Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files." ]
1,031,309,549
3,118
Fix CI error at each release commit
closed
2021-10-20T11:44:38
2021-10-20T13:02:36
2021-10-20T13:02:36
https://github.com/huggingface/datasets/pull/3118
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3118", "html_url": "https://github.com/huggingface/datasets/pull/3118", "diff_url": "https://github.com/huggingface/datasets/pull/3118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3118.patch", "merged_at": "2021-10-20T13:02:35" }
albertvillanova
true
[]
1,031,308,083
3,117
CI error at each release commit
closed
2021-10-20T11:42:53
2021-10-20T13:02:35
2021-10-20T13:02:35
https://github.com/huggingface/datasets/issues/3117
null
albertvillanova
false
[]
1,031,270,611
3,116
Update doc links to point to new docs
closed
2021-10-20T11:00:47
2021-10-22T08:29:28
2021-10-22T08:26:45
https://github.com/huggingface/datasets/pull/3116
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3116", "html_url": "https://github.com/huggingface/datasets/pull/3116", "diff_url": "https://github.com/huggingface/datasets/pull/3116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3116.patch", "merged_at": "2021-10-22T08:26:45" }
mariosasko
true
[]
1,030,737,524
3,115
Fill in dataset card for NCBI disease dataset
closed
2021-10-19T20:57:05
2021-10-22T08:25:07
2021-10-22T08:25:07
https://github.com/huggingface/datasets/pull/3115
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3115", "html_url": "https://github.com/huggingface/datasets/pull/3115", "diff_url": "https://github.com/huggingface/datasets/pull/3115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3115.patch", "merged_at": "2021-10-22T08:25:07" }
edugp
true
[]
1,030,693,130
3,114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
closed
2021-10-19T20:01:45
2022-02-14T14:00:28
2022-02-14T14:00:28
https://github.com/huggingface/datasets/issues/3114
null
francisco-perez-sorrosal
false
[ "Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.", "Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` once I update arrow to 6.0.0.\r\n\r\nThanks!" ]
1,030,667,547
3,113
Loading Data from HDF files
closed
2021-10-19T19:26:46
2025-08-19T13:28:54
2025-08-19T13:28:54
https://github.com/huggingface/datasets/issues/3113
null
FeryET
false
[ "I'm currently working on bringing [Ecoset](https://www.pnas.org/doi/10.1073/pnas.2011417118) to huggingface datasets and I would second this request...", "I would also like this support or something similar. Geospatial datasets come in netcdf which is derived from hdf5, or zarr. I've gotten zarr stores to work with datasets and streaming, but it takes awhile to convert the data to zarr if it's not stored in that natively. ", "@mariosasko , I would like to contribute on this \"good second issue\" . Is there anything in the works for this Issue or can I go ahead ? \r\n", "Hi @VijayKalmath! As far as I know, nobody is working on it, so feel free to take over. Also, before you start, I suggest you comment `#self-assign` on this issue to assign it to yourself.", "#self-assign", "Hey @mariosasko can you assign this issue to me !!", "So basically, we just need to load HDF5 files to Parquet?\r\n\r\ne.g. Like this? https://stackoverflow.com/questions/46157709/converting-hdf5-to-parquet-without-loading-into-memory", "#self-assign\nHi! I'd love to take a shot at implementing this feature to allow loading HDF5 datasets via Hugging Face Datasets. Will keep you updated here!", "Hey @mariosasko,\nI've opened a PR to address this: #7625\nIt adds a new h5folder loader to support .h5 files via load_dataset(\"h5folder\", data_dir=...).\nFeedback welcome!" ]
1,030,613,083
3,112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
open
2021-10-19T18:21:41
2021-10-19T18:52:29
null
https://github.com/huggingface/datasets/issues/3112
null
BenoitDalFerro
false
[ "I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.", "fixed", "Ok got it, tensor full of NaNs, cf.\r\n\r\n~\\anaconda3\\envs\\xxx\\lib\\site-packages\\datasets\\arrow_writer.py in write_examples_on_file(self)\r\n315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:", "Actually this is is a live bug, documented yet still live so reopening" ]
1,030,598,983
3,111
concatenate_datasets removes ClassLabel typing.
closed
2021-10-19T18:05:31
2021-10-21T14:50:21
2021-10-21T14:50:21
https://github.com/huggingface/datasets/issues/3111
null
Dref360
false
[ "Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1" ]
1,030,558,484
3,110
Stream TAR-based dataset using iter_archive
closed
2021-10-19T17:16:24
2021-11-05T17:48:49
2021-11-05T17:48:48
https://github.com/huggingface/datasets/pull/3110
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3110", "html_url": "https://github.com/huggingface/datasets/pull/3110", "diff_url": "https://github.com/huggingface/datasets/pull/3110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3110.patch", "merged_at": "2021-11-05T17:48:48" }
lhoestq
true
[ "I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first", "The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR" ]
1,030,543,284
3,109
Update BibTeX entry
closed
2021-10-19T16:59:31
2021-10-19T17:13:28
2021-10-19T17:13:27
https://github.com/huggingface/datasets/pull/3109
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3109", "html_url": "https://github.com/huggingface/datasets/pull/3109", "diff_url": "https://github.com/huggingface/datasets/pull/3109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3109.patch", "merged_at": "2021-10-19T17:13:27" }
albertvillanova
true
[]
1,030,405,618
3,108
Add Google BLEU (aka GLEU) metric
closed
2021-10-19T14:48:38
2021-10-25T14:07:04
2021-10-25T14:07:04
https://github.com/huggingface/datasets/pull/3108
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3108", "html_url": "https://github.com/huggingface/datasets/pull/3108", "diff_url": "https://github.com/huggingface/datasets/pull/3108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3108.patch", "merged_at": "2021-10-25T14:07:04" }
slowwavesleep
true
[]
1,030,357,527
3,107
Add paper BibTeX citation
closed
2021-10-19T14:08:11
2021-10-19T14:26:22
2021-10-19T14:26:21
https://github.com/huggingface/datasets/pull/3107
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3107", "html_url": "https://github.com/huggingface/datasets/pull/3107", "diff_url": "https://github.com/huggingface/datasets/pull/3107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3107.patch", "merged_at": "2021-10-19T14:26:21" }
albertvillanova
true
[]
1,030,112,473
3,106
Fix URLs in blog_authorship_corpus dataset
closed
2021-10-19T10:06:05
2021-10-19T12:50:40
2021-10-19T12:50:39
https://github.com/huggingface/datasets/pull/3106
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3106", "html_url": "https://github.com/huggingface/datasets/pull/3106", "diff_url": "https://github.com/huggingface/datasets/pull/3106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3106.patch", "merged_at": "2021-10-19T12:50:39" }
albertvillanova
true
[]
1,029,098,843
3,105
download_mode=`force_redownload` does not work on removed datasets
open
2021-10-18T13:12:38
2021-10-22T09:36:10
null
https://github.com/huggingface/datasets/issues/3105
null
severo
false
[]
1,029,080,412
3,104
Missing Zenodo 1.13.3 release
closed
2021-10-18T12:57:18
2021-10-22T13:22:25
2021-10-22T13:22:24
https://github.com/huggingface/datasets/issues/3104
null
albertvillanova
false
[ "Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150" ]
1,029,069,310
3,103
Fix project description in PyPI
closed
2021-10-18T12:47:29
2021-10-18T12:59:57
2021-10-18T12:59:56
https://github.com/huggingface/datasets/pull/3103
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3103", "html_url": "https://github.com/huggingface/datasets/pull/3103", "diff_url": "https://github.com/huggingface/datasets/pull/3103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3103.patch", "merged_at": "2021-10-18T12:59:56" }
albertvillanova
true
[]
1,029,067,062
3,102
Unsuitable project description in PyPI
closed
2021-10-18T12:45:00
2021-10-18T12:59:56
2021-10-18T12:59:56
https://github.com/huggingface/datasets/issues/3102
null
albertvillanova
false
[]
1,028,966,968
3,101
Update SUPERB to use Audio features
closed
2021-10-18T11:05:18
2021-10-18T12:33:54
2021-10-18T12:06:46
https://github.com/huggingface/datasets/pull/3101
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3101", "html_url": "https://github.com/huggingface/datasets/pull/3101", "diff_url": "https://github.com/huggingface/datasets/pull/3101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3101.patch", "merged_at": "2021-10-18T12:06:46" }
anton-l
true
[ "Thank you! Sorry I forgot this one @albertvillanova" ]
1,028,738,180
3,100
Replace FSTimeoutError with parent TimeoutError
closed
2021-10-18T07:37:09
2021-10-18T07:51:55
2021-10-18T07:51:54
https://github.com/huggingface/datasets/pull/3100
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3100", "html_url": "https://github.com/huggingface/datasets/pull/3100", "diff_url": "https://github.com/huggingface/datasets/pull/3100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3100.patch", "merged_at": "2021-10-18T07:51:54" }
albertvillanova
true
[]
1,028,338,078
3,099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
closed
2021-10-17T14:17:47
2021-11-09T16:42:29
2021-11-09T16:42:28
https://github.com/huggingface/datasets/issues/3099
null
JTWang2000
false
[ "Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 8544\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 1101\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 2210\r\n })\r\n})\r\n```\r\n\r\nMaybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n```\r\npip install -U huggingface_hub\r\n```", "Im facing the same issue. I did run the upgrade command but that doesnt seem to resolve the issue", "Hi @aneeshjain, could you please specify which `huggingface_hub` version you are using?\r\n\r\nBesides that, please run `datasets-cli env` and copy-and-paste its output below.", "The problem seems to be with the latest version of `datasets`. After running `pip install -U datasets huggingface_hub`, I get the following: \r\n\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/data_files.py\", line 122, in <module>\r\n allowed_extensions: Optional[list] = None,\r\nAttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'\r\n````\r\nNote that pip reports the latest `datasets` version as \r\n```bash\r\n pip show datasets\r\nName: datasets\r\nVersion: 1.14.0\r\n```\r\nHowever, if I downgrade datasets with `pip install datasets==1.11.0`, things now work\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\ndvers=1.11.0\r\n````", "> Hi @JTWang2000, thanks for reporting.\r\n> \r\n> However, I cannot reproduce your reported bug:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> \r\n> >>> dataset = load_dataset(\"sst\", \"default\")\r\n> >>> dataset\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 8544\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 1101\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 2210\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Maybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n> \r\n> ```\r\n> pip install -U huggingface_hub\r\n> ```\r\n\r\nMy problem solved after updating huggingface hub command. Thanks a lot and sorry for the late reply. 
", "@tjruwase, please note that versions of `datsets` and `huggingface_hub` must be compatible one with each other:\r\n- In `datasets` version `1.11.0`, the requirement on `huggingface_hub` was: `huggingface_hub<0.1.0`\r\n https://github.com/huggingface/datasets/blob/2cc00f372a96133e701275eec4d6b26d15257289/setup.py#L90\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was compatible\r\n- In `datasets` version `1.12.0`, the requirement on `huggingface_hub` was: `huggingface_hub>=0.0.14,<0.1.0`\r\n https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/setup.py#L104\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was no longer compatible \r\n- Currently, in `datasets` version `1.15.1`, the requirement on `huggingface_hub` is: `huggingface_hub>=0.1.0,<1.0.0`\r\n https://github.com/huggingface/datasets/blob/018100679d21cf27136f0eccb1c50e3a9c968ce2/setup.py#L102\r\n\r\n@JTWang2000, thanks for your answer. I close this issue then." ]
1,028,210,790
3,098
Push to hub capabilities for `Dataset` and `DatasetDict`
closed
2021-10-17T04:12:44
2021-12-08T16:04:50
2021-11-24T11:25:36
https://github.com/huggingface/datasets/pull/3098
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3098", "html_url": "https://github.com/huggingface/datasets/pull/3098", "diff_url": "https://github.com/huggingface/datasets/pull/3098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3098.patch", "merged_at": "2021-11-24T11:25:36" }
LysandreJik
true
[ "Thank you for your reviews! I should have addressed all of your comments, and I added a test to ensure that `private` datasets work correctly too. I have merged the changes in `huggingface_hub`, so the `main` branch can be installed now; and I will release v0.1.0 soon.\r\n\r\nAs blockers for this PR:\r\n- It's still waiting for #3027 to be addressed as the folder name will dictate the split name\r\n- The `self.split` name is set to `None` when the dataset dict is instantiated as follows:\r\n```py\r\nds = Dataset.from_dict({\"x\": [1, 2, 3], \"y\": [4, 5, 6]})\r\nlocal_ds = DatasetDict({\"random\": ds})\r\n\r\nlocal_ds['random'].split # returns None\r\n```\r\nIn order to remove the `split=key` I would need to know of a different way to test here as it relies on the above as a surefire way of constructing a `DatasetDict`.\r\n- Finally, the `threading` parameter is flaky on moon-staging which results in many errors server side. I propose to leave it as an argument instead of having it having it set to `True` so that users may toggle it according to their wish. ", "Currently it looks like it only saves the last split.\r\nIndeed when writing the data of one split, it deletes all the other files from the other splits\r\n```python\r\n>>> dataset.push_to_hub(\"lhoestq/squad_titles\", shard_size=50<<10) \r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 100%|█| 31/31 [00:22<00:00, 1.38\r\nPushing split validation to the Hub.\r\nThe repository already exists: the `private` keyword argument will be ignored.\r\nDeleting unused files from dataset repository: 100%|█| 31/31 [00:14<00:00, \r\nPushing dataset shards to the dataset hub: 100%|█| 4/4 [00:03<00:00, 1.18it\r\n```\r\nNote the \"Deleting\" part.", "I think this PR should fix #3035, so feel free to link it. ", "Thank you for your comments! I have rebased on `master` to have PR #3221. I've updated all tests to reflect the `-` instead of the `_` in the filenames.\r\n\r\n@lhoestq, I have fixed the issue with splits and added a corresponding test.\r\n\r\n@mariosasko I have not updated the `load_dataset` method to work differently, so I don't expect #3035 to be resolved with `push_to_hub`.\r\n\r\nOnly remaining issues before merging:\r\n- Take a good look at the `threading` and if that's something we want to keep.\r\n- As mentioned above:\r\n>The self.split name is set to None when the dataset dict is instantiated as follows:\r\n> ```\r\n> ds = Dataset.from_dict({\"x\": [1, 2, 3], \"y\": [4, 5, 6]})\r\n> local_ds = DatasetDict({\"random\": ds})\r\n> \r\n> local_ds['random'].split # returns None\r\n> ```\r\nI need to understand how to build a `DatasetDict` from some `Dataset` objects to be able to leverage the `split` parameter in `DatasetDict.push_to_hub`", "Cool thanks ! And indeed this won't solve https://github.com/huggingface/datasets/issues/3035 yet\r\n\r\n> I need to understand how to build a DatasetDict from some Dataset objects to be able to leverage the split parameter in DatasetDict.push_to_hub\r\n\r\nYou can use the key in the DatasetDict instead of the `split` attribute", "What do you think about bumping the minimum version of pyarrow to 3.0.0 ? This is the minimum required version to write parquet files, which is needed for push_to_hub. That's why our pyarrow 1 CI is failing.\r\n\r\nI think it's fine since it's been available for a long time (january 2021) and it's also the version that is installed on Google Colab.", "Pushing pyarrow to 3.0.0 is fine for me. 
I don’t think we need to keep a lot of backward support for pyarrow.", "Hi.\r\nI published in the forum about my experience with `DatasetDict.push_to_hub()`: here is my [post.](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/4)\r\nOn my side, there is a problem as my train and validation `Datasets` are concatenated when I do a `load_dataset()` from the `DatasetDict` I pushed to the HF datasets hub.", "Hi ! Let me respond here as well in case other people have the same issues and come here:\r\n\r\n`push_to_hub` was introduced in `datasets` 1.16, and to be able to properly load a dataset with separated splits you need to have `datasets>=1.16.0` as well. \r\n\r\nOld version of `datasets` used to concatenate everything in the `train` split." ]
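A hedged sketch of the API this PR introduces; the repo id is a hypothetical placeholder, and pushing requires an authenticated Hub login:

```python
# Hedged sketch of the new API. "username/my-dataset" is a hypothetical
# repo id; uploading requires `huggingface-cli login` beforehand.
from datasets import Dataset, DatasetDict

ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]})
dsdict = DatasetDict({"train": ds})

dsdict.push_to_hub("username/my-dataset")  # each split is pushed as parquet shards
```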
1,027,750,811
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
closed
2021-10-15T19:34:38
2021-10-18T07:51:54
2021-10-18T07:51:54
https://github.com/huggingface/datasets/issues/3097
null
VictorSanh
false
[ "Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it." ]
1,027,535,685
3,096
Fix Audio feature mp3 resampling
closed
2021-10-15T15:05:19
2021-10-15T15:38:30
2021-10-15T15:38:30
https://github.com/huggingface/datasets/pull/3096
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3096", "html_url": "https://github.com/huggingface/datasets/pull/3096", "diff_url": "https://github.com/huggingface/datasets/pull/3096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3096.patch", "merged_at": "2021-10-15T15:38:29" }
albertvillanova
true
[]
1,027,453,146
3,095
`cast_column` makes audio decoding fail
closed
2021-10-15T13:36:58
2023-04-07T09:43:20
2021-10-15T15:38:30
https://github.com/huggingface/datasets/issues/3095
null
patrickvonplaten
false
[ "cc @anton-l @albertvillanova ", "Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it." ]
1,027,328,633
3,094
Support loading a dataset from SQLite files
closed
2021-10-15T10:58:41
2022-10-03T16:32:29
2022-10-03T16:32:29
https://github.com/huggingface/datasets/issues/3094
null
albertvillanova
false
[ "for reference Kaggle has a good number of open source datasets stored in sqlite\r\n\r\nAlternatively a tutorial or tool on how to convert from sqlite to parquet would be cool too", "Hello, could we leverage [`pandas.read_sql`](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) for this? \r\n\r\nThis would be basically the same as [`CSVBuilder`](https://github.com/huggingface/datasets/blob/7380140accf522a4363bb56c0b77a4190f49bed6/src/datasets/packaged_modules/csv/csv.py#L127)\r\n, but uses `pandas.read_sql(..., chunksize=1)` instead of `pandas.read_csv(..., iterator=True)`\r\n\r\nI'm happy to work on this :) \r\n\r\nself-assign" ]
1,027,262,124
3,093
Error loading json dataset with multiple splits if keys in nested dicts have a different order
closed
2021-10-15T09:33:25
2022-04-10T14:06:29
2022-04-10T14:06:29
https://github.com/huggingface/datasets/issues/3093
null
dthulke
false
[ "Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf = pd.read_json(buffer, lines=True)\r\n\r\nprint(df.shape[0]) # 0\r\n```\r\n\r\nSo we can't even fall back to Pandas in such cases.\r\n\r\nIt seems the only option is a script that recursively re-orders fields to enforce deterministic order:\r\n```python\r\nwith open(\"train.json\", \"r\") as fin:\r\n with open(\"train_reordered.json\", \"w\") as fout:\r\n for line in fin:\r\n obj_jsonl = json.loads(line.strip())\r\n fout.write(json.dumps(obj_jsonl, sort_keys=True) + \"\\n\")\r\n```", "Fixed in #3575, so I'm closing this issue." ]
1,027,260,383
3,092
Fix JNLBA dataset
closed
2021-10-15T09:31:14
2022-07-10T14:36:49
2021-10-22T08:23:57
https://github.com/huggingface/datasets/pull/3092
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3092", "html_url": "https://github.com/huggingface/datasets/pull/3092", "diff_url": "https://github.com/huggingface/datasets/pull/3092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3092.patch", "merged_at": "2021-10-22T08:23:57" }
bhavitvyamalik
true
[ "Fix #3089.", "@albertvillanova all tests are passing now. Either you or @lhoestq can review it!" ]
1,027,251,530
3,091
`blog_authorship_corpus` is broken
closed
2021-10-15T09:20:40
2021-10-19T13:06:10
2021-10-19T12:50:39
https://github.com/huggingface/datasets/issues/3091
null
fdtomasi
false
[ "Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.", "Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be accessible in our next release.\r\n\r\nIn the meantime, you can include the fix if you install the `datasets` library from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```", "Awesome thank you so much for the quick fix!" ]
1,027,100,371
3,090
Update BibTeX entry
closed
2021-10-15T05:39:27
2021-10-15T07:35:57
2021-10-15T07:35:57
https://github.com/huggingface/datasets/pull/3090
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3090", "html_url": "https://github.com/huggingface/datasets/pull/3090", "diff_url": "https://github.com/huggingface/datasets/pull/3090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3090.patch", "merged_at": "2021-10-15T07:35:57" }
albertvillanova
true
[]
1,026,973,360
3,089
JNLPBA Dataset
closed
2021-10-15T01:16:02
2021-10-22T08:23:57
2021-10-22T08:23:57
https://github.com/huggingface/datasets/issues/3089
null
sciarrilli
false
[ "# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)\r\n```\r\n\r\n", "Since I cannot create a branch here is the updated code:\r\n\r\n```python\r\n\r\n# coding=utf-8\r\n# Copyright 2020 HuggingFace Datasets Authors.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\r\n# Lint as: python3\r\n\"\"\"Introduction to the Bio-Entity Recognition Task at JNLPBA\"\"\"\r\n\r\nimport os\r\n\r\nimport datasets\r\n\r\n\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n_CITATION = \"\"\"\\\r\n@inproceedings{kim2004introduction,\r\n title={Introduction to the bio-entity recognition task at JNLPBA},\r\n author={Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},\r\n booktitle={Proceedings of the international joint workshop on natural language processing in biomedicine and its applications},\r\n pages={70--75},\r\n year={2004},\r\n organization={Citeseer}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThe data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search\r\non MEDLINE using the MeSH terms \u0018human\u0019, \u0018blood cells\u0019 and \u0018transcription factors\u0019. 
From this search 2,000 abstracts\r\nwere selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification.\r\nAmong the classes, 36 terminal classes were used to annotate the GENIA corpus.\r\n\"\"\"\r\n\r\n_HOMEPAGE = \"http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004\"\r\n_TRAIN_URL = \"http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Train/Genia4ERtraining.tar.gz\"\r\n_VAL_URL = 'http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Evaluation/Genia4ERtest.tar.gz'\r\n\r\n\r\n_URLS = {\r\n \"train\": _TRAIN_URL,\r\n \"val\": _VAL_URL,\r\n}\r\n\r\n_TRAIN_DIRECTORY = \"Genia4ERtraining\"\r\n_VAL_DIRECTORY = \"Genia4ERtest\"\r\n\r\n_TRAIN_FILE = \"Genia4ERtask1.iob2\"\r\n_VAL_FILE = \"Genia4EReval1.iob2\"\r\n\r\n\r\nclass JNLPBAConfig(datasets.BuilderConfig):\r\n \"\"\"BuilderConfig for JNLPBA\"\"\"\r\n\r\n def __init__(self, **kwargs):\r\n \"\"\"BuilderConfig for JNLPBA.\r\n Args:\r\n **kwargs: keyword arguments forwarded to super.\r\n \"\"\"\r\n super(JNLPBAConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass JNLPBA(datasets.GeneratorBasedBuilder):\r\n \"\"\"JNLPBA dataset.\"\"\"\r\n\r\n BUILDER_CONFIGS = [\r\n JNLPBAConfig(name=\"jnlpba\", version=datasets.Version(\"1.0.0\"), description=\"JNLPBA dataset\"),\r\n ]\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(\r\n datasets.features.ClassLabel(\r\n names=[\r\n 'O',\r\n 'B-DNA',\r\n 'I-DNA', \r\n 'B-RNA',\r\n 'I-RNA',\r\n 'B-cell_line',\r\n 'I-cell_line',\r\n 'B-cell_type',\r\n 'I-cell_type',\r\n 'B-protein',\r\n 'I-protein',\r\n ]\r\n )\r\n ),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=_HOMEPAGE,\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n \r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['train'], _TRAIN_FILE)}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['val'], _VAL_FILE)})\r\n ]\r\n \r\n\r\n def _generate_examples(self, filepath):\r\n logger.info(\"⏳ Generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n guid = 0\r\n tokens = []\r\n ner_tags = []\r\n for line in f:\r\n if line.startswith('###'):\r\n continue\r\n if line == '' or line == '\\n':\r\n if tokens:\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n guid += 1\r\n tokens = []\r\n ner_tags = []\r\n else:\r\n # tokens are tab separated\r\n splits = line.split(\"\\t\")\r\n #print(splits)\r\n #print(len(splits))\r\n if len(splits) < 2:\r\n splits = splits[0].split()\r\n tokens.append(splits[0])\r\n ner_tags.append(splits[1].rstrip())\r\n # last example\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n```" ]
1,026,920,369
3,088
Use template column_mapping to transmit_format instead of template features
closed
2021-10-14T23:49:40
2021-10-15T14:40:05
2021-10-15T10:11:04
https://github.com/huggingface/datasets/pull/3088
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3088", "html_url": "https://github.com/huggingface/datasets/pull/3088", "diff_url": "https://github.com/huggingface/datasets/pull/3088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3088.patch", "merged_at": "2021-10-15T10:11:04" }
mariosasko
true
[ "Thanks for fixing!" ]
1,026,780,469
3,087
Removing label column in a text classification dataset yields to errors
closed
2021-10-14T20:12:50
2021-10-15T10:11:04
2021-10-15T10:11:04
https://github.com/huggingface/datasets/issues/3087
null
sgugger
false
[]
1,026,481,905
3,086
Remove _resampler from Audio fields
closed
2021-10-14T14:38:50
2021-10-14T15:13:41
2021-10-14T15:13:40
https://github.com/huggingface/datasets/pull/3086
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3086", "html_url": "https://github.com/huggingface/datasets/pull/3086", "diff_url": "https://github.com/huggingface/datasets/pull/3086.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3086.patch", "merged_at": "2021-10-14T15:13:40" }
albertvillanova
true
[]
1,026,467,384
3,085
Fixes to `to_tf_dataset`
closed
2021-10-14T14:25:56
2021-10-21T15:05:29
2021-10-21T15:05:28
https://github.com/huggingface/datasets/pull/3085
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3085", "html_url": "https://github.com/huggingface/datasets/pull/3085", "diff_url": "https://github.com/huggingface/datasets/pull/3085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3085.patch", "merged_at": "2021-10-21T15:05:28" }
Rocketknight1
true
[ "Hi ! Can you give some details about why you need these changes ?", "Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I can fix that issue, because I couldn't see an obvious fix for the Numpy formatter. If you can see a quick way to fix that, though, that might be even better!" ]
1,026,428,992
3,084
VisibleDeprecationWarning when using `set_format("numpy")`
closed
2021-10-14T13:53:01
2021-10-22T16:04:14
2021-10-22T16:04:14
https://github.com/huggingface/datasets/issues/3084
null
Rocketknight1
false
[ "I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)" ]
1,026,397,062
3,083
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
closed
2021-10-14T13:23:53
2021-10-14T15:13:40
2021-10-14T15:13:40
https://github.com/huggingface/datasets/issues/3083
null
albertvillanova
false
[]
1,026,388,994
3,082
Fix error related to huggingface_hub timeout parameter
closed
2021-10-14T13:17:47
2021-10-14T14:39:52
2021-10-14T14:39:51
https://github.com/huggingface/datasets/pull/3082
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3082", "html_url": "https://github.com/huggingface/datasets/pull/3082", "diff_url": "https://github.com/huggingface/datasets/pull/3082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3082.patch", "merged_at": "2021-10-14T14:39:51" }
albertvillanova
true
[]
1,026,383,749
3,081
[Audio datasets] Adapting all audio datasets
closed
2021-10-14T13:13:45
2021-10-15T12:52:03
2021-10-15T12:22:33
https://github.com/huggingface/datasets/pull/3081
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3081", "html_url": "https://github.com/huggingface/datasets/pull/3081", "diff_url": "https://github.com/huggingface/datasets/pull/3081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3081.patch", "merged_at": "2021-10-15T12:22:33" }
patrickvonplaten
true
[ "@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise", "@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section", "Hi @patrickvonplaten ,\r\n\r\nthe data preprocessing section is not defined as a valid section in the readme validation file. After this line:\r\nhttps://github.com/huggingface/datasets/blob/568db594d51110da9e23d224abded2a976b3c8c7/src/datasets/utils/resources/readme_structure.yaml#L20\r\nfeel free to insert (correctly indented of course):\r\n```python\r\n- name: \"Dataset Preprocessing\"\r\n allow_empty: true\r\n allow_empty_text: true\r\n subsections: null\r\n```\r\nand then the tests should pass.", "Thanks a lot @albertvillanova - I've added the feature to all audio datasets and corrected the task of `covost2`" ]
1,026,380,626
3,080
Error related to timeout keyword argument
closed
2021-10-14T13:10:58
2021-10-14T14:39:51
2021-10-14T14:39:51
https://github.com/huggingface/datasets/issues/3080
null
albertvillanova
false
[]
1,026,150,362
3,077
Fix loading a metric with internal import
closed
2021-10-14T09:06:58
2021-10-14T09:14:56
2021-10-14T09:14:55
https://github.com/huggingface/datasets/pull/3077
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3077", "html_url": "https://github.com/huggingface/datasets/pull/3077", "diff_url": "https://github.com/huggingface/datasets/pull/3077.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3077.patch", "merged_at": "2021-10-14T09:14:55" }
albertvillanova
true
[]
1,026,113,484
3,076
Error when loading a metric
closed
2021-10-14T08:29:27
2021-10-14T09:14:55
2021-10-14T09:14:55
https://github.com/huggingface/datasets/issues/3076
null
albertvillanova
false
[]
1,026,103,388
3,075
Updates LexGLUE and MultiEURLEX README.md files
closed
2021-10-14T08:19:16
2021-10-18T10:13:40
2021-10-18T10:13:40
https://github.com/huggingface/datasets/pull/3075
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3075", "html_url": "https://github.com/huggingface/datasets/pull/3075", "diff_url": "https://github.com/huggingface/datasets/pull/3075.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3075.patch", "merged_at": "2021-10-18T10:13:40" }
iliaschalkidis
true
[]
1,025,940,085
3,074
add XCSR dataset
closed
2021-10-14T04:39:59
2021-11-08T13:52:36
2021-11-08T13:52:36
https://github.com/huggingface/datasets/pull/3074
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3074", "html_url": "https://github.com/huggingface/datasets/pull/3074", "diff_url": "https://github.com/huggingface/datasets/pull/3074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3074.patch", "merged_at": "2021-11-08T13:52:36" }
yangxqiao
true
[ "> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are as small as possible, however here each zip file is 70KB+. It think we can make them even smaller if we remove unnecessary files in them. In particular in the `ar` dummy data zip file, we don't need the data for all languages, but rather only the `ar` files. Could you try to remove the unnecessary files in the dummy data zip files ?\r\n\r\nHi! \r\n\r\nThank you so much for reviewing this PR. I've updated the README to briefly mention the translations and added a link to the paper, where a detailed description of the translation procedure can be found in the appendix.\r\n\r\nFor the dummy_data.zip files, is it possible to keep all the current files? I tried to remove some of the files, but the removal led to a failure in the local testing. We also think it may be better to keep the current dummy_data.zip files because all the data are useful actually. Thanks a lot!!", "Hi @lhoestq, just a gentle ping on this PR. :D " ]
1,025,718,469
3,073
Import error installing with ppc64le
closed
2021-10-13T21:37:23
2021-10-14T16:35:46
2021-10-14T16:33:28
https://github.com/huggingface/datasets/issues/3073
null
gcervantes8
false
[ "This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n" ]
1,025,233,152
3,072
Fix pathlib patches for streaming
closed
2021-10-13T13:11:15
2021-10-13T13:31:05
2021-10-13T13:31:05
https://github.com/huggingface/datasets/pull/3072
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3072", "html_url": "https://github.com/huggingface/datasets/pull/3072", "diff_url": "https://github.com/huggingface/datasets/pull/3072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3072.patch", "merged_at": "2021-10-13T13:31:05" }
lhoestq
true
[]
1,024,893,493
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are remove from datasets template folder
closed
2021-10-13T07:32:10
2021-10-13T08:27:04
2021-10-13T08:27:03
https://github.com/huggingface/datasets/issues/3071
null
zixiliuUSC
false
[ "Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```" ]
1,024,856,745
3,070
Fix Windows CI with FileNotFoundError when stting up s3_base fixture
closed
2021-10-13T06:49:01
2021-10-13T08:55:13
2021-10-13T06:49:48
https://github.com/huggingface/datasets/pull/3070
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3070", "html_url": "https://github.com/huggingface/datasets/pull/3070", "diff_url": "https://github.com/huggingface/datasets/pull/3070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3070.patch", "merged_at": "2021-10-13T06:49:48" }
albertvillanova
true
[ "Thanks ! Sorry for the inconvenience ^^' " ]
1,024,818,680
3,069
CI fails on Windows with FileNotFoundError when stting up s3_base fixture
closed
2021-10-13T05:52:26
2021-10-13T08:05:49
2021-10-13T06:49:48
https://github.com/huggingface/datasets/issues/3069
null
albertvillanova
false
[]
1,024,681,264
3,068
feat: increase streaming retry config
closed
2021-10-13T02:00:50
2021-10-13T09:25:56
2021-10-13T09:25:54
https://github.com/huggingface/datasets/pull/3068
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3068", "html_url": "https://github.com/huggingface/datasets/pull/3068", "diff_url": "https://github.com/huggingface/datasets/pull/3068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3068.patch", "merged_at": "2021-10-13T09:25:54" }
borisdayma
true
[ "@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a higher number of retries is for local connections. It would allow for almost 2mn of a wifi/ethernet disconnection. In practice this should not happen very often.\r\n\r\nLet me know if you think it's too much." ]
1,024,023,185
3,067
add story_cloze
closed
2021-10-12T16:36:53
2021-10-13T13:48:13
2021-10-13T13:48:13
https://github.com/huggingface/datasets/pull/3067
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3067", "html_url": "https://github.com/huggingface/datasets/pull/3067", "diff_url": "https://github.com/huggingface/datasets/pull/3067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3067.patch", "merged_at": "2021-10-13T13:48:13" }
zaidalyafeai
true
[ "Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npretty_name: Story Cloze Test\r\n```\r\nand filling the other tags task_categories and task_ids\r\n\r\nIf the dataset doesn exist on paperswithcode, you can just leave\r\n```yaml\r\npaperswithcode_id: null\r\n```", "@lhoestq can't fix the last test fails.", "> Thanks @zaidalyafeai, the failing test is due to an issue in the master branch, that has already been fixed.\r\n> \r\n> You can include the fix:\r\n> \r\n> ```\r\n> git checkout add_story_cloze\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> ```\r\n\r\nThanks @albertvillanova, passed all the tests now. ", "Thanks Albert, I fixed the suggested comments. This dataset has no train splits, it is only used for evaluation." ]
1,024,005,311
3,066
Add iter_archive
closed
2021-10-12T16:17:16
2022-09-21T14:10:10
2021-10-18T09:12:46
https://github.com/huggingface/datasets/pull/3066
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3066", "html_url": "https://github.com/huggingface/datasets/pull/3066", "diff_url": "https://github.com/huggingface/datasets/pull/3066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3066.patch", "merged_at": "2021-10-18T09:12:46" }
lhoestq
true
[]
1,023,951,322
3,065
Fix test command after refac
closed
2021-10-12T15:23:30
2021-10-12T15:28:47
2021-10-12T15:28:46
https://github.com/huggingface/datasets/pull/3065
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3065", "html_url": "https://github.com/huggingface/datasets/pull/3065", "diff_url": "https://github.com/huggingface/datasets/pull/3065.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3065.patch", "merged_at": "2021-10-12T15:28:46" }
lhoestq
true
[]
1,023,900,075
3,064
Make `interleave_datasets` more robust
open
2021-10-12T14:34:53
2022-07-30T08:47:26
null
https://github.com/huggingface/datasets/issues/3064
null
sbmaruf
false
[ "Hi @lhoestq Any response on this issue?", "Hi ! Sorry for the late response\r\n\r\nI agree `interleave_datasets` would benefit a lot from having more flexibility. If I understand correctly it would be nice to be able to define stopping strategies like `stop=\"first_exhausted\"` (default) or `stop=\"all_exhausted\"`. If you'd like to contribute this feature I'd be happy to give you some pointers :)\r\n\r\nAlso one can already set the max number of iterations per dataset by doing `dataset.take(n)` on the dataset that should only have `n` samples.\r\n\r\nRegarding the `iter_cnt` counter, I think this requires a bit more thoughts, since we might have to be able to backpropagate the the counter if `map` or other transforms have been applied after `interleave_datasets`. ", "@sbmaruf I just notice that (1)`interleave_datasets` only samples indices once and reuse for all epochs, and (2) it's limited by the smallest dataset. Do you figure out an alternative way to achieve the same purpose?" ]
1,023,588,297
3,063
Windows CI is unable to test streaming properly because of SSL issues
closed
2021-10-12T09:33:40
2022-08-24T14:59:29
2022-08-24T14:59:29
https://github.com/huggingface/datasets/issues/3063
null
lhoestq
false
[ "I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTPFileSystem, https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattribu\r\n```", "No I'm still having this issue on my windows, and so does the CI" ]
1,023,209,592
3,062
Update summary on PyPi beyond NLP
closed
2021-10-11T23:27:46
2021-10-13T08:55:54
2021-10-13T08:55:54
https://github.com/huggingface/datasets/pull/3062
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3062", "html_url": "https://github.com/huggingface/datasets/pull/3062", "diff_url": "https://github.com/huggingface/datasets/pull/3062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3062.patch", "merged_at": "2021-10-13T08:55:53" }
thomwolf
true
[]
1,023,103,119
3,061
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)
open
2021-10-11T20:49:49
2021-10-22T09:34:10
null
https://github.com/huggingface/datasets/issues/3061
null
BenoitDalFerro
false
[ "@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplicity.", "Hi ! Sounds like a good idea :)\r\n\r\nAlso I think it would be better to have this as an actual parameters instead of kwargs to make it clearer" ]
1,022,936,396
3,060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
closed
2021-10-11T17:05:27
2021-10-28T05:52:21
2021-10-28T05:52:21
https://github.com/huggingface/datasets/issues/3060
null
RylanSchaeffer
false
[ "Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.", "I close this issue for the moment. Feel free to re-open it again if the problem persists." ]
1,022,620,057
3,059
Fix task reloading from cache
closed
2021-10-11T12:03:04
2021-10-11T12:23:39
2021-10-11T12:23:39
https://github.com/huggingface/datasets/pull/3059
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3059", "html_url": "https://github.com/huggingface/datasets/pull/3059", "diff_url": "https://github.com/huggingface/datasets/pull/3059.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3059.patch", "merged_at": "2021-10-11T12:23:38" }
lhoestq
true
[]
1,022,612,664
3,058
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader.
closed
2021-10-11T11:54:59
2022-01-19T14:03:49
2022-01-19T14:03:49
https://github.com/huggingface/datasets/issues/3058
null
hobbitlzy
false
[ "Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ?\r\n\r\nAnyway I think the issue could be that both wikipedia and bookcorpusopen have an additional \"title\" column, contrary to wikitext which only has a \"text\" column. After calling `load_dataset`, can you try doing `dataset = dataset.remove_columns(\"title\")` ?", "Removing the \"title\" column works! Thanks for your advice.\r\n\r\nMaybe I should still create an issue to `transformers' to mark this solution?" ]
1,022,508,315
3,057
Error in per class precision computation
closed
2021-10-11T10:05:19
2021-10-11T10:17:44
2021-10-11T10:16:16
https://github.com/huggingface/datasets/issues/3057
null
tidhamecha2
false
[ "Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```" ]
1,022,345,564
3,056
Fix meteor metric for version >= 3.6.4
closed
2021-10-11T07:11:44
2021-10-11T07:29:20
2021-10-11T07:29:19
https://github.com/huggingface/datasets/pull/3056
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3056", "html_url": "https://github.com/huggingface/datasets/pull/3056", "diff_url": "https://github.com/huggingface/datasets/pull/3056.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3056.patch", "merged_at": "2021-10-11T07:29:19" }
albertvillanova
true
[]
1,022,319,238
3,055
CI test suite fails after meteor metric update
closed
2021-10-11T06:37:12
2021-10-11T07:30:31
2021-10-11T07:30:31
https://github.com/huggingface/datasets/issues/3055
null
albertvillanova
false
[]
1,022,108,186
3,054
Update Biosses
closed
2021-10-10T22:25:12
2021-10-13T09:04:27
2021-10-13T09:04:27
https://github.com/huggingface/datasets/pull/3054
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3054", "html_url": "https://github.com/huggingface/datasets/pull/3054", "diff_url": "https://github.com/huggingface/datasets/pull/3054.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3054.patch", "merged_at": "2021-10-13T09:04:27" }
bwang482
true
[]
1,022,076,905
3,053
load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type
closed
2021-10-10T19:55:21
2023-02-24T14:02:20
2023-02-24T14:02:20
https://github.com/huggingface/datasets/issues/3053
null
davidbau
false
[ "I encountered the same bug using different datasets.\r\nany suggestions?", "+1, can reproduce here!", "I get the same error\r\nPlatform: Windows 10\r\nPython: python 3.8.8\r\nPyArrow: 5.0", "I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value(\"int8\"))`, but the actual values can be well outside the max range for 8-bit integers.\r\n\r\nI worked around this by downloading the `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes).", "Addressed in https://huggingface.co/datasets/the_pile_openwebtext2/discussions/4" ]
1,021,944,435
3,052
load_dataset cannot download the data and hangs on forever if cache dir specified
closed
2021-10-10T10:31:36
2021-10-11T10:57:09
2021-10-11T10:56:36
https://github.com/huggingface/datasets/issues/3052
null
BenoitDalFerro
false
[ "Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The following packages are causing the inconsistency:\r\n> \r\n> - conda-forge/noarch::datasets==1.12.1=pyhd8ed1ab_1\r\n> - conda-forge/win-64::multiprocess==0.70.12.2=py38h294d835_0\r\n> done\r\n> \r\n> Package Plan\r\n> \r\n> environment location: C:\\xxx\\anaconda3\\envs\\UnBias-94-1\r\n> \r\n> added / updated specs:\r\n> - datasets\r\n> \r\n> \r\n> The following NEW packages will be INSTALLED:\r\n> \r\n> dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0\r\n> \r\n> The following packages will be UPDATED:\r\n> \r\n> ca-certificates pkgs/main::ca-certificates-2021.9.30-~ --> conda-forge::ca-certificates-2021.10.8-h5b45459_0\r\n> certifi pkgs/main::certifi-2021.5.30-py38haa9~ --> conda-forge::certifi-2021.10.8-py38haa244fe_0\r\n> \r\n> The following packages will be SUPERSEDED by a higher-priority channel:\r\n> " ]
1,021,852,234
3,051
Non-Matching Checksum Error with crd3 dataset
closed
2021-10-10T01:32:43
2022-03-15T15:54:26
2022-03-15T15:54:26
https://github.com/huggingface/datasets/issues/3051
null
RylanSchaeffer
false
[ "I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']\r\n```", "I'm seeing the same issue as @RylanSchaeffer:\r\nPython 3.7.11, macOs 11.4\r\ndatasets==1.14.0\r\n\r\nfails on:\r\n```python\r\ndataset = datasets.load_dataset(\"multi_woz_v22\")\r\n```" ]
1,021,772,622
3,050
Fix streaming: catch Timeout error
closed
2021-10-09T18:19:20
2021-10-12T15:28:18
2021-10-11T09:35:38
https://github.com/huggingface/datasets/pull/3050
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3050", "html_url": "https://github.com/huggingface/datasets/pull/3050", "diff_url": "https://github.com/huggingface/datasets/pull/3050.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3050.patch", "merged_at": "2021-10-11T09:35:38" }
borisdayma
true
[ "I'm running a large test.\r\nLet's see if I get any error within a few days.", "This time it stopped after 8h but correctly raised `ConnectionError: Server Disconnected`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 1027, in <module> \r\n main() \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 991, in main \r\n for batch in tqdm( \r\n File \"/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__ \r\n for obj in iterable: \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 376, in data_loader_streaming\r\n for item in dataset:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in __iter__\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in <listcomp>\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 176, in __iter__\r\n for key, example in iterator:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 225, in __iter__\r\n for x in self.ex_iterable:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 99, in __iter__\r\n for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 287, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/koush/datasets/src/datasets/packaged_modules/json/json.py\", line 107, in _generate_tables\r\n batch = f.read(self.config.chunksize)\r\n File \"/home/koush/datasets/src/datasets/utils/streaming_download_manager.py\", line 136, in read_with_retries\r\n raise ConnectionError(\"Server Disconnected\")\r\nConnectionError: Server Disconnected\r\n```\r\n\r\nRight before this error, the warnings were correctly raised:\r\n\r\n```\r\n10/10/2021 06:02:26 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [1/3]\r\n10/10/2021 06:02:27 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [2/3] \r\n10/10/2021 06:02:28 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. 
Retrying in 1sec [3/3\r\n```\r\n\r\nI'm going to see what happens if I change the max retries to 20 and the interval to 5.", "Also maybe we can raise the Server Disconnected error with more info about what kind of error caused it (client error, time out, etc.)", "I have 2 runs:\r\n* [run 1](https://wandb.ai/dalle-mini/dalle-mini/runs/1nj161cl?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded) that I will remove soon because I now use the 2nd one\r\n* [run 2](https://wandb.ai/dalle-mini/dalle-mini/runs/he9rrc3q?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded-vqgan_imagenet_f16_16384)\r\n* `load_dataset(dataset_repo, data_files={'train':'data/train/*.jsonl', 'validation':'data/valid/*.jsonl'}, streaming=True)`\r\n\r\nThey have now been running by a bit more than a day for one run and 15h for the other.\r\n\r\nThe error logs are not shown in wandb because the script use `pylogging` (not sure why, I should change it) but basically so far with the new settings I had one timeout in each with successful reconnect afterwards.\r\n\r\nSo I think it's a good idea to have:\r\n* `STREAMING_READ_RETRY_INTERVAL = 5` since before my runs would get 3 errors in a row (with the default 1 second pause)\r\n* `STREAMING_READ_MAX_RETRIES` should also be increased. Since this type of error does not happen a lot, I would still have a large number (at least 10) because a stopped training run may be a big issue if checkpointing/restart is not well implemented which is not always trivial", "I agree ! Feel free to open a PR to increase both values" ]
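Editor's note: the two settings named in the thread are module-level constants in `datasets.config`, so the proposed values can be tried from user code without a library change; a sketch:

```python
import datasets.config

# Values suggested in the discussion above: wait longer between retries
# and allow many more attempts before a streaming read gives up.
datasets.config.STREAMING_READ_RETRY_INTERVAL = 5  # seconds (default was 1)
datasets.config.STREAMING_READ_MAX_RETRIES = 20    # attempts (default was 3)
```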
1,021,770,008
3,049
TimeoutError during streaming
closed
2021-10-09T18:06:51
2021-10-11T09:35:38
2021-10-11T09:35:38
https://github.com/huggingface/datasets/issues/3049
null
borisdayma
false
[]
1,021,765,661
3,048
Identify which shard data belongs to
open
2021-10-09T17:46:35
2021-10-09T20:24:17
null
https://github.com/huggingface/datasets/issues/3048
null
borisdayma
false
[ "Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch." ]
1,021,360,616
3,047
Loading from cache a dataset for LM built from a text classification dataset sometimes errors
closed
2021-10-08T18:23:11
2021-11-03T17:13:08
2021-11-03T17:13:08
https://github.com/huggingface/datasets/issues/3047
null
sgugger
false
[ "This has been fixed in 1.15, let me know if you still have this issue" ]
1,021,021,368
3,046
Fix MedDialog metadata JSON
closed
2021-10-08T12:04:40
2021-10-11T07:46:43
2021-10-11T07:46:42
https://github.com/huggingface/datasets/pull/3046
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3046", "html_url": "https://github.com/huggingface/datasets/pull/3046", "diff_url": "https://github.com/huggingface/datasets/pull/3046.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3046.patch", "merged_at": "2021-10-11T07:46:42" }
albertvillanova
true
[]
1,020,968,704
3,045
Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044
closed
2021-10-08T10:59:21
2021-10-21T16:58:32
2021-10-21T14:22:44
https://github.com/huggingface/datasets/pull/3045
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3045", "html_url": "https://github.com/huggingface/datasets/pull/3045", "diff_url": "https://github.com/huggingface/datasets/pull/3045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3045.patch", "merged_at": null }
vlievin
true
[ "Hi ! Thanks for noticing this inconsistence and suggesting a fix :)\r\n\r\nIf I understand correctly you try to pass the same fingerprint to each processed shard of the dataset. This can be an issue since each shard is actually a different dataset with different data: they shouldn't have the same fingerprint.\r\n\r\nIdeally we want the result after `map` to have this fingerprint. The result after `map` is the concatenation of all the processed shards. In this case what we can do is add the `fingerprint` parameter to `concatenate_datasets` to overwrite the fingerprint here if needed:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L3588-L3590\r\n\r\nthen you can pass the fingerprint to `concatenate_datasets` here:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L2044-L2044", "Hi @lhoestq, thanks for the pointers! Not having a unique fingerprint per shard was indeed was indeed a problem. \r\n\r\nLet me look into this. I'll be back with a fix soon.", "Alright, to clarify about my problem. I using am using `datasets` with large datasets, and want to cache a heavy and non-deterministically fingerprintable function (using `datasets.fingerprint.Hasher`). Using `Dataset.map()` as it is would cause generating a random fingerprint. To circumvent this, I am generating custom deterministic fingerprints, which I pass as an argument to `Dataset.map()`. In that way, a deterministic fingerprint is set, and caching can be used. \r\n\r\nThis approach works well when using `num_proc==1`, but not so well when using `num_proc>1`. In both cases, `dataset._fingerprint` is effectively set to `new_fingerprint` at the end of the `.map()` call. However, caching is not used when `num_proc>1`, a non deterministically fingerprintable function and `new_fingerprint != null. The reason is that caching operates within `Dataset._map_single` and `new_fingerprint` is not passed here. \r\n\r\nThis pull request implements a quick fix (+unit test) by passing `new_fingerprint=f\"{new_fingerprint}-part{rank+1}-{num_proc}\"` to each `_map_single` call. Using a separate name for each call makes sure that each worker uses a different cache file (as you mentioned above).\r\n\r\nHowever, this solution still means that using a different value for `num_proc` will require computing new partial cache files. In the long run, performing the caching within `map()` instead of within `_map_single()` would be a cleaner solution.", "Hi @vlievin,\r\n\r\nIf I understand your example correctly, you are trying to use the `new_fingerprint` param to have a deterministic fingerprint of the transform, which is not hashable due to randomness. Any particular reason why you are not using the `cache_file_name` param instead? I did run your example with the `cache_file_name` specified, and it behaves as expected based on the logs. Internally, `new_fingerprint` is needed to inject the calculated fingerprint into a method by the `fingerprint_transform` decorator, which is then used to compute the cache file name in `Dataset._get_cache_file_path` if the user hasn't specified one. ", "Hi @lhoestq, I have cleaned up the unit test (incl. styling). It should be ready to merge as such. I am using this branch in my project and everything works fine. \r\n\r\nHi @mariosasko, the argument `new_fingerprint` allowed me to deterministically cache my transformation when using `num_proc=1`, so I assumed that was the right way to go. 
But maybe I have misinterpreted how `new_fingerprint` should be used.\r\n\r\nBut in any case, `map()` should perform consistently with regards to `num_proc`. In my opinion, the behaviour of `Dataset.map()` should perform the same, and this without requiring the user to input `cache_file_name` when `num_proc>1` is set.\r\nBut maybe there is a more elegant way to fix this using `cache_file_name` internally for each `_single_map()` call.\r\n\r\nSo, I think this is a more high level design decision and I will leave it to the maintainers :) ", "Hi @vlievin,\r\n\r\nI appreciate your effort, but `new_fingerprint` behaves as described in the `Dataset.map` docs, and we don't have to follow some artificial consistency in regards to `num_proc`:\r\nhttps://github.com/huggingface/datasets/blob/adc5cec58dd15ee672016086fefdea34b3143e4f/src/datasets/arrow_dataset.py#L1962-L1963\r\n\r\nAdditionally, to compute the cache file name, you are using a private method (`dset._get_cache_file_path(new_fingerprint)`); prefixed with `_`), so this is a sign you may be doing something wrong because you are relying on the internals. I suggest you use cache_file_name instead and follow the suffix template docs, which explain how to compute file paths of the created cache files when `num_proc > 1`.", "Hi @mariosasko, thanks for the pointer regarding the use of the private method in then unit tests. \r\n\r\nYes, `new_fingerprint` behaves as documented. If you don't think this is an issue, feel free to close this pull request. \r\n", "Allowing the users to pass the fingerprint themselves for functions that can't be hashed would be a nice improvements. However I agree that as @mariosasko mentioned this is currently not how we want the API to behave for now - since it has to do with the internals of the library.\r\n\r\nThough we can discuss what could be the right way of doing it in https://github.com/huggingface/datasets/issues/3044 if you don't mind !" ]
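Editor's note: for reference, the `cache_file_name` alternative suggested above can be sketched as follows. The data file and the transform are placeholders; the suffix template shown is the documented default for sharded cache files.

```python
import random
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train")  # assumed data file

def my_unhashable_fn(example):
    # stand-in for a transform whose fingerprint is non-deterministic
    example["noise"] = random.random()
    return example

ds = ds.map(
    my_unhashable_fn,
    num_proc=2,
    cache_file_name="my_transform.arrow",
    # With num_proc > 1, each worker writes a shard named using
    # suffix_template (default "_{rank:05d}_of_{num_proc:05d}"),
    # e.g. my_transform_00000_of_00002.arrow.
)
```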
1,020,869,778
3,044
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
open
2021-10-08T09:07:10
2025-03-04T07:16:00
null
https://github.com/huggingface/datasets/issues/3044
null
vlievin
false
[ "Following the discussion in #3045 if would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementing a custom `__getstate__` method for example.\r\n\r\nHowever it sounds pretty complicated for a simple thing. Maybe one idea would be to have something similar to streamlit: they allow users to register the hashing of their own objects.\r\n\r\nSee the documentation about their `hash_funcs` here: https://docs.streamlit.io/library/advanced-features/caching#the-hash_funcs-parameter\r\n\r\nHere is the example they give:\r\n\r\n```python\r\nclass FileReference:\r\n def __init__(self, filename):\r\n self.filename = filename\r\n\r\ndef hash_file_reference(file_reference):\r\n filename = file_reference.filename\r\n return (filename, os.path.getmtime(filename))\r\n\r\n@st.cache(hash_funcs={FileReference: hash_file_reference})\r\ndef func(file_reference):\r\n ...\r\n```", "My solution was to generate a custom hash, and use the hash as a `new_fingerprint` argument to the `map()` method to enable caching. This works, but is quite hacky.\r\n\r\n@lhoestq, this approach is very neat, this would make the whole caching mechanic more explicit. I don't have so much time to look into this right now, but I might give it a try in the future. ", "Almost a year later and I'm in a similar boat. Using custom fingerprints and when using multiprocessing the cached datasets are saved with a template at the end of the filename (something like \"000001_of_000008\" for every process of num_proc). So if in the next time you run the script you set num_proc to a different number, the cache cannot be used.\r\n\r\nIs there any way to get around this? I am processing a huge dataset so I do the processing on one machine and then transfer the processed data to another in its cache dir but currently that's not possible due to num_proc mismatch. ", "> ## Expected results\n> In the above python example, with `num_proc=2`, the **cache file should exist in the second call** of `process_dataset_with_cache` (\"=== Cache does not exist! ====\" should not be printed). When the cache is successfully created, `map()` is called only one time.\n> \n> ## Actual results\n> In the above python example, with `num_proc=2`, the **cache does not exist in the second call** of `process_dataset_with_cache` (this results in printing \"=== Cache does not exist! ====\"). Because the cache doesn't exist, the `map()` method is executed a second time and the dataset is not loaded from the cache.\n\nIn your example\n\n`cache_path = \"~/.cache/huggingface/datasets/json/.../cache-3b163736cf4505085d8b5f9b4c266c26.arrow\"`\n\nbut\n\n```console\n$ tree~/.cache/huggingface/datasets/json/.../\n~/.cache/huggingface/datasets/json/.../\n├── cache-3b163736cf4505085d8b5f9b4c266c26_00000_of_00002.arrow\n├── cache-3b163736cf4505085d8b5f9b4c266c26_00001_of_00002.arrow\n```\n\nWhen `num_proc > 1`, the cache files are sharded and not saved under `cache_path`. Instead, a suffix appended, and so it is expected that `not os.path.exists(cache_path)` and that `\"=== Cache does not exist! ====\"`.\n\nYou can see there isn't a 2nd progress bar, also, so it is definitely using the cache on the second call to `process_dataset_with_cache` with both `num_proc=1` and `num_proc=2`." ]
1,020,252,114
3,043
Add PASS dataset
closed
2021-10-07T16:43:43
2022-01-20T16:50:47
2022-01-20T16:50:47
https://github.com/huggingface/datasets/issues/3043
null
osanseviero
false
[]
1,020,047,289
3,042
Improving elasticsearch integration
open
2021-10-07T13:28:35
2022-07-06T15:19:48
null
https://github.com/huggingface/datasets/pull/3042
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3042", "html_url": "https://github.com/huggingface/datasets/pull/3042", "diff_url": "https://github.com/huggingface/datasets/pull/3042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3042.patch", "merged_at": null }
ggdupont
true
[ "@lhoestq @albertvillanova Iwas trying to fix the failing tests in circleCI but is there a test elasticsearch instance somewhere? If not, can I launch a docker container to have one?" ]
1,018,911,385
3,041
Load private data files + use glob on ZIP archives for json/csv/etc. module inference
closed
2021-10-06T18:16:36
2021-10-12T15:25:48
2021-10-12T15:25:46
https://github.com/huggingface/datasets/pull/3041
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3041", "html_url": "https://github.com/huggingface/datasets/pull/3041", "diff_url": "https://github.com/huggingface/datasets/pull/3041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3041.patch", "merged_at": "2021-10-12T15:25:46" }
lhoestq
true
[ "I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat the `fsspec` call in `xglob`:\r\n```python\r\nfs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)\r\n```\r\n\r\nLooks like the windows CI has an SSL issue... ", "I can reproduce it on my windows machine. On linux it works fine though", "I'm just skipping the windows test for now", "The Windows CI failure seems unrelated to this PR\r\n```python\r\nERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3\r\n```" ]
1,018,782,475
3,040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
closed
2021-10-06T17:08:47
2021-11-02T15:41:08
2021-11-02T15:41:08
https://github.com/huggingface/datasets/issues/3040
null
patrickvonplaten
false
[ "Hi,\r\n\r\nthe `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.", "That works! Thansk!\r\n\r\nMight be worth doing that automatically actually in case the `save_to_disk` is called on a dataset that has an indices mapping :-)", "I agree with @patrickvonplaten: this issue is reported recurrently, so better if we implement the `.flatten_indices()` automatically?", "That would be great indeed - I don't really see a use case where one would not like to call `.flatten_indices()` before calling `save_to_disk`", "+1 on this !" ]
1,018,219,800
3,039
Add sberquad dataset
closed
2021-10-06T12:32:02
2021-10-13T10:19:11
2021-10-13T10:16:04
https://github.com/huggingface/datasets/pull/3039
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3039", "html_url": "https://github.com/huggingface/datasets/pull/3039", "diff_url": "https://github.com/huggingface/datasets/pull/3039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3039.patch", "merged_at": "2021-10-13T10:16:04" }
Alenush
true
[]
1,018,113,499
3,038
add sberquad dataset
closed
2021-10-06T11:33:39
2021-10-06T11:58:01
2021-10-06T11:58:01
https://github.com/huggingface/datasets/pull/3038
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3038", "html_url": "https://github.com/huggingface/datasets/pull/3038", "diff_url": "https://github.com/huggingface/datasets/pull/3038.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3038.patch", "merged_at": null }
Alenush
true
[]
1,018,091,919
3,037
SberQuad
closed
2021-10-06T11:21:08
2021-10-06T11:33:08
2021-10-06T11:33:08
https://github.com/huggingface/datasets/pull/3037
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3037", "html_url": "https://github.com/huggingface/datasets/pull/3037", "diff_url": "https://github.com/huggingface/datasets/pull/3037.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3037.patch", "merged_at": null }
Alenush
true
[]
1,017,687,944
3,036
Protect master branch to force contributions via Pull Requests
closed
2021-10-06T07:34:17
2021-10-07T06:51:47
2021-10-07T06:49:52
https://github.com/huggingface/datasets/issues/3036
null
albertvillanova
false
[ "It would be nice to protect the master from direct commits, but still having a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or minor bug fixes - things that happen quite often actually).\r\nDo you know if there's a way ?", "you can if you're an admin of the repo", "This is done. Now the master branch is protected:\r\n- [x] Require a pull request before merging: all commits must be made to a non-protected branch and submitted via a pull request\r\n - Required number of approvals before merging: 1 \r\n- [x] Require linear history: prevent merge commits from being pushed\r\n- [x] These requirements are not enforced for administrators\r\n- [x] Additionally, the master branch is also protected against deletion and force pushes\r\n\r\nCC: @lhoestq @julien-c @thomwolf " ]
1,016,770,071
3,035
`load_dataset` does not work with uploaded arrow file
open
2021-10-05T20:15:10
2021-10-06T17:01:37
null
https://github.com/huggingface/datasets/issues/3035
null
patrickvonplaten
false
[ "Hi ! This is not a bug, this is simply not implemented.\r\n`save_to_disk` is for on-disk serialization and was not made compatible for the Hub.\r\nThat being said, I agree we actually should make it work with the Hub x)", "cc @LysandreJik maybe we can solve this at the same time as adding `push_to_hub`" ]
1,016,759,202
3,034
Errors loading dataset using fs = a gcsfs.GCSFileSystem
open
2021-10-05T20:07:08
2021-10-05T20:26:39
null
https://github.com/huggingface/datasets/issues/3034
null
dconatha
false
[]
1,016,619,572
3,033
Actual "proper" install of ruamel.yaml in the windows CI
closed
2021-10-05T17:52:07
2021-10-05T17:54:57
2021-10-05T17:54:57
https://github.com/huggingface/datasets/pull/3033
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3033", "html_url": "https://github.com/huggingface/datasets/pull/3033", "diff_url": "https://github.com/huggingface/datasets/pull/3033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3033.patch", "merged_at": "2021-10-05T17:54:56" }
lhoestq
true
[]
1,016,488,475
3,032
Error when loading private dataset with "data_files" arg
closed
2021-10-05T15:46:27
2021-10-12T15:26:22
2021-10-12T15:25:46
https://github.com/huggingface/datasets/issues/3032
null
borisdayma
false
[ "We'll do a release tomorrow or on wednesday to make the fix available :)\r\n\r\nThanks for reproting !" ]
1,016,458,496
3,031
Align tqdm control with cache control
closed
2021-10-05T15:18:49
2021-10-18T15:00:21
2021-10-18T14:59:30
https://github.com/huggingface/datasets/pull/3031
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3031", "html_url": "https://github.com/huggingface/datasets/pull/3031", "diff_url": "https://github.com/huggingface/datasets/pull/3031.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3031.patch", "merged_at": "2021-10-18T14:59:30" }
mariosasko
true
[ "Could you add this function to the documentation please ?\r\n\r\nYou can add it in `main_classes.rst`, and maybe add a `Tip` section in the `map` section in the `process.rst`" ]
1,016,435,324
3,030
Add `remove_columns` to `IterableDataset`
closed
2021-10-05T14:58:33
2021-10-08T15:33:15
2021-10-08T15:31:53
https://github.com/huggingface/datasets/pull/3030
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3030", "html_url": "https://github.com/huggingface/datasets/pull/3030", "diff_url": "https://github.com/huggingface/datasets/pull/3030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3030.patch", "merged_at": "2021-10-08T15:31:53" }
changjonathanc
true
[ "Thanks ! That looks all good :)\r\n\r\nI don't think that batching would help. Indeed we're dealing with python iterators that yield elements one by one, so batched `map` needs to accumulate a batch, apply the function, and then yield examples from the batch.\r\n\r\nThough once we have parallel processing in `map`, we can reconsider it\r\n\r\nAlso feel free to check the CI failure - apparently the import of `Union` is missing", "Thanks for the review and explaining that! \r\nOn top of what you said, I think `remove_columns` is very unlikely to be a bottleneck, so it doesn't matter anyways.", "Thank you for reviewing! @mariosasko \r\n\r\nI wonder how the checking would work. Is there any checking present in `IterableDataset ` now? What if `.remove_columns()` is applied after some arbitrary `.map()`?", "> I wonder how the checking would work. Is there any checking present in IterableDataset now? What if .remove_columns() is applied after some arbitrary .map()?\r\n\r\nThat's the challenge here indeed ^^ In this case it's not trivial to know the names of the columns. Feel free to open an issue so we can discuss this" ]
1,016,389,901
3,029
Use standard open-domain validation split in nq_open
closed
2021-10-05T14:19:27
2021-10-05T14:56:46
2021-10-05T14:56:45
https://github.com/huggingface/datasets/pull/3029
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3029", "html_url": "https://github.com/huggingface/datasets/pull/3029", "diff_url": "https://github.com/huggingface/datasets/pull/3029.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3029.patch", "merged_at": "2021-10-05T14:56:45" }
craffel
true
[ "I had to run datasets-cli with --ignore_verifications the first time since it was complaining about a missing file, but now it runs without that flag fine. I moved dummy_data.zip to the new folder, but also had to modify the filename of the test file in the zip (should I not have done that?). Finally, I added the pretty name tag.", "Great, thanks for the help." ]
1,016,230,272
3,028
Properly install ruamel-yaml for windows CI
closed
2021-10-05T11:51:15
2021-10-05T14:02:12
2021-10-05T11:51:22
https://github.com/huggingface/datasets/pull/3028
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3028", "html_url": "https://github.com/huggingface/datasets/pull/3028", "diff_url": "https://github.com/huggingface/datasets/pull/3028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3028.patch", "merged_at": "2021-10-05T11:51:22" }
lhoestq
true
[ "@lhoestq I would say this does not \"properly\" install `ruamel-yaml`, but the contrary, you overwrite the previous version without desinstalling it first.\r\n\r\nAccording to `pip` docs:\r\n> This can break your system if the existing package is of a different version or was installed with a different package manager!\r\n\r\nNote that our case fulfills both conditions:\r\n- the installing version (`0.17.16`) is different from the existing one (`0.15.87`)\r\n- you are installing using `pip` (`setuptools`), whereas the exisitng version was installed using `distutils`\r\n\r\nThat is why I did not fix the issue this way, made a hotfix pinning `huggingface_hub` (#3025), while looking for a permanent solution for the issue.", "Yea I did this because we need the latest version of `huggingface_hub` for #2986 and because I didn't want to ssh to the windows worker x)\r\nMaybe it can be fixed by installing it with conda - let me try", "Oh yea it may not work since it was first installed with distutils" ]
1,016,150,117
3,027
Resolve data_files by split name
closed
2021-10-05T10:24:36
2021-11-05T17:49:58
2021-11-05T17:49:57
https://github.com/huggingface/datasets/issues/3027
null
lhoestq
false
[ "Awesome @lhoestq I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892). ", "From my discussion with @borisdayma it would be more general the files match if their paths contains the split name - not only if the filename contains the split name. For example for a dataset like this:\r\n```\r\ntrain/\r\n└── data.csv\r\ntest/\r\n└── data.csv\r\n```\r\n\r\nBut IMO the default should be \r\n```\r\ndata/\r\n├── train.csv\r\n└── test.csv\r\n```\r\nbecause it allows people to have other directories if they have different subsets of their data (different configurations, not splits)", "I just created a PR for this at https://github.com/huggingface/datasets/pull/3221, let me know what you think :)" ]
1,016,067,794
3,026
added arxiv paper in swiss_judgment_prediction dataset card
closed
2021-10-05T09:02:01
2021-10-08T16:01:44
2021-10-08T16:01:24
https://github.com/huggingface/datasets/pull/3026
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3026", "html_url": "https://github.com/huggingface/datasets/pull/3026", "diff_url": "https://github.com/huggingface/datasets/pull/3026.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3026.patch", "merged_at": "2021-10-08T16:01:24" }
JoelNiklaus
true
[]
1,016,061,222
3,025
Fix Windows test suite
closed
2021-10-05T08:55:22
2021-10-05T09:58:28
2021-10-05T09:58:27
https://github.com/huggingface/datasets/pull/3025
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3025", "html_url": "https://github.com/huggingface/datasets/pull/3025", "diff_url": "https://github.com/huggingface/datasets/pull/3025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3025.patch", "merged_at": "2021-10-05T09:58:27" }
albertvillanova
true
[]
1,016,052,911
3,024
Windows test suite fails
closed
2021-10-05T08:46:46
2021-10-05T09:58:27
2021-10-05T09:58:27
https://github.com/huggingface/datasets/issues/3024
null
albertvillanova
false
[]
1,015,923,031
3,023
Fix typo
closed
2021-10-05T06:06:11
2021-10-05T11:56:55
2021-10-05T11:56:55
https://github.com/huggingface/datasets/pull/3023
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3023", "html_url": "https://github.com/huggingface/datasets/pull/3023", "diff_url": "https://github.com/huggingface/datasets/pull/3023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3023.patch", "merged_at": "2021-10-05T11:56:55" }
qqaatw
true
[]
1,015,750,221
3,022
MeDAL dataset: Add further description and update download URL
closed
2021-10-05T00:13:28
2021-10-13T09:03:09
2021-10-13T09:03:09
https://github.com/huggingface/datasets/pull/3022
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3022", "html_url": "https://github.com/huggingface/datasets/pull/3022", "diff_url": "https://github.com/huggingface/datasets/pull/3022.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3022.patch", "merged_at": "2021-10-13T09:03:09" }
xhluca
true
[ "@lhoestq I'm a bit confused by the error message. I haven't touched the YAML code at all - do you have any insight on that?", "I just added the missing `pretty_name` tag in the YAML - sorry about that ;)", "Thanks! Seems like it did the trick since the tests are passing. Let me know if there's anything else I can do in this PR!", "It's all good thank you :)\r\n\r\nmerging !" ]
1,015,444,094
3,021
Support loading dataset from multiple zipped CSV data files
closed
2021-10-04T17:33:57
2021-10-06T08:36:46
2021-10-06T08:36:45
https://github.com/huggingface/datasets/pull/3021
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3021", "html_url": "https://github.com/huggingface/datasets/pull/3021", "diff_url": "https://github.com/huggingface/datasets/pull/3021.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3021.patch", "merged_at": "2021-10-06T08:36:45" }
albertvillanova
true
[]
1,015,406,105
3,020
Add a metric for the MATH dataset (competition_math).
closed
2021-10-04T16:52:16
2021-10-22T10:29:31
2021-10-22T10:29:31
https://github.com/huggingface/datasets/pull/3020
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3020", "html_url": "https://github.com/huggingface/datasets/pull/3020", "diff_url": "https://github.com/huggingface/datasets/pull/3020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3020.patch", "merged_at": "2021-10-22T10:29:31" }
hacobe
true
[ "I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://github.com/hendrycks/math.git' for instance'\r\n\r\nIt fails along with (these fail with ImportError as well):\r\ntest_load_metric_bertscore\r\ntest_load_metric_bleurt\r\ntest_load_metric_comet\r\ntest_load_metric_coval\r\n\r\nLet me know if there is anything I need to change.", "Hi ! The script looks all good thanks :)\r\n\r\nTo fix the CI you just need to merge `master` into your branch\r\n```\r\ngit fetch upstream/master\r\ngit merge upstream/master\r\n```\r\n\r\nThen you also need to add `math_equivalence` to the list of git packages installed for the tests in `additional-tests-requirements.txt`\r\nhttps://github.com/huggingface/datasets/blob/ba831e4bcd175ae3d52afbf7d12c4f625bf541b0/additional-tests-requirements.txt#L1-L3", "I ran:\r\n\r\ngit fetch upstream\r\ngit merge upstream/master\r\n\r\nAnd I also added math_equivalence to the list of git packages installed for the tests in additional-tests-requirements.txt\r\n\r\ntests/test_metric_common.py fails with the same errors as before. tests/test_dataset_cards.py also fails, but it doesn't look related to this PR (it's an issue datasets/ami/README.md).", "@lhoestq Anything else I can do? I re-merged again and am getting the same test failures as described in the previous comment." ]