Column summary from the dataset viewer (types and value ranges as reported):

- id: int64, 953M to 3.35B
- number: int64, 2.72k to 7.75k
- title: string, 1 to 290 characters
- state: string, 2 classes
- created_at: timestamp[s], 2021-07-26 12:21:17 to 2025-08-23 00:18:43
- updated_at: timestamp[s], 2021-07-26 13:27:59 to 2025-08-23 12:34:39
- closed_at: timestamp[s], 2021-07-26 13:27:59 to 2025-08-20 16:35:55 (nullable)
- html_url: string, 49 to 51 characters
- pull_request: dict
- user_login: string, 3 to 26 characters
- is_pull_request: bool, 2 classes
- comments: list, 0 to 30 items

| id | number | title | state | created_at | updated_at | closed_at | html_url | pull_request | user_login | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,394,152,728
| 5,054
|
Fix license/citation information of squadshifts dataset card
|
closed
| 2022-10-03T05:19:13
| 2022-10-03T09:26:49
| 2022-10-03T09:24:30
|
https://github.com/huggingface/datasets/pull/5054
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5054",
"html_url": "https://github.com/huggingface/datasets/pull/5054",
"diff_url": "https://github.com/huggingface/datasets/pull/5054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5054.patch",
"merged_at": "2022-10-03T09:24:30"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,393,739,882
| 5,053
|
Intermittent JSON parse error when streaming the Pile
|
open
| 2022-10-02T11:56:46
| 2022-10-04T17:59:03
| null |
https://github.com/huggingface/datasets/issues/5053
| null |
neelnanda-io
| false
|
[
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co/datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n```",
"Ah, thanks! I did get errors like that. Sad that PR wasn't merged in! \r\n\r\nI'm currently just downloading 200GB of the Pile locally to avoid streaming (I have space and it's faster anyway), but that's really useful! I can probably apply the dumb patch of just commenting out the bits that raise the JSON Parse Error lol, based on your code - if I continue the loop should it be fine?",
"Yup you can get some inspiration from this PR. It simply ignores the bad chunks (a chunk is ~a few MBs of data).\r\nWe'll try to merge this PR soon"
] |
1,393,076,765
| 5,052
|
added from_generator method to IterableDataset class.
|
closed
| 2022-09-30T22:14:05
| 2022-10-05T12:51:48
| 2022-10-05T12:10:48
|
https://github.com/huggingface/datasets/pull/5052
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5052",
"html_url": "https://github.com/huggingface/datasets/pull/5052",
"diff_url": "https://github.com/huggingface/datasets/pull/5052.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5052.patch",
"merged_at": "2022-10-05T12:10:48"
}
|
hamid-vakilzadeh
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I added a test and moved the `streaming` param from `read` to `__init_`. Then, I also decided to update the `read` method of the rest of the packaged modules to account for this param. \r\n\r\n@hamid-vakilzadeh Are you OK with these changes? ",
"@mariosasko these all look great! Thanks for the updates."
] |
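For context on the API added by the PR above, here is a minimal sketch of `IterableDataset.from_generator` as it appears in later `datasets` releases; the generator and its fields are made up for illustration.

```python
from datasets import IterableDataset

def gen():
    # placeholder generator; real use cases typically stream from files or remote storage
    for i in range(100):
        yield {"id": i, "text": f"example {i}"}

# Examples are produced lazily; nothing is materialized on disk up front.
ds = IterableDataset.from_generator(gen)
for example in ds.take(3):
    print(example)
```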
1,392,559,503
| 5,051
|
Revert task removal in folder-based builders
|
closed
| 2022-09-30T14:50:03
| 2022-10-03T12:23:35
| 2022-10-03T12:21:31
|
https://github.com/huggingface/datasets/pull/5051
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5051",
"html_url": "https://github.com/huggingface/datasets/pull/5051",
"diff_url": "https://github.com/huggingface/datasets/pull/5051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5051.patch",
"merged_at": "2022-10-03T12:21:31"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,392,381,882
| 5,050
|
Restore saved format state in `load_from_disk`
|
closed
| 2022-09-30T12:40:07
| 2022-10-11T16:49:24
| 2022-10-11T16:49:24
|
https://github.com/huggingface/datasets/issues/5050
| null |
mariosasko
| false
|
[
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] |
1,392,361,381
| 5,049
|
Add `kwargs` to `Dataset.from_generator`
|
closed
| 2022-09-30T12:24:27
| 2022-10-03T11:00:11
| 2022-10-03T10:58:15
|
https://github.com/huggingface/datasets/pull/5049
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"merged_at": "2022-10-03T10:58:15"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,392,170,680
| 5,048
|
Fix bug with labels of eurlex config of lex_glue dataset
|
closed
| 2022-09-30T09:47:12
| 2022-09-30T16:30:25
| 2022-09-30T16:21:41
|
https://github.com/huggingface/datasets/pull/5048
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5048",
"html_url": "https://github.com/huggingface/datasets/pull/5048",
"diff_url": "https://github.com/huggingface/datasets/pull/5048.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5048.patch",
"merged_at": "2022-09-30T16:21:41"
}
|
iliaschalkidis
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@JamesLYC88 here is the fix! Thanks again!",
"Thanks, @albertvillanova. When do you expect that this change will take effect when someone downloads the dataset?",
"The change is immediately available now, since this change we made to our library:\r\n- #4059"
] |
1,392,088,398
| 5,047
|
Fix cats_vs_dogs
|
closed
| 2022-09-30T08:47:29
| 2022-09-30T10:23:22
| 2022-09-30T09:34:28
|
https://github.com/huggingface/datasets/pull/5047
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5047",
"html_url": "https://github.com/huggingface/datasets/pull/5047",
"diff_url": "https://github.com/huggingface/datasets/pull/5047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5047.patch",
"merged_at": "2022-09-30T09:34:28"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,391,372,519
| 5,046
|
Audiofolder creates empty Dataset if files same level as metadata
|
closed
| 2022-09-29T19:17:23
| 2022-10-28T13:05:07
| 2022-10-28T13:05:07
|
https://github.com/huggingface/datasets/issues/5046
| null |
msis
| false
|
[
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: https://colab.research.google.com/drive/1IhQzULYi0Van1xLrN_SddBX1JF7mLZZK?usp=sharing)",
"I think we can make the file name matching part more robust by replacing `file_name` with `os.path.normpath(file_name)`, to ignore \"./\" among other things, in these two places:\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L319\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L388",
"@mariosasko Some tests failed (see my PR). Any thoughts on that?",
"Yes, I mentioned the solution in my review.",
"I realized what I was doing wrong.\r\n\r\nThe documentation puts the files in a subfolder.\r\nOnce I have done that, it worked.\r\n\r\nBut l agree that this should be handled better if possible."
] |
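The fix suggested above relies on `os.path.normpath` to make the `file_name` matching tolerant of prefixes like `./`. A standard-library sketch of the normalization (file names are placeholders):

```python
import os.path

# normpath collapses "./", "../" and doubled separators (POSIX-style output shown)
for name in ["./clip.wav", "audio//clip.wav", "audio/../clip.wav"]:
    print(name, "->", os.path.normpath(name))
# ./clip.wav -> clip.wav
# audio//clip.wav -> audio/clip.wav
# audio/../clip.wav -> clip.wav
```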
1,391,287,609
| 5,045
|
Automatically revert to last successful commit to hub when a push_to_hub is interrupted
|
closed
| 2022-09-29T18:08:12
| 2023-10-16T13:30:49
| 2023-10-16T13:30:49
|
https://github.com/huggingface/datasets/issues/5045
| null |
jorahn
| false
|
[
"Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.",
"> Maybe push_to_hub be implemented as a single commit ? \r\n\r\nI think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with `huggingface_hub` but if there was another reason, please let me know.\r\nAbout pushing all at once, it seems to be a more and more requested feature. I have created this issue https://github.com/huggingface/huggingface_hub/issues/1085 recently but other discussions already happened in the past. The `moon-landing` team is working on it (cc @coyotte508). The `huggingface_hub` integration will come afterwards.\r\n\r\nFor now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n",
"> I think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with huggingface_hub but if there was another reason, please let me know.\r\n\r\nIdeally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. When we implemented `push_to_hub`, using `upload_file` for each shard was the only option.\r\n\r\nFor more context: for each shard to upload we do:\r\n1. load the arrow shard in memory\r\n2. convert to parquet\r\n3. upload\r\n\r\nSo to avoid OOM we need to upload the files iteratively.\r\n\r\n> For now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n\r\nLet us know if we can help !",
"> Ideally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. \r\n\r\nOh I see. So maybe this has to be done in an implementation specific to `datasets/` as it is not a very common case (upload a bunch of files on the fly).\r\n\r\nYou can maybe have a look at how `huggingface_hub` is implemented for LFS files (arrow shards are LFS anyway, right?).\r\nIn [`upload_lfs_files`](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/_commit_api.py#L164) LFS files are uploaded 1 by 1 (multithreaded) and then [the commit is pushed](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/hf_api.py#L1926) to the Hub once all files have been uploaded. This is pretty much what you need, right ?\r\n\r\nI can help you if you have questions how to do it in `datasets`. If that makes sense we could then move the implementation from `datasets` to `huggingface_hub` once it's mature. Next week I'm on holidays but feel free to start without my input.\r\n\r\n(also cc @coyotte508 and @SBrandeis who implemented LFS upload in `hfh`)",
"> Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nHere’s part of the stack trace, that I can reproduce at the moment from a photo I took (potential typos from OCR):\r\n```\r\nValueError\r\nTraceback (most recent call last)\r\n<ipython-input-4-274613b7d3f5> in <module>\r\nfrom datasets import load dataset\r\nds = load_dataset('jrahn/chessv6', use_auth_token-True)\r\n\r\n/us/local/1ib/python3.7/dist-packages/datasets/table.py in cast_table _to_schema (table, schema)\r\nLine 2005 raise ValueError()\r\n\r\nValueError: Couldn't cast \r\nfen: string \r\nmove: string \r\nres: string \r\neco: string \r\nmove_id: int64\r\nres_num: int64 to\r\n{ 'fen': Value(dtype='string', id=None), \r\n'move': Value(dtype=' string', id=None),\r\n'res': Value(dtype='string', id=None),\r\n'eco': Value(dtype='string', id=None), \r\n'hc': Value(dtype='string', id=None), \r\n'move_ id': Value(dtype='int64', id=None),\r\n'res_num': Value(dtype= 'int64' , id=None) }\r\nbecause column names don't match \r\n```\r\n\r\nThe column 'hc' was removed before the interrupted push_to_hub(). It appears in the column list in curly brackets but not in the column list above.\r\n\r\nLet me know, if I can be of any help."
] |
1,391,242,908
| 5,044
|
integrate `load_from_disk` into `load_dataset`
|
open
| 2022-09-29T17:37:12
| 2025-06-28T09:00:44
| null |
https://github.com/huggingface/datasets/issues/5044
| null |
stas00
| false
|
[
"I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it creates a cache directory to store the arrow data and the subsequent cache files for `map`.\r\n\r\n- `load_from_disk` directly returns a memory mapped dataset from the arrow file (similar to `Dataset.from_file`). It doesn't create a cache diretory, instead all the subsequent `map` calls write in the same directory as the original data. \r\n\r\nIf we want to keep the download_and_prepare step for consistency, it would unnecessarily copy the arrow data into the datasets cache. On the other hand if we don't do this step, the cache directory doesn't exist which is inconsistent.\r\n\r\nI'm curious, what would you expect to happen in this situation ?",
"Thank you for the detailed breakdown, @lhoestq \r\n\r\n> I'm curious, what would you expect to happen in this situation ?\r\n\r\n1. the simplest solution is to add a flag to the dataset saved by `save_to_disk` and have `load_dataset` check that flag - if it's set simply switch control to `load_from_disk` behind the scenes. So `load_dataset` detects it's a local filesystem, looks inside to see whether it's something it can cache or whether it should use it directly as is and continues accordingly with one of the 2 dataset-type specific APIs.\r\n\r\n2. the more evolved solution is to look at a dataset produced by `save_to_disk` as a remote resource like hub. So the first time `load_dataset` sees it, it'll take a fingerprint and create a normal cached dataset. On subsequent uses it'll again discover it as a remote resource, validate that it has it cached via the fingerprint and serve as a normal dataset. \r\n\r\nAs you said the cons of approach 2 is that if the dataset is huge it'll make 2 copies on the same machine. So it's possible that both approaches can be integrated. Say if `save_to_disc(do_not_cache=True)` is passed it'll use solution 1, otherwise solution 2. or could even symlink the huge arrow files to the cache instead? or perhaps it's more intuitive to use `load_dataset(do_not_cache=True)` instead. So that one can choose whether to make a cached copy or not for the locally saved dataset. i.e. a simple at use point user control.\r\n\r\nSurely there are other ways to handle it, this is just one possibility.\r\n",
"I think the simplest is to always memory map the local file without copy, but still have a cached directory in the cache at `~/.cache/huggingface` instead of saving `map` results next to the original data.\r\n\r\nIn practice we can even use symlinks if it makes the implementation simpler",
"Yes, so that you always have the cached entry for any dataset, but the \"payload\" doesn't have to be physically in the cache if it's already on the local filesystem. As you said a symlink will do. ",
"Any updates?",
"We haven't had the bandwidth to implement this so far. Let me know if you'd be interested in contributing this feature :)",
"@lhoestq I can jump into that. What I don't like is having functions with many parameters input. Even though they are optional, it's always harder to reason about and test such cases.\r\nIf there are more features worth to work on, feel free to ping me. It's a lot of fun to help :smile: ",
"Thanks a lot for your help @mariusz-jachimowicz-83 :)\r\n\r\nI think as a first step we could implement an Arrow dataset builder to be able to load and stream Arrow datasets locally or from Hugging Face. Maybe something similar to the Parquet builder at [src/datasets/packaged_modules/parquet/parquet.py](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py) ?\r\n\r\nAnd we can deal with the disk space optimization as a second step. What do you think ?\r\n\r\n(this issue is also related to https://github.com/huggingface/datasets/issues/3035)",
"@lhoestq I made a PR based on suggestion https://github.com/huggingface/datasets/pull/5944. Could you please review it?",
"@lhoestq Let me know if you have further recommendations or anything that you would like to add but you don't have bandwith for. ",
"Any update on this issue? It makes existing scripts and examples fall flat when provided with a customized/preprocessed dataset saved to disk.",
"This would be a really useful in terms of user experience. ",
"Is there any update on this? This would improves the clarity and consistency of the implementations.",
"Not yet ! Though we do have an Arrow loader in `load_dataset` now, so the remaining items are:\n\n1. update `load_dataset()` to support the old `save_to_disk()` structure with a Warning message that it's not the structure it generally uses and it's enabled for compatibility purposes\n\n(Q: `load_dataset()` works using a cache that contains cached Arrow files of any dataset, so if the dataset is already in Arrow we can optionally make it symlink the files in the cache instead of copying them ? for consistency I would still copy the data. Especially for cases where the dataset location is on a slow disk, it can be better to copy the data once to the fast cache)\n\n2. update `save_to_disk()` to export in a `load_dataset()` compatible structure",
"Hi! Quick update — I just opened [PR #7653](https://github.com/huggingface/datasets/pull/7653) to address this UX inconsistency.\n\nIt adds a fallback in `load_dataset()` that auto-detects when the path is a directory saved via `save_to_disk()`, and internally redirects to `load_from_disk()`, with a warning.\n\n```python\n# This now works as expected\nds = load_dataset(\"/path/to/saved_dataset\")\n````\n\nThis avoids loading `_data_files` metadata rows by mistake, which confused many users (e.g. in #7503).\n\nIt’s aligned with @lhoestq’s comment — to detect saved datasets and memory-map them directly instead of reprocessing.\n\nThe PR keeps things simple for now without introducing ArrowBuilder or new cache logic — just improves reliability where `load_dataset()` is hardcoded (like in TRL or `lighteval`).\n\nWould love feedback!"
] |
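To make the inconsistency discussed above concrete, here is a minimal sketch of the two code paths as they stood at the time of the thread; `"squad"` and the local directory name are only examples.

```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("squad", split="train")   # downloads, prepares Arrow files, and caches them

ds.save_to_disk("local_squad")              # writes the Arrow data + metadata to a local directory

# At the time of this thread, the saved directory could not be passed back to
# load_dataset(); it had to be reloaded with the separate API below.
reloaded = load_from_disk("local_squad")
```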
1,391,141,773
| 5,043
|
Fix `flatten_indices` with empty indices mapping
|
closed
| 2022-09-29T16:17:28
| 2022-09-30T15:46:39
| 2022-09-30T15:44:25
|
https://github.com/huggingface/datasets/pull/5043
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5043",
"html_url": "https://github.com/huggingface/datasets/pull/5043",
"diff_url": "https://github.com/huggingface/datasets/pull/5043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5043.patch",
"merged_at": "2022-09-30T15:44:25"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,390,762,877
| 5,042
|
Update swiss judgment prediction
|
closed
| 2022-09-29T12:10:02
| 2022-09-30T07:14:00
| 2022-09-29T14:32:02
|
https://github.com/huggingface/datasets/pull/5042
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5042",
"html_url": "https://github.com/huggingface/datasets/pull/5042",
"diff_url": "https://github.com/huggingface/datasets/pull/5042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5042.patch",
"merged_at": "2022-09-29T14:32:02"
}
|
JoelNiklaus
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,390,722,230
| 5,041
|
Support streaming hendrycks_test dataset.
|
closed
| 2022-09-29T11:37:58
| 2022-09-30T07:13:38
| 2022-09-29T12:07:29
|
https://github.com/huggingface/datasets/pull/5041
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5041",
"html_url": "https://github.com/huggingface/datasets/pull/5041",
"diff_url": "https://github.com/huggingface/datasets/pull/5041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5041.patch",
"merged_at": "2022-09-29T12:07:29"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,390,566,428
| 5,040
|
Fix NonMatchingChecksumError in hendrycks_test dataset
|
closed
| 2022-09-29T09:37:43
| 2022-09-29T10:06:22
| 2022-09-29T10:04:19
|
https://github.com/huggingface/datasets/pull/5040
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5040",
"html_url": "https://github.com/huggingface/datasets/pull/5040",
"diff_url": "https://github.com/huggingface/datasets/pull/5040.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5040.patch",
"merged_at": "2022-09-29T10:04:19"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,390,353,315
| 5,039
|
Hendrycks Checksum
|
closed
| 2022-09-29T06:56:20
| 2022-09-29T10:23:30
| 2022-09-29T10:04:20
|
https://github.com/huggingface/datasets/issues/5039
| null |
DanielHesslow
| false
|
[
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] |
1,389,631,122
| 5,038
|
`Dataset.unique` showing wrong output after filtering
|
closed
| 2022-09-28T16:20:35
| 2022-09-30T15:44:25
| 2022-09-30T15:44:25
|
https://github.com/huggingface/datasets/issues/5038
| null |
mxschmdt
| false
|
[
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] |
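A guess at the kind of call that exercised the bug above (the exact reproduction is not included in the thread): `unique` flattens the indices mapping created by `filter`, which failed when that mapping was empty.

```python
from datasets import Dataset

ds = Dataset.from_dict({"col": [1, 2, 2, 3]})

# A filter that removes every row leaves an empty indices mapping behind.
filtered = ds.filter(lambda x: x["col"] > 10)

# unique() calls flatten_indices() internally; with the fix from PR 5043
# this is expected to return [] rather than wrong output.
print(filtered.unique("col"))
```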
1,389,244,722
| 5,037
|
Improve CI performance speed of PackagedDatasetTest
|
closed
| 2022-09-28T12:08:16
| 2022-09-30T16:05:42
| 2022-09-30T16:03:24
|
https://github.com/huggingface/datasets/pull/5037
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5037",
"html_url": "https://github.com/huggingface/datasets/pull/5037",
"diff_url": "https://github.com/huggingface/datasets/pull/5037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5037.patch",
"merged_at": "2022-09-30T16:03:24"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There was a CI error which seemed unrelated: https://github.com/huggingface/datasets/actions/runs/3143581330/jobs/5111807056\r\n```\r\nFAILED tests/test_load.py::test_load_dataset_private_zipped_images[True] - FileNotFoundError: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/repo_zipped_img_data-16643808721979/resolve/75c3fc424a3b898a828b2b3fd84d96da4703228a/data.zip\r\n```\r\nIt disappeared after merging the main branch."
] |
1,389,094,075
| 5,036
|
Add oversampling strategy iterable datasets interleave
|
closed
| 2022-09-28T10:10:23
| 2022-09-30T12:30:48
| 2022-09-30T12:28:23
|
https://github.com/huggingface/datasets/pull/5036
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"merged_at": "2022-09-30T12:28:23"
}
|
ylacombe
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,388,914,476
| 5,035
|
Fix typos in load docstrings and comments
|
closed
| 2022-09-28T08:05:07
| 2022-09-28T17:28:40
| 2022-09-28T17:26:15
|
https://github.com/huggingface/datasets/pull/5035
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"merged_at": "2022-09-28T17:26:14"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,388,855,136
| 5,034
|
Update README.md of yahoo_answers_topics dataset
|
closed
| 2022-09-28T07:17:33
| 2022-10-06T15:56:05
| 2022-10-04T13:49:25
|
https://github.com/huggingface/datasets/pull/5034
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5034",
"html_url": "https://github.com/huggingface/datasets/pull/5034",
"diff_url": "https://github.com/huggingface/datasets/pull/5034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5034.patch",
"merged_at": null
}
|
borgr
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5034). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @borgr. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub.",
"Do you mean to edit through \"edit dataset card\" button? because it just leads to a broken page...\r\nhttps://huggingface.co/datasets/yahoo_answers_topics\r\n\r\nhttps://github.com/huggingface/datasets/tree/main/datasets/yahoo_answers_topics",
"Hi @borgr, good catch! I'm going to report the button leading to a broken link.\r\n\r\nIn the meantime, you can propose a PR to the `README.md` file using this link: https://huggingface.co/datasets/yahoo_answers_topics/blob/main/README.md"
] |
1,388,842,236
| 5,033
|
Remove redundant code from some dataset module factories
|
closed
| 2022-09-28T07:06:26
| 2022-09-28T16:57:51
| 2022-09-28T16:55:12
|
https://github.com/huggingface/datasets/pull/5033
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5033",
"html_url": "https://github.com/huggingface/datasets/pull/5033",
"diff_url": "https://github.com/huggingface/datasets/pull/5033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5033.patch",
"merged_at": "2022-09-28T16:55:12"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,388,270,935
| 5,032
|
new dataset type: single-label and multi-label video classification
|
open
| 2022-09-27T19:40:11
| 2022-11-02T19:10:13
| null |
https://github.com/huggingface/datasets/issues/5032
| null |
fcakyon
| false
|
[
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type needs\r\n\r\nalso cc @nateraw who also took a look at what we can do for video",
"@lhoestq @nateraw is there any progress on adding video classification datasets? ",
"Hi ! I think we just missing which lib we're going to use to decode the videos + which parameters must go in the `Video` type",
"Hmm. `decord` could be nice but it's no longer maintained [it seems](https://github.com/dmlc/decord/issues/214). ",
"pytorchvideo uses [pyav](https://github.com/PyAV-Org/PyAV) as the default decoder: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L37\r\n\r\nAlso it would be great if `optionally` audio can also be decoded from the video as in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L35\r\n\r\nHere are the other decoders supported in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/encoded_video.py#L17\r\n",
"@sayakpaul I did do quite a bit of work on [this PR](https://github.com/huggingface/datasets/pull/4532) a while back to add a video feature. It's outdated, but uses my `encoded_video` [package](https://github.com/nateraw/encoded-video) under the hood, which is basically a wrapper around PyAV stolen from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo/) that gets rid of the `torch` dependency. \r\n\r\nwould be really great to get something like this in...it's just a really tricky and time consuming feature to add. "
] |
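Since the thread above points to PyAV as the decoder pytorchvideo defaults to, here is a minimal PyAV decoding sketch; the file path is a placeholder and this is not a proposed `Video()` feature implementation.

```python
import av  # PyAV

container = av.open("clip.mp4")             # placeholder path
frames = [
    frame.to_ndarray(format="rgb24")        # H x W x 3 uint8 array per frame
    for frame in container.decode(video=0)  # video=0 selects the first video stream
]
container.close()
print(len(frames), frames[0].shape)
```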
1,388,201,146
| 5,031
|
Support hfh 0.10 implicit auth
|
closed
| 2022-09-27T18:37:49
| 2022-09-30T09:18:24
| 2022-09-30T09:15:59
|
https://github.com/huggingface/datasets/pull/5031
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5031",
"html_url": "https://github.com/huggingface/datasets/pull/5031",
"diff_url": "https://github.com/huggingface/datasets/pull/5031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5031.patch",
"merged_at": "2022-09-30T09:15:59"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq it is now released so you can move forward with it :) ",
"I took your comments into account @Wauplin :)\r\nI also bumped the requirement to 0.2.0 because we're using `set_access_token`\r\n\r\ncc @albertvillanova WDYT ? I edited the CI job to also check for our minimum supported version of hfh at the same time as the minimum pyarrow version",
"@lhoestq great, thanks ! :)"
] |
1,388,061,340
| 5,030
|
Fast dataset iter
|
closed
| 2022-09-27T16:44:51
| 2022-09-29T15:50:44
| 2022-09-29T15:48:17
|
https://github.com/huggingface/datasets/pull/5030
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5030",
"html_url": "https://github.com/huggingface/datasets/pull/5030",
"diff_url": "https://github.com/huggingface/datasets/pull/5030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5030.patch",
"merged_at": "2022-09-29T15:48:17"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran some benchmarks (focused on the data fetching part of `__iter__`) and it seems like the combination `table.to_reader(batch_size)` + `RecordBatch.slice` performs the best ([script](https://gist.github.com/mariosasko/0248288a2e3a7556873969717c1fe52b) with the results). I think we can choose (implicit) `batch_size=10` in the final implementation to avoid having problems with fetching large examples."
] |
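A rough sketch of the access pattern benchmarked in the comment above, assuming a pyarrow version that provides `Table.to_reader`; the table contents are made up.

```python
import pyarrow as pa

table = pa.table({"id": list(range(1000)), "text": [f"row {i}" for i in range(1000)]})

# Read the table as small record batches, then slice single rows out of each batch.
for batch in table.to_reader(max_chunksize=10):
    for i in range(batch.num_rows):
        row = batch.slice(i, 1).to_pylist()[0]
        # process `row` here
```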
1,387,600,960
| 5,029
|
Fix import in `ClassLabel` docstring example
|
closed
| 2022-09-27T11:35:29
| 2022-09-27T14:03:24
| 2022-09-27T12:27:50
|
https://github.com/huggingface/datasets/pull/5029
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5029",
"html_url": "https://github.com/huggingface/datasets/pull/5029",
"diff_url": "https://github.com/huggingface/datasets/pull/5029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5029.patch",
"merged_at": "2022-09-27T12:27:50"
}
|
alvarobartt
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,386,272,533
| 5,028
|
passing parameters to the method passed to Dataset.from_generator()
|
closed
| 2022-09-26T15:20:06
| 2022-10-03T13:00:00
| 2022-10-03T13:00:00
|
https://github.com/huggingface/datasets/issues/5028
| null |
Basir-mahmood
| false
|
[
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
] |
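Both options from the answer above, spelled out as a runnable sketch (`param1` and `"val"` mirror the hypothetical names used in the reply):

```python
import functools
from datasets import Dataset

def gen(param1):
    for i in range(3):
        yield {"value": f"{param1}-{i}"}

# Option 1: pass the arguments through gen_kwargs
ds1 = Dataset.from_generator(gen, gen_kwargs={"param1": "val"})

# Option 2: bind them up front with functools.partial
ds2 = Dataset.from_generator(functools.partial(gen, param1="val"))
```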
1,386,153,072
| 5,027
|
Fix typo in error message
|
closed
| 2022-09-26T14:10:09
| 2022-09-27T12:28:03
| 2022-09-27T12:26:02
|
https://github.com/huggingface/datasets/pull/5027
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5027",
"html_url": "https://github.com/huggingface/datasets/pull/5027",
"diff_url": "https://github.com/huggingface/datasets/pull/5027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5027.patch",
"merged_at": "2022-09-27T12:26:02"
}
|
severo
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,386,071,154
| 5,026
|
patch CI_HUB_TOKEN_PATH with Path instead of str
|
closed
| 2022-09-26T13:19:01
| 2022-09-26T14:30:55
| 2022-09-26T14:28:45
|
https://github.com/huggingface/datasets/pull/5026
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5026",
"html_url": "https://github.com/huggingface/datasets/pull/5026",
"diff_url": "https://github.com/huggingface/datasets/pull/5026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5026.patch",
"merged_at": "2022-09-26T14:28:45"
}
|
Wauplin
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,386,011,239
| 5,025
|
Custom Json Dataset Throwing Error when batch is False
|
closed
| 2022-09-26T12:38:39
| 2022-09-27T19:50:00
| 2022-09-27T19:50:00
|
https://github.com/huggingface/datasets/issues/5025
| null |
jmandivarapu1
| false
|
[
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessing for each image and text as all my data saved in cloud\r\n #For this reason I couldn't set the batch to True. \r\n encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n # drop extra dim\r\n for k in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n return encoding\r\n```",
"> Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n> \r\n> ```python\r\n> def prepare_examples(examples):\r\n> #Some preporcessing for each image and text as all my data saved in cloud\r\n> #For this reason I couldn't set the batch to True. \r\n> encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n> truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n> # drop extra dim\r\n> for k in encoding.items():\r\n> encoding[k]=encoding[k][0]\r\n> return encoding\r\n> ```\r\n\r\nThank you it did work\r\n\r\n```\r\nfor k,v in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n```"
] |
1,385,947,624
| 5,024
|
Fix string features of xcsr dataset
|
closed
| 2022-09-26T11:55:36
| 2022-09-28T07:56:18
| 2022-09-28T07:54:19
|
https://github.com/huggingface/datasets/pull/5024
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5024",
"html_url": "https://github.com/huggingface/datasets/pull/5024",
"diff_url": "https://github.com/huggingface/datasets/pull/5024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5024.patch",
"merged_at": "2022-09-28T07:54:19"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,385,881,112
| 5,023
|
Text strings are split into lists of characters in xcsr dataset
|
closed
| 2022-09-26T11:11:50
| 2022-09-28T07:54:20
| 2022-09-28T07:54:20
|
https://github.com/huggingface/datasets/issues/5023
| null |
albertvillanova
| false
|
[] |
1,385,432,859
| 5,022
|
Fix languages of X-CSQA configs in xcsr dataset
|
closed
| 2022-09-26T05:13:39
| 2022-09-26T12:27:20
| 2022-09-26T10:57:30
|
https://github.com/huggingface/datasets/pull/5022
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"merged_at": "2022-09-26T10:57:30"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4059), now fixes in all datasets are immediately accessible. You can try it:\r\n```python\r\nfrench = datasets.load_dataset(\"xcsr\", \"X-CSQA-fr\")\r\n```\r\n\r\nPlease note there is an additional fix to that dataset in progress (to be merged today):\r\n- #5024"
] |
1,385,351,250
| 5,021
|
Split is inferred from filename and overrides metadata.jsonl
|
closed
| 2022-09-26T03:22:14
| 2022-09-29T08:07:50
| 2022-09-29T08:07:50
|
https://github.com/huggingface/datasets/issues/5021
| null |
float-trip
| false
|
[
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files=\"dataset/**\")\r\n```",
"Thanks! Specifying `data_files` worked for that case.\r\n\r\nI'm new to the library, so let me try rephrasing the issue. If there's no actual bug here, sorry for the trouble.\r\n\r\nI've uploaded an example [here](https://files.catbox.moe/nfj2pd.zip) with the following files: \r\n\r\n```\r\n.\r\n├── bug.py\r\n└── imagefolder\r\n ├── test\r\n │ ├── metadata.jsonl\r\n │ ├── dog.jpg\r\n │ └── personal trainer.jpg\r\n └── train\r\n ├── metadata.jsonl\r\n ├── cat.jpg\r\n └── testing center.jpg\r\n```\r\n\r\n`bug.py`\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\")\r\n\r\nprint(dataset)\r\n# DatasetDict({\r\n# test: Dataset({\r\n# features: ['image', 'text'],\r\n# num_rows: 1\r\n# })\r\n# })\r\n\r\nfor split in dataset:\r\n print(\"Split:\", split)\r\n for n in dataset[split]:\r\n print(n['text'])\r\n\r\n\r\n# Split: test\r\n# testing center\r\n```\r\n\r\nAs far as I can tell, this conforms with the example given here: https://huggingface.co/docs/datasets/image_dataset#imagefolder. It appears to me that, even though `metadata.jsonl` is present, the inferred labels from the path are taking precedent. Does this sound like a bug/undocumented behavior?",
"This looks like a duplicate of https://github.com/huggingface/datasets/issues/4895 (the problem is explained in this comment: https://github.com/huggingface/datasets/issues/4895#issuecomment-1248269550).\r\n\r\nIn the meantime, you can do the following to fetch all the splits:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files={\"train\": \"imagefolder/train/**\", \"test\": \"imagefolder/test/**\"})\r\n```\r\n"
] |
1,384,684,078
| 5,020
|
Fix URLs of sbu_captions dataset
|
closed
| 2022-09-24T14:00:33
| 2022-09-28T07:20:20
| 2022-09-28T07:18:23
|
https://github.com/huggingface/datasets/pull/5020
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5020",
"html_url": "https://github.com/huggingface/datasets/pull/5020",
"diff_url": "https://github.com/huggingface/datasets/pull/5020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5020.patch",
"merged_at": "2022-09-28T07:18:23"
}
|
donglixp
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,384,673,718
| 5,019
|
Update swiss judgment prediction
|
closed
| 2022-09-24T13:28:57
| 2022-09-28T07:13:39
| 2022-09-28T05:48:50
|
https://github.com/huggingface/datasets/pull/5019
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5019",
"html_url": "https://github.com/huggingface/datasets/pull/5019",
"diff_url": "https://github.com/huggingface/datasets/pull/5019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5019.patch",
"merged_at": "2022-09-28T05:48:50"
}
|
JoelNiklaus
| true
|
[
"Thank you very much for the detailed review @albertvillanova!\r\n\r\nI updated the PR with the requested changes. ",
"At the end, I had to manually fix the conflict, so that CI tests are launched.\r\n\r\nPLEASE NOTE: you should first pull to incorporate the previous commit\r\n```shell\r\ngit pull\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for the detailed feedback and your time @albertvillanova! \r\nYes, thanks. My other datasets are already on the hub: https://huggingface.co/joelito\r\n"
] |
1,384,146,585
| 5,018
|
Create all YAML dataset_info
|
closed
| 2022-09-23T18:08:15
| 2023-09-24T09:33:21
| 2022-10-03T17:08:05
|
https://github.com/huggingface/datasets/pull/5018
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5018",
"html_url": "https://github.com/huggingface/datasets/pull/5018",
"diff_url": "https://github.com/huggingface/datasets/pull/5018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5018.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5018). All of your documentation changes will be reflected on that endpoint.",
"Closing since https://github.com/huggingface/datasets/pull/4974 removed all the datasets scripts.\r\n\r\nIndividual PRs must be opened on the Hugging face Hub to add the YAML metadata"
] |
1,384,022,463
| 5,017
|
xcsr: X-CSQA simply uses english for all alleged non-english data
|
closed
| 2022-09-23T16:11:54
| 2022-09-26T10:57:31
| 2022-09-26T10:57:31
|
https://github.com/huggingface/datasets/issues/5017
| null |
thesofakillers
| false
|
[
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] |
1,383,883,058
| 5,016
|
Fix tar extraction vuln
|
closed
| 2022-09-23T14:22:21
| 2022-09-29T12:42:26
| 2022-09-29T12:40:28
|
https://github.com/huggingface/datasets/pull/5016
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5016",
"html_url": "https://github.com/huggingface/datasets/pull/5016",
"diff_url": "https://github.com/huggingface/datasets/pull/5016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5016.patch",
"merged_at": "2022-09-29T12:40:28"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,383,485,558
| 5,015
|
Transfer dataset scripts to Hub
|
closed
| 2022-09-23T08:48:10
| 2022-10-05T07:15:57
| 2022-10-05T07:15:57
|
https://github.com/huggingface/datasets/issues/5015
| null |
albertvillanova
| false
|
[
"Sounds good ! Can I help with anything ?"
] |
1,383,422,639
| 5,014
|
I need to read the custom dataset in conll format
|
open
| 2022-09-23T07:49:42
| 2022-11-02T11:57:15
| null |
https://github.com/huggingface/datasets/issues/5014
| null |
shell-nlp
| false
|
[
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r\n\r\nIn the meantime, you can use `Dataset.from_generator` to create a dataset as follows:\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# 2009 version\r\nINPUT_COLUMNS = \"ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL\".split()\r\n\r\ndef read_conll(file):\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n idx = 0\r\n with open(file) as f:\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\\n\" or not line:\r\n if example[next(iter(example))]:\r\n yield idx, example\r\n idx += 1\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n else:\r\n row_cols = line.split()\r\n for i, col in enumerate(example):\r\n example[col] = row_cols[i].rstrip()\r\n\r\n# (optional) pass custom features with `features=Features(...)`\r\ndset = Dataset.from_generator(read_conll, gen_kwargs={\"file\": \"path/to/conll/file\"}) \r\n``` ",
"I think we could add a dedicated builder if you think this format is general enough.",
"\r\n\r\n\r\n> I think we could add a dedicated builder if you think this format is general enough.\r\n\r\nI think its functions are incomplete. It should have to_ Conll and from_ There are two methods of conll."
] |
1,383,415,971
| 5,013
|
would huggingface like publish cpp binding for datasets package ?
|
closed
| 2022-09-23T07:42:49
| 2023-02-24T16:20:57
| 2023-02-24T16:20:57
|
https://github.com/huggingface/datasets/issues/5013
| null |
mullerhai
| false
|
[
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingface load_model() and load_dataset() can execute in cpp env",
"If it's a viable option for you, you can check [tch-rs](https://github.com/LaurentMazare/tch-rs) to load models in Rust. Regarding datasets, you can first download them in python and then use Arrow C++ or Rust to load them",
"If you are more adventurous, another option is to embed python calls inside c++ e.g. with `pybind11`.",
"> pybind11\r\n\r\nI think it is not the best solution"
] |
1,382,851,096
| 5,012
|
Force JSON format regardless of file naming on S3
|
closed
| 2022-09-22T18:28:15
| 2023-08-16T09:58:36
| 2023-08-16T09:58:36
|
https://github.com/huggingface/datasets/issues/5012
| null |
junwang-wish
| false
|
[
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime",
"Hi,\r\nI want to make sure I understand this response. I have a set of files on S3 that are private for security reasons. Because they are not public files I cannot read those files (many are parquet) into my hf notebooks in Kaggle? That can't be correct, can it? ",
"Hi ! There is a discussion at https://github.com/huggingface/datasets/issues/5281\r\n\r\nUsing the latest `datasets` 2.11 you can try passing fsspec URLs to private buckets to `data_files` in `load_dataset()`. Though this is still experimental and undocumented, so feedback is welcome. You may not have the best experience though, since anything related to performance and caching hasn't been tested properly yet.",
"closing this one since data_files supports fsspec (still experimental/untested/undocumented for s3 though)"
] |
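A hedged sketch of the experimental path mentioned above (datasets >= 2.11): name the builder explicitly so the JSON format is used regardless of the S3 object names, and forward credentials through `storage_options`. The bucket, key pattern, and credential fields are placeholders.

```python
from datasets import load_dataset

ds = load_dataset(
    "json",                                      # force the JSON builder regardless of file naming
    data_files="s3://my-bucket/exports/part-*",  # placeholder fsspec URL
    storage_options={                            # forwarded to s3fs (assumed keys)
        "key": "AWS_ACCESS_KEY_ID",
        "secret": "AWS_SECRET_ACCESS_KEY",
    },
)
```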
1,382,609,587
| 5,011
|
Audio: `encode_example` fails with IndexError
|
closed
| 2022-09-22T15:07:27
| 2022-09-23T09:05:18
| 2022-09-23T09:05:18
|
https://github.com/huggingface/datasets/issues/5011
| null |
sanchit-gandhi
| false
|
[
"Sorry bug on my part 😅 Closing "
] |
1,382,308,799
| 5,010
|
Add deprecation warning to multilingual_librispeech dataset card
|
closed
| 2022-09-22T11:41:59
| 2022-09-23T12:04:37
| 2022-09-23T12:02:45
|
https://github.com/huggingface/datasets/pull/5010
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5010",
"html_url": "https://github.com/huggingface/datasets/pull/5010",
"diff_url": "https://github.com/huggingface/datasets/pull/5010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5010.patch",
"merged_at": "2022-09-23T12:02:45"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,381,194,067
| 5,009
|
Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly
|
closed
| 2022-09-21T16:23:06
| 2022-09-29T13:07:29
| 2022-09-29T13:07:29
|
https://github.com/huggingface/datasets/issues/5009
| null |
ykl7
| false
|
[
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the features types explicitly.\r\nThen you can save the feature types inside the dataset repository, so that you won't need to specify the features in subsequent calls:\r\n```python\r\nfrom datasets import load_dataset, Features, Sequence, Value\r\nfrom datasets.info import DatasetInfosDict\r\n\r\nfeatures = Features({\r\n 'narrative': Value('string'),\r\n 'question': Value('string'),\r\n 'original_sentence_for_question': Value('string'),\r\n 'narrative_lexical_overlap': Value('float64'),\r\n 'is_ques_answerable': Value('string'),\r\n 'answer': Value('string'),\r\n 'is_ques_answerable_annotator': Value('string'),\r\n 'original_narrative_form': Sequence(Value('string')),\r\n 'question_meta': Value('string'),\r\n 'helpful_sentences': Sequence(Value('int64')),\r\n 'human_eval': Value('bool'),\r\n 'val_ann': Sequence(Value('int64')),\r\n 'gram_ann': Sequence(Value('int64'))\r\n})\r\nds = load_dataset('StonyBrookNLP/tellmewhy', features=features)\r\nDatasetInfosDict({\"default\": ds[\"train\"].info}).write_to_directory(\"path/to/local/tellmewhy\")\r\n```\r\nand then after pushing the change to the dataset repository on the Hub, `load_dataset(\"StonyBrookNLP/tellmewhy\")` will work directly`",
"(Note that specifying explicit types will be made easier with https://github.com/huggingface/datasets/pull/4926)",
"`gram_ann` and `val_ann` are annotations that only exist for part of the test set. I wanted to keep all the columns consistent across all files, so I added them to train and validation as well. I'll check if removing them from those files is still compliant with this repo. Otherwise, I will do as you suggested. Thanks @lhoestq !",
"@lhoestq I followed the exact steps you described but it seems like I'm getting the same error unfortunately. Any other ideas? Thanks in advance",
"Hi ! If you move `dataset_infos.json` from `data/` to the root of your dataset repository if should work :)",
"I tried that and pushed to the [hub](https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/tree/main). Now, there is a new error.\r\n```\r\n File \"/home/yklal95/tellmewhy/src/prepare_data.py\", line 67, in main\r\n dataset = load_dataset('StonyBrookNLP/tellmewhy')\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 775, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 33, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'/home/yklal95/tellmewhy/data/test.json', '/home/yklal95/tellmewhy/data/validation.json', '/home/yklal95/tellmewhy/data/train.json'}\r\n```\r\nNo changes were made to any of the other files and they are still on the hub. Let me know if you have any ideas @lhoestq Thanks!",
"Oh I see - the code I gave you returns local paths instead of URLs to store metadata about files to download.\r\nI opened a PR in your repo here to remove this: https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/discussions/1\r\nsorry for the inconvenience !",
"It works now! Thanks a lot @lhoestq "
] |
1,381,090,903
| 5,008
|
Re-apply input columns change
|
closed
| 2022-09-21T15:09:01
| 2022-09-22T13:57:36
| 2022-09-22T13:55:23
|
https://github.com/huggingface/datasets/pull/5008
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5008",
"html_url": "https://github.com/huggingface/datasets/pull/5008",
"diff_url": "https://github.com/huggingface/datasets/pull/5008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5008.patch",
"merged_at": "2022-09-22T13:55:23"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,381,007,607
| 5,007
|
Add some note about running the transformers ci before a release
|
closed
| 2022-09-21T14:14:25
| 2022-09-22T10:16:14
| 2022-09-22T10:14:06
|
https://github.com/huggingface/datasets/pull/5007
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5007",
"html_url": "https://github.com/huggingface/datasets/pull/5007",
"diff_url": "https://github.com/huggingface/datasets/pull/5007.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5007.patch",
"merged_at": "2022-09-22T10:14:06"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,380,968,395
| 5,006
|
Revert input_columns change
|
closed
| 2022-09-21T13:49:20
| 2022-09-21T14:14:33
| 2022-09-21T14:11:57
|
https://github.com/huggingface/datasets/pull/5006
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5006",
"html_url": "https://github.com/huggingface/datasets/pull/5006",
"diff_url": "https://github.com/huggingface/datasets/pull/5006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5006.patch",
"merged_at": "2022-09-21T14:11:57"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one and I'll check if it fixes the `transformers` CI before doing a patch release"
] |
1,380,952,960
| 5,005
|
Release 2.5.0 breaks transformers CI
|
closed
| 2022-09-21T13:39:19
| 2022-09-21T14:11:57
| 2022-09-21T14:11:57
|
https://github.com/huggingface/datasets/issues/5005
| null |
albertvillanova
| false
|
[
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] |
1,380,860,606
| 5,004
|
Remove license tag file and validation
|
closed
| 2022-09-21T12:35:14
| 2022-09-22T11:47:41
| 2022-09-22T11:45:46
|
https://github.com/huggingface/datasets/pull/5004
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5004",
"html_url": "https://github.com/huggingface/datasets/pull/5004",
"diff_url": "https://github.com/huggingface/datasets/pull/5004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5004.patch",
"merged_at": "2022-09-22T11:45:46"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,380,617,353
| 5,003
|
Fix missing use_auth_token in streaming docstrings
|
closed
| 2022-09-21T09:27:03
| 2022-09-21T16:24:01
| 2022-09-21T16:20:59
|
https://github.com/huggingface/datasets/pull/5003
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5003",
"html_url": "https://github.com/huggingface/datasets/pull/5003",
"diff_url": "https://github.com/huggingface/datasets/pull/5003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5003.patch",
"merged_at": "2022-09-21T16:20:59"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,380,589,402
| 5,002
|
Dataset Viewer issue for loubnabnl/humaneval-x
|
closed
| 2022-09-21T09:06:17
| 2022-09-21T11:49:49
| 2022-09-21T11:49:49
|
https://github.com/huggingface/datasets/issues/5002
| null |
loubnabnl
| false
|
[
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] |
1,379,844,820
| 5,001
|
Support loading XML datasets
|
open
| 2022-09-20T18:42:58
| 2024-05-22T22:13:25
| null |
https://github.com/huggingface/datasets/pull/5001
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5001",
"html_url": "https://github.com/huggingface/datasets/pull/5001",
"diff_url": "https://github.com/huggingface/datasets/pull/5001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5001.patch",
"merged_at": null
}
|
albertvillanova
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5001). All of your documentation changes will be reflected on that endpoint.",
"> CC: @davanstrien\r\n\r\nI should have some time to look at this on Friday :) ",
"@albertvillanova I've tried this with a few different XML datasets. One issue I've run into is getting a `KeyError` when the attributes of a field differ from the first parsed row. Unfortunately, this can come up in the ALTO XML format, for example, if you want to parse the 'string' field, which contains the text in the ALTO XML files. \r\n\r\nWhen parsing a file, this instance has no 'STYLE' attribute: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"295\" VPOS=\"926\" HPOS=\"247\"><String WC=\"0.4600000083\" CONTENT=\"jufqu’en\" HEIGHT=\"39\" WIDTH=\"117\" VPOS=\"926\" HPOS=\"247\"/><SP WIDTH=\"14\" VPOS=\"928\" HPOS=\"365\"/><String WC=\"0.6075000167\" CONTENT=\"l’an\" HEIGHT=\"26\" WIDTH=\"50\" VPOS=\"928\" HPOS=\"380\"/><SP WIDTH=\"24\" VPOS=\"936\" HPOS=\"431\"/><String WC=\"0.4300000072\" CONTENT=\"1\" HEIGHT=\"16\" WIDTH=\"9\" VPOS=\"936\" HPOS=\"456\"/><String STYLE=\"italics\" WC=\"0.5774999857\" CONTENT=\"361.\" HEIGHT=\"25\" WIDTH=\"68\" VPOS=\"933\" HPOS=\"474\"/></TextLine>\r\n```\r\n\r\nWhereas this one which appears later in the file, does have this field: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"712\" VPOS=\"966\" HPOS=\"297\"><String STYLE=\"italics\" WC=\"0.6999999881\" CONTENT=\"I\" HEIGHT=\"17\" WIDTH=\"9\" VPOS=\"977\" HPOS=\"297\"/><String WC=\"0.5\" CONTENT=\"I.\" HEIGHT=\"18\" WIDTH=\"25\" VPOS=\"976\" HPOS=\"318\"/><SP WIDTH=\"24\" VPOS=\"971\" HPOS=\"344\"/><String STYLE=\"italics\" WC=\"0.3359999955\" CONTENT=\"Crade\" HEIGHT=\"26\" WIDTH=\"91\" VPOS=\"967\" HPOS=\"369\"/><SP WIDTH=\"31\" VPOS=\"971\" HPOS=\"461\"/><String STYLE=\"italics\" WC=\"0.6060000062\" CONTENT=\"Pétri\" HEIGHT=\"26\" WIDTH=\"71\" VPOS=\"968\" HPOS=\"493\"/><SP WIDTH=\"23\" VPOS=\"968\" HPOS=\"565\"/><String STYLE=\"italics\" WC=\"0.612857163\" CONTENT=\"Candidi\" HEIGHT=\"27\" WIDTH=\"111\" VPOS=\"967\" HPOS=\"589\"/><SP WIDTH=\"19\" VPOS=\"967\" HPOS=\"701\"/><String STYLE=\"italics\" WC=\"0.4088888764\" CONTENT=\"Decembrii\" HEIGHT=\"28\" WIDTH=\"144\" VPOS=\"966\" HPOS=\"721\"/><SP WIDTH=\"10\" VPOS=\"968\" HPOS=\"866\"/><String STYLE=\"italics\" WC=\"0.4600000083\" CONTENT=\"in\" HEIGHT=\"25\" WIDTH=\"27\" VPOS=\"968\" HPOS=\"877\"/><SP WIDTH=\"9\" VPOS=\"967\" HPOS=\"905\"/><String STYLE=\"italics\" WC=\"0.5099999905\" CONTENT=\"funere\" HEIGHT=\"38\" WIDTH=\"94\" VPOS=\"967\" HPOS=\"915\"/></TextLine>\r\n```\r\n\r\nSince the first-seen fields define what is passed to `arrow_writer`, this causes a KeyError when the version with the extra attributes is encountered because it doesn't expect this column. \r\n\r\nSince it's important to support streaming, I'm not sure there is a nice way to detect attributes for the whole file easily in an automatic way. The two potential ways I can see of doing it.\r\n\r\n- Do an initial pass on a batch of data to have a higher chance of encountering variations in attributes before doing the arrow write. \r\n- Do a full pass on one file (and assume that this won't change across files) \r\n\r\nI think the other way of doing this would be to allow users to define expected/wanted attributes as another loading argument. This could then be used to extract the described attributes (and make them None if not found). This requires a bit more work from the user but could be helpful. For example, in the XML above, likely, most users will only want the `WC` and `CONTENT` attributes. So they could specify this upfront and avoid loading extra data they don't need or want. 
I suspect this option would make more sense than making this operation automatic for the case where attributes might change. WDYT? \r\n\r\n\r\n\r\n\r\n\r\n\r\n"
] |
1,379,709,398
| 5,000
|
Dataset Viewer issue for asapp/slue
|
closed
| 2022-09-20T16:45:45
| 2022-09-27T07:04:03
| 2022-09-21T07:24:07
|
https://github.com/huggingface/datasets/issues/5000
| null |
fwu-asapp
| false
|
[
"<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```",
"I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?",
"The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture d’écran 2022-09-20 à 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n",
"OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```",
"Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n",
"Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492",
"Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.",
"FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!",
"Great! And thank you for sharing that interesting dataset!"
] |
1,379,610,030
| 4,999
|
Add EmptyDatasetError
|
closed
| 2022-09-20T15:28:05
| 2022-09-21T12:23:43
| 2022-09-21T12:21:24
|
https://github.com/huggingface/datasets/pull/4999
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4999",
"html_url": "https://github.com/huggingface/datasets/pull/4999",
"diff_url": "https://github.com/huggingface/datasets/pull/4999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4999.patch",
"merged_at": "2022-09-21T12:21:24"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,379,466,717
| 4,998
|
Don't add a tag on the Hub on release
|
closed
| 2022-09-20T13:54:57
| 2022-09-20T14:11:46
| 2022-09-20T14:08:54
|
https://github.com/huggingface/datasets/pull/4998
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4998",
"html_url": "https://github.com/huggingface/datasets/pull/4998",
"diff_url": "https://github.com/huggingface/datasets/pull/4998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4998.patch",
"merged_at": "2022-09-20T14:08:54"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,379,430,711
| 4,997
|
Add support for parsing JSON files in array form
|
closed
| 2022-09-20T13:31:26
| 2022-09-20T15:42:40
| 2022-09-20T15:40:06
|
https://github.com/huggingface/datasets/pull/4997
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4997",
"html_url": "https://github.com/huggingface/datasets/pull/4997",
"diff_url": "https://github.com/huggingface/datasets/pull/4997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4997.patch",
"merged_at": "2022-09-20T15:40:05"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,379,345,161
| 4,996
|
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
|
closed
| 2022-09-20T12:32:07
| 2022-09-27T12:35:44
| 2022-09-27T12:35:44
|
https://github.com/huggingface/datasets/issues/4996
| null |
severo
| false
|
[
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this."
] |
1,379,108,482
| 4,995
|
Get a specific Exception when the dataset has no data
|
closed
| 2022-09-20T09:31:59
| 2022-09-21T12:21:25
| 2022-09-21T12:21:25
|
https://github.com/huggingface/datasets/issues/4995
| null |
severo
| false
|
[] |
1,379,084,015
| 4,994
|
delete the hardcoded license list in `datasets`
|
closed
| 2022-09-20T09:14:41
| 2022-09-22T11:45:47
| 2022-09-22T11:45:47
|
https://github.com/huggingface/datasets/issues/4994
| null |
julien-c
| false
|
[] |
1,379,044,435
| 4,993
|
fix: avoid casting tuples after Dataset.map
|
closed
| 2022-09-20T08:45:16
| 2022-09-20T16:11:27
| 2022-09-20T13:08:29
|
https://github.com/huggingface/datasets/pull/4993
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4993",
"html_url": "https://github.com/huggingface/datasets/pull/4993",
"diff_url": "https://github.com/huggingface/datasets/pull/4993.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4993.patch",
"merged_at": "2022-09-20T13:08:29"
}
|
szmoro
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,379,031,842
| 4,992
|
Support streaming iwslt2017 dataset
|
closed
| 2022-09-20T08:35:41
| 2022-09-20T09:27:55
| 2022-09-20T09:15:24
|
https://github.com/huggingface/datasets/pull/4992
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4992",
"html_url": "https://github.com/huggingface/datasets/pull/4992",
"diff_url": "https://github.com/huggingface/datasets/pull/4992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4992.patch",
"merged_at": "2022-09-20T09:15:24"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,378,898,752
| 4,991
|
Fix missing tags in dataset cards
|
closed
| 2022-09-20T06:42:07
| 2022-09-22T12:25:32
| 2022-09-20T07:37:30
|
https://github.com/huggingface/datasets/pull/4991
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4991",
"html_url": "https://github.com/huggingface/datasets/pull/4991",
"diff_url": "https://github.com/huggingface/datasets/pull/4991.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4991.patch",
"merged_at": "2022-09-20T07:37:30"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,378,120,806
| 4,990
|
"no-token" is passed to `huggingface_hub` when token is `None`
|
closed
| 2022-09-19T15:14:40
| 2022-09-30T09:16:00
| 2022-09-30T09:16:00
|
https://github.com/huggingface/datasets/issues/4990
| null |
Wauplin
| false
|
[
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @albertvillanova , thanks for finding the original issue :+1: \r\n\r\nAs of next release of `huggingface_hub`, the `token` argument will be deprecated in favor of the `use_auth_token` argument in `dataset_info` method. This change as been done by @SBrandeis in https://github.com/huggingface/huggingface_hub/pull/928. `use_auth_token` is a bit different and allow the case \"don't sent the cached token by default\".\r\n\r\nIf you want to strictly avoid sending the cached token from `datasets`, you can use:\r\n```py\r\n# token=token if token else \"no-token\", <- will fail because token is not valid\r\n\r\nuse_auth_token=token if token else False, # using the new `use_auth_token` parameter\r\n```\r\n\r\nAnd as a note, I am currently updating the \"don't send the cached token by default\"-rule to \"don't send the cached token on public repos by default but use it in private ones\" in https://github.com/huggingface/huggingface_hub/pull/1064. This will not change the fact that `use_auth_token=False` doesn't send the token at all.\r\n",
"What is current strategy in term of updating `huggingface_hub` version in `datasets` ? I don't want to break stuff in the next release so let's find a proper solution :) ",
"As soon as `token` is deprecated and hfh has a new release, we'll update `datasets` to use the new argument instead. Does it sound good to you ?",
"Perfect :ok_hand: ",
"Hi @Wauplin, thanks for the warning about the deprecation of `token` in favor of `use_auth_token`.\r\n\r\nIndeed, in datasets we use internally `use_auth_token`, which in this case was transformed to `token` to call `HfApi.dataset_info`:\r\nhttps://github.com/huggingface/datasets/blob/1a9385d7cc8a3241b44015145ef56a230fdadc51/src/datasets/load.py#L747\r\n\r\nTherefore, for the new hfh release, the fix will be trivial: we will pass directly `use_auth_token`.\r\n\r\nAs discussed during our meeting yesterday, due to the fact that at datasets we support multiple hfh versions, I think we should handle passing `token` or `use_auth_token` depending on the hfh version."
] |
1,376,832,233
| 4,989
|
Running add_column() seems to corrupt existing sequence-type column info
|
closed
| 2022-09-17T17:42:05
| 2022-09-19T12:54:54
| 2022-09-19T12:54:54
|
https://github.com/huggingface/datasets/issues/4989
| null |
derek-rocheleau
| false
|
[
"Nevermind, I was incorrect."
] |
1,376,096,584
| 4,988
|
Add `IterableDataset.from_generator` to the API
|
closed
| 2022-09-16T15:19:41
| 2022-10-05T12:10:49
| 2022-10-05T12:10:49
|
https://github.com/huggingface/datasets/issues/4988
| null |
mariosasko
| false
|
[
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] |
1,376,006,477
| 4,987
|
Embed image/audio data in dl_and_prepare parquet
|
closed
| 2022-09-16T14:09:27
| 2022-09-16T16:24:47
| 2022-09-16T16:22:35
|
https://github.com/huggingface/datasets/pull/4987
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4987",
"html_url": "https://github.com/huggingface/datasets/pull/4987",
"diff_url": "https://github.com/huggingface/datasets/pull/4987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4987.patch",
"merged_at": "2022-09-16T16:22:35"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,375,895,035
| 4,986
|
[doc] Fix broken snippet that had too many quotes
|
closed
| 2022-09-16T12:41:07
| 2022-09-16T22:12:21
| 2022-09-16T17:32:14
|
https://github.com/huggingface/datasets/pull/4986
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4986",
"html_url": "https://github.com/huggingface/datasets/pull/4986",
"diff_url": "https://github.com/huggingface/datasets/pull/4986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4986.patch",
"merged_at": "2022-09-16T17:32:14"
}
|
tomaarsen
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4986/en/process#map):\r\n\r\n"
] |
1,375,807,768
| 4,985
|
Prefer split patterns from directories over split patterns from filenames
|
closed
| 2022-09-16T11:20:40
| 2022-11-02T11:54:28
| 2022-09-29T08:07:49
|
https://github.com/huggingface/datasets/pull/4985
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4985",
"html_url": "https://github.com/huggingface/datasets/pull/4985",
"diff_url": "https://github.com/huggingface/datasets/pull/4985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4985.patch",
"merged_at": "2022-09-29T08:07:49"
}
|
polinaeterna
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can we merge this one since the issue this PR fixes was reported for the second time? I also think we don't need a test for this simple change.",
"@mariosasko sure! could you please approve it? ",
"Hi there @polinaeterna @mariosasko! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!"
] |
1,375,690,330
| 4,984
|
docs: ✏️ add links to the Datasets API
|
closed
| 2022-09-16T09:34:12
| 2022-09-16T13:10:14
| 2022-09-16T13:07:33
|
https://github.com/huggingface/datasets/pull/4984
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"merged_at": null
}
|
severo
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] |
1,375,667,654
| 4,983
|
How to convert torch.utils.data.Dataset to huggingface dataset?
|
closed
| 2022-09-16T09:15:10
| 2023-12-14T20:54:15
| 2022-09-20T11:23:43
|
https://github.com/huggingface/datasets/issues/4983
| null |
DEROOCE
| false
|
[
"Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```",
"Maybe `Dataset.from_list` can work as well no ?\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndset = Dataset.from_list(torch_dataset)\r\n```",
"> ```python\r\n> from datasets import Dataset\r\n> \r\n> def gen():\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> ## or if it's an IterableDataset\r\n> # for ex in torch_dataset:\r\n> # yield ex\r\n> \r\n> dset = Dataset.from_generator(gen)\r\n> ```\r\n\r\nI try to use `Dataset.from_generator()` method, and it returns an error:\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_generator'\r\n```\r\nAnd I think it maybe the version of my datasets package is out-of-date, so I update it\r\n```bash\r\npip install --upgrade datasets\r\n```\r\nBut after that, the code still return the above Error. ",
"> ```python\r\n> dset = Dataset.from_list(torch_dataset)\r\n> ```\r\n\r\nIt seems that Dataset also has no `from_list` method 😂\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_list'\r\n```",
"> I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> \r\n> ```python\r\n> from datasets import Dataset\r\n> data = [[1, 2],[3, 4]]\r\n> ds = Dataset.from_dict({\"data\": data})\r\n> ds = ds.with_format(\"torch\")\r\n> ds[0]\r\n> ds[:2]\r\n> ```\r\n> \r\n> So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n\r\nMy dummy code is like:\r\n```python\r\nimport os\r\nimport json\r\nfrom torch.utils import data\r\nimport datasets\r\n\r\ndef gen(torch_dataset):\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n\r\nclass MyDataset(data.Dataset):\r\n def __init__(self, path):\r\n self.dict = []\r\n for line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n self.dict.append(j_dict['context'])\r\n \r\n def __getitem__(self, idx):\r\n return self.dict[idx]\r\n\r\n def __len__(self):\r\n return len(self.dict)\r\n\r\nroot_path = os.path.dirname(os.path.abspath(__file__))\r\npath = os.path.join(root_path, 'dataset', 'train.json')\r\ntorch_dataset = MyDataset(path)\r\n\r\ndit = []\r\nfor line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n dit.append(j_dict['context'])\r\ndset1 = datasets.Dataset.from_list(dit)\r\nprint(dset1)\r\ndset2 = datasets.Dataset.from_generator(gen)\r\nprint(dset2)\r\n```",
"We're releasing `from_generator` and `from_list` today :)\r\nIn the meantime you can play with them by installing `datasets` from source",
"> We're releasing `from_generator` and `from_list` today :) In the meantime you can play with them by installing `datasets` from source\r\n\r\nThanks a lot for your work!",
"> > I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> > ```python\r\n> > from datasets import Dataset\r\n> > data = [[1, 2],[3, 4]]\r\n> > ds = Dataset.from_dict({\"data\": data})\r\n> > ds = ds.with_format(\"torch\")\r\n> > ds[0]\r\n> > ds[:2]\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n> \r\n> My dummy code is like:\r\n> \r\n> ```python\r\n> import os\r\n> import json\r\n> from torch.utils import data\r\n> import datasets\r\n> \r\n> def gen(torch_dataset):\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> \r\n> class MyDataset(data.Dataset):\r\n> def __init__(self, path):\r\n> self.dict = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> self.dict.append(j_dict['context'])\r\n> \r\n> def __getitem__(self, idx):\r\n> return self.dict[idx]\r\n> \r\n> def __len__(self):\r\n> return len(self.dict)\r\n> \r\n> root_path = os.path.dirname(os.path.abspath(__file__))\r\n> path = os.path.join(root_path, 'dataset', 'train.json')\r\n> torch_dataset = MyDataset(path)\r\n> \r\n> dit = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> dit.append(j_dict['context'])\r\n> dset1 = datasets.Dataset.from_list(dit)\r\n> print(dset1)\r\n> dset2 = datasets.Dataset.from_generator(gen)\r\n> print(dset2)\r\n> ```\r\nHi, when I am using this code to build my own dataset, ` datasets.Dataset.from_generator(gen)` report `TypeError: cannot pickle generator object` whre MyDataset returns a dict like {'image': bytes, 'text': string}. How can I resolve this? Thanks a lot!",
"Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n\r\nIn the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n```python\r\nwith open(...) as f:\r\n\r\n def gen():\r\n for x in f:\r\n yield json.loads(x)\r\n\r\n ds = Dataset.from_generator(gen)\r\n```\r\nbut this does work:\r\n```python\r\ndef gen():\r\n with open(...) as f:\r\n for x in f:\r\n yield json.loads(x)\r\n\r\nds = Dataset.from_generator(gen)\r\n```",
"> Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n> \r\n> In the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n> \r\n> ```python\r\n> with open(...) as f:\r\n> \r\n> def gen():\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n> \r\n> but this does work:\r\n> \r\n> ```python\r\n> def gen():\r\n> with open(...) as f:\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n\r\nThanks a lot! That's the reason why I have encountered this issue. Sorry for bothering you again with another problem, since my dataset is large and I use IterableDataset.from_generator which has no attribute with_transform, how can I equip it with some customed preprocessings like Dataset.from_generator? Should I move the preprocessing to the my torch Dataset?",
"Iterable datasets are lazy: exactly like `with_transform` they apply processing on the fly when accessing the examples.\r\n\r\nTherefore you can use `my_iterable_dataset.map()` instead :)",
"@lhoestq thanks a lot and I have successfully made it work~",
"@lhoestq I am having a similar issue. Can you help me understand which kinds of generators are picklable? I previously thought that no generators are picklable so I'm intrigued to hear this.",
"Generator functions are generally picklable. E.g.\r\n```python\r\nimport dill as pickle\r\n\r\ndef generator_fn():\r\n for i in range(10):\r\n yield i\r\n\r\npickle.dumps(generator_fn)\r\n```\r\n\r\nhowever generators are not picklable\r\n```python\r\ngenerator = generator_fn()\r\npickle.dumps(generator)\r\n# TypeError: cannot pickle 'generator' object\r\n```\r\n\r\nThough it can happen that some generator functions are not recursively picklable if they use global objects that are not picklable:\r\n```python\r\ndef generator_fn_not_picklable():\r\n for i in generator:\r\n yield i\r\n\r\npickle.dumps(generator_fn_not_picklable, recurse=True)\r\n# TypeError: cannot pickle 'generator' object\r\n````",
"I'm trying to create an IterableDataset from a generator but I get this error:\r\n`PicklingError: Can't pickle <built-in function input>: it's not the same object as builtins.input`\r\n\r\nWhat can I do?"
] |
1,375,604,693
| 4,982
|
Create dataset_infos.json with VALIDATION and TEST splits
|
closed
| 2022-09-16T08:21:19
| 2022-09-28T07:59:39
| 2022-09-28T07:59:39
|
https://github.com/huggingface/datasets/issues/4982
| null |
skalinin
| false
|
[
"@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)",
"Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?",
"Yes, it worked! thanks a lot"
] |
1,375,086,773
| 4,981
|
Can't create a dataset with `float16` features
|
open
| 2022-09-15T21:03:24
| 2025-06-12T11:47:42
| null |
https://github.com/huggingface/datasets/issues/4981
| null |
dconathan
| false
|
[
"Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",
"Thanks for the link…. didn’t realize arrow didn’t support it yet. Should it be removed from https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Value until Arrow supports it?",
"Yes, you are right: maybe we should either remove it from our docs or add a comment explaining the issue.\r\n\r\nThe thing is that in Arrow it is partially supported: you can create `float16` values, but you can't cast them from/to other types. And current implementation of `Value` always tries to perform a cast from `float64` to `float16`.",
"Maybe we can just add a note in the `Value` documentation ?",
"Would you accept a PR to fix this? @lhoestq Do you have an idea of how hard it would be to fix?",
"I think the issue comes mostly from pyarrow not supporting `float16` completely.\r\n\r\nFor example you stil can't cast from/to `float16`\r\n```python\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\npa.array(range(5)).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float\r\npa.array(range(5), pa.float32()).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from float to halffloat using function cast_half_float\r\npa.array(range(5), pa.float16())\r\n# ArrowTypeError: Expected np.float16 instance\r\npa.array(np.arange(5, dtype=np.float16())).cast(pa.float32())\r\n# ArrowNotImplementedError: Unsupported cast from halffloat to float using function cast_float\r\n```",
"Hmm it seems like we can either:\r\n1. try to fix pyarrow upstream\r\n2. half-support float16 with some workaround to make sure we don't ever do casting internally\r\n",
"This seems to be fixed now. Not sure if all operations are supported, but at least creating the dataset is supported."
] |
1,374,868,083
| 4,980
|
Make `pyarrow` optional
|
closed
| 2022-09-15T17:38:03
| 2022-09-16T17:23:47
| 2022-09-16T17:23:47
|
https://github.com/huggingface/datasets/issues/4980
| null |
KOLANICH
| false
|
[
"The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite / a different library with minimal functionality (datasets-lite ?)",
"Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ",
"Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n"
] |
1,374,820,758
| 4,979
|
Fix missing tags in dataset cards
|
closed
| 2022-09-15T16:51:03
| 2022-09-22T12:37:55
| 2022-09-15T17:12:09
|
https://github.com/huggingface/datasets/pull/4979
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4979",
"html_url": "https://github.com/huggingface/datasets/pull/4979",
"diff_url": "https://github.com/huggingface/datasets/pull/4979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4979.patch",
"merged_at": "2022-09-15T17:12:09"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,374,271,504
| 4,978
|
Update IndicGLUE download links
|
closed
| 2022-09-15T10:05:57
| 2022-09-15T22:00:20
| 2022-09-15T21:57:34
|
https://github.com/huggingface/datasets/pull/4978
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4978",
"html_url": "https://github.com/huggingface/datasets/pull/4978",
"diff_url": "https://github.com/huggingface/datasets/pull/4978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4978.patch",
"merged_at": "2022-09-15T21:57:34"
}
|
sumanthd17
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,372,962,157
| 4,977
|
Providing dataset size
|
open
| 2022-09-14T13:09:27
| 2022-09-15T16:03:58
| null |
https://github.com/huggingface/datasets/issues/4977
| null |
sashavor
| false
|
[
"Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926",
"Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API",
"Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests/additional coding) would be really useful :hugs: "
] |
1,372,322,382
| 4,976
|
Hope to adapt Python3.9 as soon as possible
|
open
| 2022-09-14T04:42:22
| 2022-09-26T16:32:35
| null |
https://github.com/huggingface/datasets/issues/4976
| null |
RedHeartSecretMan
| false
|
[
"Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?",
"There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^",
"Perhaps we should report this issue in the `filelock` repo?"
] |
1,371,703,691
| 4,975
|
Add `fn_kwargs` param to `IterableDataset.map`
|
closed
| 2022-09-13T16:19:05
| 2023-05-05T16:53:43
| 2022-09-13T16:45:34
|
https://github.com/huggingface/datasets/pull/4975
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4975",
"html_url": "https://github.com/huggingface/datasets/pull/4975",
"diff_url": "https://github.com/huggingface/datasets/pull/4975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4975.patch",
"merged_at": "2022-09-13T16:45:34"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for adding this fix! \r\n\r\nWould it be possible to get `fn_kwargs` added to `IterableDatasetDict.map` as well? It looks like a very similar problem, and hopefully shouldn't be a huge change. \r\n",
"Hi @brianhill11! https://github.com/huggingface/datasets/pull/5810 adds this (opened a couple of days ago). It should be merged soon.",
"That's fantastic news, thanks @mariosasko ! I'll give it a shot once the changes are merged in. "
] |
1,371,682,020
| 4,974
|
[GH->HF] Part 2: Remove all dataset scripts from github
|
closed
| 2022-09-13T16:01:12
| 2022-10-03T17:09:39
| 2022-10-03T17:07:32
|
https://github.com/huggingface/datasets/pull/4974
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4974",
"html_url": "https://github.com/huggingface/datasets/pull/4974",
"diff_url": "https://github.com/huggingface/datasets/pull/4974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4974.patch",
"merged_at": "2022-10-03T17:07:32"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"So this means metrics will be deleted from this repo in favor of the \"evaluate\" library? Maybe you guys could just redirect metrics to that library.",
"We are deprecating the metrics in `datasets` indeed and suggest users to switch to `evaluate` (via a warning message)\r\n\r\nWe'll keep the current metrics as they are for now, but they'll be completely removed at one point",
"I guess this is ready to merge ?\r\n\r\nIt should break nothing except one rare case:\r\n\r\nIf someone is using an old version of `datasets` to try to load a recent dataset. Indeed in that case it fetches the `main` branch on github to see if it exists. But since we're removing all the datasets, forward fetching won't work anymore.\r\n\r\ne.g. if someone uses \"imagenet-1k\" with a version of `datasets` that didn't have it at that time. I checked on kibana and one single user would be affected with 4k downloads/months. It should still work for them though thanks to the `datasets` cache\r\n\r\nBut if they delete their cache, the workaround is... 🥁 update `datasets` 😅",
"Let's merge this on monday if we can, to make sure contributors who wanted to merge their dataset PRs here could do it",
"Alright, merging !"
] |
1,371,600,074
| 4,973
|
[GH->HF] Load datasets from the Hub
|
closed
| 2022-09-13T15:01:41
| 2023-09-24T10:06:02
| 2022-09-15T15:24:26
|
https://github.com/huggingface/datasets/pull/4973
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4973",
"html_url": "https://github.com/huggingface/datasets/pull/4973",
"diff_url": "https://github.com/huggingface/datasets/pull/4973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4973.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate of:\r\n- #4059"
] |
1,371,443,306
| 4,972
|
Fix map batched with torch output
|
closed
| 2022-09-13T13:16:34
| 2022-09-20T09:42:02
| 2022-09-20T09:39:33
|
https://github.com/huggingface/datasets/pull/4972
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4972",
"html_url": "https://github.com/huggingface/datasets/pull/4972",
"diff_url": "https://github.com/huggingface/datasets/pull/4972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4972.patch",
"merged_at": "2022-09-20T09:39:33"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,370,319,516
| 4,971
|
Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified
|
closed
| 2022-09-12T18:08:24
| 2022-09-13T13:51:08
| 2022-09-13T13:48:45
|
https://github.com/huggingface/datasets/pull/4971
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4971",
"html_url": "https://github.com/huggingface/datasets/pull/4971",
"diff_url": "https://github.com/huggingface/datasets/pull/4971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4971.patch",
"merged_at": "2022-09-13T13:48:44"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,369,433,074
| 4,970
|
Support streaming nli_tr dataset
|
closed
| 2022-09-12T07:48:45
| 2022-09-12T08:45:04
| 2022-09-12T08:43:08
|
https://github.com/huggingface/datasets/pull/4970
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4970",
"html_url": "https://github.com/huggingface/datasets/pull/4970",
"diff_url": "https://github.com/huggingface/datasets/pull/4970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4970.patch",
"merged_at": "2022-09-12T08:43:08"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,369,334,740
| 4,969
|
Fix data URL and metadata of vivos dataset
|
closed
| 2022-09-12T06:12:34
| 2022-09-12T07:16:15
| 2022-09-12T07:14:19
|
https://github.com/huggingface/datasets/pull/4969
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4969",
"html_url": "https://github.com/huggingface/datasets/pull/4969",
"diff_url": "https://github.com/huggingface/datasets/pull/4969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4969.patch",
"merged_at": "2022-09-12T07:14:19"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,369,312,877
| 4,968
|
Support streaming compguesswhat dataset
|
closed
| 2022-09-12T05:42:24
| 2022-09-12T08:00:06
| 2022-09-12T07:58:06
|
https://github.com/huggingface/datasets/pull/4968
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"merged_at": "2022-09-12T07:58:06"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,369,092,452
| 4,967
|
Strip "/" in local dataset path to avoid empty dataset name error
|
closed
| 2022-09-11T23:09:16
| 2022-09-29T10:46:21
| 2022-09-12T15:30:38
|
https://github.com/huggingface/datasets/pull/4967
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4967",
"html_url": "https://github.com/huggingface/datasets/pull/4967",
"diff_url": "https://github.com/huggingface/datasets/pull/4967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4967.patch",
"merged_at": "2022-09-12T15:30:38"
}
|
apohllo
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool :-)"
] |
1,368,661,002
| 4,965
|
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
|
closed
| 2022-09-10T15:55:49
| 2024-03-21T17:25:53
| 2023-07-21T14:45:50
|
https://github.com/huggingface/datasets/issues/4965
| null |
hoangtnm
| false
|
[
"Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.",
"Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?",
"Hi @hoangtnm - I upgraded to python 3.10 and it fixed the problem for me. I was also running 3.8 on an M1 mac.",
"Same here, upgrade python didn't work for me \r\n\r\nMemoryError: Cannot allocate write+execute memory for ffi.callback()\r\n\r\nany idea?",
"This is a `soundfile` issue, so there isn't much we can do about it. Hopefully, it gets fixed soon.",
"> Hi @hoangtnm - I upgraded to python 3.10 and it fixed the problem for me. I was also running 3.8 on an M1 mac.\r\n\r\nit work for me too \r\n"
] |
1,368,617,322
| 4,964
|
Column of arrays (2D+) are using unreasonably high memory
|
open
| 2022-09-10T13:07:22
| 2022-09-22T18:29:22
| null |
https://github.com/huggingface/datasets/issues/4964
| null |
vigsterkr
| false
|
[
"note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.",
"Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.",
"Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them with\r\n```python\r\ndataset.save_to_disk(\"path/to/local\")\r\ndataset = load_from_disk(\"path/to/local\")\r\n```\r\nthis way you'll end up with a dataset loaded from your disk using memory mapping, and it won't fill up your RAM :)\r\n\r\nrelated to https://github.com/huggingface/datasets/issues/4861",
"@lhoestq thnx for getting back to me! i've tested the suggested method, but unfortunately the memory consumption is the very same:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Array2D, Array3D, load_from_disk\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\ndataset.save_to_disk(\"foo\")\r\n\r\nfoo_db = load_from_disk(\"foo\")\r\ncolum_value = foo_db[column_name]\r\n```\r\n\r\nthe very same happens when you create the dataset, but dont specify the feature type.\r\n\r\ni've tried running this on different envs (macOS, linux) and it's behaving the very same way.",
"When you call `colum_value = foo_db[column_name]`, you load the full column in memory.\r\n\r\nIf you want to avoid filling up your memory, you can access chunks of data instead\r\n```python\r\nembeddings = dataset[i:i + chunk_size][\"embeddings\"]\r\n```",
"@lhoestq yeah that's intentional, i.e. i really want to load the whole column into the memory. but as said above there's an unreasonable amount of overhead for the memory. the np array itself is using about 1G of memory:\r\n```\r\n>>> getsizeof(data)/1024/1024\r\n937.5001525878906\r\n```\r\nthat accessing of column above is using 10x memory compared to the original numpy array.",
"The dataset must be twice as big because we use regular arrow ListArray under the hood and not FixedSizeListArray. Basically we store unnecessary offsets.\r\n\r\nAnd this should affect performance as well. When we developed this, FixedSizeListArray still had some issues but they should be resolved on the PyArrow side now",
"A doubling would be fine. My very basic understanding of PyArrow is that using ListArray is probably related to the issue though. Using a multi-dimensional array in datasets is storing everything as strange nested 1d object arrays, which I imagine is creating the massive overhead.\r\n\r\nI think it should be a PyArrow Tensor, no?",
"PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694",
"That's... unfortunate. I didn't realize that."
] |
1,368,201,188
| 4,963
|
Dataset without script does not support regular JSON data file
|
closed
| 2022-09-09T18:45:33
| 2022-09-20T15:40:07
| 2022-09-20T15:40:07
|
https://github.com/huggingface/datasets/issues/4963
| null |
julien-c
| false
|
[
"Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "
] |
1,368,155,365
| 4,962
|
Update setup.py
|
closed
| 2022-09-09T17:57:56
| 2022-09-12T14:33:04
| 2022-09-12T14:33:04
|
https://github.com/huggingface/datasets/pull/4962
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4962",
"html_url": "https://github.com/huggingface/datasets/pull/4962",
"diff_url": "https://github.com/huggingface/datasets/pull/4962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4962.patch",
"merged_at": null
}
|
DCNemesis
| true
|
[
"Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https://github.com/huggingface/datasets/issues/4961#issuecomment-1243376247",
"Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue."
] |
1,368,124,033
| 4,961
|
fsspec 2022.8.2 breaks xopen in streaming mode
|
closed
| 2022-09-09T17:26:55
| 2022-09-12T17:45:50
| 2022-09-12T14:32:05
|
https://github.com/huggingface/datasets/issues/4961
| null |
DCNemesis
| false
|
[
"loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.",
"Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.",
"Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n",
"@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ",
"Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.",
"Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010"
] |
1,368,035,159
| 4,960
|
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'
|
open
| 2022-09-09T16:06:43
| 2022-09-13T08:51:03
| null |
https://github.com/huggingface/datasets/issues/4960
| null |
DSLituiev
| false
|
[
"Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps/bioasq_task_b\"`), before it would not even mention that it requires `name` argument",
"Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https://huggingface.co/datasets/aps/bioasq_task_b/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error"
] |
1,367,924,429
| 4,959
|
Fix data URLs of compguesswhat dataset
|
closed
| 2022-09-09T14:36:10
| 2022-09-09T16:01:34
| 2022-09-09T15:59:04
|
https://github.com/huggingface/datasets/pull/4959
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4959",
"html_url": "https://github.com/huggingface/datasets/pull/4959",
"diff_url": "https://github.com/huggingface/datasets/pull/4959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4959.patch",
"merged_at": "2022-09-09T15:59:04"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,367,695,376
| 4,958
|
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
|
closed
| 2022-09-09T11:29:55
| 2022-09-09T11:38:44
| 2022-09-09T11:38:44
|
https://github.com/huggingface/datasets/issues/4958
| null |
hasakikiki
| false
|
[
"I have solved this problem... The extension of the file should be `.json` not `.jsonl`"
] |
1,366,532,849
| 4,957
|
Add `Dataset.from_generator`
|
closed
| 2022-09-08T15:08:25
| 2022-09-16T14:46:35
| 2022-09-16T14:44:18
|
https://github.com/huggingface/datasets/pull/4957
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4957",
"html_url": "https://github.com/huggingface/datasets/pull/4957",
"diff_url": "https://github.com/huggingface/datasets/pull/4957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4957.patch",
"merged_at": "2022-09-16T14:44:18"
}
|
mariosasko
| true
|
[
"I restarted the builder PR job just in case",
"_The documentation is not available anymore as the PR was closed or merged._",
"CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed."
] |
1,366,475,160
| 4,956
|
Fix TF tests for 2.10
|
closed
| 2022-09-08T14:39:10
| 2022-09-08T15:16:51
| 2022-09-08T15:14:44
|
https://github.com/huggingface/datasets/pull/4956
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4956",
"html_url": "https://github.com/huggingface/datasets/pull/4956",
"diff_url": "https://github.com/huggingface/datasets/pull/4956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4956.patch",
"merged_at": "2022-09-08T15:14:44"
}
|
Rocketknight1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,366,382,314
| 4,955
|
Raise a more precise error when the URL is unreachable in streaming mode
|
open
| 2022-09-08T13:52:37
| 2022-09-08T13:53:36
| null |
https://github.com/huggingface/datasets/issues/4955
| null |
severo
| false
|
[] |
1,366,369,682
| 4,954
|
Pin TensorFlow temporarily
|
closed
| 2022-09-08T13:46:15
| 2022-09-08T14:12:33
| 2022-09-08T14:10:03
|
https://github.com/huggingface/datasets/pull/4954
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4954",
"html_url": "https://github.com/huggingface/datasets/pull/4954",
"diff_url": "https://github.com/huggingface/datasets/pull/4954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4954.patch",
"merged_at": "2022-09-08T14:10:03"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |