| id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1,160,154,352 | 3,829 | [📄 Docs] Create a `datasets` performance guide. | open | 2022-03-05T00:28:06 | 2022-03-10T16:24:27 | null | https://github.com/huggingface/datasets/issues/3829 | null | dynamicwebpaige | false | [
"Hi ! Yes this is definitely something we'll explore, since optimizing processing pipelines can be challenging and because performance is key here: we want anyone to be able to play with large-scale datasets more easily.\r\n\r\nI think we'll start by documenting the performance of the dataset transforms we provide, and then we can have some tools to help debugging/optimizing them"
] |
| 1,160,064,029 | 3,828 | The Pile's _FEATURE spec seems to be incorrect | closed | 2022-03-04T21:25:32 | 2022-03-08T09:30:49 | 2022-03-08T09:30:48 | https://github.com/huggingface/datasets/issues/3828 | null | dlwh | false | [
"Hi @dlwh, thanks for reporting.\r\n\r\nPlease note, that the source data files for \"all\" config are different from the other configurations.\r\n\r\nThe \"all\" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/\r\nAll data examples contain a \"meta\" dict with a single \"pile_set_name\" key:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ds = load_dataset(\"the_pile\", \"all\", split=\"train\", streaming=True)\r\n item = next(iter(ds))\r\nDownloading builder script: 9.09kB [00:00, 4.42MB/s]\r\n\r\nIn [3]: item[\"meta\"]\r\nOut[3]: {'pile_set_name': 'Pile-CC'}\r\n```\r\n\r\nOn the other hand, all the other subset configs data files come from the Pile preliminary components directory: https://mystic.the-eye.eu/public/AI/pile_preliminary_components/\r\nFor theses components, the \"meta\" field may have different keys depending on the subset: \"id\", \"language\", \"pmid\",... Because of that, if we had kept the `dict` data format for the \"meta\" field, we would have an error when trying to concatenate different subsets, whose \"meta\" keys are not identical. In order to avoid that, the \"meta\" field is cast to `str` in all these cases, so that there is no incompatibility in their \"meta\" data type when concatenating.\r\n\r\nYou can check, for example, that for \"pubmed_central\" the \"meta\" field is cast to `str`:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ds = load_dataset(\"the_pile\", \"pubmed_central\", split=\"train\", streaming=True)\r\n item = next(iter(ds))\r\n\r\nIn [5]: item[\"meta\"]\r\nOut[5]: \"{'id': 'PMC6071596'}\"\r\n```\r\n\r\nFeel free to reopen this issue if you have further questions. "
] |
| 1,159,878,436 | 3,827 | Remove deprecated `remove_columns` param in `filter` | closed | 2022-03-04T17:23:26 | 2022-03-07T12:37:52 | 2022-03-07T12:37:51 | https://github.com/huggingface/datasets/pull/3827 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3827",
"html_url": "https://github.com/huggingface/datasets/pull/3827",
"diff_url": "https://github.com/huggingface/datasets/pull/3827.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3827.patch",
"merged_at": "2022-03-07T12:37:51"
}
| mariosasko | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3827). All of your documentation changes will be reflected on that endpoint."
] |
| 1,159,851,110 | 3,826 | Add IterableDataset.filter | closed | 2022-03-04T16:57:23 | 2022-03-09T17:23:13 | 2022-03-09T17:23:11 | https://github.com/huggingface/datasets/pull/3826 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3826",
"html_url": "https://github.com/huggingface/datasets/pull/3826",
"diff_url": "https://github.com/huggingface/datasets/pull/3826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3826.patch",
"merged_at": "2022-03-09T17:23:11"
}
| lhoestq | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3826). All of your documentation changes will be reflected on that endpoint.",
"Indeed ! If `batch_size` is `None` or `<=0` then the full dataset should be passed. It's been mentioned in the docs for a while but never actually implemented. We can fix that later"
] |
| 1,159,802,345 | 3,825 | Update version and date in Wikipedia dataset | closed | 2022-03-04T16:05:27 | 2022-03-04T17:24:37 | 2022-03-04T17:24:36 | https://github.com/huggingface/datasets/pull/3825 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3825",
"html_url": "https://github.com/huggingface/datasets/pull/3825",
"diff_url": "https://github.com/huggingface/datasets/pull/3825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3825.patch",
"merged_at": "2022-03-04T17:24:36"
}
| albertvillanova | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint."
] |
| 1,159,574,186 | 3,824 | Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute` | closed | 2022-03-04T12:04:40 | 2022-03-04T18:04:22 | 2022-03-04T18:04:21 | https://github.com/huggingface/datasets/pull/3824 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3824",
"html_url": "https://github.com/huggingface/datasets/pull/3824",
"diff_url": "https://github.com/huggingface/datasets/pull/3824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3824.patch",
"merged_at": "2022-03-04T18:04:21"
}
| mariosasko | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3824). All of your documentation changes will be reflected on that endpoint."
] |
| 1,159,497,844 | 3,823 | 500 internal server error when trying to open a dataset composed of Zarr stores | closed | 2022-03-04T10:37:14 | 2022-03-08T09:47:39 | 2022-03-08T09:47:39 | https://github.com/huggingface/datasets/issues/3823 | null | jacobbieker | false | [
"Hi @jacobbieker, thanks for reporting!\r\n\r\nI have transferred this issue to our Hub team and they are investigating it. I keep you informed. ",
"Hi @jacobbieker, we are investigating this issue on our side and we'll see if we can fix it, but please note that your repo is considered problematic for git. Here are the results of running https://github.com/github/git-sizer on it:\r\n\r\n```\r\nProcessing blobs: 147448 \r\nProcessing trees: 27 \r\nProcessing commits: 4 \r\nMatching commits to trees: 4 \r\nProcessing annotated tags: 0 \r\nProcessing references: 3 \r\n| Name | Value | Level of concern |\r\n| ---------------------------- | --------- | ------------------------------ |\r\n| Biggest objects | | |\r\n| * Trees | | |\r\n| * Maximum entries [1] | 167 k | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |\r\n| | | |\r\n| Biggest checkouts | | |\r\n| * Number of files [2] | 189 k | *** |\r\n\r\n[1] aa057d2667c34c70c6146efc631f5c9917ff326e (refs/heads/main:2016.zarr/unknown)\r\n[2] 6897b7bf6440fdd16b2c39d08085a669e7eaa59d (refs/heads/main^{tree})\r\n```\r\n\r\nYou can check https://github.com/github/git-sizer for more information on how to avoid such pathological structures.",
"Hi, thanks for getting back to me so quick! And yeah, I figured that was probably the problem. I was going to try to delete the repo, but couldn't through the website, so if that's the easiest way to solve it, I can regenerate the dataset in a different format with less tiny files, and you guys can delete the repo as it is. Zarr just saves everything as lots of small files to make chunks easy to load, which is why I was preferring that format, but maybne that just doesn't work well for HF datasets.",
"Hi @jacobbieker,\r\n\r\nFor future use cases, our Hub team is still pondering whether to limit the maximum number of files per repo to avoid technical issues...\r\n\r\nOn the meantime, they have made a fix and your dataset is working: https://huggingface.co/datasets/openclimatefix/mrms"
] |
| 1,159,395,728 | 3,822 | Add Biwi Kinect Head Pose Database | closed | 2022-03-04T08:48:39 | 2025-04-07T13:04:25 | 2022-06-01T13:00:47 | https://github.com/huggingface/datasets/issues/3822 | null | osanseviero | false | [
"Official dataset location : https://icu.ee.ethz.ch/research/datsets.html\r\nIn the \"Biwi Kinect Head Pose Database\" section, I do not find any information regarding \"Downloading the dataset.\" . Do we mail the authors regarding this ?\r\n\r\nI found the dataset on Kaggle : [Link](https://www.kaggle.com/kmader/biwi-kinect-head-pose-database) , but since 🤗 does not host any of the datasets, this would require the user to provide their Kaggle username and API key to download. \r\n\r\nAny inputs on how we could proceed ? Thank you.\r\n[ Need your inputs here, @lhoestq or @mariosasko ]",
"Hi @dnaveenr! Thanks for tackling this issue. This link should work: https://data.vision.ee.ethz.ch/cvl/gfanelli/kinect_head_pose_db.tgz",
"#self-assign",
"Added in https://github.com/huggingface/datasets/pull/3903, thanks @dnaveenr !",
"@mariosasko Hi,\nthis link [https://data.vision.ee.ethz.ch/cvl/gfanelli/kinect_head_pose_db.tgz]\nseems not working now. Do you have alternatives?",
"It would be cool to find a copy of the data and host it directly in https://huggingface.co/datasets/ETHZurich/biwi_kinect_head_pose",
"> It would be cool to find a copy of the data and host it directly in https://huggingface.co/datasets/ETHZurich/biwi_kinect_head_pose\n\nHi, you mean I can download it by running python script in your link?",
"This dataset repository only contains a script that points to the URL of the file that doesn't exist anymore, so it's not usable anymore either :/",
"> This dataset repository only contains a script that points to the URL of the file that doesn't exist anymore, so it's not usable anymore either :/\n\nThen what did you mean \n\"It would be cool to find a copy of the data and host it directly in https://huggingface.co/datasets/ETHZurich/biwi_kinect_head_pose\"?\nI don't understand the meaning of 'host'",
"If we find a copy of the data we can upload the file in the dataset repository on HF, since the original data is not available anymore"
] |
| 1,159,371,927 | 3,821 | Update Wikipedia dataset | closed | 2022-03-04T08:19:21 | 2022-03-21T12:35:23 | 2022-03-21T12:31:00 | https://github.com/huggingface/datasets/pull/3821 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3821",
"html_url": "https://github.com/huggingface/datasets/pull/3821",
"diff_url": "https://github.com/huggingface/datasets/pull/3821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3821.patch",
"merged_at": "2022-03-21T12:31:00"
}
| albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm starting to generate the pre-processed data for some of the languages (for backward compatibility).\r\n\r\nOnce this merged, we will create the pre-processed data on the Hub under the Wikimedia namespace.",
"All steps have been properly done.\r\n\r\nI'm merging all these commits into master."
] |
| 1,159,106,603 | 3,820 | `pubmed_qa` checksum mismatch | closed | 2022-03-04T00:28:08 | 2022-03-04T09:42:32 | 2022-03-04T09:42:32 | https://github.com/huggingface/datasets/issues/3820 | null | jon-tow | false | [
"Hi @jon-tow, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today.\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] |
| 1,158,848,288 | 3,819 | Fix typo in doc build yml | closed | 2022-03-03T20:08:44 | 2022-03-04T13:07:41 | 2022-03-04T13:07:41 | https://github.com/huggingface/datasets/pull/3819 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3819",
"html_url": "https://github.com/huggingface/datasets/pull/3819",
"diff_url": "https://github.com/huggingface/datasets/pull/3819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3819.patch",
"merged_at": "2022-03-04T13:07:41"
}
| mishig25 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3819). All of your documentation changes will be reflected on that endpoint."
] |
| 1,158,788,545 | 3,818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | closed | 2022-03-03T18:57:54 | 2022-03-04T18:04:21 | 2022-03-04T18:04:21 | https://github.com/huggingface/datasets/issues/3818 | null | lmvasque | false | [
"Hi, thanks for reporting! We can add a `sources: datasets.Value(\"string\")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR?",
"Hi Mario,\r\n\r\nThanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:\r\n```\r\n features=datasets.Features(\r\n {\r\n \"sources\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"predictions\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"references\": datasets.Sequence(datasets.Value(\"string\", id=\"sequence\"), id=\"references\"),\r\n }\r\n ),\r\n```\r\n\r\nBut that only avoids a failure in `encode_batch` in the `add_batch` method:\r\n```\r\n batch = {\"predictions\": predictions, \"references\": references}\r\n batch = self.info.features.encode_batch(batch)\r\n```\r\n\r\nThe real problem is that `add_batch()`, `add()` and `compute()` does not receive a `sources` param:\r\n```\r\ndef add_batch(self, *, predictions=None, references=None):\r\ndef add(self, *, prediction=None, reference=None):\r\ndef compute(self, *, predictions=None, references=None, **kwargs)\r\n```\r\n\r\nAnd then, it fails:\r\n`TypeError: add_batch() got an unexpected keyword argument sources`\r\n\r\nI need this for adding any metric based on SARI or alike, not only for sari.py :)\r\n\r\nLet me know if I understood correctly the proposed solution.\r\n",
"The `Metric` class has been modified recently to support this use-case, but the `add_batch` + `compute` pattern still doesn't work correctly. I'll open a PR."
] |
| 1,158,592,335 | 3,817 | Simplify Common Voice code | closed | 2022-03-03T16:01:21 | 2022-03-04T14:51:48 | 2022-03-04T12:39:23 | https://github.com/huggingface/datasets/pull/3817 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3817",
"html_url": "https://github.com/huggingface/datasets/pull/3817",
"diff_url": "https://github.com/huggingface/datasets/pull/3817.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3817.patch",
"merged_at": "2022-03-04T12:39:23"
}
| lhoestq | true | [
"I think the script looks pretty clean and readable now! cool!\r\n"
] |
| 1,158,589,913 | 3,816 | Doc new UI test workflows2 | closed | 2022-03-03T15:59:14 | 2022-10-04T09:35:53 | 2022-03-03T16:42:15 | https://github.com/huggingface/datasets/pull/3816 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3816",
"html_url": "https://github.com/huggingface/datasets/pull/3816",
"diff_url": "https://github.com/huggingface/datasets/pull/3816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3816.patch",
"merged_at": null
}
| mishig25 | true | [
"<img src=\"https://www.bikevillastravel.com/cms/static/images/loading.gif\" alt=\"Girl in a jacket\" width=\"50\" >"
] |
| 1,158,589,512 | 3,815 | Fix iter_archive getting reset | closed | 2022-03-03T15:58:52 | 2022-03-03T18:06:37 | 2022-03-03T18:06:13 | https://github.com/huggingface/datasets/pull/3815 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3815",
"html_url": "https://github.com/huggingface/datasets/pull/3815",
"diff_url": "https://github.com/huggingface/datasets/pull/3815.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3815.patch",
"merged_at": "2022-03-03T18:06:13"
}
| lhoestq | true | [] |
| 1,158,518,995 | 3,814 | Handle Nones in PyArrow struct | closed | 2022-03-03T15:03:35 | 2022-03-03T16:37:44 | 2022-03-03T16:37:43 | https://github.com/huggingface/datasets/pull/3814 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3814",
"html_url": "https://github.com/huggingface/datasets/pull/3814",
"diff_url": "https://github.com/huggingface/datasets/pull/3814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3814.patch",
"merged_at": "2022-03-03T16:37:43"
}
| mariosasko | true | [
"Looks like I added my comments while you were editing - sorry about that"
] |
| 1,158,474,859 | 3,813 | Add MetaShift dataset | closed | 2022-03-03T14:26:45 | 2022-04-10T13:39:59 | 2022-04-10T13:39:59 | https://github.com/huggingface/datasets/issues/3813 | null | osanseviero | false | [
"I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.",
"#self-assign",
"I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_metashift_dataset/datasets/metashift/metashift.py)\r\n1. The dataset does not have a typical - train/test/val split. What do we do for the _split_generators() function ? How do we go about this ?\r\n2. This dataset builds on the Visual Genome dataset, using a metadata file. The dataset is generated using generate_full_MetaShift.py script. By default, the authors choose to generate the dataset only for a SELECTED_CLASSES. The following script is used : \r\nCode : https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/generate_full_MetaShift.py \r\nInfo : https://metashift.readthedocs.io/en/latest/sub_pages/download_MetaShift.html#generate-the-full-metashift-dataset\r\nCan I just copy over the required functions into the metashift.py to generate the dataset ?\r\n3. How do we complete the _generate_examples for this dataset ?\r\n\r\nThe user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nInputs, suggestions would be helpful. Thank you.",
"I think @mariosasko and @lhoestq should be able to help here 😄 ",
"Hi ! Thanks for adding this dataset :) Let me answer your questions:\r\n\r\n1. in this case you can put everything in the \"train\" split\r\n2. Yes you can copy the script (provided you also include the MIT license of the code in the file header for example). Though we ideally try to not create new directories nor files when generating dataset, so if possible this script should be adapted to not create the file structure they mentioned, but instead yield the images one by one in `_generate_examples`. Let me know if you think this is feasible\r\n3. see point 2 haha\r\n\r\n> The user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nYup ! We can also define a `selected_classes` parameter such that users can do\r\n```python\r\nload_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...])\r\n```",
"Great. This is helpful. Thanks @lhoestq .\r\nRegarding Point 2, I'll try using yield instead of creating the directories and see if its feasible. selected_classes config sounds good.",
"Closed via #3900 "
] |
| 1,158,369,995 | 3,812 | benchmark streaming speed with tar vs zip archives | closed | 2022-03-03T12:48:41 | 2022-03-03T14:55:34 | 2022-03-03T14:55:33 | https://github.com/huggingface/datasets/pull/3812 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3812",
"html_url": "https://github.com/huggingface/datasets/pull/3812",
"diff_url": "https://github.com/huggingface/datasets/pull/3812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3812.patch",
"merged_at": null
}
| polinaeterna | true | [
"I'm closing the PR since we're not going to merge it"
] |
| 1,158,234,407 | 3,811 | Update dev doc gh workflows | closed | 2022-03-03T10:29:01 | 2022-10-04T09:35:54 | 2022-03-03T10:45:54 | https://github.com/huggingface/datasets/pull/3811 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3811",
"html_url": "https://github.com/huggingface/datasets/pull/3811",
"diff_url": "https://github.com/huggingface/datasets/pull/3811.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3811.patch",
"merged_at": null
}
| mishig25 | true | [] |
| 1,158,202,093 | 3,810 | Update version of xcopa dataset | closed | 2022-03-03T09:58:25 | 2022-03-03T10:44:30 | 2022-03-03T10:44:29 | https://github.com/huggingface/datasets/pull/3810 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3810",
"html_url": "https://github.com/huggingface/datasets/pull/3810",
"diff_url": "https://github.com/huggingface/datasets/pull/3810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3810.patch",
"merged_at": "2022-03-03T10:44:29"
}
| albertvillanova | true | [] |
| 1,158,143,480 | 3,809 | Checksums didn't match for datasets on Google Drive | closed | 2022-03-03T09:01:10 | 2022-03-03T09:24:58 | 2022-03-03T09:24:05 | https://github.com/huggingface/datasets/issues/3809 | null | muelletm | false | [
"Hi @muelletm, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nUntil our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] |
| 1,157,650,043 | 3,808 | Pre-Processing Cache Fails when using a Factory pattern | closed | 2022-03-02T20:18:43 | 2022-03-10T23:01:47 | 2022-03-10T23:01:47 | https://github.com/huggingface/datasets/issues/3808 | null | Helw150 | false | [
"Ok - this is still an issue but I believe the root cause is different than I originally thought. I'm now able to get caching to work consistently with the above example as long as I fix the python hash seed `export PYTHONHASHSEED=1234`",
"Hi! \r\n\r\nYes, our hasher should work with decorators. For instance, this dummy example:\r\n```python\r\ndef f(arg):\r\n def f1(ex):\r\n return {\"a\": ex[\"col1\"] + arg}\r\n return f1\r\n```\r\ngives the same hash across different Python sessions (`datasets.fingerprint.Hasher.hash(f(\"string1\")` returns `\"408c9059f89dbd6c\"` on my machine).\r\n\r\nCould you please make the example self-contained? This way, we can reproduce the bug. Additionally, you can try to find the problematic object yourself by testing their hash with `datasets.fingerprint.Hasher.hash(obj)`\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/3638.",
"#3638 was indeed my issue. Thanks!"
] |
| 1,157,531,812 | 3,807 | NonMatchingChecksumError in xcopa dataset | closed | 2022-03-02T18:10:19 | 2022-05-20T06:00:42 | 2022-03-03T17:40:31 | https://github.com/huggingface/datasets/issues/3807 | null | afcruzs-ms | false | [
"@albertvillanova here's a separate issue for a bug similar to #3792",
"Hi @afcruzs-ms, thanks for opening this separate issue for your problem.\r\n\r\nThe root problem in the other issue (#3792) was a change in the service of Google Drive.\r\n\r\nBut in your case, the `xcopa` dataset is not hosted on Google Drive. Therefore, the root cause should be a different one.\r\n\r\nLet me look at it... ",
"@afcruzs-ms, I'm not able to reproduce the issue you reported:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"xcopa\", \"it\")\r\nDownloading builder script: 5.21kB [00:00, 2.75MB/s] \r\nDownloading metadata: 28.6kB [00:00, 14.5MB/s] \r\nDownloading and preparing dataset xcopa/it (download: 627.09 KiB, generated: 76.43 KiB, post-processed: Unknown size, total: 703.52 KiB) to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6...\r\nDownloading data: 642kB [00:00, 5.42MB/s]\r\nDataset xcopa downloaded and prepared to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 733.27it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n test: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 500\r\n })\r\n validation: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 100\r\n })\r\n})\r\n```\r\n\r\nMaybe you have some issue with your cached data... Could you please try to force the redownload of the data?\r\n```python\r\ndataset = load_dataset(\"xcopa\", \"it\", download_mode=\"force_redownload\")\r\n```",
"It works indeed, thanks! ",
"unfortunately, i am having a similar problem with the irc_disentaglement dataset :/\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\n\r\nhowever, it produces the same error as @afcruzs-ms \r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\n\r\nI attempted to use the `ignore_verifications' as such:\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\n```\r\n```\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗",
"Thanks @labouz for reporting: yes, better opening a new GitHub issue as you did. I'm addressing it:\r\n- #4376"
] |
| 1,157,505,826 | 3,806 | Fix Spanish data file URL in wiki_lingua dataset | closed | 2022-03-02T17:43:42 | 2022-03-03T08:38:17 | 2022-03-03T08:38:16 | https://github.com/huggingface/datasets/pull/3806 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3806",
"html_url": "https://github.com/huggingface/datasets/pull/3806",
"diff_url": "https://github.com/huggingface/datasets/pull/3806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3806.patch",
"merged_at": "2022-03-03T08:38:16"
}
| albertvillanova | true | [] |
| 1,157,454,884 | 3,805 | Remove decode: true for image feature in head_qa | closed | 2022-03-02T16:58:34 | 2022-03-07T12:13:36 | 2022-03-07T12:13:35 | https://github.com/huggingface/datasets/pull/3805 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3805",
"html_url": "https://github.com/huggingface/datasets/pull/3805",
"diff_url": "https://github.com/huggingface/datasets/pull/3805.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3805.patch",
"merged_at": "2022-03-07T12:13:35"
}
| craffel | true | [] |
| 1,157,297,278 | 3,804 | Text builder with custom separator line boundaries | open | 2022-03-02T14:50:16 | 2022-03-16T15:53:59 | null | https://github.com/huggingface/datasets/issues/3804 | null | cronoik | false | [
"Gently pinging @lhoestq",
"Hi ! Interresting :)\r\n\r\nCould you give more details on what kind of separators you would like to use instead ?",
"In my case, I just want to use `\\n` but not `U+2028`.",
"Ok I see, maybe there can be a `sep` parameter to allow users to specify what line/paragraph separator they'd like to use",
"Related to:\r\n- #3729 \r\n- #3910",
"Thanks for requesting this enhancement. We have recently found a somehow related issue with another dataset:\r\n- #3704\r\n\r\nLet me make a PR proposal."
] |
| 1,157,271,679 | 3,803 | Remove deprecated methods/params (preparation for v2.0) | closed | 2022-03-02T14:29:12 | 2022-03-02T14:53:21 | 2022-03-02T14:53:21 | https://github.com/huggingface/datasets/pull/3803 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3803",
"html_url": "https://github.com/huggingface/datasets/pull/3803",
"diff_url": "https://github.com/huggingface/datasets/pull/3803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3803.patch",
"merged_at": "2022-03-02T14:53:21"
}
| mariosasko | true | [] |
| 1,157,009,964 | 3,802 | Release of FairLex dataset | closed | 2022-03-02T10:40:18 | 2022-03-02T15:21:10 | 2022-03-02T15:18:54 | https://github.com/huggingface/datasets/pull/3802 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3802",
"html_url": "https://github.com/huggingface/datasets/pull/3802",
"diff_url": "https://github.com/huggingface/datasets/pull/3802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3802.patch",
"merged_at": null
}
| iliaschalkidis | true | [
"This is awesome ! The dataset card and the dataset script look amazing :)\r\n\r\nI wanted to ask you if you'd be interested to have this dataset under the namespace of you research group at https://huggingface.co/coastalcph ? If yes, then you can actually create a dataset repository under your research group name and upload the files from this PR there",
"Hi @lhoestq,\r\n\r\nYeah, I could do that. I see that people do that a lot of models, but not for datasets. \r\n\r\nIs there any good reason to have it under the organization domain instead of the general domain?\r\n\r\n Thanks!",
"It's nice to have it under your namespace:\r\n- it will appear on your research group page, along with your models\r\n- you can edit or create datasets at any time - you don't need to open PRs on GitHub\r\n\r\nAll the datasets that are not under a namespace are this way because we started adding datasets from GitHub. Now we encourage users to upload them directly to make things simpler, and aligned with the workflow for models\r\n\r\n(the documentation will be updated in the following days)\r\n\r\nNote that we will keep accepting PRs here though when there is no clear namespace under which a dataset should be, or for users that want a review from us",
"Ok, I'll do that. So, I'll just have to upload all the files under the `/fairlex` directory in my PR, right?",
"Yes exactly !",
"Ok, I uploaded most of them from the UI environment (https://huggingface.co/datasets/coastalcph/fairlex). Can I possibly upload the dummy data as well from the UI environment. I really want to avoid the CLI right now 😄 ",
"Yea sure, feel free to use the UI of the website, even for the dummy data ^^",
"Did you upload them yourself? Because I see the data preview, and I'm pretty sure, I didn't do that 😄 ",
"The preview is computed from the real data ;)\r\n\r\nThe dummy data are used for testing only",
"Haha, ok I was shocked! Cool, I close this PR, then. Thanks, again! ",
"Thank you 🤗"
] |
| 1,155,649,279 | 3,801 | [Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters | closed | 2022-03-01T18:06:43 | 2022-03-07T16:30:30 | 2022-03-07T16:30:29 | https://github.com/huggingface/datasets/pull/3801 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3801",
"html_url": "https://github.com/huggingface/datasets/pull/3801",
"diff_url": "https://github.com/huggingface/datasets/pull/3801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3801.patch",
"merged_at": "2022-03-07T16:30:29"
}
| lhoestq | true | [
"Right ! Will add it in another PR :)"
] |
| 1,155,620,761 | 3,800 | Added computer vision tasks | closed | 2022-03-01T17:37:46 | 2022-03-04T07:15:55 | 2022-03-04T07:15:55 | https://github.com/huggingface/datasets/pull/3800 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3800",
"html_url": "https://github.com/huggingface/datasets/pull/3800",
"diff_url": "https://github.com/huggingface/datasets/pull/3800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3800.patch",
"merged_at": "2022-03-04T07:15:55"
}
| merveenoyan | true | [] |
| 1,155,356,102 | 3,799 | Xtreme-S Metrics | closed | 2022-03-01T13:42:28 | 2022-03-16T14:40:29 | 2022-03-16T14:40:26 | https://github.com/huggingface/datasets/pull/3799 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3799",
"html_url": "https://github.com/huggingface/datasets/pull/3799",
"diff_url": "https://github.com/huggingface/datasets/pull/3799.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3799.patch",
"merged_at": "2022-03-16T14:40:26"
}
| patrickvonplaten | true | [
"@lhoestq - if you could take a final review here this would be great (if you have 5min :-) ) ",
"Don't think the failures are related but not 100% sure",
"Yes the CI fail is unrelated - you can ignore it"
] |
| 1,154,411,066 | 3,798 | Fix error message in CSV loader for newer Pandas versions | closed | 2022-02-28T18:24:10 | 2022-02-28T18:51:39 | 2022-02-28T18:51:38 | https://github.com/huggingface/datasets/pull/3798 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3798",
"html_url": "https://github.com/huggingface/datasets/pull/3798",
"diff_url": "https://github.com/huggingface/datasets/pull/3798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3798.patch",
"merged_at": "2022-02-28T18:51:38"
}
| mariosasko | true | [] |
| 1,154,383,063 | 3,797 | Reddit dataset card contribution | closed | 2022-02-28T17:53:18 | 2023-03-09T22:08:58 | 2022-03-01T12:58:57 | https://github.com/huggingface/datasets/pull/3797 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3797",
"html_url": "https://github.com/huggingface/datasets/pull/3797",
"diff_url": "https://github.com/huggingface/datasets/pull/3797.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3797.patch",
"merged_at": "2022-03-01T12:58:56"
}
| anna-kay | true | [] |
| 1,154,298,629 | 3,796 | Skip checksum computation if `ignore_verifications` is `True` | closed | 2022-02-28T16:28:45 | 2022-02-28T17:03:46 | 2022-02-28T17:03:46 | https://github.com/huggingface/datasets/pull/3796 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3796",
"html_url": "https://github.com/huggingface/datasets/pull/3796",
"diff_url": "https://github.com/huggingface/datasets/pull/3796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3796.patch",
"merged_at": "2022-02-28T17:03:46"
}
| mariosasko | true | [] |
| 1,153,261,281 | 3,795 | can not flatten natural_questions dataset | closed | 2022-02-27T13:57:40 | 2022-03-21T14:36:12 | 2022-03-21T14:36:12 | https://github.com/huggingface/datasets/issues/3795 | null | Hannibal046 | false | [
"same issue. downgrade it to a lower version.",
"Thanks for reporting, I'll take a look tomorrow :)"
] |
| 1,153,185,343 | 3,794 | Add Mahalanobis distance metric | closed | 2022-02-27T10:56:31 | 2022-03-02T14:46:15 | 2022-03-02T14:46:15 | https://github.com/huggingface/datasets/pull/3794 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3794",
"html_url": "https://github.com/huggingface/datasets/pull/3794",
"diff_url": "https://github.com/huggingface/datasets/pull/3794.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3794.patch",
"merged_at": "2022-03-02T14:46:14"
}
| JoaoLages | true | [] |
| 1,150,974,950 | 3,793 | Docs new UI actions no self hosted | closed | 2022-02-25T23:48:55 | 2022-03-01T15:55:29 | 2022-03-01T15:55:28 | https://github.com/huggingface/datasets/pull/3793 |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3793",
"html_url": "https://github.com/huggingface/datasets/pull/3793",
"diff_url": "https://github.com/huggingface/datasets/pull/3793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3793.patch",
"merged_at": "2022-03-01T15:55:28"
}
| LysandreJik | true | [
"It seems like the doc can't be compiled right now because of the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 33, in <module>\r\n sys.exit(load_entry_point('doc-builder', 'console_scripts', 'doc-builder')())\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/commands/doc_builder_cli.py\", line 39, in main\r\n args.func(args)\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/commands/build.py\", line 95, in build_command\r\n build_doc(\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/build_doc.py\", line 361, in build_doc\r\n anchors_mapping = build_mdx_files(package, doc_folder, output_dir, page_info)\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/build_doc.py\", line 200, in build_mdx_files\r\n raise type(e)(f\"There was an error when converting {file} to the MDX format.\\n\" + e.args[0]) from e\r\nTypeError: There was an error when converting datasets/docs/source/package_reference/table_classes.mdx to the MDX format.\r\nexpected string or bytes-like object\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3793). All of your documentation changes will be reflected on that endpoint.",
"This is due to the injection of docstrings from PyArrow. I think I can fix that by moving all the docstrings and fix them manually.",
"> It seems like the doc can't be compiled right now because of the following:\r\n\r\nit is expected since there is something I need to change on doc-builder side.\r\n\r\n> This is due to the injection of docstrings from PyArrow. I think I can fix that by moving all the docstrings and fix them manually.\r\n\r\n@lhoestq I will let you know if we need to change it manually.\r\n\r\n@LysandreJik thanks a lot for this PR! I only had one question [here](https://github.com/huggingface/datasets/pull/3793#discussion_r816100194)",
"> @lhoestq I will let you know if we need to change it manually.\r\n\r\nIt would be simpler to change it manually anyway - I don't want our documentation to break if PyArrow has documentation issues",
"For some reason it fails when `Installing node dependencies` when running `npm ci` from the `kit` directory, any idea why @mishig25 ?",
"Checking it rn",
"It's very likely linked to an OOM error: https://github.com/huggingface/transformers/pull/15710#issuecomment-1051737337"
] |
| 1,150,812,404 | 3,792 | Checksums didn't match for dataset source | closed | 2022-02-25T19:55:09 | 2024-03-13T12:25:08 | 2022-02-28T08:44:18 | https://github.com/huggingface/datasets/issues/3792 | null | rafikg | false | [
"Same issue with `dataset = load_dataset(\"dbpedia_14\")`\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']",
"I think this is a side-effect of #3787. The checksums won't match because the URLs have changed. @rafikg @Y0mingZhang, while this is fixed, maybe you can load the datasets as such:\r\n\r\n`data = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", ignore_verifications=True)`\r\n`dataset = load_dataset(\"dbpedia_14\", ignore_verifications=True)`\r\n\r\nThis will, most probably, skip the verifications and integrity checks listed [here](https://huggingface.co/docs/datasets/loading_datasets.html#integrity-verifications)",
"Hi! Installing the `datasets` package from master (`pip install git+https://github.com/huggingface/datasets.git`) and then redownloading the datasets with `download_mode` set to `force_redownload` (e.g. `dataset = load_dataset(\"dbpedia_14\", download_mode=\"force_redownload\")`) should fix the issue.",
"Hi @rafikg and @Y0mingZhang, thanks for reporting.\r\n\r\nIndeed it seems that Google Drive changed their way to access their data files. We have recently handled that change:\r\n- #3787\r\n\r\nbut it will be accessible to users only in our next release of the `datasets` version.\r\n- Note that our latest release (version 1.18.3) was made before this fix: https://github.com/huggingface/datasets/releases/tag/1.18.3\r\n\r\nIn the meantime, as @mariosasko explained, you can incorporate this \"fix\" by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, you should force the redownload of the data (before the fix, you are just downloading/caching the virus scan warning page, instead of the data file):\r\n```shell\r\ndata = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", download_mode=\"force_redownload\")",
"@albertvillanova by running:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\ndata = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", download_mode=\"force_redownload\", ignore_verifications=True)\r\n```\r\n\r\nI had a pickle error **UnpicklingError: invalid load key, '<'** in this part of code both `locally and on google colab`:\r\n\r\n```\r\n\"\"\"Yields examples.\"\"\"\r\nwith open(filepath, \"rb\") as f:\r\n data = pickle.load(f)\r\nfor id_, row in enumerate(data.items()):\r\n yield id_, {\"url\": row[0], \"article\": self._process_article(row[1])}\r\n```\r\n",
"This issue impacts many more datasets than the ones mention in this thread. Can we post # of downloads for each dataset by day (by successes and failures)? If so, it should be obvious which ones are failing.",
"I can see this problem too in xcopa, unfortunately installing the latest master (1.18.4.dev0) doesn't work, @albertvillanova .\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"xcopa\", \"it\")\r\n```\r\n\r\nThrows\r\n\r\n```\r\nin verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/cambridgeltl/xcopa/archive/master.zip']\r\n```",
"Hi @rafikg, I think that is another different issue. Let me check it... \r\n\r\nI guess maybe you are using a different Python version that the one the dataset owner used to create the pickle file...",
"@kwchurch the datasets impacted for this specific issue are the ones which are hosted at Google Drive.",
"@afcruzs-ms I think your issue is a different one, because that dataset is not hosted at Google Drive. Would you mind open another issue for that other problem, please? Thanks! :)",
"@albertvillanova just to let you know that I tried it locally and on colab and it is the same error",
"There are many many datasets on HugggingFace that are receiving this checksum error. Some of these datasets are very popular. There must be a way to track these errors, or to do regression testing. We don't want to catch each of these errors on each dataset, one at a time.",
"@rafikg I am sorry, but I can't reproduce your issue. For me it works OK for all languages. See: https://colab.research.google.com/drive/1yIcLw1it118-TYE3ZlFmV7gJcsF6UCsH?usp=sharing",
"@kwchurch the PR #3787 fixes this issue (generated by a change in Google Drive service) for ALL datasets with this issue. Once we make our next library release (in a couple of days), the fix will be accessible to all users that update our library from PyPI.",
"By the way, @rafikg, I discovered the URL for Spanish was wrong. I've created a PR to fix it:\r\n- #3806 ",
"I have the same problem with \"wider_face\" dataset. It seems that \"load_dataset\" function can not download the dataset from google drive.\r\n",
"still getting this issue with datasets==2.2.2 for \r\ndataset_fever_original_dev = load_dataset('fever', \"v1.0\", split=\"labelled_dev\")\r\n(this one seems to be hosted by aws though)\r\n\r\nupdate: also tried to install from source to get the latest 2.2.3.dev0, but still get the error below (and also force-redownloaded)\r\n\r\nupdate2: Seems like this issues is linked to a change in the links in the specific fever datasets: https://fever.ai/\r\n\"28/04/2022\r\nDataset download URLs have changed\r\nDownload URLs for shared task data for FEVER, FEVER2.0 and FEVEROUS have been updated. New URLS begin with https://fever.ai/download/[task name]/[filename]. All resource pages have been updated with the new URLs. Previous dataset URLs may not work and should be updated if you require these in your scripts. \"\r\n\r\n=> I don't know how to update the links for HF datasets - would be great if someone could update them :) \r\n\r\n```\r\n\r\nDownloading and preparing dataset fever/v1.0 (download: 42.78 MiB, generated: 38.39 MiB, post-processed: Unknown size, total: 81.17 MiB) to /root/.cache/huggingface/datasets/fever/v1.0/1.0.0/956b0a9c4b05e126fd956be73e09da5710992b5c85c30f0e5e1c500bc6051d0a...\r\n\r\nDownloading data files: 100%\r\n6/6 [00:07<00:00, 1.21s/it]\r\nDownloading data:\r\n278/? [00:00<00:00, 2.34kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 1.53kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 7.43kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 5.54kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 6.19kB/s]\r\nDownloading data:\r\n278/? [00:00<00:00, 7.51kB/s]\r\nExtracting data files: 100%\r\n6/6 [00:00<00:00, 108.05it/s]\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n\r\n[<ipython-input-20-92ec5c728ecf>](https://localhost:8080/#) in <module>()\r\n 27 # get labels for fever-nli-dev from original fever - only works for dev\r\n 28 # \"(The labels for both dev and test are hidden but you can retrieve the label for dev using the cid and the original FEVER data.)\"\" https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md\r\n---> 29 dataset_fever_original_dev = load_dataset('fever', \"v1.0\", split=\"labelled_dev\")\r\n 30 df_fever_original_dev = pd.DataFrame(data={\"id\": dataset_fever_original_dev[\"id\"], \"label\": dataset_fever_original_dev[\"label\"], \"claim\": dataset_fever_original_dev[\"claim\"], \"evidence_id\": dataset_fever_original_dev[\"evidence_id\"]})\r\n 31 df_fever_dev = pd.merge(df_fever_dev, df_fever_original_dev, how=\"left\", left_on=\"cid\", right_on=\"id\")\r\n\r\n4 frames\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 
'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl']\r\n```\r\n",
"I think this has to be fixed on the google drive side, but you also have to delete the bad stuff from your local cache. This is not a great design, but it is what it is.",
"We have fixed the issues with the datasets:\r\n- wider_face: by hosting their data files on the HuggingFace Hub (CC: @HosseynGT)\r\n- fever: by updating to their new data URLs (CC: @MoritzLaurer)",
"The yelp_review_full datasets has this problem as well and can't be fixed with the suggestion.",
"This is a super-common failure mode. We really need to find a better workaround. My solution was to wait until the owner of the dataset in question did the right thing, and then I had to delete my cached versions of the datasets with the bad checksums. I don't understand why this happens. Would it be possible to maintain a copy of the most recent version that was known to work, and roll back to that automatically if the checksums fail? And if the checksums fail, couldn't the system automatically flush the cached versions with the bad checksums? It feels like we are blaming the provider of the dataset, when in fact, there are things that the system could do to ease the pain. Let's take these error messages seriously. There are too many of them involving too many different datasets.",
"the [exams](https://huggingface.co/datasets/exams) dataset also has this issue and the provided fix above doesn't work",
"Same for [DART dataset](https://huggingface.co/datasets/dart):\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-train.json', 'https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-dev.json', 'https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-test.json']\r\n```",
"same for multi_news dataset",
"- @thesofakillers the issue with `exams` was fixed on 16 Aug by this PR:\r\n - #4853\r\n- @Aktsvigun the issue with `dart` has been transferred to the Hub: https://huggingface.co/datasets/dart/discussions/1\r\n - and fixed by PR: https://huggingface.co/datasets/dart/discussions/2\r\n- @Carol-gutianle the issue with `multi_news` have been transferred to the Hub as well: https://huggingface.co/datasets/multi_news/discussions/1\r\n - not reproducible: maybe you should try to update `datasets`\r\n\r\nFor information to everybody, we are removing the checksum verifications (that were creating a bad user experience). This will be in place in the following weeks.",
"auto_gptq is required for real quantization\r\n['/home/sam/Doctorproject/OmniQuant-main/main.py', '--model', '/home/sam/Doctorproject/OmniQuant-main/PATH/TO/LLaMA/llama-7b/', '--epochs', '20', '--output_dir', '/home/sam/Doctorproject/OmniQuant-main/outdir/llama-7b-w3a16/', '--eval_ppl', '--wbits', '3', '--abits', '16', '--lwc', '--net', 'llama-7b', '--aug_loss']\r\n[2024-03-13 17:58:48 root](main.py 262): INFO Namespace(model='/home/sam/Doctorproject/OmniQuant-main/PATH/TO/LLaMA/llama-7b/', cache_dir='./cache', output_dir='/home/sam/Doctorproject/OmniQuant-main/outdir/llama-7b-w3a16/', save_dir=None, resume=None, real_quant=False, calib_dataset='wikitext2', nsamples=128, batch_size=1, seed=2, tasks='', eval_ppl=True, num_fewshot=0, wbits=3, abits=16, group_size=None, alpha=0.5, let_lr=0.005, lwc_lr=0.01, wd=0, epochs=20, let=False, lwc=True, aug_loss=True, symmetric=False, disable_zero_point=False, a_dynamic_method='per_token', w_dynamic_method='per_channel', limit=-1, multigpu=False, deactive_amp=False, attn_implementation='eager', net='llama-7b', act_scales=None, act_shifts=None)\r\nLoading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s]/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\nLoading checkpoint shards: 100%|██████████| 33/33 [00:11<00:00, 2.98it/s]\r\nvocab size: 32000\r\n[2024-03-13 17:58:59 root](main.py 331): INFO === start quantization ===\r\nget_wikitext2\r\n[2024-03-13 18:02:20 datasets.load](load.py 1586): WARNING Using the latest cached version of the module from /home/sam/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 (last modified on Wed Mar 13 16:54:26 2024) since it couldn't be found locally at wikitext, or remotely on the Hugging Face Hub.\r\nUsing the latest cached version of the module from /home/sam/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126 (last modified on Wed Mar 13 16:54:26 2024) since it couldn't be found locally at wikitext, or remotely on the Hugging Face Hub.\r\nDownloading data: 243B [00:00, 877kB/s]\r\nGenerating test split: 0%| | 0/4358 [00:00<?, ? 
examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1742, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/home/sam/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/wikitext.py\", line 187, in _generate_examples\r\n with open(data_file, encoding=\"utf-8\") as f:\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/streaming.py\", line 75, in wrapper\r\n return function(*args, download_config=download_config, **kwargs)\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 507, in xopen\r\n return open(main_hop, mode, *args, **kwargs)\r\nNotADirectoryError: [Errno 20] Not a directory: '/home/sam/.cache/huggingface/datasets/downloads/94be2a7b3fff32ae7379658c8d3821035b666baddad3a06d29b55ab3a4ab3115/wikitext-2-raw/wiki.test.raw'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/sam/Doctorproject/OmniQuant-main/main.py\", line 382, in <module>\r\n main()\r\n File \"/home/sam/Doctorproject/OmniQuant-main/main.py\", line 339, in main\r\n dataloader, _ = get_loaders(\r\n File \"/home/sam/Doctorproject/OmniQuant-main/datautils.py\", line 178, in get_loaders\r\n return get_wikitext2(nsamples, seed, seqlen, model)\r\n File \"/home/sam/Doctorproject/OmniQuant-main/datautils.py\", line 37, in get_wikitext2\r\n traindata = load_dataset(path='wikitext', name='wikitext-2-raw-v1', split='train', download_mode=\"force_redownload\")\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/load.py\", line 2598, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1021, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1783, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1116, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1621, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/sam/anaconda3/envs/omniquant/lib/python3.10/site-packages/datasets/builder.py\", line 1778, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n\r\n\r\n@albertvillanova @Y0mingZhang @kwchurch @HosseynGT @rafikg I tried the solutions you provided above, but none of them worked. Could you please give me some guidance\r\n"
] |
1,150,733,475
| 3,791
|
Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem
|
closed
| 2022-02-25T18:26:35
| 2022-03-01T13:10:43
| 2022-03-01T13:10:42
|
https://github.com/huggingface/datasets/pull/3791
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3791",
"html_url": "https://github.com/huggingface/datasets/pull/3791",
"diff_url": "https://github.com/huggingface/datasets/pull/3791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3791.patch",
"merged_at": "2022-03-01T13:10:42"
}
|
mariosasko
| true
|
[] |
1,150,646,899
| 3,790
|
Add doc builder scripts
|
closed
| 2022-02-25T16:38:47
| 2022-03-01T15:55:42
| 2022-03-01T15:55:41
|
https://github.com/huggingface/datasets/pull/3790
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3790",
"html_url": "https://github.com/huggingface/datasets/pull/3790",
"diff_url": "https://github.com/huggingface/datasets/pull/3790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3790.patch",
"merged_at": "2022-03-01T15:55:41"
}
|
lhoestq
| true
|
[
"I think we're only missing the hosted runner to be configured for this repository and we should be good",
"Regarding the self-hosted runner, I actually encourage using the approach defined here: https://github.com/huggingface/transformers/pull/15710, which doesn't leverage a self-hosted runner. This prevents queuing jobs, which is important when we expect several concurrent jobs.",
"Opened a PR for that on your branch here: https://github.com/huggingface/datasets/pull/3793"
] |
1,150,587,404
| 3,789
|
Add URL and ID fields to Wikipedia dataset
|
closed
| 2022-02-25T15:34:37
| 2022-03-04T08:24:24
| 2022-03-04T08:24:23
|
https://github.com/huggingface/datasets/pull/3789
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3789",
"html_url": "https://github.com/huggingface/datasets/pull/3789",
"diff_url": "https://github.com/huggingface/datasets/pull/3789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3789.patch",
"merged_at": "2022-03-04T08:24:23"
}
|
albertvillanova
| true
|
[
"Do you think we have a dedicated branch for all the changes we want to do to wikipedia ? Then once everything looks good + we have preprocessed the main languages, we can merge it on the `master` branch",
"Yes, @lhoestq, I agree with you.\r\n\r\nI have just created the dedicated branch [`update-wikipedia`](https://github.com/huggingface/datasets/tree/update-wikipedia). We can merge every PR (once validated) to that branch; once all changes are merged to that branch, we could create the preprocessed datasets and then merge the branch to master. ",
"@lhoestq I guess you approve this PR?"
] |
1,150,375,720
| 3,788
|
Only-data dataset loaded unexpectedly as validation split
|
open
| 2022-02-25T12:11:39
| 2022-02-28T11:22:22
| null |
https://github.com/huggingface/datasets/issues/3788
| null |
albertvillanova
| false
|
[
"I see two options:\r\n1. drop the \"dev\" keyword since it can be considered too generic\r\n2. improve the pattern to something more reasonable, e.g. asking for a separator before and after \"dev\"\r\n```python\r\n[\"*[ ._-]dev[ ._-]*\", \"dev[ ._-]*\"]\r\n```\r\n\r\nI think 2. is nice. If we agree on this one we can even decide to require the separation for the other split keywords \"train\", \"test\" etc.",
"Yes, I had something like that on mind: \"dev\" not being part of a word.\r\n```\r\n\"[^a-zA-Z]dev[^a-zA-Z]\"",
"Is there a reason why we want that regex? It feels like something that'll still be an issue for some weird case. \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?",
"The regex is needed as part of our effort to make datasets configurable without code. In particular we define some generic dataset repository structures that users can follow\r\n\r\n> ```\r\n> \"[^a-zA-Z]*dev[^a-zA-Z]*\"\r\n> ```\r\n\r\nunfortunately our glob doesn't support \"^\": \r\n\r\nhttps://github.com/fsspec/filesystem_spec/blob/3e739db7e53f5b408319dcc9d11e92bc1f938902/fsspec/spec.py#L465-L479",
"> \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?\r\n\r\nAnd `my_dataset_dev.foo` would match the pattern, and we also have the same pattern but for the \"validation\" keyword so `my_dataset_validation.foo` would work too",
"> The regex is needed as part of our effort to make datasets configurable without code\r\n\r\nThis feels like coding with the filename ^^'",
"This is still much easier than having to write a full dataset script right ? :p"
] |
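The thread above (#3788) proposes separator-based glob patterns for the "dev" split keyword. Below is a minimal sketch of how such patterns behave, using Python's `fnmatch` (the same glob dialect that `fsspec` builds on); the file names are illustrative, not from any real repository.

```python
from fnmatch import fnmatch

# Patterns proposed in the thread: "dev" must be delimited by a separator.
DEV_PATTERNS = ["*[ ._-]dev[ ._-]*", "dev[ ._-]*"]

def matches_dev_split(filename: str) -> bool:
    return any(fnmatch(filename, pattern) for pattern in DEV_PATTERNS)

print(matches_dev_split("my_dataset_dev.foo"))  # True: "dev" is delimited by "_" and "."
print(matches_dev_split("my_dataset_dev"))      # False: no separator after "dev"
print(matches_dev_split("devoted_fans.csv"))    # False: "dev" is part of a word
```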
1,150,235,569
| 3,787
|
Fix Google Drive URL to avoid Virus scan warning
|
closed
| 2022-02-25T09:35:12
| 2022-03-04T20:43:32
| 2022-02-25T11:56:35
|
https://github.com/huggingface/datasets/pull/3787
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3787",
"html_url": "https://github.com/huggingface/datasets/pull/3787",
"diff_url": "https://github.com/huggingface/datasets/pull/3787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3787.patch",
"merged_at": "2022-02-25T11:56:35"
}
|
albertvillanova
| true
|
[
"Thanks for this @albertvillanova!",
"Once this PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```",
"Thanks, that solved a bunch of problems we had downstream!\r\ncf. https://github.com/ElementAI/picard/issues/61"
] |
1,150,233,067
| 3,786
|
Bug downloading Virus scan warning page from Google Drive URLs
|
closed
| 2022-02-25T09:32:23
| 2022-03-03T09:25:59
| 2022-02-25T11:56:35
|
https://github.com/huggingface/datasets/issues/3786
| null |
albertvillanova
| false
|
[
"Once the PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] |
1,150,069,801
| 3,785
|
Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset)
|
closed
| 2022-02-25T05:48:57
| 2022-03-03T16:43:47
| 2022-03-03T14:03:37
|
https://github.com/huggingface/datasets/pull/3785
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3785",
"html_url": "https://github.com/huggingface/datasets/pull/3785",
"diff_url": "https://github.com/huggingface/datasets/pull/3785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3785.patch",
"merged_at": null
}
|
AngadSethi
| true
|
[
"Thank you, @albertvillanova!",
"Got it. Thanks for explaining this, @albertvillanova!\r\n\r\n> On the other hand, the tests are not passing because the dummy data should also be fixed. Once done, this PR will be able to be merged into master.\r\n\r\nWill do this 👍",
"Hi ! I think we need to fix the issue for every dataset. This can be done simply by fixing how we handle Google Drive links, see my comment https://github.com/huggingface/datasets/pull/3775#issuecomment-1050970157",
"Hi @lhoestq! I think @albertvillanova has already fixed this in #3787",
"Cool ! I missed this one :) thanks",
"No problem!",
"Hi, @AngadSethi, I think that once:\r\n- #3787 \r\n\r\nwas merged, issue:\r\n- #3784 \r\n\r\nwas also fixed.\r\n\r\nTherefore, I think this PR is no longer necessary. I'm closing it. Let me know if you agree.",
"Yes, absolutely @albertvillanova! I agree :)"
] |
1,150,057,955
| 3,784
|
Unable to Download CNN-Dailymail Dataset
|
closed
| 2022-02-25T05:24:47
| 2022-03-03T14:05:17
| 2022-03-03T14:05:17
|
https://github.com/huggingface/datasets/issues/3784
| null |
AngadSethi
| false
|
[
"#self-assign",
"@AngadSethi thanks for reporting and thanks for your PR!",
"Glad to help @albertvillanova! Just fine-tuning the PR, will comment once I am able to get it up and running 😀",
"Fixed by:\r\n- #3787"
] |
1,149,256,744
| 3,783
|
Support passing str to iter_files
|
closed
| 2022-02-24T12:58:15
| 2022-02-24T16:01:40
| 2022-02-24T16:01:40
|
https://github.com/huggingface/datasets/pull/3783
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3783",
"html_url": "https://github.com/huggingface/datasets/pull/3783",
"diff_url": "https://github.com/huggingface/datasets/pull/3783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3783.patch",
"merged_at": "2022-02-24T16:01:39"
}
|
albertvillanova
| true
|
[
"@mariosasko it was indeed while reading that PR, that I remembered this change I wanted to do long ago... 😉"
] |
1,148,994,022
| 3,782
|
Error of writing with different schema, due to nonpreservation of nullability
|
closed
| 2022-02-24T08:23:07
| 2022-03-03T14:54:39
| 2022-03-03T14:54:39
|
https://github.com/huggingface/datasets/pull/3782
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3782",
"html_url": "https://github.com/huggingface/datasets/pull/3782",
"diff_url": "https://github.com/huggingface/datasets/pull/3782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3782.patch",
"merged_at": "2022-03-03T14:54:39"
}
|
richarddwang
| true
|
[
"Hi ! Thanks for reporting, indeed `disable_nullable` doesn't seem to be supported in this case. Maybe at one point we can have `disable_nullable` as a parameter of certain feature types"
] |
1,148,599,680
| 3,781
|
Reddit dataset card additions
|
closed
| 2022-02-23T21:29:16
| 2022-02-28T18:00:40
| 2022-02-28T11:21:14
|
https://github.com/huggingface/datasets/pull/3781
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3781",
"html_url": "https://github.com/huggingface/datasets/pull/3781",
"diff_url": "https://github.com/huggingface/datasets/pull/3781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3781.patch",
"merged_at": "2022-02-28T11:21:14"
}
|
anna-kay
| true
|
[
"Hello! I added the tags and created a PR. Just to note, regarding the paperswithcode_id tag, that currently has the value \"reddit\"; the dataset described as reddit in paperswithcode is https://paperswithcode.com/dataset/reddit and it isn't the Webis-tldr-17. I could not find Webis-tldr-17 in paperswithcode neither in the Summarization category nor using the keywords reddit, webis, & tldr. I didn't change this tag."
] |
1,148,186,272
| 3,780
|
Add ElkarHizketak v1.0 dataset
|
closed
| 2022-02-23T14:44:17
| 2022-03-04T19:04:29
| 2022-03-04T19:04:29
|
https://github.com/huggingface/datasets/pull/3780
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3780",
"html_url": "https://github.com/huggingface/datasets/pull/3780",
"diff_url": "https://github.com/huggingface/datasets/pull/3780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3780.patch",
"merged_at": "2022-03-04T19:04:29"
}
|
antxa
| true
|
[
"I also filled some missing sections in the dataset card"
] |
1,148,050,636
| 3,779
|
Update manual download URL in newsroom dataset
|
closed
| 2022-02-23T12:49:07
| 2022-02-23T13:26:41
| 2022-02-23T13:26:40
|
https://github.com/huggingface/datasets/pull/3779
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3779",
"html_url": "https://github.com/huggingface/datasets/pull/3779",
"diff_url": "https://github.com/huggingface/datasets/pull/3779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3779.patch",
"merged_at": "2022-02-23T13:26:40"
}
|
albertvillanova
| true
|
[] |
1,147,898,946
| 3,778
|
Not be able to download dataset - "Newsroom"
|
closed
| 2022-02-23T10:15:50
| 2022-02-23T17:05:04
| 2022-02-23T13:26:40
|
https://github.com/huggingface/datasets/issues/3778
| null |
Darshan2104
| false
|
[
"Hi @Darshan2104, thanks for reporting.\r\n\r\nPlease note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.\r\n\r\nApparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.nlp.cornell.edu/newsroom/index.html\r\n- Download page: https://lil.nlp.cornell.edu/newsroom/download/index.html\r\n\r\nI'm fixing the link in our Datasets library.",
"@albertvillanova Thanks for the solution and link you made my day!"
] |
1,147,232,875
| 3,777
|
Start removing canonical datasets logic
|
closed
| 2022-02-22T18:23:30
| 2022-02-24T15:04:37
| 2022-02-24T15:04:36
|
https://github.com/huggingface/datasets/pull/3777
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3777",
"html_url": "https://github.com/huggingface/datasets/pull/3777",
"diff_url": "https://github.com/huggingface/datasets/pull/3777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3777.patch",
"merged_at": "2022-02-24T15:04:36"
}
|
lhoestq
| true
|
[
"I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?",
"> I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?\r\n\r\nI added an explanation, let me know if it sounds good to you:\r\n\r\n```\r\nDatasets used to be hosted on our GitHub repository, but all datasets have now been migrated to the Hugging Face Hub.\r\nThe legacy GitHub datasets were added originally on our GitHub repository and therefore don't have a namespace: \"squad\", \"glue\", etc. unlike the other datasets that are named \"username/dataset_name\" or \"org/dataset_name\".\r\n```\r\n",
"Thanks for the feedbacks ! Merging this now - if you have some comments I can take care of them in a subsequent PR\r\n\r\nI'll also take care of resolving the conflicts with https://github.com/huggingface/datasets/pull/3690"
] |
1,146,932,871
| 3,776
|
Allow download only some files from the Wikipedia dataset
|
open
| 2022-02-22T13:46:41
| 2022-02-22T14:50:02
| null |
https://github.com/huggingface/datasets/issues/3776
| null |
jvanz
| false
|
[
"Hi @jvanz, thank you for your proposal.\r\n\r\nIn fact, we are aware that it is very common the problem you mention. Because of that, we are currently working in implementing a new version of wikipedia on the Hub, with all data preprocessed (no need to use Apache Beam), from where you will be able to use `data_files` to load only a specific subset of the data files.\r\n\r\nSee:\r\n- #3401 "
] |
1,146,849,454
| 3,775
|
Update gigaword card and info
|
closed
| 2022-02-22T12:27:16
| 2022-02-28T11:35:24
| 2022-02-28T11:35:24
|
https://github.com/huggingface/datasets/pull/3775
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3775",
"html_url": "https://github.com/huggingface/datasets/pull/3775",
"diff_url": "https://github.com/huggingface/datasets/pull/3775.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3775.patch",
"merged_at": "2022-02-28T11:35:24"
}
|
mariosasko
| true
|
[
"I think it actually comes from an issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/file_utils.py#L575-L579\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/streaming_download_manager.py#L386-L389\r\n\r\nThis code doesn't seem to work anymore. This can probably be fixed with\r\n\r\n```python\r\nif url.startswith(\"https://drive.google.com/\"): \r\n url += \"&confirm=t\"\r\n cookies = response.cookies \r\n```\r\n\r\nbecause Google Drive doesn't return the `download_warning` cookie anymore.",
"Actually it seems that is has been fixed already in https://github.com/huggingface/datasets/pull/3787 :)\r\n\r\nI think it should have fixed the gigaword dataset loading",
"@lhoestq The linked PR indeed fixes the issue. This PR is still worth merging IMO to update `gigaword`'s card."
] |
1,146,843,177
| 3,774
|
Fix reddit_tifu data URL
|
closed
| 2022-02-22T12:21:15
| 2022-02-22T12:38:45
| 2022-02-22T12:38:44
|
https://github.com/huggingface/datasets/pull/3774
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3774",
"html_url": "https://github.com/huggingface/datasets/pull/3774",
"diff_url": "https://github.com/huggingface/datasets/pull/3774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3774.patch",
"merged_at": "2022-02-22T12:38:44"
}
|
albertvillanova
| true
|
[] |
1,146,758,335
| 3,773
|
Checksum mismatch for the reddit_tifu dataset
|
closed
| 2022-02-22T10:57:07
| 2022-02-25T19:27:49
| 2022-02-22T12:38:44
|
https://github.com/huggingface/datasets/issues/3773
| null |
anna-kay
| false
|
[
"Thanks for reporting, @anna-kay. We are fixing it.",
"@albertvillanova Thank you for the fast response! However I am still getting the same error:\r\n\r\nDownloading: 2.23kB [00:00, ?B/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Anna\\PycharmProjects\\summarization\\main.py\", line 17, in <module>\r\n dataset = load_dataset('reddit_tifu', 'long')\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\load.py\", line 1702, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\builder.py\", line 594, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n verify_checksums(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\utils\\info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']\r\n\r\nI have cleaned the cache/huggingface/datasets & cache/huggingface/modules files and also tried on another machine with a fresh installation of trasnformers & datasets. \r\nThe reddit_tifu.py that gets downloaded still has the previous url on line 51, _URL = \"https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF\" ",
"Hi @anna-kay, I'm sorry I didn't clearly explain the details to you:\r\n- the error has been fixed in our `master` branch on GitHub: https://github.com/huggingface/datasets/commit/8ae21bf6a77175dc803ce2f1b93d18b8fbf45586\r\n- the fix will not be accessible to users in PyPI until our next release of the `datasets` library\r\n - our latest release (version 1.18.3) was made 23 days ago: https://github.com/huggingface/datasets/releases/tag/1.18.3\r\n- in the meantime, you can get the fix if you install datasets from our GitHub `master` branch:\r\n ```\r\n pip install git+https://github.com/huggingface/datasets#egg=datasets\r\n ```",
"@albertvillanova Ok great, makes sence. Thank you very much for the explanation!"
] |
1,146,718,630
| 3,772
|
Fix: dataset name is stored in keys
|
closed
| 2022-02-22T10:20:37
| 2022-02-22T11:08:34
| 2022-02-22T11:08:33
|
https://github.com/huggingface/datasets/pull/3772
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3772",
"html_url": "https://github.com/huggingface/datasets/pull/3772",
"diff_url": "https://github.com/huggingface/datasets/pull/3772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3772.patch",
"merged_at": "2022-02-22T11:08:33"
}
|
thomasw21
| true
|
[] |
1,146,561,140
| 3,771
|
Fix DuplicatedKeysError on msr_sqa dataset
|
closed
| 2022-02-22T07:44:24
| 2022-02-22T08:12:40
| 2022-02-22T08:12:39
|
https://github.com/huggingface/datasets/pull/3771
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3771",
"html_url": "https://github.com/huggingface/datasets/pull/3771",
"diff_url": "https://github.com/huggingface/datasets/pull/3771.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3771.patch",
"merged_at": "2022-02-22T08:12:39"
}
|
albertvillanova
| true
|
[] |
1,146,336,667
| 3,770
|
DuplicatedKeysError on msr_sqa dataset
|
closed
| 2022-02-22T00:43:33
| 2022-02-22T08:12:39
| 2022-02-22T08:12:39
|
https://github.com/huggingface/datasets/issues/3770
| null |
kolk
| false
|
[
"Thanks for reporting, @kolk.\r\n\r\nWe are fixing it. "
] |
1,146,258,023
| 3,769
|
`dataset = dataset.map()` causes faiss index lost
|
open
| 2022-02-21T21:59:23
| 2022-06-27T14:56:29
| null |
https://github.com/huggingface/datasets/issues/3769
| null |
Oaklight
| false
|
[
"Hi ! Indeed `map` is dropping the index right now, because one can create a dataset with more or fewer rows using `map` (and therefore the index might not be relevant anymore)\r\n\r\nI guess we could check the resulting dataset length, and if the user hasn't changed the dataset size we could keep the index, what do you think ?",
"doing `.add_column(\"x\",x_data)` also removes the index. the new column might be irrelevant to the index so I don't think it should drop. \r\n\r\nMinimal example\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nimport numpy as np\r\n\r\ndata=load_dataset(\"ceyda/cats_vs_dogs_sample\") #just a test dataset\r\ndata=data[\"train\"]\r\nembd_data=data.map(lambda x: {\"emb\":np.random.uniform(-1,0,50).astype(np.float32)})\r\nembd_data.add_faiss_index(column=\"emb\")\r\nprint(embd_data.list_indexes())\r\nembd_data=embd_data.add_column(\"x\",[0]*data.num_rows)\r\nprint(embd_data.list_indexes())\r\n```",
"I agree `add_column` shouldn't drop the index indeed ! Is it something you'd like to contribute ? I think it's just a matter of copying the `self._indexes` dictionary to the output dataset"
] |
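A possible workaround for #3769, sketched under the assumption that `map()` leaves the number of rows unchanged: persist the FAISS index to disk before mapping and re-attach it afterwards. Requires `faiss-cpu`; the column name, file path, and added column are illustrative.

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"emb": [np.random.rand(50).astype(np.float32) for _ in range(100)]})
ds.add_faiss_index(column="emb")

ds.save_faiss_index("emb", "emb.faiss")  # keep a copy of the index on disk
ds = ds.map(lambda x: {"label": 0})      # the returned dataset has no indexes
ds.load_faiss_index("emb", "emb.faiss")  # re-attach it; row count is unchanged
print(ds.list_indexes())                 # ['emb']
```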
1,146,102,442
| 3,768
|
Fix HfFileSystem docstring
|
closed
| 2022-02-21T18:14:40
| 2022-02-22T09:13:03
| 2022-02-22T09:13:02
|
https://github.com/huggingface/datasets/pull/3768
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3768",
"html_url": "https://github.com/huggingface/datasets/pull/3768",
"diff_url": "https://github.com/huggingface/datasets/pull/3768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3768.patch",
"merged_at": "2022-02-22T09:13:02"
}
|
lhoestq
| true
|
[] |
1,146,036,648
| 3,767
|
Expose method and fix param
|
closed
| 2022-02-21T16:57:47
| 2022-02-22T08:35:03
| 2022-02-22T08:35:02
|
https://github.com/huggingface/datasets/pull/3767
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3767",
"html_url": "https://github.com/huggingface/datasets/pull/3767",
"diff_url": "https://github.com/huggingface/datasets/pull/3767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3767.patch",
"merged_at": "2022-02-22T08:35:02"
}
|
severo
| true
|
[] |
1,145,829,289
| 3,766
|
Fix head_qa data URL
|
closed
| 2022-02-21T13:52:50
| 2022-02-21T14:39:20
| 2022-02-21T14:39:19
|
https://github.com/huggingface/datasets/pull/3766
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3766",
"html_url": "https://github.com/huggingface/datasets/pull/3766",
"diff_url": "https://github.com/huggingface/datasets/pull/3766.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3766.patch",
"merged_at": "2022-02-21T14:39:19"
}
|
albertvillanova
| true
|
[] |
1,145,126,881
| 3,765
|
Update URL for tagging app
|
closed
| 2022-02-20T20:34:31
| 2022-02-20T20:36:10
| 2022-02-20T20:36:06
|
https://github.com/huggingface/datasets/pull/3765
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3765",
"html_url": "https://github.com/huggingface/datasets/pull/3765",
"diff_url": "https://github.com/huggingface/datasets/pull/3765.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3765.patch",
"merged_at": null
}
|
lewtun
| true
|
[
"Oh, this URL shouldn't be updated to the tagging app as it's actually used for creating the README - closing this."
] |
1,145,107,050
| 3,764
|
!
|
closed
| 2022-02-20T19:05:43
| 2022-02-21T08:55:58
| 2022-02-21T08:55:58
|
https://github.com/huggingface/datasets/issues/3764
| null |
LesiaFedorenko
| false
|
[] |
1,145,099,878
| 3,763
|
It's not possible download `20200501.pt` dataset
|
closed
| 2022-02-20T18:34:58
| 2022-02-21T12:06:12
| 2022-02-21T09:25:06
|
https://github.com/huggingface/datasets/issues/3763
| null |
jvanz
| false
|
[
"Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\ndataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n```",
"> ```python\r\n> dataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n> ```\r\n\r\nThank you! I did not know that I can do this. I was following the example in the error message when I do not define which language dataset I'm trying to download.\r\n\r\nI've tried something similar changing the date in the `load_dataset` call that I've shared in the bug description. Obviously, it did not work. I need to read the docs more carefully next time. My bad!\r\n\r\nThanks again and sorry for the noise.\r\n\r\n"
] |
1,144,849,557
| 3,762
|
`Dataset.class_encode` should support custom class names
|
closed
| 2022-02-19T21:21:45
| 2022-02-21T12:16:35
| 2022-02-21T12:16:35
|
https://github.com/huggingface/datasets/issues/3762
| null |
Dref360
| false
|
[
"Hi @Dref360, thanks a lot for your proposal.\r\n\r\nIt totally makes sense to have more flexibility when class encoding, I agree.\r\n\r\nYou could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_encode_column` arguments).\r\n\r\nAnd the latter made me think of `Dataset.cast_column`...\r\n\r\nMaybe better to have some others' opinions @lhoestq @mariosasko ",
"Hi @Dref360! You can use [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/package_reference/main_classes.html#datasets.Dataset.align_labels_with_mapping) after `Dataset.class_encode_column` to assign a different mapping of labels to ids.\r\n\r\n@albertvillanova I'd like to avoid adding more complexity to the API where it's not (absolutely) needed, so I don't think introducing a new param in `Dataset.class_encode_column` is a good idea.\r\n\r\n",
"I wasn't aware that it existed thank you for the link.\n\nClosing then! "
] |
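A short sketch of the combination suggested in #3762, `class_encode_column` followed by `align_labels_with_mapping`; the toy dataset and the custom label-to-id mapping are illustrative.

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": ["neg", "pos", "pos", "neg"]})
ds = ds.class_encode_column("label")  # default mapping is alphabetical: neg=0, pos=1
ds = ds.align_labels_with_mapping({"pos": 0, "neg": 1}, "label")  # apply a custom mapping
print(ds["label"])  # label ids now follow the custom mapping
```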
1,144,830,702
| 3,761
|
Know your data for HF hub
|
closed
| 2022-02-19T19:48:47
| 2022-02-21T14:15:23
| 2022-02-21T14:15:23
|
https://github.com/huggingface/datasets/issues/3761
| null |
Muhtasham
| false
|
[
"Hi @Muhtasham you should take a look at https://huggingface.co/blog/data-measurements-tool and accompanying demo app at https://huggingface.co/spaces/huggingface/data-measurements-tool\r\n\r\nWe would be interested in your feedback. cc @meg-huggingface @sashavor @yjernite "
] |
1,144,804,558
| 3,760
|
Unable to view the Gradio flagged call back dataset
|
closed
| 2022-02-19T17:45:08
| 2022-03-22T07:12:11
| 2022-03-22T07:12:11
|
https://github.com/huggingface/datasets/issues/3760
| null |
kingabzpro
| false
|
[
"Hi @kingabzpro.\r\n\r\nI think you need to create a loading script that creates the dataset from the CSV file and the image paths.\r\n\r\nAs example, you could have a look at the Food-101 dataset: https://huggingface.co/datasets/food101\r\n- Loading script: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nOnce the loading script is created, the viewer will show a previsualization of your dataset. ",
"@albertvillanova I don't think this is the issue. I have created another dataset with similar files and format and it works. https://huggingface.co/datasets/kingabzpro/savtadepth-flags-V2",
"Yes, you are right, that was not the issue.\r\n\r\nJust take into account that sometimes the viewer can take some time until it shows the preview of the dataset.\r\nAfter some time, yours is finally properly shown: https://huggingface.co/datasets/kingabzpro/savtadepth-flags",
"The problem was resolved by deleted the dataset and creating new one with similar name and then clicking on flag button.",
"I think if you make manual changes to dataset the whole system breaks. "
] |
1,143,400,770
| 3,759
|
Rename GenerateMode to DownloadMode
|
closed
| 2022-02-18T16:53:53
| 2022-02-22T13:57:24
| 2022-02-22T12:22:52
|
https://github.com/huggingface/datasets/pull/3759
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3759",
"html_url": "https://github.com/huggingface/datasets/pull/3759",
"diff_url": "https://github.com/huggingface/datasets/pull/3759.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3759.patch",
"merged_at": "2022-02-22T12:22:52"
}
|
albertvillanova
| true
|
[
"Thanks! Used here: https://github.com/huggingface/datasets-preview-backend/blob/main/src/datasets_preview_backend/models/dataset.py#L26 :) "
] |
1,143,366,393
| 3,758
|
head_qa file missing
|
closed
| 2022-02-18T16:32:43
| 2022-02-28T14:29:18
| 2022-02-21T14:39:19
|
https://github.com/huggingface/datasets/issues/3758
| null |
severo
| false
|
[
"We usually find issues with files hosted at Google Drive...\r\n\r\nIn this case we download the Google Drive Virus scan warning instead of the data file.",
"Fixed: https://huggingface.co/datasets/head_qa/viewer/en/train. Thanks\r\n\r\n<img width=\"1551\" alt=\"Capture d’écran 2022-02-28 à 15 29 04\" src=\"https://user-images.githubusercontent.com/1676121/156000224-fd3f62c6-8b54-4df1-8911-bdcb0bac3f1a.png\">\r\n"
] |
1,143,300,880
| 3,757
|
Add perplexity to metrics
|
closed
| 2022-02-18T15:52:23
| 2022-02-25T17:13:34
| 2022-02-25T17:13:34
|
https://github.com/huggingface/datasets/pull/3757
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3757",
"html_url": "https://github.com/huggingface/datasets/pull/3757",
"diff_url": "https://github.com/huggingface/datasets/pull/3757.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3757.patch",
"merged_at": "2022-02-25T17:13:34"
}
|
emibaylor
| true
|
[
"Awesome thank you ! The implementation of the parent `Metric` class was assuming that all metrics were supposed to have references/predictions pairs - I just changed that so you don't have to override `compute()`. I took the liberty of doing the changes directly inside this PR to make sure it works as expected with perplexity.\r\n\r\nOther than that it looks in pretty good shape :) I just did minor changes like remove a remaining `print` as well as fixing the `Features` defined in `_info()`. I also renamed `input_text` to `input_texts` since it makes it more obvious that it's a list of strings - let me know if it sounds good to you.\r\n\r\nLet me know if you'd like to make other changes or if it's all good for you !",
"The test with the full test set seems to take too much time in the CI - maybe we can just select `split=\"test[:few_examples]\"` (around 10 maybe ?)"
] |
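Hypothetical usage of the perplexity metric added in #3757: the `input_texts` argument is named in the thread, while `model_id` and the shape of the returned dict are assumptions. Computing the metric requires `transformers` and `torch`.

```python
from datasets import load_metric

perplexity = load_metric("perplexity")
# model_id is assumed; any causal LM hosted on the Hub should work.
results = perplexity.compute(model_id="gpt2", input_texts=["lorem ipsum", "Happy Birthday!"])
print(results)
```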
1,143,273,825
| 3,756
|
Images get decoded when using `map()` with `input_columns` argument on a dataset
|
closed
| 2022-02-18T15:35:38
| 2022-12-13T16:59:06
| 2022-12-13T16:59:06
|
https://github.com/huggingface/datasets/issues/3756
| null |
kklemon
| false
|
[
"Hi! If I'm not mistaken, this behavior is intentional, but I agree it could be more intuitive.\r\n\r\n@albertvillanova Do you remember why you decided not to decode columns in the `Audio` feature PR when `input_columns` is not `None`? IMO we should decode those columns, and we don't even have to use lazy structures here because the user explicitly requires them in the map transform. \r\n\r\ncc @lhoestq for visibility",
"I think I excluded to decorate the function when `input_columns` were passed as a quick fix for some non-passing tests: \r\n- https://github.com/huggingface/datasets/pull/2324/commits/9d7c3e8fa53e23ec636859b4407eeec904b1b3f9\r\n\r\nThat PR was quite complex and I decided to focus on the main feature requests, leaving refinements for subsequent PRs.\r\n\r\nNote that when `input_columns` are passed, the signature of the function is effectively changed, while the decorated function expects an item (whether an example or a batch) as first arg (which is not the case when passing `input_columns`.\r\n\r\nI agree we should consider supporting the case when `input_columns` are passed."
] |
1,143,032,961
| 3,755
|
Cannot preview dataset
|
closed
| 2022-02-18T13:06:45
| 2022-02-19T14:30:28
| 2022-02-18T15:41:33
|
https://github.com/huggingface/datasets/issues/3755
| null |
frascuchon
| false
|
[
"Thanks for reporting. The dataset viewer depends on some backend treatments, and for now, they might take some hours to get processed. We're working on improving it.",
"It has finally been processed. Thanks for the patience.",
"Thanks for the info @severo !"
] |
1,142,886,536
| 3,754
|
Overflowing indices in `select`
|
closed
| 2022-02-18T11:30:52
| 2022-02-18T11:38:23
| 2022-02-18T11:38:23
|
https://github.com/huggingface/datasets/issues/3754
| null |
lvwerra
| false
|
[
"Fixed on master (see https://github.com/huggingface/datasets/pull/3719).",
"Awesome, I did not find that one! Thanks."
] |
1,142,821,144
| 3,753
|
Expanding streaming capabilities
|
open
| 2022-02-18T10:45:41
| 2025-03-19T14:50:14
| null |
https://github.com/huggingface/datasets/issues/3753
| null |
lvwerra
| false
|
[
"Related to: https://github.com/huggingface/datasets/issues/3444",
"Cool ! `filter` will be very useful. There can be a filter that you can apply on a streaming dataset:\r\n```python\r\nload_dataset(..., streaming=True).filter(lambda x: x[\"lang\"] == \"sw\")\r\n```\r\n\r\nOtherwise if you want to apply a filter on the source files that are going to be used for streaming, the logic has to be impIemented directly in the dataset script, or if there's no dataset script this can be done with pattern matching\r\n```python\r\nload_dataset(..., lang=\"sw\") # if the dataset script supports this parameter\r\nload_dataset(..., data_files=\"data/lang=sw/*\") # if there's no dataset script, but only data files\r\n```\r\n\r\n--------------\r\n\r\nHere are also some additional ideas of API to convert from iterable to map-style dataset:\r\n```python\r\non_disk_dataset = streaming_dataset.to_disk()\r\non_disk_dataset = streaming_dataset.to_disk(path=\"path/to/my/dataset/dir\")\r\n\r\nin_memory_dataset = streaming_dataset.take(100).to_memory() # to experiment without having to write files\r\n```\r\n--------------\r\n\r\nFinally regarding `push_to_hub`, we can replace `batch_size` by `shard_size` (same API as for on-disk datasets). The default is 500MB per file\r\n\r\nLet me know what you think !",
"Regarding conversion, I'd also ask for some kind of equivalent to `save_to_disk` for an `IterableDataset`.\r\n\r\nSimilarly to the streaming to hub idea, my use case would be to define a sequence of dataset transforms via `.map()`, using an `IterableDataset` as the input (so processing could start without doing whole download up-front), but streaming the resultant processed dataset just to disk.",
"That makes sense @athewsey , thanks for the suggestion :)\r\n\r\nMaybe instead of the `to_disk` we could simply have `save_to_disk` instead:\r\n```python\r\nstreaming_dataset.save_to_disk(\"path/to/my/dataset/dir\")\r\non_disk_dataset = load_from_disk(\"path/to/my/dataset/dir\")\r\n\r\nin_memory_dataset = Dataset.from_list(list(streaming_dataset.take(100))) # to experiment without having to write files\r\n```",
"Any updates on this?",
"So far are implemented: `IterableDataset.filter()` and `Dataset.to_iterable_dataset()`.\r\n\r\nStill missing: `IterableDataset.push_to_hub()` - though there is a hack to write on disk and then push to hub using\r\n\r\n```python\r\nds_on_disk = Dataset.from_generator(streaming_ds.__iter__) # stream to disk\r\nds_on_disk.push_to_hub(...)\r\n```",
"Do we have anything related to save_to_disk for IterableDataset / IterableDatasetDict. I believe this should be implementing that Dataset / DatasetDict have to offer, since for large amounts of data, this becomes quite a bit of problem.",
"Not yet, but this would be a welcome addition"
] |
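The pieces quoted in #3753, put together in one sketch; the dataset name, filter predicate, and repository id are placeholders.

```python
from datasets import Dataset, load_dataset

streaming_ds = load_dataset("mc4", "sw", split="train", streaming=True)

# IterableDataset.filter() is implemented:
streaming_ds = streaming_ds.filter(lambda x: len(x["text"]) > 100)

# Hack from the thread for the still-missing IterableDataset.push_to_hub():
ds_on_disk = Dataset.from_generator(streaming_ds.__iter__)  # stream to disk
ds_on_disk.push_to_hub("username/filtered-mc4-sw")          # hypothetical repo id
```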
1,142,627,889
| 3,752
|
Update metadata JSON for cats_vs_dogs dataset
|
closed
| 2022-02-18T08:32:53
| 2022-02-18T14:56:12
| 2022-02-18T14:56:11
|
https://github.com/huggingface/datasets/pull/3752
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3752",
"html_url": "https://github.com/huggingface/datasets/pull/3752",
"diff_url": "https://github.com/huggingface/datasets/pull/3752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3752.patch",
"merged_at": "2022-02-18T14:56:11"
}
|
albertvillanova
| true
|
[] |
1,142,609,327
| 3,751
|
Fix typo in train split name
|
closed
| 2022-02-18T08:18:04
| 2022-02-18T14:28:52
| 2022-02-18T14:28:52
|
https://github.com/huggingface/datasets/pull/3751
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3751",
"html_url": "https://github.com/huggingface/datasets/pull/3751",
"diff_url": "https://github.com/huggingface/datasets/pull/3751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3751.patch",
"merged_at": "2022-02-18T14:28:52"
}
|
albertvillanova
| true
|
[] |
1,142,408,331
| 3,750
|
`NonMatchingSplitsSizesError` for cats_vs_dogs dataset
|
closed
| 2022-02-18T05:46:39
| 2022-02-18T14:56:11
| 2022-02-18T14:56:11
|
https://github.com/huggingface/datasets/issues/3750
| null |
jaketae
| false
|
[
"Thnaks for reporting @jaketae. We are fixing it. "
] |
1,142,156,678
| 3,749
|
Add tqdm arguments
|
closed
| 2022-02-18T01:34:46
| 2022-03-08T09:38:48
| 2022-03-08T09:38:48
|
https://github.com/huggingface/datasets/pull/3749
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3749",
"html_url": "https://github.com/huggingface/datasets/pull/3749",
"diff_url": "https://github.com/huggingface/datasets/pull/3749.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3749.patch",
"merged_at": null
}
|
penguinwang96825
| true
|
[
"Hi ! Thanks this will be very useful :)\r\n\r\nIt looks like there are some changes in the github diff that are not related to your contribution, can you try fixing this by merging `master` into your PR, or create a new PR from an updated version of `master` ?",
"I have already solved the conflict on this latest version. This is my first time sending PR, if there's anything I need to adjust just let me know~",
"Thanks, most changes are gone :)\r\nIt still seems to include changes though - do you mind try creating a new branch from upstream/master and create a new PR please ?",
"Yeah sure, I'll try to send a new PR today!",
"Please forward to [#3850](https://github.com/huggingface/datasets/pull/3850)",
"Thanks ! Closing this one in favor of https://github.com/huggingface/datasets/pull/3850/files"
] |
1,142,128,763
| 3,748
|
Add tqdm arguments
|
closed
| 2022-02-18T00:47:55
| 2022-02-18T00:59:15
| 2022-02-18T00:59:15
|
https://github.com/huggingface/datasets/pull/3748
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3748",
"html_url": "https://github.com/huggingface/datasets/pull/3748",
"diff_url": "https://github.com/huggingface/datasets/pull/3748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3748.patch",
"merged_at": null
}
|
penguinwang96825
| true
|
[] |
1,141,688,854
| 3,747
|
Passing invalid subset should throw an error
|
open
| 2022-02-17T18:16:11
| 2022-02-17T18:16:11
| null |
https://github.com/huggingface/datasets/issues/3747
| null |
jxmorris12
| false
|
[] |
1,141,612,810
| 3,746
|
Use the same seed to shuffle shards and metadata in streaming mode
|
closed
| 2022-02-17T17:06:31
| 2022-02-23T15:00:59
| 2022-02-23T15:00:58
|
https://github.com/huggingface/datasets/pull/3746
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3746",
"html_url": "https://github.com/huggingface/datasets/pull/3746",
"diff_url": "https://github.com/huggingface/datasets/pull/3746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3746.patch",
"merged_at": "2022-02-23T15:00:58"
}
|
lhoestq
| true
|
[] |
1,141,520,953
| 3,745
|
Add mIoU metric
|
closed
| 2022-02-17T15:52:17
| 2022-03-08T13:20:26
| 2022-03-08T13:20:26
|
https://github.com/huggingface/datasets/pull/3745
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3745",
"html_url": "https://github.com/huggingface/datasets/pull/3745",
"diff_url": "https://github.com/huggingface/datasets/pull/3745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3745.patch",
"merged_at": "2022-03-08T13:20:26"
}
|
NielsRogge
| true
|
[
"Hmm the doctest failed again - maybe the full result needs to be on one single line",
"cc @lhoestq for the final review",
"Cool ! Feel free to merge if it's all good for you"
] |
1,141,461,165
| 3,744
|
Better shards shuffling in streaming mode
|
closed
| 2022-02-17T15:07:21
| 2022-02-23T15:00:58
| 2022-02-23T15:00:58
|
https://github.com/huggingface/datasets/issues/3744
| null |
lhoestq
| false
|
[] |
1,141,176,011
| 3,743
|
initial monash time series forecasting repository
|
closed
| 2022-02-17T10:51:31
| 2022-03-21T09:54:41
| 2022-03-21T09:50:16
|
https://github.com/huggingface/datasets/pull/3743
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3743",
"html_url": "https://github.com/huggingface/datasets/pull/3743",
"diff_url": "https://github.com/huggingface/datasets/pull/3743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3743.patch",
"merged_at": "2022-03-21T09:50:16"
}
|
kashif
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR, merging !",
"thanks 🙇🏽 "
] |
1,141,174,549
| 3,742
|
Fix ValueError message formatting in int2str
|
closed
| 2022-02-17T10:50:08
| 2022-02-17T15:32:02
| 2022-02-17T15:32:02
|
https://github.com/huggingface/datasets/pull/3742
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3742",
"html_url": "https://github.com/huggingface/datasets/pull/3742",
"diff_url": "https://github.com/huggingface/datasets/pull/3742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3742.patch",
"merged_at": "2022-02-17T15:32:02"
}
|
aaakulchyk
| true
|
[] |
1,141,132,649
| 3,741
|
Rm sphinx doc
|
closed
| 2022-02-17T10:11:37
| 2022-02-17T10:15:17
| 2022-02-17T10:15:12
|
https://github.com/huggingface/datasets/pull/3741
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3741",
"html_url": "https://github.com/huggingface/datasets/pull/3741",
"diff_url": "https://github.com/huggingface/datasets/pull/3741.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3741.patch",
"merged_at": null
}
|
mishig25
| true
|
[] |
1,140,720,739
| 3,740
|
Support streaming for pubmed
|
closed
| 2022-02-17T00:18:22
| 2022-02-18T14:42:13
| 2022-02-18T14:42:13
|
https://github.com/huggingface/datasets/pull/3740
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3740",
"html_url": "https://github.com/huggingface/datasets/pull/3740",
"diff_url": "https://github.com/huggingface/datasets/pull/3740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3740.patch",
"merged_at": "2022-02-18T14:42:13"
}
|
abhi-mosaic
| true
|
[
"@albertvillanova just FYI, since you were so helpful with the previous pubmed issue :) ",
"IIRC streaming from FTP is not fully tested yet, so I'm fine with switching to HTTPS for now, as long as the download speed/availability is great",
"@albertvillanova Thanks for pointing me to the `ET` module replacement. It should look a lot cleaner now.\r\n\r\nUnfortunately I tried keeping the `ftp://` protocol but was seeing timeout errors? in streaming mode (below). I think the `https://` performance is not an issue, when I was profiling the `open(..) -> f.read() -> etree.fromstring(xml_str)` codepath, most of the time was spent in the XML parsing rather than the data download.\r\n\r\n\r\nError when using `ftp://`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 301, in _fetch_range\r\n self.fs.ftp.retrbinary(\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 430, in retrbinary\r\n callback(data)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 293, in callback\r\n raise TransferDone\r\nfsspec.implementations.ftp.TransferDone\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test_pubmed_streaming.py\", line 9, in <module>\r\n print (next(iter(pubmed_train_streaming)))\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 365, in __iter__\r\n for key, example in self._iter():\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 362, in _iter\r\n yield from ex_iterable\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 79, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/Users/abhinav/.cache/huggingface/modules/datasets_modules/datasets/pubmed/af552ed918e2841e8427203530e3cfed3a8bc3213041d7853bea1ca67eec683d/pubmed.py\", line 362, in _generate_examples\r\n tree = ET.parse(filename)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/streaming.py\", line 65, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/utils/streaming_download_manager.py\", line 636, in xet_parse\r\n return ET.parse(f, parser=parser)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py\", line 1202, in parse\r\n tree.parse(source, parser)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py\", line 595, in parse\r\n self._root = parser._parse_whole(source)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/utils/streaming_download_manager.py\", line 293, in read_with_retries\r\n out = read(*args, **kwargs)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 292, in read\r\n return self._buffer.read(size)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/_compression.py\", line 68, in readinto\r\n data = self.read(len(byte_view))\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 479, in read\r\n if not self._read_gzip_header():\r\n File 
\"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 422, in _read_gzip_header\r\n magic = self._fp.read(2)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 96, in read\r\n self.file.read(size-self._length+read)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/spec.py\", line 1485, in read\r\n out = self.cache._fetch(self.loc, self.loc + length)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/caching.py\", line 153, in _fetch\r\n self.cache = self.fetcher(start, end) # new block replaces old\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 311, in _fetch_range\r\n self.fs.ftp.getmultiline()\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 224, in getmultiline\r\n line = self.getline()\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 206, in getline\r\n line = self.file.readline(self.maxline + 1)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\nsocket.timeout: timed out\r\n```"
] |
1,140,329,189
| 3,739
|
Pubmed dataset does not work in streaming mode
|
closed
| 2022-02-16T17:13:37
| 2022-02-18T14:42:13
| 2022-02-18T14:42:13
|
https://github.com/huggingface/datasets/issues/3739
| null |
abhi-mosaic
| false
|
[
"Thanks for reporting, @abhi-mosaic (related to #3655).\r\n\r\nPlease note that `xml.etree.ElementTree.parse` already supports streaming:\r\n- #3476\r\n\r\nNo need to refactor to use `open`/`xopen`. Is is enough with importing the package `as ET` (instead of `as etree`)."
] |
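Schematic illustration of the pattern recommended in #3739: importing the module `as ET` lets the library patch `ET.parse` for streaming (see #3476). The tag and field names below are illustrative, not the actual pubmed schema.

```python
import xml.etree.ElementTree as ET

def _generate_examples(filename):
    tree = ET.parse(filename)  # patched by datasets to accept remote files when streaming
    for idx, article in enumerate(tree.getroot().iter("PubmedArticle")):
        yield idx, {"xml": ET.tostring(article, encoding="unicode")}
```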
1,140,164,253
| 3,738
|
For data-only datasets, streaming and non-streaming don't behave the same
|
open
| 2022-02-16T15:20:57
| 2022-02-21T14:24:55
| null |
https://github.com/huggingface/datasets/issues/3738
| null |
severo
| false
|
[
"Note that we might change the heuristic and create a different config per file, at least in that case.",
"Hi @severo, thanks for reporting.\r\n\r\nYes, this happens because when non-streaming, a cast of all data is done in order to \"concatenate\" it all into a single dataset (thus the error), while this casting is not done while yielding item by item in streaming mode.\r\n\r\nMaybe in streaming mode we should keep the schema (inferred from the first item) and throw an exception if a subsequent item does not conform to the inferred schema?",
"Why do we want to concatenate the files? Is it the expected behavior for most datasets that lack a script and dataset info?",
"These files are two different dataset configurations since they don't share the same schema.\r\n\r\nIMO the streaming mode should fail in this case, as @albertvillanova said.\r\n\r\nThere is one challenge though: inferring the schema from the first example is not robust enough in the general case - especially if some fields are nullable. I guess we can at least make sure that no new columns are added",
"OK. So, if we make the streaming also fail, the dataset https://huggingface.co/datasets/huggingface/transformers-metadata will never be [viewable](https://github.com/huggingface/datasets-preview-backend/issues/144) (be it using streaming or fallback to downloading the files), right?\r\n",
"Yes, until we have a way for the user to specify explicitly that those two files are different configurations.\r\n\r\nWe can maybe have some rule to detect this automatically, maybe checking the first line of each file ? That would mean that for dataset of 10,000+ files we would have to verify every single one of them just to know if there is one ore more configurations, so I'm not sure if this is a good idea",
"i think requiring the user to specify that those two files are different configurations is in that case perfectly reasonable.\r\n\r\n(Maybe at some point we could however detect this type of case and prompt them to define a config mapping etc)",
"OK, so, before closing the issue, what do you think should be done?\r\n\r\n> Maybe in streaming mode we should keep the schema (inferred from the first item) and throw an exception if a subsequent item does not conform to the inferred schema?\r\n\r\nor nothing?",
"We should at least raise an error if a new sample has column names that are missing, or if it has extra columns. No need to check for the type for now.\r\n\r\nI'm in favor of having an error especially because we want to avoid silent issues as much as possible - i.e. when something goes wrong (when schemas don't match or some data are missing) and no errors/warnings are raised.\r\n\r\nConsistency between streaming and non-streaming is also important."
] |
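A minimal sketch of the check proposed at the end of #3738: infer the column names from the first example and fail fast when a later example adds or drops columns (types are deliberately not checked).

```python
def check_columns(examples):
    reference_columns = None
    for example in examples:
        if reference_columns is None:
            reference_columns = set(example)  # inferred from the first example
        elif set(example) != reference_columns:
            raise ValueError(
                f"Columns {sorted(example)} don't match the inferred schema {sorted(reference_columns)}"
            )
        yield example
```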
1,140,148,050
| 3,737
|
Make RedCaps streamable
|
closed
| 2022-02-16T15:12:23
| 2022-02-16T15:28:38
| 2022-02-16T15:28:37
|
https://github.com/huggingface/datasets/pull/3737
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3737",
"html_url": "https://github.com/huggingface/datasets/pull/3737",
"diff_url": "https://github.com/huggingface/datasets/pull/3737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3737.patch",
"merged_at": "2022-02-16T15:28:37"
}
|
mariosasko
| true
|
[] |
1,140,134,483
| 3,736
|
Local paths in common voice
|
closed
| 2022-02-16T15:01:29
| 2022-09-21T14:58:38
| 2022-02-22T09:13:43
|
https://github.com/huggingface/datasets/pull/3736
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3736",
"html_url": "https://github.com/huggingface/datasets/pull/3736",
"diff_url": "https://github.com/huggingface/datasets/pull/3736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3736.patch",
"merged_at": "2022-02-22T09:13:43"
}
|
lhoestq
| true
|
[
"I just changed to `dl_manager.is_streaming` rather than an additional parameter `streaming` that has to be handled by the DatasetBuilder class - this way the streaming logic doesn't interfere with the base builder's code.\r\n\r\nI think it's better this way, but let me know if you preferred the previous way and I can revert\r\n\r\n> But on the other hand, IMHO, I think this specific solution adds complexity to handling streaming/non-streaming, and moves this complexity to the loading script and thus to the contributors/users who want to create the loading script for their canonical/community datasets (instead of keeping it hidden form the end users).\r\n\r\nI'm down to discuss this more in the future !",
"@lhoestq good idea: much cleaner this way! That way each class has its own responsibilities without mixing around..."
] |
1,140,087,891
| 3,735
|
Performance of `datasets` at scale
|
open
| 2022-02-16T14:23:32
| 2024-06-27T01:17:48
| null |
https://github.com/huggingface/datasets/issues/3735
| null |
lvwerra
| false
|
[
"> using command line git-lfs - [...] 300MB/s!\r\n\r\nwhich server location did you upload from?",
"From GCP region `us-central1-a`.",
"The most surprising part to me is the saving time. Wondering if it could be due to compression (`ParquetWriter` uses SNAPPY compression by default; it can be turned off with `to_parquet(..., compression=None)`). ",
"+1 to what @mariosasko mentioned. Also, @lvwerra did you parallelize `to_parquet` using similar approach in #2747? (we used multiprocessing at the shard level). I'm working on a similar PR to add multi_proc in `to_parquet` which might give you further speed up. \r\nStas benchmarked his approach and mine in this [gist](https://gist.github.com/stas00/dc1597a1e245c5915cfeefa0eee6902c) for `lama` dataset when we were working on adding multi_proc support for `to_json`.",
"@mariosasko I did not turn it off but I can try the next time - I have to run the pipeline again, anyway. \r\n\r\n@bhavitvyamalik Yes, I also sharded the dataset and used multiprocessing to save each shard. I'll have a closer look at your approach, too.",
"Is there a way to read from the cache files directly as a dataset in its own"
] |
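Sketch combining the two tips from #3735: turn off the default SNAPPY compression with `compression=None` and parallelize `to_parquet` at the shard level, as in #2747. The dataset, shard count, and file paths are illustrative; fork-based multiprocessing is assumed so workers inherit `ds`.

```python
from multiprocessing import Pool

from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # stand-in for a large dataset
num_shards = 8

def export_shard(index):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    shard.to_parquet(f"data-{index:05d}.parquet", compression=None)  # no SNAPPY

if __name__ == "__main__":
    with Pool(num_shards) as pool:
        pool.map(export_shard, range(num_shards))
```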
1,140,050,336
| 3,734
|
Fix bugs in NewsQA dataset
|
closed
| 2022-02-16T13:51:28
| 2022-02-17T07:54:26
| 2022-02-17T07:54:25
|
https://github.com/huggingface/datasets/pull/3734
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3734",
"html_url": "https://github.com/huggingface/datasets/pull/3734",
"diff_url": "https://github.com/huggingface/datasets/pull/3734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3734.patch",
"merged_at": "2022-02-17T07:54:25"
}
|
albertvillanova
| true
|
[] |
1,140,011,378
| 3,733
|
Bugs in NewsQA dataset
|
closed
| 2022-02-16T13:17:37
| 2022-02-17T07:54:25
| 2022-02-17T07:54:25
|
https://github.com/huggingface/datasets/issues/3733
| null |
albertvillanova
| false
|
[] |
1,140,004,022
| 3,732
|
Support streaming in size estimation function in `push_to_hub`
|
closed
| 2022-02-16T13:10:48
| 2022-02-21T18:18:45
| 2022-02-21T18:18:44
|
https://github.com/huggingface/datasets/pull/3732
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3732",
"html_url": "https://github.com/huggingface/datasets/pull/3732",
"diff_url": "https://github.com/huggingface/datasets/pull/3732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3732.patch",
"merged_at": "2022-02-21T18:18:44"
}
|
mariosasko
| true
|
[
"would this allow to include the size in the dataset info without downloading the files? related to https://github.com/huggingface/datasets/pull/3670",
"@severo I don't think so. We could use this to get `info.download_checksums[\"num_bytes\"]`, but we must process the files to get the rest of the size info. "
] |
1,139,626,362
| 3,731
|
Fix Multi-News dataset metadata and card
|
closed
| 2022-02-16T07:14:57
| 2022-02-16T08:48:47
| 2022-02-16T08:48:47
|
https://github.com/huggingface/datasets/pull/3731
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3731",
"html_url": "https://github.com/huggingface/datasets/pull/3731",
"diff_url": "https://github.com/huggingface/datasets/pull/3731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3731.patch",
"merged_at": "2022-02-16T08:48:46"
}
|
albertvillanova
| true
|
[] |
1,139,545,613
| 3,730
|
Checksum Error when loading multi-news dataset
|
closed
| 2022-02-16T05:11:08
| 2022-02-16T20:05:06
| 2022-02-16T08:48:46
|
https://github.com/huggingface/datasets/issues/3730
| null |
byw2
| false
|
[
"Thanks for reporting @byw2.\r\nWe are fixing it.\r\nIn the meantime, you can load the dataset by passing `ignore_verifications=True`:\r\n ```python\r\ndataset = load_dataset(\"multi_news\", ignore_verifications=True)"
] |