Column summary (types and value ranges from the dataset viewer):

| Column | Type | Range / Stats |
|---|---|---|
| id | int64 | 953M to 3.35B |
| number | int64 | 2.72k to 7.75k |
| title | string | lengths 1 to 290 |
| state | string | 2 classes |
| created_at | timestamp[s] | 2021-07-26 12:21:17 to 2025-08-23 00:18:43 |
| updated_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-23 12:34:39 |
| closed_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-20 16:35:55 (nullable) |
| html_url | string | lengths 49 to 51 |
| pull_request | dict | null for plain issues |
| user_login | string | lengths 3 to 26 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 to 30 |

| id | number | title | state | created_at | updated_at | closed_at | html_url | pull_request | user_login | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,339,085,917
| 4,851
|
Fix license tag and Source Data section in billsum dataset card
|
closed
| 2022-08-15T14:37:00
| 2022-08-22T13:56:24
| 2022-08-22T13:40:59
|
https://github.com/huggingface/datasets/pull/4851
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4851",
"html_url": "https://github.com/huggingface/datasets/pull/4851",
"diff_url": "https://github.com/huggingface/datasets/pull/4851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4851.patch",
"merged_at": "2022-08-22T13:40:59"
}
|
kashif
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"thanks @albertvillanova done thank you!"
] |
1,338,702,306
| 4,850
|
Fix test of _get_extraction_protocol for TAR files
|
closed
| 2022-08-15T08:37:58
| 2022-08-15T09:42:56
| 2022-08-15T09:28:46
|
https://github.com/huggingface/datasets/pull/4850
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4850",
"html_url": "https://github.com/huggingface/datasets/pull/4850",
"diff_url": "https://github.com/huggingface/datasets/pull/4850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4850.patch",
"merged_at": "2022-08-15T09:28:46"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,338,273,900
| 4,849
|
1.18.x
|
closed
| 2022-08-14T15:09:19
| 2022-08-14T15:10:02
| 2022-08-14T15:10:02
|
https://github.com/huggingface/datasets/pull/4849
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4849",
"html_url": "https://github.com/huggingface/datasets/pull/4849",
"diff_url": "https://github.com/huggingface/datasets/pull/4849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4849.patch",
"merged_at": null
}
|
Mr-Robot-001
| true
|
[] |
1,338,271,833
| 4,848
|
a
|
closed
| 2022-08-14T15:01:16
| 2022-08-14T15:09:59
| 2022-08-14T15:09:59
|
https://github.com/huggingface/datasets/pull/4848
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4848",
"html_url": "https://github.com/huggingface/datasets/pull/4848",
"diff_url": "https://github.com/huggingface/datasets/pull/4848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4848.patch",
"merged_at": null
}
|
Mr-Robot-001
| true
|
[] |
1,338,270,636
| 4,847
|
Test win ci
|
closed
| 2022-08-14T14:57:00
| 2023-09-24T10:04:13
| 2022-08-14T14:57:45
|
https://github.com/huggingface/datasets/pull/4847
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4847",
"html_url": "https://github.com/huggingface/datasets/pull/4847",
"diff_url": "https://github.com/huggingface/datasets/pull/4847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4847.patch",
"merged_at": null
}
|
Mr-Robot-001
| true
|
[] |
1,337,979,897
| 4,846
|
Update documentation card of miam dataset
|
closed
| 2022-08-13T14:38:55
| 2022-08-17T00:50:04
| 2022-08-14T10:26:08
|
https://github.com/huggingface/datasets/pull/4846
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4846",
"html_url": "https://github.com/huggingface/datasets/pull/4846",
"diff_url": "https://github.com/huggingface/datasets/pull/4846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4846.patch",
"merged_at": "2022-08-14T10:26:08"
}
|
PierreColombo
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ahahah :D not sur how i broke something by updating the README :D ",
"Thanks for the fix @PierreColombo. \r\n\r\nOnce a README is modified, our CI runs tests on it, requiring additional quality fixes, so that all READMEs are progressively improved and have some minimal tags/sections/information.\r\n\r\nFor this specific README file, the additional quality requirements of the CI are: https://github.com/huggingface/datasets/runs/7819924428?check_suite_focus=true\r\n```\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/miam/README.md`:\r\nE -\tSection `Additional Information` is missing subsection: `Dataset Curators`.\r\nE -\tSection `Additional Information` is missing subsection: `Contributions`.\r\nE -\t`Additional Information` has an extra subsection: `Benchmark Curators`. Skipping further validation checks for this subsection as expected structure is unknown.\r\n```",
"Thanks a lot Albert :)))"
] |
1,337,928,283
| 4,845
|
Mark CI tests as xfail if Hub HTTP error
|
closed
| 2022-08-13T10:45:11
| 2022-08-23T04:57:12
| 2022-08-23T04:42:26
|
https://github.com/huggingface/datasets/pull/4845
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4845",
"html_url": "https://github.com/huggingface/datasets/pull/4845",
"diff_url": "https://github.com/huggingface/datasets/pull/4845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4845.patch",
"merged_at": "2022-08-23T04:42:26"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
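For context on the pattern above: marking a test as an expected failure when the error originates from the Hub rather than from the library keeps the CI signal clean during Hub outages. A minimal sketch of the idea, not the PR's actual code; the endpoint and test body are illustrative:
```python
import pytest
import requests


def test_hub_roundtrip():
    # Illustrative test body; the real CI tests exercise push_to_hub etc.
    try:
        response = requests.get("https://huggingface.co/api/datasets?limit=1", timeout=10)
        response.raise_for_status()
    except requests.HTTPError as err:
        # xfail instead of fail: the Hub errored, not the code under test.
        pytest.xfail(f"Hub HTTP error: {err}")
```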
1,337,878,249
| 4,844
|
Add 'val' to VALIDATION_KEYWORDS.
|
closed
| 2022-08-13T06:49:41
| 2022-08-30T10:17:35
| 2022-08-30T10:14:54
|
https://github.com/huggingface/datasets/pull/4844
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4844",
"html_url": "https://github.com/huggingface/datasets/pull/4844",
"diff_url": "https://github.com/huggingface/datasets/pull/4844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4844.patch",
"merged_at": "2022-08-30T10:14:54"
}
|
akt42
| true
|
[
"@mariosasko not sure about how the reviewing process works. Maybe you can have a look because we discussed this elsewhere?",
"Hi, thanks! \r\n\r\nLet's add one pattern with `val` to this test before merging: \r\nhttps://github.com/huggingface/datasets/blob/b88a656cf94c4ad972154371c83c1af759fde522/tests/test_data_files.py#L598",
"_The documentation is not available anymore as the PR was closed or merged._",
"@akt42 note that there is some info about splits keywords in the docs: https://huggingface.co/docs/datasets/main/en/repository_structure#split-names-keywords. I agree it's not clear that it applies not only to filenames, but to directories as well.\r\n\r\nI think \"val\" should be now added to the documentation source file here: https://github.com/huggingface/datasets/blob/main/docs/source/repository_structure.mdx?plain=1#L98",
"@polinaeterna Thanks for notifying us that there is a list of supported keywords\r\n\r\nI've added \"val\" to that list and a test."
] |
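For context on the change above: registering `val` as a validation keyword means files or directories containing it in their names are routed to the `validation` split. A minimal sketch, assuming a local imagefolder layout (the paths are placeholders):
```python
from datasets import load_dataset

# Assumed layout:
#   data/train/cat/0001.png
#   data/val/cat/0042.png
ds = load_dataset("imagefolder", data_dir="data")
# With "val" registered in VALIDATION_KEYWORDS, data/val/** is mapped to
# the "validation" split automatically:
print(ds)  # DatasetDict({'train': ..., 'validation': ...})
```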
1,337,668,699
| 4,843
|
Fix typo in streaming docs
|
closed
| 2022-08-12T20:18:21
| 2022-08-14T11:43:30
| 2022-08-14T11:02:09
|
https://github.com/huggingface/datasets/pull/4843
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4843",
"html_url": "https://github.com/huggingface/datasets/pull/4843",
"diff_url": "https://github.com/huggingface/datasets/pull/4843.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4843.patch",
"merged_at": "2022-08-14T11:02:09"
}
|
flozi00
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,337,527,764
| 4,842
|
Update stackexchange license
|
closed
| 2022-08-12T17:39:06
| 2022-08-14T10:43:18
| 2022-08-14T10:28:49
|
https://github.com/huggingface/datasets/pull/4842
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4842",
"html_url": "https://github.com/huggingface/datasets/pull/4842",
"diff_url": "https://github.com/huggingface/datasets/pull/4842.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4842.patch",
"merged_at": "2022-08-14T10:28:49"
}
|
cakiki
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,337,401,243
| 4,841
|
Update ted_talks_iwslt license to include ND
|
closed
| 2022-08-12T16:14:52
| 2022-08-14T11:15:22
| 2022-08-14T11:00:22
|
https://github.com/huggingface/datasets/pull/4841
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4841",
"html_url": "https://github.com/huggingface/datasets/pull/4841",
"diff_url": "https://github.com/huggingface/datasets/pull/4841.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4841.patch",
"merged_at": "2022-08-14T11:00:22"
}
|
cakiki
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,337,342,672
| 4,840
|
Dataset Viewer issue for darragh/demo_data_raw3
|
open
| 2022-08-12T15:22:58
| 2022-09-08T07:55:44
| null |
https://github.com/huggingface/datasets/issues/4840
| null |
severo
| false
|
[
"do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.",
"Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\nWhich pyarrow version are you using? Mine is 6.0.1. ",
"OK, I get now your error when not streaming.",
"OK!\r\n\r\nIf it's useful, the pyarrow version is 7.0.0:\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/487c39d87998f8d5a35972f1027d6c8e588e622d/services/worker/poetry.lock#L1537-L1543",
"Apparently, there is something weird with that Parquet file: its schema is:\r\n```\r\nimages: extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>>\r\n```\r\n\r\nI have forced a right schema:\r\n```python\r\nfrom datasets import Features, Image, load_dataset\r\n\r\nfeatures = Features({\"images\": Image()})\r\nds = datasets.load_dataset(\"parquet\", split=\"train\", data_files=\"train-00000-of-00001.parquet\", features=features)\r\n```\r\nand then recreated a new Parquet file:\r\n```python\r\nds.to_parquet(\"train.parquet\")\r\n```\r\n\r\nNow this Parquet file has the right schema:\r\n```\r\nimages: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\n```\r\nand can be loaded normally:\r\n```python\r\nIn [26]: ds = load_dataset(\"parquet\", split=\"train\", data_files=\"dataset.parquet\")\r\nn [27]: ds\r\nOut[27]: \r\nDataset({\r\n features: ['images'],\r\n num_rows: 20\r\n})\r\n```"
] |
1,337,206,377
| 4,839
|
ImageFolder dataset builder does not read the validation data set if it is named as "val"
|
closed
| 2022-08-12T13:26:00
| 2022-08-30T10:14:55
| 2022-08-30T10:14:55
|
https://github.com/huggingface/datasets/issues/4839
| null |
akt42
| false
|
[
"#take"
] |
1,337,194,918
| 4,838
|
Fix documentation card of adv_glue dataset
|
closed
| 2022-08-12T13:15:26
| 2022-08-15T10:17:14
| 2022-08-15T10:02:11
|
https://github.com/huggingface/datasets/pull/4838
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4838",
"html_url": "https://github.com/huggingface/datasets/pull/4838",
"diff_url": "https://github.com/huggingface/datasets/pull/4838.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4838.patch",
"merged_at": "2022-08-15T10:02:11"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The failing test has nothing to do with this PR:\r\n```\r\nFAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files\r\n```"
] |
1,337,079,723
| 4,837
|
Add support for CSV metadata files to ImageFolder
|
closed
| 2022-08-12T11:19:18
| 2022-08-31T12:01:27
| 2022-08-31T11:59:07
|
https://github.com/huggingface/datasets/pull/4837
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4837",
"html_url": "https://github.com/huggingface/datasets/pull/4837",
"diff_url": "https://github.com/huggingface/datasets/pull/4837.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4837.patch",
"merged_at": "2022-08-31T11:59:07"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool thanks ! Maybe let's include this change after the refactoring from FolderBasedBuilder in #3963 to avoid dealing with too many unpleasant conflicts ?",
"@lhoestq I resolved the conflicts (AudioFolder also supports CSV metadata now). Let me know what you think.\r\n",
"@lhoestq Thanks for the suggestion! Indeed it makes more sense to use CSV as the default format in the folder-based builders."
] |
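For context, the CSV support added here mirrors the existing JSONL metadata convention: a `metadata.csv` with a `file_name` column sits next to the images, and its other columns are loaded as dataset columns. A minimal sketch; the folder layout and column names are placeholders:
```python
from datasets import load_dataset

# Assumed layout:
#   folder/metadata.csv   <- columns: file_name,caption
#   folder/0001.png
#   folder/0002.png
ds = load_dataset("imagefolder", data_dir="folder", split="train")
print(ds[0]["caption"])  # extra metadata columns appear next to the image
```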
1,337,067,632
| 4,836
|
Is it possible to pass multiple links to a split in load script?
|
open
| 2022-08-12T11:06:11
| 2022-08-12T11:06:11
| null |
https://github.com/huggingface/datasets/issues/4836
| null |
sadrasabouri
| false
|
[] |
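The question above has a standard answer in loading scripts: `dl_manager.download` accepts a list of URLs and returns local paths in the same order, so `_split_generators` can hand several files to one split. A minimal sketch, assuming placeholder URLs and a line-per-example text format:
```python
import datasets

_URLS = ["https://example.com/part1.txt", "https://example.com/part2.txt"]


class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        paths = dl_manager.download(_URLS)  # one local path per URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepaths": paths}
            )
        ]

    def _generate_examples(self, filepaths):
        key = 0
        for path in filepaths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    yield key, {"text": line.rstrip("\n")}
                    key += 1
```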
1,336,994,835
| 4,835
|
Fix documentation card of ethos dataset
|
closed
| 2022-08-12T09:51:06
| 2022-08-12T13:13:55
| 2022-08-12T12:59:39
|
https://github.com/huggingface/datasets/pull/4835
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4835",
"html_url": "https://github.com/huggingface/datasets/pull/4835",
"diff_url": "https://github.com/huggingface/datasets/pull/4835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4835.patch",
"merged_at": "2022-08-12T12:59:39"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,336,993,511
| 4,834
|
Fix documentation card of recipe_nlg dataset
|
closed
| 2022-08-12T09:49:39
| 2022-08-12T11:28:18
| 2022-08-12T11:13:40
|
https://github.com/huggingface/datasets/pull/4834
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4834",
"html_url": "https://github.com/huggingface/datasets/pull/4834",
"diff_url": "https://github.com/huggingface/datasets/pull/4834.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4834.patch",
"merged_at": "2022-08-12T11:13:40"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,336,946,965
| 4,833
|
Fix missing tags in dataset cards
|
closed
| 2022-08-12T09:04:52
| 2022-09-22T14:41:23
| 2022-08-12T09:45:55
|
https://github.com/huggingface/datasets/pull/4833
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4833",
"html_url": "https://github.com/huggingface/datasets/pull/4833",
"diff_url": "https://github.com/huggingface/datasets/pull/4833.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4833.patch",
"merged_at": "2022-08-12T09:45:55"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,336,727,389
| 4,832
|
Fix tags in dataset cards
|
closed
| 2022-08-12T04:11:23
| 2022-08-12T04:41:55
| 2022-08-12T04:27:24
|
https://github.com/huggingface/datasets/pull/4832
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4832",
"html_url": "https://github.com/huggingface/datasets/pull/4832",
"diff_url": "https://github.com/huggingface/datasets/pull/4832.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4832.patch",
"merged_at": "2022-08-12T04:27:24"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
1,336,199,643
| 4,831
|
Add oversampling strategies to interleave datasets
|
closed
| 2022-08-11T16:24:51
| 2023-07-11T15:57:48
| 2022-08-24T16:46:07
|
https://github.com/huggingface/datasets/pull/4831
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4831",
"html_url": "https://github.com/huggingface/datasets/pull/4831",
"diff_url": "https://github.com/huggingface/datasets/pull/4831.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4831.patch",
"merged_at": "2022-08-24T16:46:07"
}
|
ylacombe
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4831). All of your documentation changes will be reflected on that endpoint.",
"Hi @lhoestq, \r\nThanks for your review! I've added the requested mention in the documentation and corrected the Error type in `interleave_datasets`. \r\nI've also added test cases in `test_arrow_dataset.py`, which was useful since it allow me to detect an error in the case of an oversampling strategy with no sampling probabilities. \r\nCould you double check this part ? I've commented the code to explain the approach.\r\nThanks!\r\n",
"@ylacombe Thanks for your effort!\r\n\r\n> Final note: I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. For example, if a very large data set has a low probability of being sampled, the final dataset may be several times the size of that large data set.\r\n\r\nMay I ask why is that, and how to solve it? In some scenarios, such as domain adaptation with limited resources, it is normal to have a big generic dataset and a small in-domain dataset.\r\n\r\nHere is an example with data sizes 8:2 and oversampling ratios 0.2:0.8\r\n\r\n```python\r\nfrom datasets import Dataset, interleave_datasets\r\n\r\nd1 = Dataset.from_dict({\"a\": [1, 2, 3, 4, 5, 6, 7, 8]})\r\nd2 = Dataset.from_dict({\"a\": [9, 10]})\r\n\r\nnew_d = interleave_datasets([d1, d2], probabilities=[0.2, 0.8], seed=42, stopping_strategy=\"all_exhausted\")\r\nprint(len(new_d))\r\nprint(new_d[\"a\"])\r\n```\r\n\r\n> 37\r\n> [9, 10, 9, 10, 1, 9, 10, 9, 2, 10, 9, 10, 9, 10, 9, 10, 9, 3, 10, 9, 10, 9, 10, 9, 10, 4, 9, 5, 6, 10, 9, 10, 9, 10, 9, 7, 8]\r\n\r\nThe ratios sampled from the two original datasets to the output dataset are correct. However, the length of the output dataset is 37, which is too big. I think it should be only large enough to make the smaller dataset similar in size to the bigger dataset. Any solution for this? Many thanks!\r\n\r\n",
"Hi @ymoslem, it's a great question and yes, it's normal to have two different-sized datasets to interleave!\r\n\r\nMy recommendation here would be to either use probabilities more biased towards the large model (e.g `[0.8, 0.2]`) so that the big dataset is exhausted more quickly, or to not use probabilities altogether - in that case, `new_d` length will be 16 (`nb_datasets*len(largest_dataset)`).\r\n\r\nLet me know if I need to be clearer!\r\n ",
"@ylacombe Many thanks for your prompt response! As we needed to implement certain oversampling experiments, we ended up using Pandas.\r\n\r\nConsidering each dataset a class with a distinct \"label\":\r\n```python\r\nimport pandas as pd\r\n\r\ndef oversample(df):\r\n classes = df.label.value_counts().to_dict()\r\n most = max(classes.values())\r\n classes_list = []\r\n for key in classes:\r\n classes_list.append(df[df['label'] == key])\r\n classes_sample = []\r\n for i in range(1,len(classes_list)):\r\n classes_sample.append(classes_list[i].sample(most, replace=True))\r\n df_maybe = pd.concat(classes_sample)\r\n final_df = pd.concat([df_maybe,classes_list[0]], axis=0)\r\n final_df = final_df.reset_index(drop=True)\r\n return final_df\r\n```\r\n[Reference](https://medium.com/analytics-vidhya/undersampling-and-oversampling-an-old-and-a-new-approach-4f984a0e8392)"
] |
1,336,177,937
| 4,830
|
Fix task tags in dataset cards
|
closed
| 2022-08-11T16:06:06
| 2022-08-11T16:37:27
| 2022-08-11T16:23:00
|
https://github.com/huggingface/datasets/pull/4830
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"merged_at": "2022-08-11T16:23:00"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
1,336,068,068
| 4,829
|
Misalignment between card tag validation and docs
|
open
| 2022-08-11T14:44:45
| 2023-07-21T15:38:02
| null |
https://github.com/huggingface/datasets/issues/4829
| null |
albertvillanova
| false
|
[
"(Note that the doc is aligned with the hub validation rules, and the \"ground truth\" is the hub validation rules given that they apply to all datasets, not just the canonical ones)",
"Instead of our own implementation, we now use `huggingface_hub`'s `DatasetCardData`, which has the correct type hint, so I think we can close this issue."
] |
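For reference, the `huggingface_hub` class the closing comment points to; a minimal sketch with illustrative tag values, assuming a recent `huggingface_hub` release:
```python
from huggingface_hub import DatasetCardData

# DatasetCardData validates and serializes the YAML metadata block of a card.
card_data = DatasetCardData(language=["en"], license="mit")
print(card_data.to_yaml())
```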
1,336,040,168
| 4,828
|
Support PIL Image objects in `add_item`/`add_column`
|
open
| 2022-08-11T14:25:45
| 2023-09-24T10:15:33
| null |
https://github.com/huggingface/datasets/pull/4828
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4828",
"html_url": "https://github.com/huggingface/datasets/pull/4828",
"diff_url": "https://github.com/huggingface/datasets/pull/4828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4828.patch",
"merged_at": null
}
|
mariosasko
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4828). All of your documentation changes will be reflected on that endpoint.",
"Hey @mariosasko could we please merge this? I'm still getting the original error at #4796 .",
"Are you planning to continue working on this?"
] |
1,335,994,312
| 4,827
|
Add license metadata to pg19
|
closed
| 2022-08-11T13:52:20
| 2022-08-11T15:01:03
| 2022-08-11T14:46:38
|
https://github.com/huggingface/datasets/pull/4827
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4827",
"html_url": "https://github.com/huggingface/datasets/pull/4827",
"diff_url": "https://github.com/huggingface/datasets/pull/4827.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4827.patch",
"merged_at": "2022-08-11T14:46:38"
}
|
julien-c
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,335,987,583
| 4,826
|
Fix language tags in dataset cards
|
closed
| 2022-08-11T13:47:14
| 2022-08-11T14:17:48
| 2022-08-11T14:03:12
|
https://github.com/huggingface/datasets/pull/4826
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4826",
"html_url": "https://github.com/huggingface/datasets/pull/4826",
"diff_url": "https://github.com/huggingface/datasets/pull/4826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4826.patch",
"merged_at": "2022-08-11T14:03:12"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
1,335,856,882
| 4,825
|
[Windows] Fix Access Denied when using os.rename()
|
closed
| 2022-08-11T11:57:15
| 2022-08-24T13:09:07
| 2022-08-24T13:09:07
|
https://github.com/huggingface/datasets/pull/4825
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4825",
"html_url": "https://github.com/huggingface/datasets/pull/4825",
"diff_url": "https://github.com/huggingface/datasets/pull/4825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4825.patch",
"merged_at": "2022-08-24T13:09:07"
}
|
DougTrajano
| true
|
[
"Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?",
"> Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?\r\n\r\nYes, I think that could be a better solution, but I didn't test it in Linux (e.g. Ubuntu) to guarantee that `os.rename()` could be completely replaced by `shutil.move()`.",
"AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)",
"> AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)\r\n\r\nalright, let me change the PR then.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4825). All of your documentation changes will be reflected on that endpoint.",
"Hi @lhoestq looks like one of the tests failed, but is not related to this change, do I need to do something from my side?"
] |
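The fix settled on in the thread: on POSIX, `shutil.move` tries `os.rename` first and only falls back to copy-and-delete, so it resolves the Windows Access Denied case without slowing down Linux. A minimal sketch; the paths are placeholders:
```python
import shutil

# Before: os.rename(src, dst) can raise PermissionError ("Access Denied")
# on Windows, e.g. while another process still holds a handle on the file.
# After: shutil.move(src, dst) attempts os.rename and falls back gracefully.
shutil.move("cache/tmp_download.incomplete", "cache/dataset.arrow")
```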
1,335,826,639
| 4,824
|
Fix titles in dataset cards
|
closed
| 2022-08-11T11:27:48
| 2022-08-11T13:46:11
| 2022-08-11T12:56:49
|
https://github.com/huggingface/datasets/pull/4824
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4824",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"merged_at": "2022-08-11T12:56:49"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
1,335,687,033
| 4,823
|
Update data URL in mkqa dataset
|
closed
| 2022-08-11T09:16:13
| 2022-08-11T09:51:50
| 2022-08-11T09:37:52
|
https://github.com/huggingface/datasets/pull/4823
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4823",
"html_url": "https://github.com/huggingface/datasets/pull/4823",
"diff_url": "https://github.com/huggingface/datasets/pull/4823.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4823.patch",
"merged_at": "2022-08-11T09:37:51"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,335,664,588
| 4,821
|
Fix train_test_split docs
|
closed
| 2022-08-11T08:55:45
| 2022-08-11T09:59:29
| 2022-08-11T09:45:40
|
https://github.com/huggingface/datasets/pull/4821
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4821",
"html_url": "https://github.com/huggingface/datasets/pull/4821",
"diff_url": "https://github.com/huggingface/datasets/pull/4821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4821.patch",
"merged_at": "2022-08-11T09:45:40"
}
|
NielsRogge
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
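For reference, a minimal usage sketch of the method whose docs this PR fixes:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
splits = ds.train_test_split(test_size=0.2, seed=42)
print(len(splits["train"]), len(splits["test"]))  # 8 2
```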
1,335,117,132
| 4,820
|
Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.
|
closed
| 2022-08-10T19:42:33
| 2022-08-10T19:53:10
| 2022-08-10T19:53:10
|
https://github.com/huggingface/datasets/issues/4820
| null |
talhaanwarch
| false
|
[
"Fixed by installing either resampy<3 or resampy>=4"
] |
1,335,064,449
| 4,819
|
Add missing language tags to resources
|
closed
| 2022-08-10T19:06:42
| 2022-08-10T19:45:49
| 2022-08-10T19:32:15
|
https://github.com/huggingface/datasets/pull/4819
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4819",
"html_url": "https://github.com/huggingface/datasets/pull/4819",
"diff_url": "https://github.com/huggingface/datasets/pull/4819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4819.patch",
"merged_at": "2022-08-10T19:32:15"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,334,941,810
| 4,818
|
Add add cc-by-sa-2.5 license tag
|
closed
| 2022-08-10T17:18:39
| 2022-10-04T13:47:24
| 2022-10-04T13:47:24
|
https://github.com/huggingface/datasets/pull/4818
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4818",
"html_url": "https://github.com/huggingface/datasets/pull/4818",
"diff_url": "https://github.com/huggingface/datasets/pull/4818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4818.patch",
"merged_at": null
}
|
polinaeterna
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4818). All of your documentation changes will be reflected on that endpoint.",
"I think we can close this PR because the `standard_licenses.tsv` file was removed from this repo and we no longer perform any dataset card validation."
] |
1,334,572,163
| 4,817
|
Outdated Link for mkqa Dataset
|
closed
| 2022-08-10T12:45:45
| 2022-08-11T09:37:52
| 2022-08-11T09:37:52
|
https://github.com/huggingface/datasets/issues/4817
| null |
liaeh
| false
|
[
"Thanks for reporting @liaeh, we are investigating this. "
] |
1,334,099,454
| 4,816
|
Update version of opus_paracrawl dataset
|
closed
| 2022-08-10T05:39:44
| 2022-08-12T14:32:29
| 2022-08-12T14:17:56
|
https://github.com/huggingface/datasets/pull/4816
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4816",
"html_url": "https://github.com/huggingface/datasets/pull/4816",
"diff_url": "https://github.com/huggingface/datasets/pull/4816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4816.patch",
"merged_at": "2022-08-12T14:17:56"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,334,078,303
| 4,815
|
Outdated loading script for OPUS ParaCrawl dataset
|
closed
| 2022-08-10T05:12:34
| 2022-08-12T14:17:57
| 2022-08-12T14:17:57
|
https://github.com/huggingface/datasets/issues/4815
| null |
albertvillanova
| false
|
[] |
1,333,356,230
| 4,814
|
Support CSV as metadata file format in AudioFolder/ImageFolder
|
closed
| 2022-08-09T14:36:49
| 2022-08-31T11:59:08
| 2022-08-31T11:59:08
|
https://github.com/huggingface/datasets/issues/4814
| null |
mariosasko
| false
|
[] |
1,333,287,756
| 4,813
|
Fix loading example in opus dataset cards
|
closed
| 2022-08-09T13:47:38
| 2022-08-09T17:52:15
| 2022-08-09T17:38:18
|
https://github.com/huggingface/datasets/pull/4813
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4813",
"html_url": "https://github.com/huggingface/datasets/pull/4813",
"diff_url": "https://github.com/huggingface/datasets/pull/4813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4813.patch",
"merged_at": "2022-08-09T17:38:18"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,333,051,730
| 4,812
|
Fix bug in function validate_type for Python >= 3.9
|
closed
| 2022-08-09T10:32:42
| 2022-08-12T13:41:23
| 2022-08-12T13:27:04
|
https://github.com/huggingface/datasets/pull/4812
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4812",
"html_url": "https://github.com/huggingface/datasets/pull/4812",
"diff_url": "https://github.com/huggingface/datasets/pull/4812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4812.patch",
"merged_at": "2022-08-12T13:27:04"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
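The PR does not spell out the bug here, but Python 3.9 changed how subscripted type hints are represented (built-in generics such as `dict[str, int]` are `types.GenericAlias` rather than `typing._GenericAlias`), which is the kind of change that breaks class-based hint checks. An illustrative sketch of the robust approach, not the actual fix:
```python
import typing

old_hint = typing.Dict[str, int]  # typing._GenericAlias
new_hint = dict[str, int]         # types.GenericAlias on Python >= 3.9

# Inspect hints via the public helpers instead of their concrete class:
for hint in (old_hint, new_hint):
    print(typing.get_origin(hint), typing.get_args(hint))  # dict (str, int) both times
```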
1,333,043,421
| 4,811
|
Bug in function validate_type for Python >= 3.9
|
closed
| 2022-08-09T10:25:21
| 2022-08-12T13:27:05
| 2022-08-12T13:27:05
|
https://github.com/huggingface/datasets/issues/4811
| null |
albertvillanova
| false
|
[] |
1,333,038,702
| 4,810
|
Add description to hellaswag dataset
|
closed
| 2022-08-09T10:21:14
| 2022-09-23T11:35:38
| 2022-09-23T11:33:44
|
https://github.com/huggingface/datasets/pull/4810
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4810",
"html_url": "https://github.com/huggingface/datasets/pull/4810",
"diff_url": "https://github.com/huggingface/datasets/pull/4810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4810.patch",
"merged_at": "2022-09-23T11:33:44"
}
|
julien-c
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Are the `metadata JSON file` not on their way to deprecation? 😆😇\r\n\r\nIMO, more generally than this particular PR, the contribution process should be simplified now that many validation checks happen on the hub side.\r\n\r\nKeeping this open in the meantime to get more potential feedback!"
] |
1,332,842,747
| 4,809
|
Complete the mlqa dataset card
|
closed
| 2022-08-09T07:38:06
| 2022-08-09T16:26:21
| 2022-08-09T13:26:43
|
https://github.com/huggingface/datasets/pull/4809
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4809",
"html_url": "https://github.com/huggingface/datasets/pull/4809",
"diff_url": "https://github.com/huggingface/datasets/pull/4809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4809.patch",
"merged_at": "2022-08-09T13:26:43"
}
|
el2e10
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your contribution, @eldhoittangeorge.\r\n> \r\n> The CI error message: https://github.com/huggingface/datasets/runs/7743526624?check_suite_focus=true\r\n> \r\n> ```\r\n> E ValueError: The following issues have been found in the dataset cards:\r\n> E YAML tags:\r\n> E __init__() missing 5 required positional arguments: 'annotations_creators', 'language_creators', 'license', 'size_categories', and 'source_datasets'\r\n> ```\r\n\r\nI will fix the CI error.",
"@eldhoittangeorge, thanks again for all the fixes. Just a minor one before we can merge this PR: https://github.com/huggingface/datasets/runs/7744885754?check_suite_focus=true\r\n```\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language_creators':\r\nE \t['unknown'] are not registered tags for 'language_creators', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/creators.json\r\n```",
"> \r\n\r\nThanks, I updated the file. \r\nA small suggestion can you mention this link https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/ in the contribution page. So that others will know the acceptable values for the tags."
] |
1,332,840,217
| 4,808
|
Add more information to the dataset card of mlqa dataset
|
closed
| 2022-08-09T07:35:42
| 2022-08-09T13:33:23
| 2022-08-09T13:33:23
|
https://github.com/huggingface/datasets/issues/4808
| null |
el2e10
| false
|
[
"#self-assign",
"Fixed by:\r\n- #4809"
] |
1,332,784,110
| 4,807
|
document fix in opus_gnome dataset
|
closed
| 2022-08-09T06:38:13
| 2022-08-09T07:28:03
| 2022-08-09T07:28:03
|
https://github.com/huggingface/datasets/pull/4807
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4807",
"html_url": "https://github.com/huggingface/datasets/pull/4807",
"diff_url": "https://github.com/huggingface/datasets/pull/4807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4807.patch",
"merged_at": null
}
|
gojiteji
| true
|
[
"Duplicate:\r\n- #4806 "
] |
1,332,664,038
| 4,806
|
Fix opus_gnome dataset card
|
closed
| 2022-08-09T03:40:15
| 2022-08-09T12:06:46
| 2022-08-09T11:52:04
|
https://github.com/huggingface/datasets/pull/4806
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4806",
"html_url": "https://github.com/huggingface/datasets/pull/4806",
"diff_url": "https://github.com/huggingface/datasets/pull/4806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4806.patch",
"merged_at": "2022-08-09T11:52:04"
}
|
gojiteji
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@gojiteji why have you closed this PR and created an identical one?\r\n- #4807 ",
"@albertvillanova \r\nI forgot to follow \"How to create a Pull\" in CONTRIBUTING.md in this branch.",
"Both are identical. And you can push additional commits to this branch.",
"I see. Thank you for your comment.",
"Anyway, @gojiteji thanks for your contribution and this fix.",
"Once you have modified the `opus_gnome` dataset card, our Continuous Integration test suite performs some tests on it that make some additional requirements: the errors that appear have nothing to do with your contribution, but with these additional quality requirements.",
"> the errors that appear have nothing to do with your contribution, but with these additional quality requirements.\r\n\r\nIs there anything I should do?",
"If you would like to address them as well in this PR, it would be awesome: https://github.com/huggingface/datasets/runs/7741104780?check_suite_focus=true\r\n",
"These are the 2 error messages:\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tNo first-level heading starting with `Dataset Card for` found in README. Skipping further validation for this README.\r\n\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language':\r\nE \t['ara', 'cat', 'foo', 'gr', 'nqo', 'tmp'] are not registered tags for 'language', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/languages.json\r\n```",
"In principle there are 2 errors:\r\n\r\nThe first one says, the title of the README does not start with `Dataset Card for`:\r\n- The README title is: `# Dataset Card Creation Guide`\r\n- According to the [template here](https://github.com/huggingface/datasets/blob/main/templates/README.md), it should be: `# Dataset Card for [Dataset Name]`",
"In relation with the languages:\r\n- you should check whether the language codes are properly spelled\r\n- and if so, adding them to our `languages.json` file, so that they are properly validated",
"Thank you for the detailed information. I'm checking it now.",
"```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tExpected some content in section `Data Instances` but it is empty.\r\nE -\tExpected some content in section `Data Fields` but it is empty.\r\nE -\tExpected some content in section `Data Splits` but it is empty.\r\n```",
"I added `ara`, `cat`, `gr`, and `nqo` to `languages.json` and removed `foo` and `tmp` from `README.md`.\r\nI also write Data Instances, Data Fields, and Data Splits in `README.md`.",
"Thanks for your investigation and fixes to the dataset card structure! I'm just making some suggestions before merging this PR: see below.",
"Should I create PR for `config.json` to add ` ara cat gr nqo` first?\r\nI think I can pass this failing after that.\r\n\r\nOr removing `ara, cat, gr, nqo, foo, tmp` from `README.md`. ",
"Once you address these issues, all the CI tests will pass.",
"Once the remaining changes are addressed (see unresolved above), we will be able to merge this:\r\n- [ ] Remove \"ara\" from README\r\n- [ ] Remove \"cat\" from README\r\n- [ ] Remove \"gr\" from README\r\n- [ ] Replace \"tmp\" with \"tyj\" in README\r\n- [ ] Add \"tyj\" to `languages.json`:\r\n ```\r\n \"tyj\": \"Tai Do; Tai Yo\",",
"I did the five changes."
] |
1,332,653,531
| 4,805
|
Wrong example in opus_gnome dataset card
|
closed
| 2022-08-09T03:21:27
| 2022-08-09T11:52:05
| 2022-08-09T11:52:05
|
https://github.com/huggingface/datasets/issues/4805
| null |
gojiteji
| false
|
[] |
1,332,630,358
| 4,804
|
streaming dataset with concatenating splits raises an error
|
open
| 2022-08-09T02:41:56
| 2023-11-25T14:52:09
| null |
https://github.com/huggingface/datasets/issues/4804
| null |
Bing-su
| false
|
[
"Hi! Only the name of a particular split (\"train\", \"test\", ...) is supported as a split pattern if `streaming=True`. We plan to address this limitation soon.",
"Hi, have you addressed this yet?",
"yes, same error occurs.\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# error\r\nrepo = \"nateraw/ade20k-tiny\"\r\ndataset = load_dataset(repo, split=\"train+validation\", streaming=True)\r\n```\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-3-a6ae02d63899>](https://localhost:8080/#) in <cell line: 5>()\r\n 3 # error\r\n 4 repo = \"nateraw/ade20k-tiny\"\r\n----> 5 dataset = load_dataset(repo, split=\"train+validation\", streaming=True)\r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)\r\n 1265 splits_generator = splits_generators[split]\r\n 1266 else:\r\n-> 1267 raise ValueError(f\"Bad split: {split}. Available splits: {list(splits_generators)}\")\r\n 1268 \r\n 1269 # Create a dataset for each of the given splits\r\n\r\nValueError: Bad split: train+validation. Available splits: ['train', 'validation']\r\n```\r\n\r\ngoogle colab, `datasets==2.12.0`\r\n```\r\n- huggingface_hub version: 0.14.1\r\n- Platform: Linux-5.10.147+-x86_64-with-glibc2.31\r\n- Python version: 3.10.11\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: /root/.cache/huggingface/token\r\n- Has saved token ?: False\r\n- Configured git credential helpers: \r\n- FastAI: 2.7.12\r\n- Tensorflow: 2.12.0\r\n- Torch: 2.0.0+cu118\r\n- Jinja2: 3.1.2\r\n- Graphviz: 0.20.1\r\n- Pydot: 1.4.2\r\n- Pillow: 8.4.0\r\n- hf_transfer: N/A\r\n- gradio: N/A\r\n- ENDPOINT: https://huggingface.co/\r\n- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub\r\n- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /root/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n```\r\n",
"Hi!, still not fixed this, the truth is that it is an important update for what we want to train the entire dataset because we want to train fast, also should be enabled the function \"[train:18%]\" for streaming"
] |
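A workaround for the limitation discussed above is to load the splits separately and concatenate them, assuming a datasets version where `concatenate_datasets` accepts iterable (streaming) datasets; recent releases do:
```python
from datasets import concatenate_datasets, load_dataset

train = load_dataset("nateraw/ade20k-tiny", split="train", streaming=True)
val = load_dataset("nateraw/ade20k-tiny", split="validation", streaming=True)

combined = concatenate_datasets([train, val])  # streams train, then validation
print(next(iter(combined)))
```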
1,332,079,562
| 4,803
|
Support `pipeline` argument in inspect.py functions
|
open
| 2022-08-08T16:01:24
| 2023-09-25T12:21:35
| null |
https://github.com/huggingface/datasets/issues/4803
| null |
severo
| false
|
[
"Now: the preview (first-rows) works, but not the conversion to parquet. See https://huggingface.co/datasets/wikipedia/viewer/20220301.de/train\r\n\r\n```\r\n_split_generators() missing 1 required positional argument: 'pipeline'\r\n\r\nError code: UnexpectedError\r\n```"
] |
1,331,676,691
| 4,802
|
`with_format` behavior is inconsistent on different datasets
|
open
| 2022-08-08T10:41:34
| 2022-08-09T16:49:09
| null |
https://github.com/huggingface/datasets/issues/4802
| null |
fxmarty
| false
|
[
"Hi! You can get a `torch.Tensor` if you do the following:\r\n```python\r\nraw = load_dataset(\"beans\", split=\"train\")\r\nraw = raw.select(range(100))\r\n\r\npreprocessor = AutoFeatureExtractor.from_pretrained(\"nateraw/vit-base-beans\")\r\n\r\nfrom datasets import Array3D\r\nfeatures = raw.features.copy()\r\nfeatures[\"pixel_values\"] = datasets.Array3D(shape=(3, 224, 224), dtype=\"float32\")\r\n\r\ndef preprocess_func(examples):\r\n imgs = [img.convert(\"RGB\") for img in examples[\"image\"]]\r\n return preprocessor(imgs)\r\n\r\ndata = raw.map(preprocess_func, batched=True, features=features)\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n\r\ndata = data.with_format(\"torch\", columns=[\"pixel_values\"])\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n```\r\n\r\nThe reason for this \"inconsistency\" in the default case is the way PyArrow infers the type of multi-dim arrays (in this case, the `pixel_values` column). If the type is not specified manually, PyArrow assumes it is a dynamic-length sequence (it needs to know the type before writing the first batch to a cache file, and it can't be sure the array is fixed ahead of time; `ArrayXD` is our way of telling that the dims are fixed), so it already fails to convert the corresponding array to NumPy properly (you get an array of `np.object` arrays). And `with_format(\"torch\")` replaces NumPy arrays with Torch tensors, so this bad formatting propagates."
] |
1,331,337,418
| 4,801
|
Fix fine classes in trec dataset
|
closed
| 2022-08-08T05:11:02
| 2022-08-22T16:29:14
| 2022-08-22T16:14:15
|
https://github.com/huggingface/datasets/pull/4801
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4801",
"html_url": "https://github.com/huggingface/datasets/pull/4801",
"diff_url": "https://github.com/huggingface/datasets/pull/4801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4801.patch",
"merged_at": "2022-08-22T16:14:15"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,331,288,128
| 4,800
|
support LargeListArray in pyarrow
|
closed
| 2022-08-08T03:58:46
| 2024-09-27T09:54:17
| 2024-08-12T14:43:46
|
https://github.com/huggingface/datasets/pull/4800
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4800",
"html_url": "https://github.com/huggingface/datasets/pull/4800",
"diff_url": "https://github.com/huggingface/datasets/pull/4800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4800.patch",
"merged_at": null
}
|
Jiaxin-Wen
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4800). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this! Can you run `make style` at the repo root to fix the code quality error in CI and add a test?",
"Hi, I have fixed the code quality error and added a test",
"It seems that CI fails due to the lack of memory for allocating a large array, while I pass the test locally.",
"Also, the current implementation of the NumPy-to-PyArrow conversion creates a lot of copies, which is not ideal for large arrays.\r\n\r\nWe can improve performance significantly if we rewrite this part:\r\nhttps://github.com/huggingface/datasets/blob/83f695c14507a3a38e9f4d84612cf49e5f50c153/src/datasets/features/features.py#L1322-L1323\r\n\r\nas\r\n```python\r\n values = pa.array(arr.ravel(), type=type) \r\n```",
"@xwwwwww Feel free to ignore https://github.com/huggingface/datasets/pull/4800#issuecomment-1212280549 and revert the changes you've made to address it. \r\n\r\nWithout copying the array, this would be possible:\r\n```python\r\narr = np.array([\r\n [1, 2, 3],\r\n [4, 5, 6]\r\n])\r\n\r\ndset = Dataset.from_dict({\"data\": [arr]})\r\n\r\narr[0][0] = 100 # this change would be reflected in dset's PyArrow table -> a breaking change and also probably unexpected by the user \r\n```",
"> @xwwwwww Feel free to ignore [#4800 (comment)](https://github.com/huggingface/datasets/pull/4800#issuecomment-1212280549) and revert the changes you've made to address it.\r\n> \r\n> Without copying the array, this would be possible:\r\n> \r\n> ```python\r\n> arr = np.array([\r\n> [1, 2, 3],\r\n> [4, 5, 6]\r\n> ])\r\n> \r\n> dset = Dataset.from_dict({\"data\": [arr]})\r\n> \r\n> arr[0][0] = 100 # this change would be reflected in dset's PyArrow table -> a breaking change and also probably unexpected by the user \r\n> ```\r\n\r\nOh, that makes sense.",
"passed tests in ubuntu while failed in windows",
"@mariosasko Hi, do you have any clue about this failure in windows?",
"Perhaps we can skip the added test on Windows then.\r\n\r\nNot sure if this can help, but the ERR tool available on Windows outputs the following for the returned error code `-1073741819`:\r\n```\r\n# for decimal -1073741819 / hex 0xc0000005\r\n ISCSI_ERR_SETUP_NETWORK_NODE iscsilog.h\r\n# Failed to setup initiator portal. Error status is given in\r\n# the dump data.\r\n STATUS_ACCESS_VIOLATION ntstatus.h\r\n# The instruction at 0x%p referenced memory at 0x%p. The\r\n# memory could not be %s.\r\n USBD_STATUS_DEV_NOT_RESPONDING usb.h\r\n# as an HRESULT: Severity: FAILURE (1), FACILITY_NONE (0x0), Code 0x5\r\n# for decimal 5 / hex 0x5\r\n WINBIO_FP_TOO_FAST winbio_err.h\r\n# Move your finger more slowly on the fingerprint reader.\r\n# as an HRESULT: Severity: FAILURE (1), FACILITY_NULL (0x0), Code 0x5\r\n ERROR_ACCESS_DENIED winerror.h\r\n# Access is denied.\r\n# 5 matches found for \"-1073741819\"\r\n```",
"What's the proper way to skip the added test in windows?\r\nI tried `if platform.system() == 'Linux'`, but the CI test seems stuck",
"@mariosasko Hi, any idea about this :)",
"Hi again! We want to skip the test on Windows but not on Linux. You can use this decorator to do so: \r\n```python\r\n@pytest.mark.skipif(os.name == \"nt\" and (os.getenv(\"CIRCLECI\") == \"true\" or os.getenv(\"GITHUB_ACTIONS\") == \"true\"), reason=\"The Windows CI runner does not have enough RAM to run this test\")\r\n@pytest.mark.parametrize(...)\r\ndef test_large_array_xd_with_np(...):\r\n ...\r\n```",
"> Hi again! We want to skip the test on Windows but not on Linux. You can use this decorator to do so:\r\n> \r\n> ```python\r\n> @pytest.mark.skipif(os.name == \"nt\" and (os.getenv(\"CIRCLECI\") == \"true\" or os.getenv(\"GITHUB_ACTIONS\") == \"true\"), reason=\"The Windows CI runner does not have enough RAM to run this test\")\r\n> @pytest.mark.parametrize(...)\r\n> def test_large_array_xd_with_np(...):\r\n> ...\r\n> ```\r\n\r\nCI on windows still stucks :(",
"@mariosasko Hi, could you please take a look at this issue",
"@mariosasko Hi, all checks have passed, and we are finally ready to merge this PR :)",
"@lhoestq @albertvillanova Perhaps other maintainers can take a look and merge this PR :)",
"same issus come from pyarrow.Is there a solution for this?\r\nfile parquet:50GB\r\ndatasets version: 2.14.4\r\npyarrow :12.0.1\r\n\r\nGenerating train split: 0 examples [01:22, ? examples/s]\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.10/site-packages/datasets/builder.py\", line 1925, in _prepare_split_single\r\n for _, table in generator:\r\n File \"/opt/conda/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 79, in _generate_tables\r\n for batch_idx, record_batch in enumerate(\r\n File \"pyarrow/_parquet.pyx\", line 1315, in iter_batches\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\nOSError: List index overflow.",
"when this feature adds to the newest version?",
"LargeListArray support is not ready yet, there is one remaining change:\r\n\r\n> I think the key is to add the large parameter to Sequence and update the functions you modified in this PR to use pa.list_() if large is False, and pa.large_list otherwise",
"Gents, any move on this. Convert largse list of dicts to Datasets is a nightmare and took all RAM possible. Is there any other alternative?\r\n\r\nThanks,\r\nSteve\r\n",
"Arrow large_list is supported since datasets 2.21.0. See: https://github.com/huggingface/datasets/releases/tag/2.21.0\r\n- #7019"
] |
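Background for the overflow reports in this thread: `pa.list_` stores offsets as int32, so a column whose flattened values exceed 2**31 - 1 elements overflows ("List index overflow"), while `pa.large_list` uses int64 offsets. A minimal sketch of the type difference:
```python
import pyarrow as pa

small = pa.list_(pa.int64())       # int32 offsets: total element count is capped
large = pa.large_list(pa.int64())  # int64 offsets: no practical cap

arr = pa.array([[1, 2, 3], [4, 5, 6]], type=large)
print(arr.type)  # large_list<item: int64>
```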
1,330,889,854
| 4,799
|
video dataset loader/parser
|
closed
| 2022-08-07T01:54:12
| 2023-10-01T00:08:31
| 2022-08-09T16:42:51
|
https://github.com/huggingface/datasets/issues/4799
| null |
verbiiyo
| false
|
[
"Hi! We've just started discussing the video support in `datasets` (decoding backends, video feature type, etc.), so I believe we should have something tangible by the end of this year.\r\n\r\nAlso, if you have additional video features in mind that you would like to see, feel free to let us know",
"Coool thanks @mariosasko ",
"Hey @mariosasko, I was wondering if there's a way to load video data currently in the library? \r\nAlternatively is there a way I could hack it through the dataset.from_dict() method? I tried to hack it, but the issue I run into is that earlier I was doing a `cast_column()` call for the `Image` feature, but now I'm not sure about to do if I want the dataset to have the following keys when I call from_dict on it:\r\n`{\"caption\":[list of text captions], \"video_frames\": [list of image lists with one image list corresponding to one video]}`\r\n\r\nMaybe something like `cast_column(\"video_frames\", List(Image))` ..\r\n(This is assuming I have already extracted frames from video)"
] |
1,330,699,942
| 4,798
|
Shard generator
|
closed
| 2022-08-06T09:14:06
| 2022-10-03T15:35:10
| 2022-10-03T15:35:10
|
https://github.com/huggingface/datasets/pull/4798
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4798",
"html_url": "https://github.com/huggingface/datasets/pull/4798",
"diff_url": "https://github.com/huggingface/datasets/pull/4798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4798.patch",
"merged_at": null
}
|
marianna13
| true
|
[
"Hi, thanks!\r\n\r\n> I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that will allow to \"split\" these large datasets into chunks with equal size\r\n\r\n`map`, the method we use for processing in `datasets`, already does that if `batched=True`. And you can control the batch size with `batch_size`.\r\n\r\n> Even better - be able to run through these chunks one by one in simple and convenient way\r\n\r\nIt's not hard to do this \"manually\" with the existing API:\r\n```python\r\nbatch_size = <BATCH_SIZE>\r\nfor i in range(len(dset) // batch_size)\r\n shard = dset[i * batch_size:(i+1) * batch_size] # a dict of lists\r\n shard = Dataset.from_dict(shard)\r\n```\r\n(should be of similar performance to your implementation)\r\n\r\nStill, I think an API like that could be useful if implemented efficiently (see [this](https://discuss.huggingface.co/t/why-is-it-so-slow-to-access-data-through-iteration-with-hugginface-dataset/20385) discussion to understand what's the issue with `select`/`__getitem__` on which your implementation relies on), which can be done with `pa.Table.to_reader` in PyArrow 8.0.0+, .\r\n\r\n@lhoestq @albertvillanova wdyt? We could use such API to efficiently iterate over the batches in `map` before processing them.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4798). All of your documentation changes will be reflected on that endpoint.",
"This is more efficient since it doesn't bring the data in memory:\r\n```python\r\nfor i in range(len(dset) // batch_size)\r\n start = i * batch_size\r\n end = min((i+1) * batch_size, len(dset))\r\n shard = dset.select(range(start, end))\r\n```\r\n\r\n@marianna13 can you give more details on when it would be handy to have this shard generator ?",
"> This is more efficient since it doesn't bring the data in memory:\r\n> \r\n> ```python\r\n> for i in range(len(dset) // batch_size)\r\n> start = i * batch_size\r\n> end = min((i+1) * batch_size, len(dset))\r\n> shard = dset.select(range(start, end))\r\n> ```\r\n> \r\n> @marianna13 can you give more details on when it would be handy to have this shard generator ?\r\n\r\nSure! I used such generator when I needed to process a very large dataset (>1TB) in parallel, I've found out empirically that it's much more efficient to do that by processing only one part of the dataset with the shard generator. I tried to use a map with batching but it causesd oom errors, I tried to use the normal shard and here's what I came up with. So I thought it might be helpful to someone else!",
"I see thanks ! `map` should work just fine even at this scale, feel free to open an issue if you'd like to discuss your OOM issue.\r\n\r\nRegarding `shard_generator`, since it is pretty straightforward to get shards I'm not sure we need that extra Dataset method",
"Hi again! We've just added `_iter_batches(batch_size)` to the `Dataset` API for fast iteration over batches/chunks, so I think we can close this PR. Compared to this implementation, `_iter_batches` leverages `pa.Table.to_reader` for chunking, which makes it significantly faster."
] |
1,330,000,998
| 4,797
|
Torgo dataset creation
|
closed
| 2022-08-05T14:18:26
| 2022-08-09T18:46:00
| 2022-08-09T18:46:00
|
https://github.com/huggingface/datasets/pull/4797
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4797",
"html_url": "https://github.com/huggingface/datasets/pull/4797",
"diff_url": "https://github.com/huggingface/datasets/pull/4797.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4797.patch",
"merged_at": null
}
|
YingLi001
| true
|
[
"Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). You have the instructions in our docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Create a dataset card](https://huggingface.co/docs/datasets/dataset_card)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nFeel free to ask if you need any additional support/help."
] |
1,329,887,810
| 4,796
|
ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB when adding image to Dataset
|
open
| 2022-08-05T12:41:19
| 2024-11-29T16:35:17
| null |
https://github.com/huggingface/datasets/issues/4796
| null |
NielsRogge
| false
|
[
"@mariosasko I'm getting a similar issue when creating a Dataset from a Pandas dataframe, like so:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Image, Value\r\nimport pandas as pd\r\nimport requests\r\nimport PIL\r\n\r\n# we need to define the features ourselves\r\nfeatures = Features({\r\n 'a': Value(dtype='int32'),\r\n 'b': Image(),\r\n})\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = PIL.Image.open(requests.get(url, stream=True).raw)\r\n\r\ndf = pd.DataFrame({\"a\": [1, 2], \r\n \"b\": [image, image]})\r\n\r\ndataset = Dataset.from_pandas(df, features=features) \r\n```\r\nresults in \r\n\r\n```\r\nArrowInvalid: ('Could not convert <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7F7991A15C10> with type JpegImageFile: did not recognize Python value type when inferring an Arrow data type', 'Conversion failed for column b with type object')\r\n```\r\n\r\nWill the PR linked above also fix that?",
"I would expect this to work, but it doesn't. Shouldn't be too hard to fix tho (in a subsequent PR).",
"Hi @mariosasko just wanted to check in if there is a PR to follow for this. I was looking to create a demo app using this. If it's not working I can just use byte encoded images in the dataset which are not displayed. ",
"Hi @darraghdog! No PR yet, but I plan to fix this before the next release.",
"I was just pointed here by @mariosasko, meanwhile I found a workaround using `encode_example` like so:\r\n\r\n```\r\nfrom datasets import load_from_disk, Dataset\r\nDATASET_PATH = \"/hf/m4-master/data/cm4/cm4-10000-v0.1\"\r\nds1 = load_from_disk(DATASET_PATH)\r\nds2 = Dataset.from_dict(mapping={k: [] for k in ds1[99].keys()},\r\n features=ds1.features\r\n)\r\nfor i in range(2):\r\n # could add several representative items here\r\n row = ds1[99]\r\n row_encoded = ds2.features.encode_example(row)\r\n ds2 = ds2.add_item(row_encoded)\r\n```",
"Hmm, interesting. If I create the dataset on the fly:\r\n\r\n```\r\nfrom datasets import load_from_disk, Dataset\r\nDATASET_PATH = \"/hf/m4-master/data/cm4/cm4-10000-v0.1\"\r\nds1 = load_from_disk(DATASET_PATH)\r\nds2 = Dataset.from_dict(mapping={k: [v]*2 for k, v in ds1[99].items()},\r\n features=ds1.features)\r\n```\r\n\r\nit doesn't fail with the error in the OP, as `from_dict` performs `encode_batch`.\r\n\r\nHowever if I try to use this dataset it fails now with:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/multiprocess/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 524, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/fingerprint.py\", line 480, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2775, in _map_single\r\n batch = apply_function_on_filtered_inputs(\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2655, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2347, in decorated\r\n result = f(decorated_item, *args, **kwargs)\r\n File \"debug_leak2.py\", line 235, in split_pack_and_pad\r\n images.append(image_transform(image.convert(\"RGB\")))\r\nAttributeError: 'dict' object has no attribute 'convert'\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"debug_leak2.py\", line 418, in <module>\r\n train_loader, val_loader = get_dataloaders()\r\n File \"debug_leak2.py\", line 348, in get_dataloaders\r\n dataset = dataset.map(mapper, batch_size=32, batched=True, remove_columns=dataset.column_names, num_proc=4)\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 2500, in map\r\n transformed_shards[index] = async_result.get()\r\n File \"/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/multiprocess/pool.py\", line 771, in get\r\n raise self._value\r\nAttributeError: 'dict' object has no attribute 'convert'\r\n```\r\n\r\nbut if I create that same dataset one item at a time as in the previous comment's code snippet it doesn't fail.\r\n\r\nThe features of this dataset are set to:\r\n\r\n```\r\n{'texts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), \r\n'images': Sequence(feature=Image(decode=True, id=None), length=-1, id=None)}\r\n```",
"> @mariosasko I'm getting a similar issue when creating a Dataset from a Pandas dataframe, like so:\r\n> \r\n> ```\r\n> from datasets import Dataset, Features, Image, Value\r\n> import pandas as pd\r\n> import requests\r\n> import PIL\r\n> \r\n> # we need to define the features ourselves\r\n> features = Features({\r\n> 'a': Value(dtype='int32'),\r\n> 'b': Image(),\r\n> })\r\n> \r\n> url = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\n> image = PIL.Image.open(requests.get(url, stream=True).raw)\r\n> \r\n> df = pd.DataFrame({\"a\": [1, 2], \r\n> \"b\": [image, image]})\r\n> \r\n> dataset = Dataset.from_pandas(df, features=features) \r\n> ```\r\n> \r\n> results in\r\n> \r\n> ```\r\n> ArrowInvalid: ('Could not convert <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7F7991A15C10> with type JpegImageFile: did not recognize Python value type when inferring an Arrow data type', 'Conversion failed for column b with type object')\r\n> ```\r\n> \r\n> Will the PR linked above also fix that?\r\n\r\nIt looks like the problem still exists.\r\nAny news ? Any good workaround ?\r\n\r\nThank you",
"There is a workaround: \r\nCreate a loader python scrypt and upload the dataset to huggingface.\r\n\r\nHere is an example how to do that:\r\n\r\nhttps://huggingface.co/datasets/jamescalam/image-text-demo/tree/main\r\n\r\nand Here are videos with explanations:\r\n\r\nhttps://www.youtube.com/watch?v=lqK4ocAKveE and https://www.youtube.com/watch?v=ODdKC30dT8c",
"cc @mariosasko gentle ping for a fix :)",
"Any update on this? I'm still facing this issure. Any workaround?",
"I was facing the same issue. Downgrading datasets from 2.11.0 to 2.4.0 solved the issue. ",
"> Any update on this? I'm still facing this issure. Any workaround?\r\n\r\nI was able to resolve my issue with a quick workaround: \r\n\r\n```\r\nfrom collections import defaultdict\r\nfrom datasets import Dataset\r\n \r\ndata = defaultdict(list)\r\nfor idx in tqdm(range( len(dataloader)),desc=\"Captioning...\"):\r\n img = dataloader[idx]\r\n data['image'].append(img)\r\n data['text'].append(f\"{img_{idx}})\r\n \r\ndataset = Dataset.from_dict(data)\r\ndataset = dataset.filter(lambda example: example['image'] is not None)\r\ndataset = dataset.filter(lambda example: example['text'] is not None)\r\n \r\ndataset.push_to_hub(path-to-repo', private=False)\r\n```\r\n\r\nHope it helps!\r\nHappy coding",
"> > Any update on this? I'm still facing this issure. Any workaround?\r\n> \r\n> I was able to resolve my issue with a quick workaround:\r\n> \r\n> ```\r\n> from collections import defaultdict\r\n> from datasets import Dataset\r\n> \r\n> data = defaultdict(list)\r\n> for idx in tqdm(range( len(dataloader)),desc=\"Captioning...\"):\r\n> img = dataloader[idx]\r\n> data['image'].append(img)\r\n> data['text'].append(f\"{img_{idx}})\r\n> \r\n> dataset = Dataset.from_dict(data)\r\n> dataset = dataset.filter(lambda example: example['image'] is not None)\r\n> dataset = dataset.filter(lambda example: example['text'] is not None)\r\n> \r\n> dataset.push_to_hub(path-to-repo', private=False)\r\n> ```\r\n> \r\n> Hope it helps! Happy coding\r\n\r\nIt works!! ",
"> \r\n\r\nhow did this work, how to use this script or where to paste it?",
"I had a similar issue to @NielsRogge where I was unable to create a dataset from a Pandas DataFrame containing PIL.Images.\r\n\r\nI found another workaround that works in this case which involves converting the DataFrame to a python dictionary, and then creating a dataset from said python dictionary.\r\n\r\nThis is a generic example of my workaround. The example assumes that you have your data in a Pandas DataFrame variable called \"dataframe\" plus a dictionary of your data's features in a variable called \"features\".\r\n```\r\nimport datasets\r\n\r\ndictionary = dataframe.to_dict(orient='list')\r\ndataset = datasets.Dataset.from_dict(dictionary, features=features)\r\n```",
"cc @mariosasko this issue has been open for 2 years, would be great to resolve it :)",
"I have the same issue, my current workaround is saving the dataframe to a csv and then loading the dataset from the csv. Would also appreciate it a fix :)",
"> data = defaultdict(list)\r\n\r\nawesome, it really works~",
"I found something that can be used as solution.\r\n\r\nI have the same problem when I've try to load the images from a pamdas dataset\r\n\r\nIf you have all on a pandas dataset try \r\nDataset.from_dict( your_df.reset_index(drop=True).to_dict(orient='list'), split=set_your_split)\r\n\r\nAnd this avoid the error"
] |
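For reference, a minimal sketch of the `from_dict` workaround discussed in this thread. It works because, as noted above, `from_dict` encodes examples against the declared features before handing them to Arrow; the in-memory image here is illustrative:

```python
import PIL.Image
from datasets import Dataset, Features, Image, Value

# from_dict runs the feature encoding step, so PIL images are converted to
# the Image feature instead of tripping Arrow's type inference
image = PIL.Image.new("RGB", (16, 16), color="red")
features = Features({"a": Value("int32"), "b": Image()})

dataset = Dataset.from_dict({"a": [1, 2], "b": [image, image]}, features=features)
print(dataset[0]["b"])  # decoded back into a PIL.Image
```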
1,329,525,732
| 4,795
|
Missing MBPP splits
|
closed
| 2022-08-05T06:51:01
| 2022-09-13T12:27:24
| 2022-09-13T12:27:24
|
https://github.com/huggingface/datasets/issues/4795
| null |
stadlerb
| false
|
[
"Thanks for reporting this as well, @stadlerb.\r\n\r\nI suggest waiting for the answer of the data owners... ",
"@albertvillanova The first author of the paper responded to the upstream issue:\r\n> Task IDs 11-510 are the 500 test problems. We use 90 problems (511-600) for validation and then remaining 374 for fine-tuning (601-974). The other problems can be used as desired, either for training or few-shot prompting (although this should be specified).",
"Thanks for the follow-up, @stadlerb.\r\n\r\nWould you be willing to open a Pull Request to address this issue? :wink: ",
"Opened a [PR](https://github.com/huggingface/datasets/pull/4943) to implement this--lmk if you have any feedback"
] |
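A minimal sketch of the split boundaries quoted above. The config/split names and the presence of a `task_id` column are assumptions about the loaded dataset; the linked PR may expose named splits directly:

```python
from datasets import load_dataset

# hypothetical single-split load, before named splits exist in the loader;
# assumes one split exposing a task_id column covering ids 1-974
ds = load_dataset("mbpp", "full", split="test")

test = ds.filter(lambda ex: 11 <= ex["task_id"] <= 510)        # 500 problems
validation = ds.filter(lambda ex: 511 <= ex["task_id"] <= 600) # 90 problems
train = ds.filter(lambda ex: 601 <= ex["task_id"] <= 974)      # 374 problems
```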
1,328,593,929
| 4,792
|
Add DocVQA
|
open
| 2022-08-04T13:07:26
| 2022-08-08T05:31:20
| null |
https://github.com/huggingface/datasets/issues/4792
| null |
NielsRogge
| false
|
[
"Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```"
] |
1,328,571,064
| 4,791
|
Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english
|
closed
| 2022-08-04T12:49:16
| 2022-08-04T13:43:16
| 2022-08-04T13:43:16
|
https://github.com/huggingface/datasets/issues/4791
| null |
xplip
| false
|
[
"Thanks for reporting. It's a known issue that should be fixed soon. Meanwhile, I had to manually trigger the dataset viewer. It's OK now.\r\nNote that the extreme aspect ratio of the images generates another issue, that we're inspecting."
] |
1,328,546,904
| 4,790
|
Issue with fine classes in trec dataset
|
closed
| 2022-08-04T12:28:51
| 2022-08-22T16:14:16
| 2022-08-22T16:14:16
|
https://github.com/huggingface/datasets/issues/4790
| null |
albertvillanova
| false
|
[] |
1,328,409,253
| 4,789
|
Update doc upload_dataset.mdx
|
closed
| 2022-08-04T10:24:00
| 2022-09-09T16:37:10
| 2022-09-09T16:34:58
|
https://github.com/huggingface/datasets/pull/4789
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4789",
"html_url": "https://github.com/huggingface/datasets/pull/4789",
"diff_url": "https://github.com/huggingface/datasets/pull/4789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4789.patch",
"merged_at": "2022-09-09T16:34:58"
}
|
mishig25
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,328,246,021
| 4,788
|
Fix NonMatchingChecksumError in mbpp dataset
|
closed
| 2022-08-04T08:17:40
| 2022-08-04T17:34:00
| 2022-08-04T17:21:01
|
https://github.com/huggingface/datasets/pull/4788
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4788",
"html_url": "https://github.com/huggingface/datasets/pull/4788",
"diff_url": "https://github.com/huggingface/datasets/pull/4788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4788.patch",
"merged_at": "2022-08-04T17:21:01"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the quick response! Before noticing that you already had implemented the fix, I already had implemened my own version. I'd also suggest bumping the major version because the contents of the dataset changed, even if only slightly.\r\nI'll attach my version of the affected files: [mbpp-checksum-changes.zip](https://github.com/huggingface/datasets/files/9258161/mbpp-checksum-changes.zip).",
"Hi @stadlerb, thanks for your feedback.\r\n\r\nWe normally update the major version whenever there is a new dataset release, usually with a breaking change in schema. The patch version is updated whenever there is a small correction in the dataset that does not change its schema.\r\n\r\nAs a side note for future contributions, please note that this dataset is hosted in our library GitHub repository. Therefore, the PRs to GitHub-hosted datasets needs being done through GitHub.\r\n\r\nCurrently added datasets are hosted on the Hub and for them, PRs can be done through the Hub.",
"I just noticed another problem with the dataset: The [GitHub page](https://github.com/google-research/google-research/tree/master/mbpp) and the [paper](http://arxiv.org/abs/2108.07732) mention a train-test split, which is not reflected in the dataloader. I'll open a new issue regarding this later."
] |
1,328,243,911
| 4,787
|
NonMatchingChecksumError in mbpp dataset
|
closed
| 2022-08-04T08:15:51
| 2022-08-04T17:21:01
| 2022-08-04T17:21:01
|
https://github.com/huggingface/datasets/issues/4787
| null |
albertvillanova
| false
|
[] |
1,327,340,828
| 4,786
|
.save_to_disk('path', fs=s3) TypeError
|
closed
| 2022-08-03T14:49:29
| 2022-08-03T15:23:00
| 2022-08-03T15:23:00
|
https://github.com/huggingface/datasets/issues/4786
| null |
h-k-dev
| false
|
[] |
1,327,225,826
| 4,785
|
Require torchaudio<0.12.0 in docs
|
closed
| 2022-08-03T13:32:00
| 2022-08-03T15:07:43
| 2022-08-03T14:52:16
|
https://github.com/huggingface/datasets/pull/4785
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4785",
"html_url": "https://github.com/huggingface/datasets/pull/4785",
"diff_url": "https://github.com/huggingface/datasets/pull/4785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4785.patch",
"merged_at": "2022-08-03T14:52:16"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,326,395,280
| 4,784
|
Add Multiface dataset
|
open
| 2022-08-02T21:00:22
| 2022-08-08T14:42:36
| null |
https://github.com/huggingface/datasets/issues/4784
| null |
osanseviero
| false
|
[
"Hi @osanseviero I would like to add this dataset.",
"Hey @nandwalritik! Thanks for offering to help!\r\n\r\nThis dataset might be somewhat complex and I'm concerned about it being 65 TB, which would be quite expensive to host. @lhoestq @mariosasko I would love your input if you think it's worth adding this dataset.",
"Thanks for proposing this interesting dataset, @osanseviero.\r\n\r\nPlease note that the data files are already hosted in a third-party server: e.g. the index of data files for entity \"6795937\" is at https://fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com/MugsyDataRelease/v0.0/identities/6795937/index.html \r\n- audio.tar: https://fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com/MugsyDataRelease/v0.0/identities/6795937/audio.tar\r\n- ...\r\n\r\nTherefore, in principle, we don't need to host them on our Hub: it would be enough to just implement a loading script in the corresponding Hub dataset repo, e.g. \"facebook/multiface\"..."
] |
1,326,375,011
| 4,783
|
Docs for creating a loading script for image datasets
|
closed
| 2022-08-02T20:36:03
| 2022-09-09T17:08:14
| 2022-09-07T19:07:34
|
https://github.com/huggingface/datasets/pull/4783
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4783",
"html_url": "https://github.com/huggingface/datasets/pull/4783",
"diff_url": "https://github.com/huggingface/datasets/pull/4783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4783.patch",
"merged_at": "2022-09-07T19:07:34"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"IMO it would make more sense to add a \"Create image dataset\" page with two main sections - a no-code approach with `imagefolder` + metadata (preferred way), and with a loading script (advanced). It should be clear when to choose which. If we leave this as-is, the user who jumps straight to the Vision section could be under the impression that writing a loading script is the preferred way to share a vision dataset due to how this subsection starts:\r\n```\r\nWrite a dataset loading script to share a dataset.\r\n```\r\n \r\nAlso, I think a note explaining how to make a dataset gated/disable the viewer to hide the data would be beneficial (it's pretty common to require submitting a form to access a CV dataset).",
"Great suggestion @mariosasko! I added your suggestions, let me know what you think. For gated dataset access, I just added a tip referring users to the relevant docs since it's more of a Hub feature than `datasets` feature.",
"Thanks, looks much better now :). I would also move the sections explaining how to create an `imagefolder` for the specific task from the [loading page](https://raw.githubusercontent.com/huggingface/datasets/main/docs/source/image_load.mdx) to this one. IMO it makes more sense to have the basic info (imagefolder structure + `load_dataset` call) there + a link to this page for info on how to create an image folder dataset.",
"Good idea! Moved everything about `imagefolder` + metadata to the create an image dataset section since the `load_dataset` call is the same for different computer vision tasks. ",
"Thanks for all the feedbacks! 🥰\r\n\r\nWhat do you think about creating how to share an `ImageFolder` dataset in a separate PR? I think we should create a new section under `Vision` for how to share an image dataset.",
"I love it thanks ! I think moving forward we can use CSV instead of JSON Lines in the docs ;)"
] |
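For readers landing here, a minimal sketch of the no-code `imagefolder` approach the new page recommends (the directory layout is illustrative):

```python
from datasets import load_dataset

# expects a layout like path/to/folder/{train,test}/{class_name}/image.jpg;
# an optional metadata.jsonl (or metadata.csv) can attach extra columns
dataset = load_dataset("imagefolder", data_dir="path/to/folder")
```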
1,326,247,158
| 4,782
|
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648
|
closed
| 2022-08-02T18:36:05
| 2022-08-22T09:46:28
| 2022-08-20T02:11:53
|
https://github.com/huggingface/datasets/issues/4782
| null |
conceptofmind
| false
|
[
"Thanks for reporting @conceptofmind.\r\n\r\nCould you please give details about your environment? \r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```",
"Hi @albertvillanova ,\r\n\r\nHere is the environment information:\r\n```\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.27\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2\r\n```\r\nThanks,\r\n\r\nEnrico",
"I think this issue is solved here https://discuss.huggingface.co/t/minhash-deduplication/19992/12?u=loubnabnl, this only happens for very large datasets we will update it in CodeParrot code",
"Hi @loubnabnl,\r\n\r\nYes, the issue is solved in the discussion thread.\r\n\r\nI will close this issue.\r\n\r\nThank you again for all of your help.\r\n\r\nEnrico",
"Thanks @loubnabnl for pointing out the solution to this issue."
] |
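The linked discussion has the actual fix; as a general note, one common mitigation for `ArrowCapacityError` (hedged, and not necessarily what the CodeParrot code adopted) is to keep `map`'s batches small enough that no single Arrow array approaches the 2 GiB limit:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["some long document"] * 1_000})

# smaller batch_size/writer_batch_size keep individual Arrow arrays small;
# the values here are illustrative and should be tuned to the document sizes
ds = ds.map(lambda batch: batch, batched=True, batch_size=100, writer_batch_size=100)
```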
1,326,114,161
| 4,781
|
Fix label renaming and add a battery of tests
|
closed
| 2022-08-02T16:42:07
| 2022-09-12T11:27:06
| 2022-09-12T11:24:45
|
https://github.com/huggingface/datasets/pull/4781
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4781",
"html_url": "https://github.com/huggingface/datasets/pull/4781",
"diff_url": "https://github.com/huggingface/datasets/pull/4781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4781.patch",
"merged_at": "2022-09-12T11:24:45"
}
|
Rocketknight1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Why don't we deprecate label renaming already instead ?",
"I think it'll break a lot of workflows if we deprecate it now! There isn't really a non-deprecated workflow yet - once we've added the `auto_rename_labels` option, then we can have `prepare_tf_dataset` on the `transformers` side use that, and then we can consider setting the default option to `False`, or beginning to deprecate it somehow.",
"I'm worried it's a bit of a waste of time to continue working on this behavior that shouldn't be here in the first place. Do you have a plan in mind ?",
"@lhoestq Broadly! The plan is:\r\n\r\n1) Create the `auto_rename_labels` flag with this PR and skip label renaming if it isn't set. Leave it as `True` for backward compatibility.\r\n2) Add the label renaming logic to `model.prepare_tf_dataset` in `transformers`. That method calls `to_tf_dataset()` right now. Once the label renaming logic is moved there, `model.prepare_tf_dataset` will set `auto_rename_labels=False` when calling `to_tf_dataset()`, and do label renaming itself.\r\n\r\nAfter step 2, `auto_rename_labels` is now only necessary for backward compatibility when users use `to_tf_dataset` directly. I want to leave it alone for a while because the `model.prepare_tf_dataset` workflow is very new. However, once it is established, we can deprecate `auto_rename_labels` and then finally remove it from the `datasets` code and keep it in `transformers` where it belongs.",
"I see ! Could it be possible to not add `auto_rename_labels` at all, since you want to remove it at the end ? Something roughly like this:\r\n1. show a warning in `to_tf_dataset` whevener a label is renamed automatically, saying that in the next major release this will be removed\r\n1. add the label renaming logic in `transformers` (to not have the warning)\r\n1. after some time, do a major release 3.0.0 and remove label renaming completely in `to_tf_dataset`\r\n\r\nWhat do you think ? cc @LysandreJik in case you have an opinion on this process.",
"@lhoestq I think that plan is mostly good, but if we make the change to `datasets` first then all users will keep getting deprecation warnings until we update the method in `transformers` and release a new version. \r\n\r\nI think we can follow your plan, but make the change to `transformers` first and wait for a new release before changing `datasets` - that way there are no visible warnings or API changes for users using `prepare_tf_dataset`. It also gives us more time to update the docs and try to move people to `prepare_tf_dataset` so they aren't confused by this!",
"Sounds good to me ! To summarize:\r\n1. add the label renaming logic in `transformers` + release\r\n1. show a warning in `to_tf_dataset` whevener a label is renamed automatically, saying that in the next major release this will be removed + minor release\r\n1. after some time, do a major release 3.0.0 and remove label renaming completely in `to_tf_dataset`",
"Yep, that's the plan! ",
"@lhoestq Are you okay with me merging this for now? ",
"Can you remove `auto_rename_labels` ? I don't think it's a good idea to add it if the plan is to remove it later",
"Right now, the `auto_rename_labels` behaviour happens in all cases! Making it an option is the first step in the process of disabling it (and moving the functionality to `transformers`) and then finally deprecating it."
] |
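For context on the plan above, a minimal sketch of the `prepare_tf_dataset` flow that is meant to own the label renaming. The model, data, and column names are illustrative; this assumes TensorFlow and transformers 4.21+ are installed:

```python
from datasets import Dataset
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")

ds = Dataset.from_dict({"text": ["good movie", "bad movie"], "label": [1, 0]})
ds = ds.map(lambda ex: tokenizer(ex["text"]), batched=True)

# prepare_tf_dataset selects the model's input columns and performs the
# "label" -> "labels" renaming on the transformers side
tf_ds = model.prepare_tf_dataset(ds, batch_size=2, shuffle=True, tokenizer=tokenizer)
```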
1,326,034,767
| 4,780
|
Remove apache_beam import from module level in natural_questions dataset
|
closed
| 2022-08-02T15:34:54
| 2022-08-02T16:16:33
| 2022-08-02T16:03:17
|
https://github.com/huggingface/datasets/pull/4780
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4780",
"html_url": "https://github.com/huggingface/datasets/pull/4780",
"diff_url": "https://github.com/huggingface/datasets/pull/4780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4780.patch",
"merged_at": "2022-08-02T16:03:17"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,325,997,225
| 4,779
|
Loading natural_questions requires apache_beam even with existing preprocessed data
|
closed
| 2022-08-02T15:06:57
| 2022-08-02T16:03:18
| 2022-08-02T16:03:18
|
https://github.com/huggingface/datasets/issues/4779
| null |
albertvillanova
| false
|
[] |
1,324,928,750
| 4,778
|
Update local loading script docs
|
closed
| 2022-08-01T20:21:07
| 2022-08-23T16:32:26
| 2022-08-23T16:32:22
|
https://github.com/huggingface/datasets/pull/4778
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4778",
"html_url": "https://github.com/huggingface/datasets/pull/4778",
"diff_url": "https://github.com/huggingface/datasets/pull/4778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4778.patch",
"merged_at": "2022-08-23T16:32:22"
}
|
stevhliu
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4778). All of your documentation changes will be reflected on that endpoint.",
"I would rather have a section in the docs that explains how to modify the script of an existing dataset (`inspect_dataset` + modification + `load_dataset`) instead of focusing on the GH datasets bundled with the source (only applicable for devs).",
"Good idea! I went with @mariosasko's suggestion to use `inspect_dataset` instead of cloning a dataset repository since it's a good opportunity to show off more of the library's lesser-known functions if that's ok with everyone :)",
"One advantage of cloning the repo is that it fetches potential data files referenced inside a script using relative paths, so if we decide to use `inspect_dataset`, we should at least add a tip to explain this limitation and how to circumvent it.",
"Oh you're right. Calling `load_dataset` on the modified script without having the files that come with it is not ideal. I agree it should be `git clone` instead - and inspect is for inspection only ^^'"
] |
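A minimal sketch of the `inspect_dataset` flow discussed above, with the caveat from the last two comments (data files referenced via relative paths are not fetched, so `git clone` can be the safer route; paths are illustrative):

```python
from datasets import inspect_dataset, load_dataset

# copy the loading script locally for inspection/modification
inspect_dataset("rotten_tomatoes", local_path="local_rotten_tomatoes")

# ... edit local_rotten_tomatoes/rotten_tomatoes.py as needed ...

ds = load_dataset("local_rotten_tomatoes/rotten_tomatoes.py")
```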
1,324,548,784
| 4,777
|
Require torchaudio<0.12.0 to avoid RuntimeError
|
closed
| 2022-08-01T14:50:50
| 2022-08-02T17:35:14
| 2022-08-02T17:21:39
|
https://github.com/huggingface/datasets/pull/4777
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4777",
"html_url": "https://github.com/huggingface/datasets/pull/4777",
"diff_url": "https://github.com/huggingface/datasets/pull/4777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4777.patch",
"merged_at": "2022-08-02T17:21:39"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,324,493,860
| 4,776
|
RuntimeError when using torchaudio 0.12.0 to load MP3 audio file
|
closed
| 2022-08-01T14:11:23
| 2023-03-02T15:58:16
| 2023-03-02T15:58:15
|
https://github.com/huggingface/datasets/issues/4776
| null |
albertvillanova
| false
|
[
"Requiring torchaudio<0.12.0 isn't really a viable solution because that implies torch<0.12.0 which means no sm_86 CUDA support which means no RTX 3090 support in PyTorch.\r\n\r\nBut in my case, the error only occurs if `_fallback_load` resolves to `_fail_load` inside torchaudio 0.12.0 which is only the case if FFMPEG initialization failed: https://github.com/pytorch/audio/blob/b1f510fa5681e92ee82bdc6b2d1ed896799fc32c/torchaudio/backend/sox_io_backend.py#L36-L47\r\n\r\nThat means the proper solution for torchaudio>=0.12.0 is to check `torchaudio._extension._FFMPEG_INITIALIZED` and if it is False, then we need to remind the user to install a dynamically linked ffmpeg 4.1.8 and then maybe call `torchaudio._extension._init_ffmpeg()` to force a user-visible exception showing the missing ffmpeg dynamic library name.\r\n\r\nOn my system, installing \r\n\r\n- libavcodec.so.58 \r\n- libavdevice.so.58 \r\n- libavfilter.so.7 \r\n- libavformat.so.58 \r\n- libavutil.so.56 \r\n- libswresample.so.3 \r\n- libswscale.so.5\r\n\r\nfrom ffmpeg 4.1.8 made HF datasets 2.3.2 work just fine with torchaudio 0.12.1+cu116:\r\n\r\n```python3\r\nimport sox, torchaudio, datasets\r\nprint('torchaudio', torchaudio.__version__)\r\nprint('datasets', datasets.__version__)\r\ntorchaudio._extension._init_ffmpeg()\r\nprint(torchaudio._extension._FFMPEG_INITIALIZED)\r\nwaveform, sample_rate = torchaudio.load('/workspace/.cache/huggingface/datasets/downloads/extracted/8e5aa88585efa2a4c74c6664b576550d32b7ff9c3d1d17cc04f44f11338c3dc6/cv-corpus-8.0-2022-01-19/en/clips/common_voice_en_100038.mp3', format='mp3')\r\nprint(waveform.shape)\r\n```\r\n\r\n```\r\ntorchaudio 0.12.1+cu116\r\ndatasets 2.3.2\r\nTrue\r\ntorch.Size([1, 369792])\r\n```",
"Related: https://github.com/huggingface/datasets/issues/4889",
"Closing as we no longer use `torchaudio` for decoding MP3 files."
] |
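A minimal sketch of the diagnostic check proposed in the first comment. These are private torchaudio attributes quoted from the thread, so this applies only to torchaudio 0.12.x:

```python
import torchaudio

# force FFMPEG initialization so a missing shared library (e.g. libavcodec)
# surfaces as an explicit error instead of a generic RuntimeError on load
if not torchaudio._extension._FFMPEG_INITIALIZED:
    torchaudio._extension._init_ffmpeg()
```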
1,324,136,486
| 4,775
|
Streaming not supported in Theivaprakasham/wildreceipt
|
closed
| 2022-08-01T09:46:17
| 2022-08-01T10:30:29
| 2022-08-01T10:30:29
|
https://github.com/huggingface/datasets/issues/4775
| null |
NitishkKarra
| false
|
[
"Thanks for reporting @NitishkKarra.\r\n\r\nThe root source of the issue is that streaming mode is not supported out-of-the-box for that dataset, because it contains a TAR file.\r\n\r\nWe have opened a discussion in the corresponding Hub dataset page, pointing out this issue: https://huggingface.co/datasets/Theivaprakasham/wildreceipt/discussions/1\r\n\r\nI'm closing this issue here, so this discussion is transferred there instead."
] |
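As background, streaming-capable loading scripts avoid extracting TAR files and instead iterate over their members sequentially, which is essentially what `dl_manager.iter_archive` provides; a standalone sketch of that pattern:

```python
import tarfile

# yield (member name, bytes) pairs without extracting the archive to disk;
# loading scripts get the same iteration via dl_manager.iter_archive
def iter_tar(archive_path):
    with tarfile.open(archive_path) as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()
```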
1,323,375,844
| 4,774
|
Training hangs at the end of epoch, with set_transform/with_transform+multiple workers
|
open
| 2022-07-31T06:32:28
| 2022-07-31T06:36:43
| null |
https://github.com/huggingface/datasets/issues/4774
| null |
memray
| false
|
[] |
1,322,796,721
| 4,773
|
Document loading from relative path
|
closed
| 2022-07-29T23:32:21
| 2022-08-25T18:36:45
| 2022-08-25T18:34:23
|
https://github.com/huggingface/datasets/pull/4773
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4773",
"html_url": "https://github.com/huggingface/datasets/pull/4773",
"diff_url": "https://github.com/huggingface/datasets/pull/4773.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4773.patch",
"merged_at": "2022-08-25T18:34:23"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the feedback!\r\n\r\nI agree that adding it to `load_hub.mdx` is probably a bit too specific, especially for beginners reading the tutorials. Since this clarification is closely related to loading from the Hub (the only difference being the presence/absence of a loading script), I think it makes the most sense to keep it somewhere in `loading.mdx`. What do you think about adding a Warning in Loading >>> Hugging Face Hub that explains the difference between relative/absolute paths when there is a script?",
"What about updating the section about \"manual download\" ? I think it goes there no ?\r\n\r\nhttps://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download",
"Updated the manual download section :)",
"Thanks ! Pinging @albertvillanova to review this change, and then I think we're good to merge"
] |
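For reference, a minimal sketch of the manual-download pattern the updated section documents (the dataset name and path are illustrative):

```python
from datasets import load_dataset

# datasets that cannot be downloaded automatically take the locally
# downloaded files via data_dir; an absolute path avoids the relative-path
# pitfall this PR documents
ds = load_dataset("matinf", data_dir="/path/to/manually/downloaded/data")
```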
1,322,693,123
| 4,772
|
AssertionError when using label_cols in to_tf_dataset
|
closed
| 2022-07-29T21:32:12
| 2022-09-12T11:24:46
| 2022-09-12T11:24:46
|
https://github.com/huggingface/datasets/issues/4772
| null |
lehrig
| false
|
[
"cc @Rocketknight1 ",
"Hi @lehrig, this is caused by the data collator renaming \"label\" to \"labels\". If you set `label_cols=[\"labels\"]` in the call it will work correctly. However, I agree that the cause of the bug is not obvious, so I'll see if I can make a PR to clarify things when the collator renames columns.",
"Thanks - and wow, that appears like a strange side-effect of the data collator. Is that really needed?\r\n\r\nWhy not make it more explicit? For example, extend `DefaultDataCollator` with an optional property `label_col_name` to be used as label column; only when it is not provided default to `labels` (and document that this happens) for backwards-compatibility? ",
"Haha, I honestly have no idea why our data collators rename `\"label\"` (the standard label column name in our datasets) to `\"labels\"` (the standard label column name input to our models). It's been a pain point when I design TF data pipelines, though, because I don't want to hardcode things like that - especially in `datasets`, because the renaming is something that happens purely at the `transformers` end. I don't think I could make the change in the data collators themselves at this point, because it would break backward compatibility for everything in PyTorch as well as TF.\r\n\r\nIn the most recent version of `transformers` we added a [prepare_tf_dataset](https://huggingface.co/docs/transformers/main_classes/model#transformers.TFPreTrainedModel.prepare_tf_dataset) method to our models which takes care of these details for you, and even chooses appropriate columns and labels for the model you're using. In future we might make that the officially recommended way to convert HF datasets to `tf.data.Dataset`.",
"Interesting, that'd be great especially for clarity. https://huggingface.co/docs/datasets/use_with_tensorflow#data-loading already improved clarity, yet, all those options will still confuse people. Looking forward to those advances in the hope there'll be only 1 way in the future ;)\r\n\r\nAnyways, I am happy for the time being with the work-around you provided. Thank you!"
] |
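A minimal sketch of the workaround from this thread (the column values are illustrative; assumes TensorFlow and transformers are installed):

```python
from datasets import Dataset
from transformers import DefaultDataCollator

ds = Dataset.from_dict({"input_ids": [[0, 1, 2], [3, 4, 5]], "label": [0, 1]})

# DefaultDataCollator renames "label" to "labels", so the renamed column
# must be passed to label_cols to avoid the AssertionError
tf_ds = ds.to_tf_dataset(
    columns=["input_ids"],
    label_cols=["labels"],
    collate_fn=DefaultDataCollator(return_tensors="tf"),
    batch_size=2,
)
```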
1,322,600,725
| 4,771
|
Remove dummy data generation docs
|
closed
| 2022-07-29T19:20:46
| 2022-08-03T00:04:01
| 2022-08-02T23:50:29
|
https://github.com/huggingface/datasets/pull/4771
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4771",
"html_url": "https://github.com/huggingface/datasets/pull/4771",
"diff_url": "https://github.com/huggingface/datasets/pull/4771.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4771.patch",
"merged_at": "2022-08-02T23:50:29"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,322,147,855
| 4,770
|
fix typo
|
closed
| 2022-07-29T11:46:12
| 2022-07-29T16:02:07
| 2022-07-29T16:02:07
|
https://github.com/huggingface/datasets/pull/4770
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4770",
"html_url": "https://github.com/huggingface/datasets/pull/4770",
"diff_url": "https://github.com/huggingface/datasets/pull/4770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4770.patch",
"merged_at": "2022-07-29T16:02:07"
}
|
Jiaxin-Wen
| true
|
[
"good catch thanks ! Can you check if the same typo is also present in `add_elasticsearch_index` ? It has a very similar signature",
"> good catch thanks ! Can you check if the same typo is also present in `add_elasticsearch_index` ? It has a very similar signature\r\n\r\nfixed"
] |
1,322,121,554
| 4,769
|
Fail to process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96.
|
open
| 2022-07-29T11:18:24
| 2022-07-29T11:18:24
| null |
https://github.com/huggingface/datasets/issues/4769
| null |
zhuango
| false
|
[] |
1,321,913,645
| 4,768
|
Unpin rouge_score test dependency
|
closed
| 2022-07-29T08:17:40
| 2022-07-29T16:42:28
| 2022-07-29T16:29:17
|
https://github.com/huggingface/datasets/pull/4768
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4768",
"html_url": "https://github.com/huggingface/datasets/pull/4768",
"diff_url": "https://github.com/huggingface/datasets/pull/4768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4768.patch",
"merged_at": "2022-07-29T16:29:17"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,321,843,538
| 4,767
|
Add 2.4.0 version added to docstrings
|
closed
| 2022-07-29T07:01:56
| 2022-07-29T11:16:49
| 2022-07-29T11:03:58
|
https://github.com/huggingface/datasets/pull/4767
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4767",
"html_url": "https://github.com/huggingface/datasets/pull/4767",
"diff_url": "https://github.com/huggingface/datasets/pull/4767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4767.patch",
"merged_at": "2022-07-29T11:03:58"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,321,787,428
| 4,765
|
Fix version in map_nested docstring
|
closed
| 2022-07-29T05:44:32
| 2022-07-29T11:51:25
| 2022-07-29T11:38:36
|
https://github.com/huggingface/datasets/pull/4765
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4765",
"html_url": "https://github.com/huggingface/datasets/pull/4765",
"diff_url": "https://github.com/huggingface/datasets/pull/4765.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4765.patch",
"merged_at": "2022-07-29T11:38:36"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,321,295,961
| 4,764
|
Update CI badge
|
closed
| 2022-07-28T18:04:20
| 2022-07-29T11:36:37
| 2022-07-29T11:23:51
|
https://github.com/huggingface/datasets/pull/4764
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4764",
"html_url": "https://github.com/huggingface/datasets/pull/4764",
"diff_url": "https://github.com/huggingface/datasets/pull/4764.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4764.patch",
"merged_at": "2022-07-29T11:23:51"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,321,295,876
| 4,763
|
More rigorous shape inference in to_tf_dataset
|
closed
| 2022-07-28T18:04:15
| 2022-09-08T19:17:54
| 2022-09-08T19:15:41
|
https://github.com/huggingface/datasets/pull/4763
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4763",
"html_url": "https://github.com/huggingface/datasets/pull/4763",
"diff_url": "https://github.com/huggingface/datasets/pull/4763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4763.patch",
"merged_at": "2022-09-08T19:15:41"
}
|
Rocketknight1
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,321,261,733
| 4,762
|
Improve features resolution in streaming
|
closed
| 2022-07-28T17:28:11
| 2022-09-09T17:17:39
| 2022-09-09T17:15:30
|
https://github.com/huggingface/datasets/pull/4762
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4762",
"html_url": "https://github.com/huggingface/datasets/pull/4762",
"diff_url": "https://github.com/huggingface/datasets/pull/4762.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4762.patch",
"merged_at": "2022-09-09T17:15:30"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just took your comment into account @mariosasko , let me know if it's good for you now :)"
] |
1,321,068,411
| 4,761
|
parallel searching in multi-gpu setting using faiss
|
open
| 2022-07-28T14:57:03
| 2023-07-21T02:07:10
| null |
https://github.com/huggingface/datasets/issues/4761
| null |
Jiaxin-Wen
| false
|
[
"And I don't see any speed up when increasing the number of GPUs while calling `get_nearest_examples_batch`.",
"Hi ! Yes search_batch uses FAISS search which happens in parallel across the GPUs\r\n\r\n> And I don't see any speed up when increasing the number of GPUs while calling get_nearest_examples_batch.\r\n\r\nThat's unexpected, can you share the code you're running ?",
"here is the code snippet\r\n\r\n```python\r\n\r\n# add faiss index\r\nsource_dataset = load_dataset(source_path)\r\nqueries = load_dataset(query_path)\r\ngpu = [0,1,2,3]\r\nsource_dataset.add_faiss_index(\r\n \"embedding\",\r\n device=gpu,\r\n )\r\n\r\n\r\n# batch query\r\nbatch_size = 32\r\nfor i in tqdm(range(0, len(queries), batch_size)):\r\n if i + batch_size >= len(queries):\r\n batched_queries = queries[i:]\r\n else:\r\n batched_queries = queries[i:i+batch_size]\r\n\r\n batched_query_embeddings = np.stack([i for i in batched_queries['embedding']], axis=0)\r\n scores, candidates = source_dataset.get_nearest_examples_batch(\r\n \"embedding\",\r\n batched_query_embeddings,\r\n k=5\r\n )\r\n```",
"My version of datasets is `2.4.1.dev0`.",
"The code looks all good to me, do you see all the GPUs being utilized ? What version of faiss are you using ?",
"I can see the memory usage of all the GPUs.\r\nMy version of `faiss-gpu` is `1.7.2`",
"It looks all good to me then ^^ though you said you didn't experienced speed improvements by adding more GPUs ? What size is your source dataset and what time differences did you experience ?",
"query set: 1e6\r\nsource dataset: 1e6\r\nembedding size: 768\r\nindex: Flat\r\ntopk: 20\r\nGPU: V100\r\n\r\nThe time taken to traverse the query set once is about 1.5h, which is almost not influenced by the value of query batch size or the number of GPUs according to my experiments.",
"Hmmm the number of GPUs should divide the time, something is going wrong. Can you check that adding more GPU does divide the memory used per GPU ? Maybe it can be worth looking at similar issues in the FAISS repository or create a noew issue over there to understand what's going on",
"> Can you check that adding more GPU does divide the memory used per GPU \r\n\r\nThe memory used per GPU is unchanged while adding more GPU. Is this unexpected?\r\n\r\nI used to think that every GPU loads all the source vectors and the data parallelism is at the query level. 😆 ",
"> I used to think that every GPU loads all the source vectors and the data parallelism is at the query level. 😆\r\n\r\nOh indeed that's possible, I wasn't sure. Anyway you can check that calling get_nearest_examples_batch simply calls search under the hood: \r\n\r\nhttps://github.com/huggingface/datasets/blob/f90f71fbbb33889fe75a3ffc101cdf16a88a3453/src/datasets/search.py#L375",
"Here is a runnable script. \r\nMulti-GPU searching still does not work in my experiments.\r\n\r\n\r\n```python\r\nimport os\r\nfrom tqdm import tqdm\r\nimport numpy as np\r\nimport datasets\r\nfrom datasets import Dataset\r\n\r\nclass DPRSelector:\r\n\r\n def __init__(self, source, target, index_name, gpu=None):\r\n self.source = source\r\n self.target = target\r\n self.index_name = index_name\r\n\r\n cache_path = 'embedding.faiss'\r\n\r\n if not os.path.exists(cache_path):\r\n self.source.add_faiss_index(\r\n column=\"embedding\",\r\n index_name=index_name,\r\n device=gpu,\r\n )\r\n self.source.save_faiss_index(index_name, cache_path)\r\n else:\r\n self.source.load_faiss_index(\r\n index_name,\r\n cache_path,\r\n device=gpu\r\n )\r\n print('index builded!')\r\n\r\n def build_dataset(self, top_k, batch_size):\r\n print('start search')\r\n\r\n for i in tqdm(range(0, len(self.target), batch_size)):\r\n if i + batch_size >= len(self.target):\r\n batched_queries = self.target[i:]\r\n else:\r\n batched_queries = self.target[i:i+batch_size]\r\n\r\n\r\n batched_query_embeddings = np.stack([i for i in batched_queries['embedding']], axis=0)\r\n search_res = self.source.get_nearest_examples_batch(\r\n self.index_name,\r\n batched_query_embeddings,\r\n k=top_k\r\n )\r\n \r\n print('finish search')\r\n\r\n\r\ndef get_pseudo_dataset():\r\n pseudo_dict = {\"embedding\": np.zeros((1000000, 768), dtype=np.float32)}\r\n print('generate pseudo data')\r\n\r\n dataset = Dataset.from_dict(pseudo_dict)\r\n def list_to_array(data):\r\n return {\"embedding\": [np.array(vector, dtype=np.float32) for vector in data[\"embedding\"]]} \r\n dataset.set_transform(list_to_array, columns='embedding', output_all_columns=True)\r\n\r\n print('build dataset')\r\n return dataset\r\n\r\n\r\n\r\nif __name__==\"__main__\":\r\n\r\n np.random.seed(42)\r\n\r\n\r\n source_dataset = get_pseudo_dataset()\r\n target_dataset = get_pseudo_dataset()\r\n\r\n gpu = [0,1,2,3,4,5,6,7]\r\n selector = DPRSelector(source_dataset, target_dataset, \"embedding\", gpu=gpu)\r\n\r\n selector.build_dataset(top_k=20, batch_size=32)\r\n```",
"@lhoestq Hi, could you please test the code above if you have time? 😄 ",
"Maybe @albertvillanova you can take a look ? I won't be available in the following days",
"@albertvillanova Hi, can you help with this issue?",
"Hi @xwwwwww I'm investigating it, but I'm not an expert in Faiss. In principle, it is weird that your code does not work properly because it seems right...",
"Have you tried passing `gpu=-1` and check if there is a speedup?",
"> Have you tried passing `gpu=-1` and check if there is a speedup?\r\n\r\nyes, there is a speed up using GPU compared with CPU. ",
"When passing `device=-1`, ALL existing GPUs are used (multi GPU): this is the maximum speedup you can get. To know the number of total GPUs:\r\n```\r\nimport faiss\r\n\r\nngpus = faiss.get_num_gpus()\r\nprint(ngpus)\r\n```\r\n\r\nWhen passing a list of integers to `device`, then only that number of GPUs are used (multi GPU as well)\r\n- the speedup should be proportional (more or less) to the ratio of the number of elements passed to `device` over `ngpus`\r\n- if this is not the case, then there is an issue in the implementation of this use case (however, I have reviewed the code and in principle I can't find any evident bug)\r\n\r\nWhen passing a positive integer to `device`, then only a single GPU is used.\r\n- this time should be more or less proportional to the time when passing `device=-1` over `ngpus`",
"Thanks for your help!\r\nHave you run the code and replicated the same experimental results (i.e., no speedup while increasing the number of GPUs)?",
"@albertvillanova @lhoestq Sorry for the bother, is there any progress on this issue? 😃 ",
"I can confirm `add_faiss_index` calls `index = faiss.index_cpu_to_gpus_list(index, gpus=list(device))`.\r\n\r\nCould this be an issue with your environment ? Could you try running with 1 and 8 GPUs with a code similar to[ this one from the FAISS examples](https://github.com/facebookresearch/faiss/blob/main/tutorial/python/5-Multiple-GPUs.py) but using `gpu_index = faiss.index_cpu_to_gpus_list(cpu_index, gpus=list(device))`, and see if the speed changes ?",
"Hi, I test the FAISS example and the speed indeed changes. I set `nb=1000000`, `nq=1000000` and `d=64`\r\n\r\n| num GPUS | time cost |\r\n| -------- | --------- |\r\n| 1 | 28.53 |\r\n| 5 | 7.16 |\r\n\r\n\r\n\r\n",
"Ok the benchmark is great, not sure why it doesn't speed up the index in your case though. You can try running the benchmark with the same settings as your actual dataset\r\n```\r\nquery set: 1e6\r\nsource dataset: 1e6\r\nembedding size: 768\r\nindex: Flat\r\ntopk: 20\r\nGPU: V100\r\n```\r\n\r\nNote that you can still pass a FAISS index you built yourself to a dataset using https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Dataset.add_faiss_index_from_external_arrays",
"> Here is a runnable script. Multi-GPU searching still does not work in my experiments.\r\n> \r\n> ```python\r\n> import os\r\n> from tqdm import tqdm\r\n> import numpy as np\r\n> import datasets\r\n> from datasets import Dataset\r\n> \r\n> class DPRSelector:\r\n> \r\n> def __init__(self, source, target, index_name, gpu=None):\r\n> self.source = source\r\n> self.target = target\r\n> self.index_name = index_name\r\n> \r\n> cache_path = 'embedding.faiss'\r\n> \r\n> if not os.path.exists(cache_path):\r\n> self.source.add_faiss_index(\r\n> column=\"embedding\",\r\n> index_name=index_name,\r\n> device=gpu,\r\n> )\r\n> self.source.save_faiss_index(index_name, cache_path)\r\n> else:\r\n> self.source.load_faiss_index(\r\n> index_name,\r\n> cache_path,\r\n> device=gpu\r\n> )\r\n> print('index builded!')\r\n> \r\n> def build_dataset(self, top_k, batch_size):\r\n> print('start search')\r\n> \r\n> for i in tqdm(range(0, len(self.target), batch_size)):\r\n> if i + batch_size >= len(self.target):\r\n> batched_queries = self.target[i:]\r\n> else:\r\n> batched_queries = self.target[i:i+batch_size]\r\n> \r\n> \r\n> batched_query_embeddings = np.stack([i for i in batched_queries['embedding']], axis=0)\r\n> search_res = self.source.get_nearest_examples_batch(\r\n> self.index_name,\r\n> batched_query_embeddings,\r\n> k=top_k\r\n> )\r\n> \r\n> print('finish search')\r\n> \r\n> \r\n> def get_pseudo_dataset():\r\n> pseudo_dict = {\"embedding\": np.zeros((1000000, 768), dtype=np.float32)}\r\n> print('generate pseudo data')\r\n> \r\n> dataset = Dataset.from_dict(pseudo_dict)\r\n> def list_to_array(data):\r\n> return {\"embedding\": [np.array(vector, dtype=np.float32) for vector in data[\"embedding\"]]} \r\n> dataset.set_transform(list_to_array, columns='embedding', output_all_columns=True)\r\n> \r\n> print('build dataset')\r\n> return dataset\r\n> \r\n> \r\n> \r\n> if __name__==\"__main__\":\r\n> \r\n> np.random.seed(42)\r\n> \r\n> \r\n> source_dataset = get_pseudo_dataset()\r\n> target_dataset = get_pseudo_dataset()\r\n> \r\n> gpu = [0,1,2,3,4,5,6,7]\r\n> selector = DPRSelector(source_dataset, target_dataset, \"embedding\", gpu=gpu)\r\n> \r\n> selector.build_dataset(top_k=20, batch_size=32)\r\n> ```\r\n\r\nBy the way, have you run this toy example and replicated my experiment results? I think it is a more direct way to figure this out :)",
"Hi,\r\n\r\nI have a similar question and would like to know if there's any progress in this issue. \r\n\r\n`dataset.add_faiss_index(column=\"embedding\")`, this takes around 5minutes to add the index.\r\n\r\n`dataset.add_faiss_index(column=\"embedding\", device=-1)`, this ran for more than 10minutes and still didn't complete execution. \r\n\r\nNow, I don't understand why that's the case as I expected for GPU the indexing should be faster"
] |
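A standalone sketch of the multi-GPU FAISS benchmark referenced above (assumes faiss-gpu is installed and the listed GPU ids exist; sizes are scaled down from the reported experiment):

```python
import numpy as np
import faiss

d, nb, nq = 64, 100_000, 10_000
xb = np.random.random((nb, d)).astype("float32")
xq = np.random.random((nq, d)).astype("float32")

cpu_index = faiss.IndexFlatL2(d)
# replicate the index on the given GPUs; query batches are then processed
# across them in parallel
gpu_index = faiss.index_cpu_to_gpus_list(cpu_index, gpus=[0, 1])
gpu_index.add(xb)
distances, indices = gpu_index.search(xq, 20)
```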
1,320,878,223
| 4,760
|
Issue with offline mode
|
closed
| 2022-07-28T12:45:14
| 2025-05-04T16:44:59
| 2024-01-23T10:58:22
|
https://github.com/huggingface/datasets/issues/4760
| null |
SaulLu
| false
|
[
"Hi @SaulLu, thanks for reporting.\r\n\r\nI think offline mode is not supported for datasets containing only data files (without any loading script). I'm having a look into this...",
"Thanks for your feedback! \r\n\r\nTo give you a little more info, if you don't set the offline mode flag, the script will load the cache. I first noticed this behavior with the `evaluate` library, and while trying to understand the downloading flow I realized that I had a similar error with datasets.",
"This is an issue we have to fix.",
"This is related to https://github.com/huggingface/datasets/issues/3547",
"Still not fixed? ......",
"#5331 will be helpful to fix this, as it updates the cache directory template to be aligned with the other datasets",
"Any updates ?",
"I'm facing the same problem",
"This issue has been fixed in `datasets` 2.16 by https://github.com/huggingface/datasets/pull/6493. The cache is now working properly :)\r\n\r\nYou just have to update `datasets`:\r\n\r\n```\r\npip install -U datasets\r\n```",
"I'm on version 2.17.0, and this exact problem is still persisting.",
"Can you share some code to reproduce your issue ?\r\n\r\nAlso make sure your cache was populated with recent versions of `datasets`. Datasets cached with old versions may not be reloadable in offline mode, though we did our best to keep as much backward compatibility as possible.",
"I'm not sure if this is related @lhoestq but I am experiencing a similar issue when using offline mode:\r\n\r\n```bash\r\n$ python -c \"from datasets import load_dataset; load_dataset('openai_humaneval', split='test')\"\r\n$ HF_DATASETS_OFFLINE=1 python -c \"from datasets import load_dataset; load_dataset('openai_humaneval', split='test')\"\r\nUsing the latest cached version of the dataset since openai_humaneval couldn't be found on the Hugging Face Hub (offline mode is enabled).\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/load.py\", line 2556, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/load.py\", line 2265, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py\", line 122, in __init__\r\n config_name, version, hash = _find_hash_in_cache(\r\n File \"/dodrio/scratch/projects/2023_071/alignment-handbook/.venv/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py\", line 48, in _find_hash_in_cache\r\n raise ValueError(\r\nValueError: Couldn't find cache for openai_humaneval for config 'default'\r\nAvailable configs in the cache: ['openai_humaneval']\r\n```",
"Thanks for reporting @BramVanroy, I managed to reproduce and I opened a fix here: https://github.com/huggingface/datasets/pull/6741",
"Awesome, thanks for the quick fix @lhoestq! Looking forward to update my dependency version list.",
"> Thanks for reporting @BramVanroy, I managed to reproduce and I opened a fix here: #6741\r\n\r\nThanks a lot! I have faced the same problem. Can I use your fix code to directly replace the existing version code? I noticed that this fix has not been merged yet. Will it affect other functionalities?\r\n",
"I just merged the fix, you can install `datasets` from source or wait for the patch release which will be out in the coming days",
"Hi, a related issue here when loading a single file from a dataset, unable to access it in offline mode (datasets 3.2.0)\n\n```python\nimport os\n# os.environ[\"HF_HUB_OFFLINE\"] = \"1\"\nos.environ[\"HF_TOKEN\"] = \"xxxxxxxxxxxxxx\"\n\nimport datasets\n\ndataset_name = \"uonlp/CulturaX\"\ndata_files = \"fr/fr_part_00038.parquet\"\n\nds = datasets.load_dataset(dataset_name, split='train', data_files=data_files)\nprint(f\"Dataset loaded : {ds}\")\n```\nOnce the file has been cached, I rerun whit the HF_HUB_OFFLINE activated an get this error : \n```\nValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e'\nAvailable configs in the cache: ['default-2935e8cdcc21c613']\n```"
] |
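A minimal sketch of exercising the fixed offline cache path described above (assumes the dataset was first cached online with `datasets` >= 2.16):

```python
import os

# the flag must be set before importing datasets
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# served entirely from the local cache; raises if the dataset was never cached
ds = load_dataset("openai_humaneval", split="test")
```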
1,320,783,300
| 4,759
|
Dataset Viewer issue for Toygar/turkish-offensive-language-detection
|
closed
| 2022-07-28T11:21:43
| 2022-07-28T13:17:56
| 2022-07-28T13:17:48
|
https://github.com/huggingface/datasets/issues/4759
| null |
tanyelai
| false
|
[
"I refreshed the dataset viewer manually, it's fixed now. Sorry for the inconvenience.\r\n<img width=\"1557\" alt=\"Capture d’écran 2022-07-28 à 09 17 39\" src=\"https://user-images.githubusercontent.com/1676121/181514666-92d7f8e1-ddc1-4769-84f3-f1edfdb902e8.png\">\r\n\r\n"
] |
1,320,602,532
| 4,757
|
Document better when relative paths are transformed to URLs
|
closed
| 2022-07-28T08:46:27
| 2022-08-25T18:34:24
| 2022-08-25T18:34:24
|
https://github.com/huggingface/datasets/issues/4757
| null |
albertvillanova
| false
|
[] |
1,319,687,044
| 4,755
|
Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size
|
open
| 2022-07-27T14:54:11
| 2023-12-13T19:34:43
| null |
https://github.com/huggingface/datasets/issues/4755
| null |
srobertjames
| false
|
[
"I've built a minimal example that shows this bug without `n_proc`. It seems like it's a problem any way of using **tokenizers, `overflow_to_sample_mapping`, and Dataset.map, with a small batch size**:\r\n\r\n```\r\nimport datasets\r\nimport transformers\r\npretrained = 'deepset/tinyroberta-squad2'\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(pretrained)\r\n\r\nquestions = ['Can you tell me why?', 'What time is it?']\r\ncontexts = ['This is context zero', 'Another paragraph goes here'] \r\n\r\ndef tok(questions, contexts):\r\n return tokenizer(text=questions,\r\n text_pair=contexts,\r\n truncation='only_second',\r\n return_overflowing_tokens=True,\r\n )\r\nprint(tok(questions, contexts)['overflow_to_sample_mapping'])\r\nassert tok(questions, contexts)['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=1)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # FAILS produces [0,0]\r\n```\r\n\r\nNote that even if the batch size would be larger, there will be instances where we will not have a lot of data, and end up using small batches. This can occur e.g. if `n_proc` causes batches to be underfill. I imagine it can also occur in other ways, e.g. the final leftover batch at the end.",
"A larger batch size does _not_ have this behavior:\r\n\r\n```\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=2)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n```",
"I was trying the [Question answering](https://huggingface.co/learn/nlp-course/chapter7/7#question-answering) tutorial on Hugging face when i faced the same problem. The preprocessing step is [here](https://huggingface.co/learn/nlp-course/chapter7/7#processing-the-validation-data). i have changed ```max_length=200, stride=50```,\r\n\r\n```\r\nvalidation_dataset = raw_datasets['validation'].select(range(8)).map(\r\n preprocess_validation_examples,\r\n batched=True,\r\n remove_columns=raw_datasets[\"validation\"].column_names,\r\n num_proc=1\r\n)\r\nprint(validation_dataset['overflow_to_sample_mapping'])\r\nprint(validation_dataset['example_id'])\r\n```\r\nresult\r\n\r\n```\r\n[0, 1, 2, 3, 4, 5, 6, 7]\r\n['56be4db0acb8001400a502ec', '56be4db0acb8001400a502ed', '56be4db0acb8001400a502ee', \r\n'56be4db0acb8001400a502ef', '56be4db0acb8001400a502f0', '56be8e613aeaaa14008c90d1', \r\n'56be8e613aeaaa14008c90d2', '56be8e613aeaaa14008c90d3']\r\n```\r\nwhen ```num_proc=2```, result - \r\n\r\n```\r\n[0, 1, 2, 3, 0, 1, 2, 3]\r\n['56be4db0acb8001400a502ec', '56be4db0acb8001400a502ed', '56be4db0acb8001400a502ee', \r\n'56be4db0acb8001400a502ef', '56be4db0acb8001400a502f0', '56be8e613aeaaa14008c90d1', \r\n'56be8e613aeaaa14008c90d2', '56be8e613aeaaa14008c90d3']\r\n```\r\n\r\nwhen ```num_proc=3```, result - \r\n\r\n```\r\n[0, 1, 2, 0, 1, 2, 0, 1]\r\n['56be4db0acb8001400a502ec', '56be4db0acb8001400a502ed', '56be4db0acb8001400a502ee', \r\n'56be4db0acb8001400a502ef', '56be4db0acb8001400a502f0', '56be8e613aeaaa14008c90d1', \r\n'56be8e613aeaaa14008c90d2', '56be8e613aeaaa14008c90d3']\r\n```\r\n\r\nThe```overflow_to_sample_mapping``` changes with ```num_proc```, but ```example_id``` field remains the same . It seems that each process in ```map``` has its own counter for overflow_to_sample_mapping. If you are using ```overflow_to_sample_mapping``` inside the ```preprocess_validation_examples``` function, then there is no issue."
] |
1,319,681,541
| 4,754
|
Remove "unkown" language tags
|
closed
| 2022-07-27T14:50:12
| 2022-07-27T15:03:00
| 2022-07-27T14:51:06
|
https://github.com/huggingface/datasets/pull/4754
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4754",
"html_url": "https://github.com/huggingface/datasets/pull/4754",
"diff_url": "https://github.com/huggingface/datasets/pull/4754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4754.patch",
"merged_at": "2022-07-27T14:51:06"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,319,571,745
| 4,753
|
Add `language_bcp47` tag
|
closed
| 2022-07-27T13:31:16
| 2022-07-27T14:50:03
| 2022-07-27T14:37:56
|
https://github.com/huggingface/datasets/pull/4753
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4753",
"html_url": "https://github.com/huggingface/datasets/pull/4753",
"diff_url": "https://github.com/huggingface/datasets/pull/4753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4753.patch",
"merged_at": "2022-07-27T14:37:56"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,319,464,409
| 4,752
|
DatasetInfo issue when testing multiple configs: mixed task_templates
|
open
| 2022-07-27T12:04:54
| 2022-08-08T18:20:50
| null |
https://github.com/huggingface/datasets/issues/4752
| null |
BramVanroy
| false
|
[
"I've narrowed down the issue to the `dataset_module_factory` which already creates a `dataset_infos.json` file down in the `.cache/modules/dataset_modules/..` folder. That JSON file already contains the wrong task_templates for `unfiltered`.",
"Ugh. Found the issue: apparently `datasets` was reusing the already existing `dataset_infos.json` that is inside `datasets/datasets/hebban-reviews`! Is this desired behavior?\r\n\r\nPerhaps when `--save_infos` and `--all_configs` are given, an existing `dataset_infos.json` file should first be deleted before continuing with the test? Because that would assume that the user wants to create a new infos file for all configs anyway.",
"Hi! I think this is a reasonable solution. Would you be interested in submitting a PR?"
] |
1,319,440,903
| 4,751
|
Added dataset information in clinic oos dataset card
|
closed
| 2022-07-27T11:44:28
| 2022-07-28T10:53:21
| 2022-07-28T10:40:37
|
https://github.com/huggingface/datasets/pull/4751
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4751",
"html_url": "https://github.com/huggingface/datasets/pull/4751",
"diff_url": "https://github.com/huggingface/datasets/pull/4751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4751.patch",
"merged_at": "2022-07-28T10:40:37"
}
|
arnav-ladkat
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,319,333,645
| 4,750
|
Easily create loading script for benchmark comprising multiple huggingface datasets
|
closed
| 2022-07-27T10:13:38
| 2022-07-27T13:58:07
| 2022-07-27T13:58:07
|
https://github.com/huggingface/datasets/issues/4750
| null |
JoelNiklaus
| false
|
[
"Hi ! I think the simplest is to copy paste the `_split_generators` code from the other datasets and do a bunch of if-else, as in the glue dataset: https://huggingface.co/datasets/glue/blob/main/glue.py#L467",
"Ok, I see. Thank you"
] |
1,318,874,913
| 4,748
|
Add image classification processing guide
|
closed
| 2022-07-27T00:11:11
| 2022-07-27T17:28:21
| 2022-07-27T17:16:12
|
https://github.com/huggingface/datasets/pull/4748
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4748",
"html_url": "https://github.com/huggingface/datasets/pull/4748",
"diff_url": "https://github.com/huggingface/datasets/pull/4748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4748.patch",
"merged_at": "2022-07-27T17:16:12"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,318,586,932
| 4,747
|
Shard parquet in `download_and_prepare`
|
closed
| 2022-07-26T18:05:01
| 2022-09-15T13:43:55
| 2022-09-15T13:41:26
|
https://github.com/huggingface/datasets/pull/4747
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4747",
"html_url": "https://github.com/huggingface/datasets/pull/4747",
"diff_url": "https://github.com/huggingface/datasets/pull/4747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4747.patch",
"merged_at": "2022-09-15T13:41:26"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is ready for review cc @mariosasko :) please let me know what you think !"
] |
1,318,486,599
| 4,746
|
Dataset Viewer issue for yanekyuk/wikikey
|
closed
| 2022-07-26T16:25:16
| 2022-09-08T08:15:22
| 2022-09-08T08:15:22
|
https://github.com/huggingface/datasets/issues/4746
| null |
ai-ashok
| false
|
[
"The dataset is empty, as far as I can tell: there are no files in the repository at https://huggingface.co/datasets/yanekyuk/wikikey/tree/main\r\n\r\nMaybe the viewer can display a better message for empty datasets",
"OK. Closing as it's not an error. We will work on making the error message a lot clearer."
] |
1,318,016,655
| 4,745
|
Allow `list_datasets` to include private datasets
|
closed
| 2022-07-26T10:16:08
| 2023-07-25T15:01:49
| 2023-07-25T15:01:49
|
https://github.com/huggingface/datasets/issues/4745
| null |
ola13
| false
|
[
"Thanks for opening this issue :)\r\n\r\nIf it can help, I think you can already use `huggingface_hub` to achieve this:\r\n```python\r\n>>> from huggingface_hub import HfApi\r\n>>> [ds_info.id for ds_info in HfApi().list_datasets(use_auth_token=token) if ds_info.private]\r\n['bigscience/xxxx', 'bigscience-catalogue-data/xxxxxxx', ... ]\r\n```\r\n\r\n---------\r\n\r\nThough the latest versions of `huggingface_hub` that contain this feature are not available on python 3.6, so maybe we should first drop support for python 3.6 (see #4460) to update `list_datasets` in `datasets` as well (or we would have to copy/paste some `huggingface_hub` code)",
"Great, thanks @lhoestq the workaround works! I think it would be intuitive to have the support directly in `datasets` but it makes sense to wait given that the workaround exists :)",
"i also think that going forward we should replace more and more implementations inside datasets with the corresponding ones from `huggingface_hub` (same as we're doing in `transformers`)",
"`datasets.list_datasets` is now deprecated in favor of `huggingface_hub.list_datasets` (returns private datasets when `token` is present), so I'm closing this issue."
] |