Column summary:

| Column | Type | Statistics |
| --- | --- | --- |
| id | int64 | 953M to 3.35B |
| number | int64 | 2.72k to 7.75k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| created_at | timestamp[s] | 2021-07-26 12:21:17 to 2025-08-23 00:18:43 |
| updated_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-23 12:34:39 |
| closed_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-20 16:35:55 |
| html_url | string | lengths 49 to 51 |
| pull_request | dict | |
| user_login | string | lengths 3 to 26 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 to 30 |

id: 1,170,066,235
number: 3,929
title: Load a local dataset twice
state: closed
created_at: 2022-03-15T18:59:26
updated_at: 2022-03-16T09:55:09
closed_at: 2022-03-16T09:54:06
html_url: https://github.com/huggingface/datasets/issues/3929
pull_request: null
user_login: caush
is_pull_request: false
comments:
[ "Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_dataset(\"csv\", data_dir=\"data\")\r\n```\r\n\r\nAlternatively, you may also use:\r\n```python\r\ndataset = load_dataset(\"data\")" ]

id: 1,170,017,132
number: 3,928
title: Frugal score deprecations
state: closed
created_at: 2022-03-15T18:10:42
updated_at: 2022-03-17T08:37:24
closed_at: 2022-03-17T08:37:24
html_url: https://github.com/huggingface/datasets/issues/3928
pull_request: null
user_login: ierezell
is_pull_request: false
comments:
[ "Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. " ]

id: 1,170,016,465
number: 3,927
title: Update main readme
state: closed
created_at: 2022-03-15T18:09:59
updated_at: 2022-03-29T10:13:47
closed_at: 2022-03-29T10:08:20
html_url: https://github.com/huggingface/datasets/pull/3927
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3927", "html_url": "https://github.com/huggingface/datasets/pull/3927", "diff_url": "https://github.com/huggingface/datasets/pull/3927.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3927.patch", "merged_at": "2022-03-29T10:08:20" }
user_login: lhoestq
is_pull_request: true
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "What do you think @albertvillanova ?" ]

id: 1,169,945,052
number: 3,926
title: Doc maintenance
state: closed
created_at: 2022-03-15T17:00:46
updated_at: 2022-03-15T19:27:15
closed_at: 2022-03-15T19:27:12
html_url: https://github.com/huggingface/datasets/pull/3926
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3926", "html_url": "https://github.com/huggingface/datasets/pull/3926", "diff_url": "https://github.com/huggingface/datasets/pull/3926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3926.patch", "merged_at": "2022-03-15T19:27:12" }
user_login: stevhliu
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3926). All of your documentation changes will be reflected on that endpoint." ]

id: 1,169,913,769
number: 3,925
title: Fix main_classes docs index
state: closed
created_at: 2022-03-15T16:33:46
updated_at: 2022-03-22T13:49:11
closed_at: 2022-03-22T13:44:04
html_url: https://github.com/huggingface/datasets/pull/3925
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3925", "html_url": "https://github.com/huggingface/datasets/pull/3925", "diff_url": "https://github.com/huggingface/datasets/pull/3925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3925.patch", "merged_at": "2022-03-22T13:44:04" }
user_login: lhoestq
is_pull_request: true
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm it's still not good \r\n![image](https://user-images.githubusercontent.com/42851186/158429361-e19ce25b-c259-4ded-8473-075deafdbb96.png)\r\n\r\nany idea what could cause this ?", "Ok fixed :)" ]

id: 1,169,805,813
number: 3,924
title: Document cases for github datasets
state: closed
created_at: 2022-03-15T15:10:10
updated_at: 2022-04-05T18:33:15
closed_at: 2022-03-15T15:41:23
html_url: https://github.com/huggingface/datasets/pull/3924
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3924", "html_url": "https://github.com/huggingface/datasets/pull/3924", "diff_url": "https://github.com/huggingface/datasets/pull/3924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3924.patch", "merged_at": "2022-03-15T15:41:23" }
user_login: lhoestq
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.", "Yay!" ]

id: 1,169,773,869
number: 3,923
title: Add methods to IterableDatasetDict
state: closed
created_at: 2022-03-15T14:46:03
updated_at: 2022-07-06T15:40:20
closed_at: 2022-03-15T16:45:06
html_url: https://github.com/huggingface/datasets/pull/3923
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3923", "html_url": "https://github.com/huggingface/datasets/pull/3923", "diff_url": "https://github.com/huggingface/datasets/pull/3923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3923.patch", "merged_at": "2022-03-15T16:45:06" }
user_login: lhoestq
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3923). All of your documentation changes will be reflected on that endpoint.", "Is this feature stale or needs any help to it ? If so I can quickly send a PR. Thanks\r\n\r\nCC : @lhoestq, @albertvillanova ", "These features have been merged and are already available, thanks :)", "Hello @lhoestq, I see that `IterableDataset` doesn't allow features like `take`, `len`, `slice` which can enable a lot of stuffs. Is it worth an addition ? Or is it intended that they didn't have those features ?", "IterableDataset objects don't have `len` or `slice` because they can be possibly unbounded (you don't know in advance how many items they contain). Though IterableDataset.take and IterableDataset.skip do exist." ]

id: 1,169,761,293
number: 3,922
title: Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset
state: closed
created_at: 2022-03-15T14:36:28
updated_at: 2022-03-15T16:07:04
closed_at: 2022-03-15T16:07:03
html_url: https://github.com/huggingface/datasets/pull/3922
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3922", "html_url": "https://github.com/huggingface/datasets/pull/3922", "diff_url": "https://github.com/huggingface/datasets/pull/3922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3922.patch", "merged_at": "2022-03-15T16:07:02" }
user_login: albertvillanova
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3922). All of your documentation changes will be reflected on that endpoint.", "Unrelated CI test failure. This PR can be merged." ]

id: 1,169,749,338
number: 3,921
title: Fix NonMatchingChecksumError in CRD3 dataset
state: closed
created_at: 2022-03-15T14:27:14
updated_at: 2022-03-15T15:54:27
closed_at: 2022-03-15T15:54:26
html_url: https://github.com/huggingface/datasets/pull/3921
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3921", "html_url": "https://github.com/huggingface/datasets/pull/3921", "diff_url": "https://github.com/huggingface/datasets/pull/3921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3921.patch", "merged_at": "2022-03-15T15:54:26" }
user_login: albertvillanova
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3921). All of your documentation changes will be reflected on that endpoint.", "Unrelated test failure. This PR can be merged." ]

id: 1,169,532,807
number: 3,920
title: 'datasets.features' is not a package
state: closed
created_at: 2022-03-15T11:14:23
updated_at: 2022-03-16T09:17:12
closed_at: 2022-03-16T09:17:12
html_url: https://github.com/huggingface/datasets/issues/3920
pull_request: null
user_login: Arij-Aladel
is_pull_request: false
comments:
[ "Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets", "The problem I can no I have build my project on this version and old version on transformers. I have preprocessed the data again to use it. Thank for your reply" ]

id: 1,169,497,210
number: 3,919
title: AttributeError: 'DatasetDict' object has no attribute 'features'
state: closed
created_at: 2022-03-15T10:46:59
updated_at: 2022-03-17T04:16:14
closed_at: 2022-03-17T04:16:14
html_url: https://github.com/huggingface/datasets/issues/3919
pull_request: null
user_login: jswapnil10
is_pull_request: false
comments:
[ "You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`. \r\n\r\nFor example \r\n\r\n```python \r\nds = load_dataset('mnist')\r\nds.features\r\n```\r\nReturns \r\n```python\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()\r\n----> 1 ds.features\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nIf we look at the dataset variable, we see it is a `DatasetDict`:\r\n\r\n```python \r\nprint(ds)\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 60000\r\n })\r\n test: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nWe can grab the features from a split by indexing into `train`:\r\n```python\r\nds['train'].features\r\n{'image': Image(decode=True, id=None),\r\n 'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}\r\n```\r\n\r\nHope that helps ", "Yes, Thanks for that clarification," ]

id: 1,169,366,117
number: 3,918
title: datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
state: closed
created_at: 2022-03-15T08:53:45
updated_at: 2022-03-16T15:36:58
closed_at: 2022-03-15T14:01:25
html_url: https://github.com/huggingface/datasets/issues/3918
pull_request: null
user_login: willowdong
is_pull_request: false
comments:
[ "Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "You should force redownload:\r\n```python\r\ndataset = load_dataset(\"multi_news\", download_mode=\"force_redownload\")\r\ndataset_2 = load_dataset(\"reddit_tifu\", \"long\", download_mode=\"force_redownload\")", "Fixed by:\r\n- #3787 \r\n- #3843" ]

id: 1,168,906,154
number: 3,917
title: Create README.md
state: closed
created_at: 2022-03-14T21:08:10
updated_at: 2022-03-17T17:45:39
closed_at: 2022-03-17T17:45:39
html_url: https://github.com/huggingface/datasets/pull/3917
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3917", "html_url": "https://github.com/huggingface/datasets/pull/3917", "diff_url": "https://github.com/huggingface/datasets/pull/3917.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3917.patch", "merged_at": "2022-03-17T17:45:39" }
user_login: sashavor
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3917). All of your documentation changes will be reflected on that endpoint." ]

id: 1,168,869,191
number: 3,916
title: Create README.md for GLUE
state: closed
created_at: 2022-03-14T20:27:22
updated_at: 2022-03-15T17:06:57
closed_at: 2022-03-15T17:06:56
html_url: https://github.com/huggingface/datasets/pull/3916
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3916", "html_url": "https://github.com/huggingface/datasets/pull/3916", "diff_url": "https://github.com/huggingface/datasets/pull/3916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3916.patch", "merged_at": "2022-03-15T17:06:56" }
user_login: sashavor
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3916). All of your documentation changes will be reflected on that endpoint." ]

id: 1,168,848,101
number: 3,915
title: Metric card template
state: closed
created_at: 2022-03-14T20:07:08
updated_at: 2022-05-04T10:44:09
closed_at: 2022-05-04T10:37:06
html_url: https://github.com/huggingface/datasets/pull/3915
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3915", "html_url": "https://github.com/huggingface/datasets/pull/3915", "diff_url": "https://github.com/huggingface/datasets/pull/3915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3915.patch", "merged_at": "2022-05-04T10:37:06" }
user_login: emibaylor
is_pull_request: true
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances inputs `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference in the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n", "Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference to the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n", "Thanks for your feedback, @mcmillanmajora ! I totally agree that we should write a post -- we were going to write one up when we are done with a good chunk of the metric cards, but we can also do that earlier :smile: \r\n\r\nWith regards to your more specific comments:\r\n\r\n- It is our intention to put what the metric was developed for (whether it is a specific task or dataset, for example). 
You can see the [WER](https://github.com/huggingface/datasets/tree/master/metrics/wer) metric card for that.\r\n- `input_field` works for me!\r\n- the values aren't always scores, it's more like the values the metric can take. And it does include the range of possible values, including the max and min, that are outputted.\r\n- I like the suggestion to add: 'Provide a range of examples that show both typical and atypical results' :hugs: \r\n- I have been putting specific use cases in 'Further references', just because there isn't always something to put there, especially for less popular metrics", "Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! ", "Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! " ]

id: 1,168,777,880
number: 3,914
title: Use templates for doc-builidng jobs
state: closed
created_at: 2022-03-14T18:53:06
updated_at: 2022-03-17T15:02:59
closed_at: 2022-03-17T15:02:58
html_url: https://github.com/huggingface/datasets/pull/3914
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3914", "html_url": "https://github.com/huggingface/datasets/pull/3914", "diff_url": "https://github.com/huggingface/datasets/pull/3914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3914.patch", "merged_at": "2022-03-17T15:02:58" }
user_login: sgugger
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3914). All of your documentation changes will be reflected on that endpoint.", "You can ignore the CI failures btw, they're unrelated to this PR" ]

id: 1,168,723,950
number: 3,913
title: Deterministic split order in DatasetDict.map
state: closed
created_at: 2022-03-14T17:58:37
updated_at: 2023-09-24T09:55:10
closed_at: 2022-03-15T10:45:15
html_url: https://github.com/huggingface/datasets/pull/3913
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3913", "html_url": "https://github.com/huggingface/datasets/pull/3913", "diff_url": "https://github.com/huggingface/datasets/pull/3913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3913.patch", "merged_at": null }
user_login: lhoestq
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3913). All of your documentation changes will be reflected on that endpoint.", "I'm surprised this is needed because the order of the `dict` keys is deterministic as of Python 3.6 (documented in 3.7). Is there a reproducer for this behavior? I wouldn't make this change unless it's absolutely needed because `sorted` modifies the initial order of the keys.", "Indeed this doesn't fix the issue apparently. Actually this is probably because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer)." ]

id: 1,168,720,098
number: 3,912
title: add draft of registering function for pandas
state: closed
created_at: 2022-03-14T17:54:29
updated_at: 2023-09-24T09:55:01
closed_at: 2023-01-24T12:57:10
html_url: https://github.com/huggingface/datasets/pull/3912
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3912", "html_url": "https://github.com/huggingface/datasets/pull/3912", "diff_url": "https://github.com/huggingface/datasets/pull/3912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3912.patch", "merged_at": null }
user_login: lvwerra
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3912). All of your documentation changes will be reflected on that endpoint.", "That's cool ! Though I would expect such an integration to only require `huggingface_hub`, not the full `datasets` library. \r\n Indeed if users want to use the `datasets` lib they could just to `Dataset.from_pandas(df).push_to_hub()` already. Therefore I would explore something that doesn't not necessarily requires `datasets`.\r\n\r\nFor other could storage solutions (S3, GCS, etc.), pandas allows users to pass URIs like `s3://bucket-name/path/data.csv` to the `read_xxx` and `to_xxx` (for csv, parquet, json, etc). It also support passing the **root directory** like `s3://bucket-name/dataset-dir` instead of a single file name.\r\n\r\nIn the Hugging Face Hub case, we have one dataset = one repository. We can enter pandas' paradigm by saying one dataset = one repository = one root directory. Here is what we could have:\r\n\r\n### push to Hub:\r\n```python\r\n\"\"\"\r\nDemo script for writing a pandas data frame to a CSV file on HF using fsspec-supported pandas APIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\nbooks_df = pd.DataFrame(\r\n data={\"Title\": [\"Book I\", \"Book II\", \"Book III\"], \"Price\": [56.6, 59.87, 74.54]},\r\n columns=[\"Title\", \"Price\"],\r\n)\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df.to_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n index=False,\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\n```\r\n\r\n### load from Hub:\r\n```python\r\n\"\"\"\r\nDemo script for reading a CSV file from HF into a pandas data frame using fsspec-supported pandas\r\nAPIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df = pd.read_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\nprint(books_df)\r\n```\r\n\r\nAnd you could do the same with Parquet data using `read/to_parquet` or other formats. Formats like CSV, Parquet or JSON Lines would work out of the box with `datasets`. This API would also allow anyone to use Dask with the Hugging Face Hub for example.\r\n\r\nWhat do you think ?", "I'm closing this PR as [`hffs`](https://github.com/huggingface/hffs) can now be used for reading/writing data frames from/to the Hub." ]

id: 1,168,652,374
number: 3,911
title: Create README.md for CER metric
state: closed
created_at: 2022-03-14T16:54:51
updated_at: 2022-03-17T17:49:40
closed_at: 2022-03-17T17:45:54
html_url: https://github.com/huggingface/datasets/pull/3911
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3911", "html_url": "https://github.com/huggingface/datasets/pull/3911", "diff_url": "https://github.com/huggingface/datasets/pull/3911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3911.patch", "merged_at": "2022-03-17T17:45:54" }
user_login: sashavor
is_pull_request: true
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._" ]

id: 1,168,579,694
number: 3,910
title: Fix text loader to split only on universal newlines
state: closed
created_at: 2022-03-14T15:54:58
updated_at: 2022-03-15T16:16:11
closed_at: 2022-03-15T16:16:09
html_url: https://github.com/huggingface/datasets/pull/3910
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3910", "html_url": "https://github.com/huggingface/datasets/pull/3910", "diff_url": "https://github.com/huggingface/datasets/pull/3910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3910.patch", "merged_at": "2022-03-15T16:16:09" }
user_login: albertvillanova
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3910). All of your documentation changes will be reflected on that endpoint.", "Looks like the test needs to be updated for windows ^^'", "I don't think this is the same issue as in https://github.com/oscar-corpus/corpus/issues/18, where the OSCAR metadata has line offsets that use only `\\n` as the newline marker to count lines, not `\\r\\n` or `\\r`.\r\n\r\nIt looks like the OSCAR data loader is opening the data files with `gzip.open` directly and I don't think this text loader is used, but I'm not familiar with a lot of `datasets` internals so I could be mistaken?", "You are right @adrianeboyd.\r\n\r\nThis PR fixes #3729.\r\n\r\nAdditionally, this PR is somehow related to the OSCAR issue. However, the OSCAR issue have multiple root causes: one is the offset initialization (as you pointed out); other is similar to this case: Unicode newlines are not properly handled.\r\n\r\nI will make a change proposal for OSCAR this afternoon.", "@lhoestq I'm working on fixing the Windows tests on my Windows machine...", "I finally changed the approach in order to avoid having \"\\r\\n\" and \"\\r\" line breaks in Python `str` read from files on Windows/old Macintosh machines." ]

id: 1,168,578,058
number: 3,909
title: Error loading file audio when downloading the Common Voice dataset directly from the Hub
state: closed
created_at: 2022-03-14T15:53:50
updated_at: 2023-03-02T15:31:27
closed_at: 2023-03-02T15:31:26
html_url: https://github.com/huggingface/datasets/issues/3909
pull_request: null
user_login: aliceinland
is_pull_request: false
comments:
[ "Hi ! It could an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?", "I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.\r\n\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor\r\nimport soundfile as sf\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_array(batch):\r\n speech, _ = sf.read(batch[\"file\"])\r\n batch[\"speech\"] = speech\r\n return batch\r\n\r\nlibrispeech_eval = librispeech_eval.map(map_to_array)\r\n\r\ndef map_to_pred(batch):\r\n features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=[\"speech\"])\r\n\r\nprint(\"WER:\", wer(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```\r\n\r\nThe code is taken directly from \"https://huggingface.co/facebook/s2t-small-librispeech-asr\".\r\n\r\nThe short error code is \"RuntimeError: Error opening '6930-75918-0000.flac': System error.\" (it can't find the first file), and I agree, I can't find the file either. 
The dataset has downloaded correctly (it says), but on the location, there are only \".arrow\" files, no \".flac\" files.\r\n\r\n**Error message:**\r\n\r\n```python\r\nRuntimeError Traceback (most recent call last)\r\nInput In [15], in <cell line: 16>()\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n---> 16 librispeech_eval = librispeech_eval.map(map_to_array)\r\n 18 def map_to_pred(batch):\r\n 19 features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1953, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1950 disable_tqdm = not logging.is_progress_bar_enabled()\r\n 1952 if num_proc is None or num_proc == 1:\r\n-> 1953 return self._map_single(\r\n 1954 function=function,\r\n 1955 with_indices=with_indices,\r\n 1956 with_rank=with_rank,\r\n 1957 input_columns=input_columns,\r\n 1958 batched=batched,\r\n 1959 batch_size=batch_size,\r\n 1960 drop_last_batch=drop_last_batch,\r\n 1961 remove_columns=remove_columns,\r\n 1962 keep_in_memory=keep_in_memory,\r\n 1963 load_from_cache_file=load_from_cache_file,\r\n 1964 cache_file_name=cache_file_name,\r\n 1965 writer_batch_size=writer_batch_size,\r\n 1966 features=features,\r\n 1967 disable_nullable=disable_nullable,\r\n 1968 fn_kwargs=fn_kwargs,\r\n 1969 new_fingerprint=new_fingerprint,\r\n 1970 disable_tqdm=disable_tqdm,\r\n 1971 desc=desc,\r\n 1972 )\r\n 1973 else:\r\n 1975 def format_cache_file_name(cache_file_name, rank):\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:519, in transmit_tasks.<locals>.wrapper(*args, **kwargs)\r\n 517 self: \"Dataset\" = kwargs.pop(\"self\")\r\n 518 # apply actual function\r\n--> 519 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 520 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 521 for dataset in datasets:\r\n 522 # Remove task templates if a column mapping of the template is no longer valid\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:486, in transmit_format.<locals>.wrapper(*args, **kwargs)\r\n 479 self_format = {\r\n 480 \"type\": self._format_type,\r\n 481 \"format_kwargs\": self._format_kwargs,\r\n 482 \"columns\": self._format_columns,\r\n 483 \"output_all_columns\": self._output_all_columns,\r\n 484 }\r\n 485 # apply actual function\r\n--> 486 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 487 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 488 # re-apply format to the output\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)\r\n 452 kwargs[fingerprint_name] = update_fingerprint(\r\n 453 self._fingerprint, transform, kwargs_for_fingerprint\r\n 454 )\r\n 456 # Call actual function\r\n--> 458 out = func(self, *args, **kwargs)\r\n 460 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n 462 if inplace: # update after calling func so that the fingerprint doesn't change if the 
function fails\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2318, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)\r\n 2316 if not batched:\r\n 2317 for i, example in enumerate(pbar):\r\n-> 2318 example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n 2319 if update_data:\r\n 2320 if i == 0:\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2218, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)\r\n 2216 if with_rank:\r\n 2217 additional_args += (rank,)\r\n-> 2218 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n 2219 if update_data is None:\r\n 2220 # Check if the function returns updated examples\r\n 2221 update_data = isinstance(processed_inputs, (Mapping, pa.Table))\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1913, in Dataset.map.<locals>.decorate.<locals>.decorated(item, *args, **kwargs)\r\n 1909 decorated_item = (\r\n 1910 Example(item, features=self.features) if not batched else Batch(item, features=self.features)\r\n 1911 )\r\n 1912 # Use the LazyDict internally, while mapping the function\r\n-> 1913 result = f(decorated_item, *args, **kwargs)\r\n 1914 # Return a standard dict\r\n 1915 return result.data if isinstance(result, LazyDict) else result\r\n\r\nInput In [15], in map_to_array(batch)\r\n 11 def map_to_array(batch):\r\n---> 12 speech, _ = sf.read(batch[\"file\"])\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:256, in read(file, frames, start, stop, dtype, always_2d, fill_value, out, samplerate, channels, format, subtype, endian, closefd)\r\n 170 def read(file, frames=-1, start=0, stop=None, dtype='float64', always_2d=False,\r\n 171 fill_value=None, out=None, samplerate=None, channels=None,\r\n 172 format=None, subtype=None, endian=None, closefd=True):\r\n 173 \"\"\"Provide audio data from a sound file as NumPy array.\r\n 174 \r\n 175 By default, the whole file is read from the beginning, but the\r\n (...)\r\n 254 \r\n 255 \"\"\"\r\n--> 256 with SoundFile(file, 'r', samplerate, channels,\r\n 257 subtype, endian, format, closefd) as f:\r\n 258 frames = f._prepare_read(start, stop, frames)\r\n 259 data = f.read(frames, dtype, always_2d, fill_value, out)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)\r\n 626 self._mode = mode\r\n 627 self._info = _create_info_struct(file, mode, samplerate, channels,\r\n 628 format, subtype, endian)\r\n--> 629 self._file = self._open(file, mode_int, closefd)\r\n 630 if set(mode).issuperset('r+') and self.seekable():\r\n 631 # Move write position to 0 (like in Python file objects)\r\n 632 self.seek(0)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1183, in SoundFile._open(self, file, mode_int, closefd)\r\n 1181 else:\r\n 1182 raise TypeError(\"Invalid file: {0!r}\".format(self.name))\r\n-> 1183 
_error_check(_snd.sf_error(file_ptr),\r\n 1184 \"Error opening {0!r}: \".format(self.name))\r\n 1185 if mode_int == _snd.SFM_WRITE:\r\n 1186 # Due to a bug in libsndfile version <= 1.0.25, frames != 0\r\n 1187 # when opening a named pipe in SFM_WRITE mode.\r\n 1188 # See http://github.com/erikd/libsndfile/issues/77.\r\n 1189 self._info.frames = 0\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1357, in _error_check(err, prefix)\r\n 1355 if err != 0:\r\n 1356 err_str = _snd.sf_error_number(err)\r\n-> 1357 raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))\r\n\r\nRuntimeError: Error opening '6930-75918-0000.flac': System error.\r\n```\r\n\r\n**Package versions:**\r\n```python\r\npython: 3.9\r\ntransformers: 4.17.0\r\ndatasets: 2.0.0\r\nSoundFile: 0.10.3.post1\r\n```\r\n", "Hi ! In `datasets` 2.0 can access the audio array with `librispeech_eval[0][\"audio\"][\"array\"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)\r\n\r\ncc @patrickvonplaten we will need to update the readme at [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) as well as https://huggingface.co/docs/transformers/model_doc/speech_to_text", "Thanks!\r\n\r\nAnd sorry for posting this problem in what turned on to be an unrelated thread.\r\n\r\nI rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.\r\n\r\nThe rewritten code:\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor, Wav2Vec2Processor\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_pred(batch):\r\n audio = batch[\"audio\"]\r\n features = processor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"], padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred)#, batched=True, batch_size=8)\r\n\r\nprint(\"WER:\", wer.compute(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```", "I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for \"transcription\". You can fix it by adding `[0]` at the end of this line to get the string:\r\n```python\r\nbatch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]\r\n```", "Updating as many model cards now as I can find", "https://github.com/huggingface/transformers/pull/16611", "We no longer use `torchaudio` for decoding MP3 files, and the problem with model cards has been addressed, so I'm closing this issue." ]
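A condensed sketch of the `datasets` >= 2.0 access pattern recommended in the thread, where the Audio feature decodes on access and no `map_to_array` step is needed:

```python
from datasets import load_dataset

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

# The "audio" column decodes lazily when an example is accessed.
sample = librispeech_eval[0]["audio"]
waveform = sample["array"]
sampling_rate = sample["sampling_rate"]
```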

id: 1,168,576,963
number: 3,908
title: Update README.md for SQuAD v2 metric
state: closed
created_at: 2022-03-14T15:53:10
updated_at: 2022-03-15T17:04:11
closed_at: 2022-03-15T17:04:11
html_url: https://github.com/huggingface/datasets/pull/3908
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3908", "html_url": "https://github.com/huggingface/datasets/pull/3908", "diff_url": "https://github.com/huggingface/datasets/pull/3908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3908.patch", "merged_at": "2022-03-15T17:04:10" }
user_login: sashavor
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3908). All of your documentation changes will be reflected on that endpoint." ]

id: 1,168,575,998
number: 3,907
title: Update README.md for SQuAD metric
state: closed
created_at: 2022-03-14T15:52:31
updated_at: 2022-03-15T17:04:20
closed_at: 2022-03-15T17:04:19
html_url: https://github.com/huggingface/datasets/pull/3907
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3907", "html_url": "https://github.com/huggingface/datasets/pull/3907", "diff_url": "https://github.com/huggingface/datasets/pull/3907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3907.patch", "merged_at": "2022-03-15T17:04:19" }
user_login: sashavor
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3907). All of your documentation changes will be reflected on that endpoint." ]

id: 1,168,496,328
number: 3,906
title: NonMatchingChecksumError on Spider dataset
state: closed
created_at: 2022-03-14T14:54:53
updated_at: 2022-03-15T07:09:51
closed_at: 2022-03-15T07:09:51
html_url: https://github.com/huggingface/datasets/issues/3906
pull_request: null
user_login: kolk
is_pull_request: false
comments:
[ "Hi @kolk, thanks for reporting.\r\n\r\nIndeed, Google Drive service recently changed their service and we had to add a fix to our library to cope with that change:\r\n- #3787 \r\n\r\nWe just made patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4\r\n\r\nPlease, feel free to update your local `datasets` version, so that you get the fix:\r\n```shell\r\npip install -U datasets\r\n```" ]

id: 1,168,320,568
number: 3,905
title: Perplexity Metric Card
state: closed
created_at: 2022-03-14T12:39:40
updated_at: 2022-03-16T19:38:56
closed_at: 2022-03-16T19:38:56
html_url: https://github.com/huggingface/datasets/pull/3905
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3905", "html_url": "https://github.com/huggingface/datasets/pull/3905", "diff_url": "https://github.com/huggingface/datasets/pull/3905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3905.patch", "merged_at": "2022-03-16T19:38:56" }
user_login: emibaylor
is_pull_request: true
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.", "I'm wondering if we should add that perplexity can be used for analyzing datasets as well", "Otherwise, looks good! Good job, @emibaylor !" ]

id: 1,167,730,095
number: 3,904
title: CONLL2003 Dataset not available
state: closed
created_at: 2022-03-13T23:46:15
updated_at: 2023-06-28T18:08:16
closed_at: 2022-03-17T08:21:32
html_url: https://github.com/huggingface/datasets/issues/3904
pull_request: null
user_login: omarespejel
is_pull_request: false
comments:
[ "Thanks for reporting, @omarespejel.\r\n\r\nI'm sorry but I can't reproduce the issue: the loading of the dataset works perfecto for me and I can reach the data URL: https://data.deepai.org/conll2003.zip\r\n\r\nMight it be due to a temporary problem in the data owner site (https://data.deepai.org/) that is fixed now?\r\nCould you please try loading the dataset again and tell if the problem persists?", "@omarespejel I'm closing this issue. Feel free to reopen it if the problem persists.", "getting same issue. Can't find any solution.", "I am getting the same issue. I use google colab with CPU.\r\nThe code I used is exactly the same as described above.\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"conll2003\")\r\n```\r\n\r\nThe produced error:\r\n![image](https://github.com/huggingface/datasets/assets/9371628/d87f7fb0-ef58-4755-abb5-f8f92c51fe02)\r\n\r\nNote: This error is different from what was initially described in this thread. This is because I use CPU. When I use GPU I reproduce the same initial error of the thread.\r\n\r\nMoreover, I receive the following warning:\r\n```\r\nWARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}\r\nDownloading and preparing dataset conll2003/conll2003 to /root/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98...\r\nWARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}\r\n```\r\n" ]

id: 1,167,521,627
number: 3,903
title: Add Biwi Kinect Head Pose dataset.
state: closed
created_at: 2022-03-13T08:59:21
updated_at: 2022-05-31T17:02:19
closed_at: 2022-05-31T12:15:58
html_url: https://github.com/huggingface/datasets/pull/3903
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3903", "html_url": "https://github.com/huggingface/datasets/pull/3903", "diff_url": "https://github.com/huggingface/datasets/pull/3903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3903.patch", "merged_at": "2022-05-31T12:15:58" }
user_login: dnaveenr
is_pull_request: true
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the detailed explanation of the structure!\r\n\r\n1. IMO it makes the most sense to yield one example for each person (so the total of 24 examples), so the features dict should be similar to this:\r\n \r\n ```python\r\n features = Features({\r\n \"rgb\": Sequence(Image()), # for the png frames\r\n \"rgb_cal\": {\"intrisic_mat\": Array2D(shape=(3, 3), dtype=\"float32\"), \"extrinsic_mat\": {\"rotation\": Array2D(shape=(3, 3), dtype=\"float32\"), \"translation\": Sequence(Value(\"float32\", length=3)}},\r\n \"depth\": Sequence(Value(\"string\")), # for the depth frames\r\n \"depth_cal\": the same as \"rgb_cal\",\r\n \"head_pose_gt\": Sequence({\"center\": Sequence(Value(\"float32\", length=3), \"rotation\": Array2D(shape=(3, 3), dtype=\"float32\")}),\r\n \"head_template\": Value(\"string\"), # for the person's obj file\r\n\r\n })\r\n ```\r\n We can add a \"Data Processing\" section to the card to explain how to parse the files.\r\n\r\n\r\n2. Yes, it's ok to parse the files as long as it doesn't take too much time/memory (e.g., it's ok to parse the `*_pose.txt` or `*.cal` files, but it's better to leave the `*_depth.bin` or `*.obj` files unprocessed and yield the paths to them)", "Thanks for the suggestions @mariosasko, yielding one example for each person would make things much easier.\r\nOkay. I'll look at parsing the files and then displaying the information.", "Added the following : \r\n- Features, I have included sequence_number and subject_id along with the features you had suggested.\r\n- Tested loading of the dataset along with dummy_data and full_data tests.\r\n- Created the dataset_infos.json file.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Cards with more details.\r\n- [x] \"Data Processing\" section\r\n\r\nAny inputs on what to include in the \"Data Processing\" section ?\r\n", "@mariosasko Please could you review this when you get time. Thank you.", "In the Data Processing section, I've added example code for a compressed binary depth image file. Updated the Readme as well. ", "@mariosasko / @lhoestq , Please could you review this when you get time. Thank you.", "Created an issue here: https://github.com/huggingface/datasets/issues/4152", "Got it. Thanks for the comments. I've collapsed the C++ code in the readme and added the suggestions.", "Hi ! The `AttributeError ` bug has been fixed, feel free to merge `master` into your branch ;)", "I haven't been able to figure out why CI is failing, the error shown is : \r\n\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Parsing:\r\nE list index out of range\r\nE The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE list index out of range\r\n```\r\n\r\nAny inputs would be helpful.", "I think it's because there are tabulations in the c++ code, can you replace them with regular spaces please ?\r\n\r\n(then in another PR we can maybe fix the Readme parser to support text indented with tabulations)", "@lhoestq , initially the idea was to have one example = one image with an additional field mentioning the frame_number. But each subject, we had a head template, calibration information for the depth and the color camera which was common to all the examples for that subject. 
Also, the images were continuous frames.\r\n@mariosasko suggested this structure and it made sense to group the images together for a particular subject.", "> Don't you think it would be more practical to have one example = one image in this dataset ?\r\n\r\nHaving one example = one image would be good but since we have a head template, calibration information for the depth and the color camera which is common to all the images for that subject and the images being continuous frames, I think it makes sense to group the images together for each subject. This will make the feature representation easier.\r\n\r\n", "Ok I see, sounds good then. Users can still separate the images if they want to", "The CI fails are unrelated to this PR and fixed on master, merging !", "Great. Thanks @lhoestq , I think we can close this issue now. ( #3822 )" ]

id: 1,167,403,377
number: 3,902
title: Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
state: closed
created_at: 2022-03-12T21:22:03
updated_at: 2023-02-09T14:53:49
closed_at: 2022-03-22T07:10:41
html_url: https://github.com/huggingface/datasets/issues/3902
pull_request: null
user_login: arunasank
is_pull_request: false
comments:
[ "Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`", "Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).\r\n\r\nIn order to fix this, you should update `fsspec` from within the \"problematic\" Python virtual env:\r\n```\r\npip install -U \"fsspec[http]>=2021.05.0\"", "I'm closing this issue, @arunasank.\r\n\r\nFeel free to re-open it if the problem persists. ", "from lightgbm import LGBMModel,LGBMClassifier, plot_importance\r\nafter importing lib getting (partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) error, can help me", "@deepakmahtha I think you are not using `datasets`: this is the GitHub repository of Hugging Face Datasets.\r\n\r\nIf you are using `lightgbm`, you should report the issue to their repository instead.\r\n\r\nAnyway, we have proposed a possible fix just in a comment above: to update fsspec.\r\nhttps://github.com/huggingface/datasets/issues/3902#issuecomment-1066517824" ]

id: 1,167,339,773
number: 3,901
title: Dataset viewer issue for IndicParaphrase- the preview doesn't show
state: closed
created_at: 2022-03-12T16:56:05
updated_at: 2022-04-12T12:10:50
closed_at: 2022-04-12T12:10:49
html_url: https://github.com/huggingface/datasets/issues/3901
pull_request: null
user_login: ratishsp
is_pull_request: false
comments:
[ "It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture d’écran 2022-04-12 à 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n" ]

id: 1,167,224,903
number: 3,900
title: Add MetaShift dataset
state: closed
created_at: 2022-03-12T08:44:18
updated_at: 2022-04-01T16:59:48
closed_at: 2022-04-01T15:16:30
html_url: https://github.com/huggingface/datasets/pull/3900
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3900", "html_url": "https://github.com/huggingface/datasets/pull/3900", "diff_url": "https://github.com/huggingface/datasets/pull/3900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3900.patch", "merged_at": "2022-04-01T15:16:30" }
user_login: dnaveenr
is_pull_request: true
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Please could you review this when you get time. Thank you.", "Thanks a lot for your inputs @mariosasko .\r\n> Maybe we can add the generated meta-graphs to the card as images (with attributions)?\r\n\r\nYes. We can do this for the default set of classes. Will add this.\r\n\r\n> Would be cool if we could have them as additional configs. Also, maybe we could have configs that expose [image metadata](https://github.com/Weixin-Liang/MetaShift/tree/main/dataset/meta_data) from the https://nlp.stanford.edu/data/gqa/sceneGraphs.zip file (this file is downloaded in the script but not used).\r\n\r\nI'll try adding the bonus section as additional config. \r\nRegarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n", "> Regarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n\r\nOh, I forgot to mention that. Let's add a `Dataset Usage` section to the card to document the params (similar to this: https://huggingface.co/datasets/electricity_load_diagrams#dataset-usage). Also, feel free to add the constants that can be tuned as config params (e.g. `IMAGE_SUBSET_SIZE_THRESHOLD` or the `5` in `len(subject_data) <= 5`).", "Okay. Got it. Will add these and constants as config parameters.\r\n\r\nThe image metadata from scene graphs looks like this : \r\n```json\r\n{\r\n \"2407890\": {\r\n \"width\": 640,\r\n \"height\": 480,\r\n \"location\": \"living room\",\r\n \"weather\": none,\r\n \"objects\": {\r\n \"271881\": {\r\n \"name\": \"chair\",\r\n \"x\": 220,\r\n \"y\": 310,\r\n \"w\": 50,\r\n \"h\": 80,\r\n \"attributes\": [\"brown\", \"wooden\", \"small\"],\r\n \"relations\": {\r\n \"32452\": {\r\n \"name\": \"on\",\r\n \"object\": \"275312\"\r\n },\r\n \"32452\": {\r\n \"name\": \"near\",\r\n \"object\": \"279472\"\r\n } \r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n``load_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...], image_metadata=True)``\r\nHow do we showcase/display the image metadata(json) information ?\r\n", "> How do we showcase/display the image metadata(json) information ?\r\n\r\nWe can add the JSON fields as keys to the features dict:\r\n```python\r\n if self.config.image_metadata:\r\n features.update({\"width\": Value(\"int\"), \"height\": Value(\"int\"), \"location\": Value(\"string\"), ...}) \r\n```\r\n\r\nP.S. Would rename `image_metadata` to `with_image_metadata` ", "I have added the following : \r\n- Added the meta-graphs to the card as images under the Section \"Dataset Meta-Graphs\".\r\n- Generate the Attributes-Dataset using config parameter. [ [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]\r\n- Expose image metadata using config parameter.\r\nFormat of the image metadata is as follows : [Link](https://cs.stanford.edu/people/dorarad/gqa/download.html)\r\nI have modified the \"Objects\" which is dict to a list of dicts with an additional parameter named object_id. 
\r\nI have defined the structure as follows : \r\n```\r\n{\r\n \"width\": datasets.Value(\"int64\"),\r\n \"height\": datasets.Value(\"int64\"),\r\n \"location\": datasets.Value(\"string\"),\r\n \"weather\": datasets.Value(\"string\"),\r\n \"objects\": datasets.Sequence(\r\n {\r\n \"object_id\": datasets.Value(\"string\"),\r\n \"name\": datasets.Value(\"string\"),\r\n \"x\": datasets.Value(\"int64\"),\r\n \"y\": datasets.Value(\"int64\"),\r\n \"w\": datasets.Value(\"int64\"),\r\n \"h\": datasets.Value(\"int64\"),\r\n \"attributes\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"relations\": datasets.Sequence(\r\n {\r\n \"name\": datasets.Value(\"string\"),\r\n \"object\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n }\r\n ),\r\n}\r\n```\r\nProblem is that objects is not being shown as list of dicts. The output looks as follows : \r\n\r\n> metashift_dataset['train'][0]\r\n\r\n```json \r\n{'image_id': '2338755', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x281 at 0x7F066C5A49D0>, 'label': 0, 'context': 'ground', 'width': 500, 'height': 281, 'location': None, 'weather': None, 'objects': {'object_id': ['3070704', '3070705', '3070706', '2416713', '3070702', '2790660', '3063157', '2354960', '2037127', '2392939', '2912743', '2125407', '2735257', '3260906', '2351018', '3288269', '3699852', '2734378', '3421201', '2863115'], 'name': ['bicycle', 'bicycle', 'bicycle', 'boot', 'bicycle', 'motorcycle', 'pepperoni', 'head', 'building', 'wall', 'shorts', 'people', 'wheel', 'bricks', 'man', 'cat', 'boot', 'door', 'ground', 'building'], 'x': [137, 371, 458, 215, 468, 399, 368, 245, 0, 140, 260, 284, 138, 451, 339, 187, 210, 26, 0, 313], 'y': [116, 86, 94, 150, 91, 80, 107, 22, 0, 44, 109, 69, 145, 226, 69, 22, 230, 0, 119, 0], 'w': [197, 27, 15, 73, 24, 53, 9, 37, 289, 46, 43, 30, 74, 28, 35, 116, 53, 107, 500, 55], 'h': [126, 25, 38, 128, 43, 50, 16, 44, 158, 73, 51, 52, 97, 15, 73, 252, 46, 147, 162, 77], 'attributes': [[], [], [], ['white'], [], [], [], [], [], [], [], [], [], [], [], ['white'], ['white'], ['large', 'black'], ['brick'], []], 'relations': [{'name': ['to the left of'], 'object': ['3260906']}, {'name': ['to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['3070706', '2351018', '2125407', '2790660', '2037127', '3070702', '3288269']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the right of'], 'object': ['2351018', '3070705', '3070702', '2790660', '3063157']}, {'name': ['to the right of'], 'object': ['2735257']}, {'name': ['to the right of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['2351018', '2790660', '3070706', '3070705', '3063157']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['3070705', '2351018', '3070702', '3070706', '3063157', '2125407', '2037127', '3288269']}, {'name': ['to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['2037127', '3070706', '3070702', '2912743', '3288269', '2790660', '2125407']}, {'name': ['to the left of', 'to the right of'], 'object': ['2863115', '2734378']}, {'name': ['to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['3070705', '2351018', '3063157', '2125407', '2790660', 
'2863115']}, {'name': ['to the left of', 'to the right of', 'to the left of'], 'object': ['2125407', '2734378', '3288269']}, {'name': ['to the left of', 'on', 'to the left of'], 'object': ['2351018', '3288269', '3063157']}, {'name': ['to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'to the left of'], 'object': ['3063157', '2351018', '2037127', '3070705', '2392939', '2790660']}, {'name': ['to the left of', 'to the left of'], 'object': ['2416713', '3288269']}, {'name': ['to the right of'], 'object': ['3070704']}, {'name': ['to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'walking down'], 'object': ['2037127', '2790660', '2125407', '3070705', '3070706', '2912743', '3070702', '3288269', '3421201']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2392939', '2734378', '2790660', '2735257', '3063157', '3070705', '2351018', '2863115']}, {'name': [], 'object': []}, {'name': ['of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2037127', '2354960', '3288269', '2392939']}, {'name': [], 'object': []}, {'name': ['to the right of', 'to the right of', 'to the right of'], 'object': ['2037127', '3288269', '2354960']}]}}\r\n```\r\nExpected output of image_metadata would be : \r\n```\r\n{'height': 281,\r\n 'location': None,\r\n 'objects': [{'attributes': [],\r\n 'h': 126,\r\n 'name': 'bicycle',\r\n 'object_id': '3070704',\r\n 'relations': [{'name': 'to the left of', 'object': '3260906'}],\r\n 'w': 197,\r\n 'x': 137,\r\n 'y': 116},\r\n {'attributes': [],\r\n 'h': 25,\r\n 'name': 'bicycle',\r\n 'object_id': '3070705',\r\n 'relations': [{'name': 'to the left of', 'object': '3070706'},\r\n {'name': 'to the right of', 'object': '2351018'},\r\n {'name': 'to the right of', 'object': '2125407'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '3070702'},\r\n {'name': 'to the right of', 'object': '3288269'}],\r\n 'w': 27,\r\n 'x': 371,\r\n 'y': 86},\r\n {'attributes': ['white'],\r\n 'h': 252,\r\n 'name': 'cat',\r\n 'object_id': '3288269',\r\n 'relations': [{'name': 'to the right of', 'object': '2392939'},\r\n {'name': 'to the right of', 'object': '2734378'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2735257'},\r\n {'name': 'to the left of', 'object': '3063157'},\r\n {'name': 'to the left of', 'object': '3070705'},\r\n {'name': 'to the left of', 'object': '2351018'},\r\n {'name': 'to the left of', 'object': '2863115'}],\r\n 'w': 116,\r\n 'x': 187,\r\n 'y': 22},\r\n {'attributes': ['white'],\r\n 'h': 46,\r\n 'name': 'boot',\r\n 'object_id': '3699852',\r\n 'relations': [],\r\n 'w': 53,\r\n 'x': 210,\r\n 'y': 230},\r\n .\r\n .\r\n .\r\n {'attributes': ['large', 'black'],\r\n 'h': 147,\r\n 'name': 'door',\r\n 'object_id': '2734378',\r\n 'relations': [{'name': 'of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '2354960'},\r\n {'name': 'to the left of', 'object': '3288269'},\r\n {'name': 'to the left of', 'object': '2392939'}],\r\n 'w': 107,\r\n 'x': 26,\r\n 'y': 0},\r\n {'attributes': ['brick'],\r\n 'h': 162,\r\n 'name': 'ground',\r\n 'object_id': '3421201',\r\n 'relations': [],\r\n 'w': 500,\r\n 'x': 0,\r\n 'y': 119},\r\n {'attributes': [],\r\n 'h': 77,\r\n 'name': 
'building',\r\n 'object_id': '2863115',\r\n 'relations': [{'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the right of', 'object': '3288269'},\r\n {'name': 'to the right of', 'object': '2354960'}],\r\n 'w': 55,\r\n 'x': 313,\r\n 'y': 0}],\r\n 'weather': None,\r\n 'width': 500}\r\n\r\n```\r\n\r\nMay I know how to get the list of dicts representation correctly ?\r\n\r\n---\r\nTo-Do : \r\n\r\n- [x] Generate dataset_infos.json file.\r\n- [x] Add “Dataset Usage” section in the cards and write about the config parameters. \r\n- [x] Add the constants that can be tuned as config params.\r\n", "> Problem is that objects is not being shown as list of dicts. The output looks as follows :\r\n\r\nThat's expected. We convert a sequence of dictionaries to a dictionary of sequences to keep the formatting aligned with Tensorflow Datasets. You could disable this behavior by replacing `\"objects\": datasets.Sequence(object_fields_dict)` with `\"objects\": [object_fields_dict]`, but that's not what we usually do, so let's keep it like that. \r\n\r\nAlso, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the `src` attribute (and specify `alt` in case the URLs go down).\r\n\r\nI'll do a proper review again after you are finished with the dummy data.", "> That's expected.\r\n\r\nOkay. Got it. Thanks. I thought I was doing something wrong.\r\n\r\n> Also, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the src attribute (and specify alt in case the URLs go down).\r\n\r\nSure. Where do we host these images ? Can I upload them to any free image hosting platform or is there any particular website you use ?\r\n\r\n> I'll do a proper review again after you are finished with the dummy data.\r\n\r\nSure. Thanks. I'm working on this part. Will update you.\r\n", "Update : \r\n- I have generated the dataset_infos.json file.\r\n\r\n> I suggest you try to generate the dataset_infos.json file first, and then I can help with the dummy data.\r\n\r\nI am having issues creating the dummy data. I get the following which I use the command : \r\n\r\n`datasets-cli dummy_data datasets/metashift`\r\n\r\n```\r\nDataset metashift with config MetashiftConfig(name='metashift', version=1.0.0, data_dir=None, data_files=None, description=None) seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/datasets/commands/dummy_data.py\", line 324, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/datasets/commands/dummy_data.py\", line 407, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```", "> Feel free to host the images online (on imgur for example) :)\r\n\r\nSure. Will do that.\r\n\r\nThanks for the explanation regarding the dummy data zip files. 
I will try it out and let you know.", "Instead of uploading the images to a hosting service, you can directly reference their GitHub URLs (open the image in the MetaShift repo -> click Download -> copy the image URL). For instance, this is the URL of one of the images:`https://raw.githubusercontent.com/Weixin-Liang/MetaShift/main/docs/figures/Cat-MetaGraph.jpg`. Also, feel free to replace `main` with the most recent commit hash in the copied URLs to make them more robust.", "@mariosasko I've actually created metagraphs for all the default classes other than those present in the GitHub Repo and included all of them. :) The Repo has them only for two classes.\r\n\r\nIn case we want to limit the no.of meta graphs included, we can stick to the github URLs from the repo itself.\r\n", "Update : \r\n- I could add the dummy data and get the dummy data test to work. Since we have a preprocessing step on the dataset, one of the .pkl file size is on the higher side. This was done for the tests to pass. I hope that is okay. The dummy.zip file size is about 273K.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Structure in the data cards to include Data Instances when config parameters are used.\r\n\r\nPlease could you review when you get time. Thank you.", "Thanks a lot for your suggestions, Mario. The thing I learnt from the review is that I need to make better sentence formations. I will keep this in mind. :) ", "Thanks a lot for your support. @mariosasko and @lhoestq .\r\n\r\n> Super impressed by your work on this, congrats :)\r\n\r\nIts my first dataset contribution to the 🤗 Datasets library, I'm super excited. Thank you. :)\r\n\r\nAlso, I think we can close this request issue now, [#3813](https://github.com/huggingface/datasets/issues/3813)" ]
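The Sequence-of-dicts flattening discussed in this thread is easy to reproduce in isolation. A minimal sketch, assuming a recent `datasets` release; the feature names are illustrative, not MetaShift's real schema:

```python
from datasets import Dataset, Features, Sequence, Value

rows = {"objects": [[{"name": "bicycle", "x": 137}, {"name": "cat", "x": 187}]]}

# Sequence over a dict flattens a list of dicts into a dict of lists
# (the behavior kept aligned with TensorFlow Datasets):
flat = Features({"objects": Sequence({"name": Value("string"), "x": Value("int64")})})
print(Dataset.from_dict(rows, features=flat)[0]["objects"])
# {'name': ['bicycle', 'cat'], 'x': [137, 187]}

# A plain Python list of the same dict keeps the list-of-dicts layout:
nested = Features({"objects": [{"name": Value("string"), "x": Value("int64")}]})
print(Dataset.from_dict(rows, features=nested)[0]["objects"])
# [{'name': 'bicycle', 'x': 137}, {'name': 'cat', 'x': 187}]
```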
1,166,931,812
3,899
Add exact match metric
closed
2022-03-11T22:21:40
2022-03-21T16:10:03
2022-03-21T16:05:35
https://github.com/huggingface/datasets/pull/3899
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3899", "html_url": "https://github.com/huggingface/datasets/pull/3899", "diff_url": "https://github.com/huggingface/datasets/pull/3899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3899.patch", "merged_at": "2022-03-21T16:05:34" }
emibaylor
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,166,778,250
3,898
Create README.md for WER metric
closed
2022-03-11T19:29:09
2022-03-15T17:05:00
2022-03-15T17:04:59
https://github.com/huggingface/datasets/pull/3898
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3898", "html_url": "https://github.com/huggingface/datasets/pull/3898", "diff_url": "https://github.com/huggingface/datasets/pull/3898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3898.patch", "merged_at": "2022-03-15T17:04:59" }
sashavor
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3898). All of your documentation changes will be reflected on that endpoint.", "For ASR you can probably ping @patrickvonplaten ", "Ah only noticed now that ` # Values from popular papers` is from a template. @lhoestq @sashavor - not really sure if this section is useful in general really. \r\n\r\nIMO, it's more confusing/misleading than it helps. E.g. a value of 0.03 WER on a fake read-out audio dataset is not better than a WER of 0.3 on a real-world noisy, conversational audio dataset. I think the same holds true for other metrics no? I can think of very little metrics where a metric value is not dataset dependent. E.g. perplexity is super dataset dependent, summarization metrics like ROUGE as well, ...\r\n\r\nAlso, I don't really see what this section tries to achieve - is the idea here to give the reader some papers that use this metric to better understand in which context it is used? Should we maybe rename the section to `Popular papers making use of this metric` or something? \r\n\r\n", "I put \"Values from popular papers\" as a subsection of \"Output values\" -- I hope that's a compromise that works for everyone :hugs: " ]
1,166,715,104
3,897
Align tqdm control/cache control with Transformers
closed
2022-03-11T18:12:22
2022-03-14T15:01:10
2022-03-14T15:01:08
https://github.com/huggingface/datasets/pull/3897
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3897", "html_url": "https://github.com/huggingface/datasets/pull/3897", "diff_url": "https://github.com/huggingface/datasets/pull/3897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3897.patch", "merged_at": "2022-03-14T15:01:08" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3897). All of your documentation changes will be reflected on that endpoint." ]
1,166,628,270
3,896
Missing google file for `multi_news` dataset
closed
2022-03-11T16:38:10
2022-03-15T12:30:23
2022-03-15T12:30:23
https://github.com/huggingface/datasets/issues/3896
null
severo
false
[ "reported by @abidlabs ", "related to https://github.com/huggingface/datasets/pull/3843?", "`datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.\r\n\r\nWhen loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :)", "That is. The PR #3843 was just opened a bit later we had made our 1.18.4 patch release...\r\nOnce merged, that will fix this issue. ", "OK. Should fix the viewer for 50 datasets\r\n\r\n<img width=\"148\" alt=\"Capture d’écran 2022-03-14 à 11 51 02\" src=\"https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png\">\r\n" ]
1,166,619,182
3,895
Fix code examples indentation
closed
2022-03-11T16:29:04
2022-03-11T17:34:30
2022-03-11T17:34:29
https://github.com/huggingface/datasets/pull/3895
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3895", "html_url": "https://github.com/huggingface/datasets/pull/3895", "diff_url": "https://github.com/huggingface/datasets/pull/3895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3895.patch", "merged_at": "2022-03-11T17:34:29" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895). All of your documentation changes will be reflected on that endpoint.", "Still not rendered properly: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping", "My last commit should have fixed it, I don't know why the dev doc build is not showing my last changes", "Let me merge this and we can see on `master` how it renders, until the dev doc build is fixed" ]
1,166,611,270
3,894
[docs] make dummy data creation optional
closed
2022-03-11T16:21:34
2022-03-11T17:27:56
2022-03-11T17:27:55
https://github.com/huggingface/datasets/pull/3894
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3894", "html_url": "https://github.com/huggingface/datasets/pull/3894", "diff_url": "https://github.com/huggingface/datasets/pull/3894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3894.patch", "merged_at": "2022-03-11T17:27:55" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.", "The dev doc build rendering doesn't seem to be updated with my last commit for some reason", "Merging it anyway since I'd like to share this page with users 🙃 " ]
1,166,551,684
3,893
Add default branch for doc building
closed
2022-03-11T15:24:27
2022-03-11T15:34:35
2022-03-11T15:34:34
https://github.com/huggingface/datasets/pull/3893
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3893", "html_url": "https://github.com/huggingface/datasets/pull/3893", "diff_url": "https://github.com/huggingface/datasets/pull/3893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3893.patch", "merged_at": "2022-03-11T15:34:34" }
sgugger
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3893). All of your documentation changes will be reflected on that endpoint.", "Yes! And when we discovered on the Transformers side that this check fails on the GitHub actions, we added a config attribute to have a default. Setting in Transformers fixed the issue of the doc being deployed to main, so porting the fix here too :-)" ]
1,166,227,003
3,892
Fix CLI test checksums
closed
2022-03-11T10:04:04
2022-03-15T12:28:24
2022-03-15T12:28:23
https://github.com/huggingface/datasets/pull/3892
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3892", "html_url": "https://github.com/huggingface/datasets/pull/3892", "diff_url": "https://github.com/huggingface/datasets/pull/3892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3892.patch", "merged_at": "2022-03-15T12:28:23" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3892). All of your documentation changes will be reflected on that endpoint.", "Feel free to merge if it's good for you :)", "I've added a test @lhoestq. Once all green, I'll merge. ", "Last failing tests do not have nothing to do with this PR." ]
1,165,503,732
3,891
Fix race condition in doc build
closed
2022-03-10T17:17:10
2022-03-10T17:23:00
2022-03-10T17:17:30
https://github.com/huggingface/datasets/pull/3891
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3891", "html_url": "https://github.com/huggingface/datasets/pull/3891", "diff_url": "https://github.com/huggingface/datasets/pull/3891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3891.patch", "merged_at": "2022-03-10T17:17:30" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3891). All of your documentation changes will be reflected on that endpoint." ]
1,165,502,838
3,890
Update beans download urls
closed
2022-03-10T17:16:16
2022-03-15T16:47:30
2022-03-15T15:26:48
https://github.com/huggingface/datasets/pull/3890
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3890", "html_url": "https://github.com/huggingface/datasets/pull/3890", "diff_url": "https://github.com/huggingface/datasets/pull/3890.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3890.patch", "merged_at": "2022-03-15T15:26:47" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3890). All of your documentation changes will be reflected on that endpoint.", "@albertvillanova Thanks for investigating and fixing that issue. I regenerated the `dataset_infos.json` file." ]
1,165,456,083
3,889
Cannot load beans dataset (Couldn't reach the dataset)
closed
2022-03-10T16:34:08
2022-03-15T15:26:47
2022-03-15T15:26:47
https://github.com/huggingface/datasets/issues/3889
null
ivsanro1
false
[ "Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :)" ]
1,165,435,529
3,888
IterableDataset columns and feature types
open
2022-03-10T16:19:12
2022-11-29T11:39:24
null
https://github.com/huggingface/datasets/issues/3888
null
lhoestq
false
[ "#self-assign", "@alvarobartt I've assigned you the issue since I'm not actively working on it.", "Cool thanks @mariosasko I'll try to fix it in the upcoming days, thanks!", "@lhoestq so in order to address what’s not completed in this issue, do you think it makes sense to add a param `features` to `IterableDataset.map` so that the output features right after the `map` are defined there? ", "Yes that would be ideal IMO, thanks again for the help :)", "@lhoestq cool then if you agree I can work on that! I’ll also update the docs accordingly once done, thanks!", "I've already started with a PR as a draft @lhoestq, should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so? Thanks!", "> should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so?\r\n\r\nRight now one can use `ds = ds._resolve_features()` do to so. It can be used after `map` or `load_dataset` if the features are not known. Maybe we can make this method public ?" ]
1,165,380,852
3,887
ImageFolder improvements
closed
2022-03-10T15:34:46
2022-03-11T15:06:11
2022-03-11T15:06:11
https://github.com/huggingface/datasets/pull/3887
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3887", "html_url": "https://github.com/huggingface/datasets/pull/3887", "diff_url": "https://github.com/huggingface/datasets/pull/3887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3887.patch", "merged_at": "2022-03-11T15:06:11" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3887). All of your documentation changes will be reflected on that endpoint." ]
1,165,223,319
3,886
Retry HfApi call inside push_to_hub on 504 error
closed
2022-03-10T13:24:40
2022-03-16T09:00:56
2022-03-15T16:19:50
https://github.com/huggingface/datasets/pull/3886
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3886", "html_url": "https://github.com/huggingface/datasets/pull/3886", "diff_url": "https://github.com/huggingface/datasets/pull/3886.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3886.patch", "merged_at": "2022-03-15T16:19:50" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3886). All of your documentation changes will be reflected on that endpoint.", "I made it more robust by increasing the wait time, and I also added some logs when a request is retried. Let me know if it's ok for you", "At the end you did not set the agreed max value of 60s. \r\n\r\nMoreover, with the new numbers, there is a slight contradiction: although you set max_retries=5, we will only make 4 retries at most because of the combined values of `base_wait_time` and `max_wait_time`.", "Yea I thought that in total we could wait 1min, but if we have a max_wait_time of 20sec between each request it's fine IMO\r\n\r\n> Moreover, with the new numbers, there is a slight contradiction: although you set max_retries=5, we will only make 4 retries at most because of the combined values of base_wait_time and max_wait_time.\r\n\r\nWhat makes you think this ? If the exponential wait time becomes bigger than `max_wait_time` then it still does the retry, but after a wait time of `max_wait_time`", "Sorry, I meant 4 retries **with exponential backoff**; the fifth one is with constant backoff.", "OK, and one question: do you think that the retries do not affect the time the server needs to be operational again and able to process the request? I guess that if does not affect, then the cause are other users' requests, or others; not our specific request.\r\n\r\nJust to be sure: \r\n- Then 20s at most between consecutive requests do not impact the server.\r\n- And we expect after a total of 5 retries (within a total 50s of wait time + request processing/uploading time), the server should be able to come back to normality.", "> do you think that the retries do not affect the time the server needs for being able to process the request (I guess in this case the cause are other users' requests, or other causes; not our specific request).\r\n\r\nYes I don't think the retries would affect the server, I think the cause of the 504 errors is elsewhere\r\n\r\n> Just to be sure:\r\n>\r\n> Then 20s at most between consecutive requests do not impact the server.\r\n> And we expect after a total of 5 retries (within a total 50s of wait time + request processing/uploading time), the server should be able to come back to normality.\r\n\r\nYes I think it's fine for now, we can still adapt this later if needed", "Will be curious to see the impact of this in terms of upload reliability! Don't forget to let us know when you have more data. cc @huggingface/moon-landing-back " ]
1,165,102,209
3,885
Fix some shuffle docs
closed
2022-03-10T11:29:15
2022-03-10T14:16:29
2022-03-10T14:16:28
https://github.com/huggingface/datasets/pull/3885
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3885", "html_url": "https://github.com/huggingface/datasets/pull/3885", "diff_url": "https://github.com/huggingface/datasets/pull/3885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3885.patch", "merged_at": "2022-03-10T14:16:28" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3885). All of your documentation changes will be reflected on that endpoint." ]
1,164,924,314
3,884
Fix bug in METEOR metric due to nltk version
closed
2022-03-10T08:44:20
2022-03-10T09:03:40
2022-03-10T09:03:39
https://github.com/huggingface/datasets/pull/3884
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3884", "html_url": "https://github.com/huggingface/datasets/pull/3884", "diff_url": "https://github.com/huggingface/datasets/pull/3884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3884.patch", "merged_at": "2022-03-10T09:03:39" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3884). All of your documentation changes will be reflected on that endpoint." ]
1,164,663,229
3,883
The metric Meteor doesn't work for nltk==3.6.4
closed
2022-03-10T02:28:27
2022-03-10T09:03:39
2022-03-10T09:03:39
https://github.com/huggingface/datasets/issues/3883
null
zhaowei-wang-nlp
false
[ "Hi @zhaowei-wang98, thanks for reporting.\r\n\r\nWe are fixing it... " ]
1,164,595,388
3,882
Image process doc
closed
2022-03-10T00:32:10
2022-03-15T15:24:16
2022-03-15T15:24:09
https://github.com/huggingface/datasets/pull/3882
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3882", "html_url": "https://github.com/huggingface/datasets/pull/3882", "diff_url": "https://github.com/huggingface/datasets/pull/3882.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3882.patch", "merged_at": "2022-03-15T15:24:09" }
stevhliu
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3882). All of your documentation changes will be reflected on that endpoint." ]
1,164,452,005
3,881
How to use Image folder
closed
2022-03-09T21:18:52
2022-03-11T08:45:52
2022-03-11T08:45:52
https://github.com/huggingface/datasets/issues/3881
null
rozeappletree
false
[ "Even this from docs throw same error\r\n```\r\ndataset = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\n\r\n```", "Hi @INF800,\r\n\r\nPlease note that the `imagefolder` feature enhancement was just recently merged to our master branch (https://github.com/huggingface/datasets/commit/207be676bffe9d164740a41a883af6125edef135), but has not yet been released.\r\n\r\nWe are planning to make the 2.0 release of our library in the coming days and then that feature will be available by updating your `datasets` library from PyPI.\r\n\r\nIn the meantime, you can incorporate that feature if you install our library from our GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n\r\nThen:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ds = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\nUsing custom data configuration default-7eb4e80d960deb18\r\nDownloading and preparing dataset image_folder/default to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60...\r\nDownloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 690.19it/s]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 852.85it/s]\r\nDataset image_folder downloaded and prepared to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60. Subsequent calls will reuse this data.\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDataset({\r\n features: ['image', 'label'],\r\n num_rows: 25000\r\n})\r\n```", "Hey @albertvillanova. Does this load entire dataset in memory? Because I am facing huge trouble with loading very big datasets (OOM errors)", "Can you provide the error stack trace? The loader only stores the `data_files` dict, which can get big after globbing. Then, the OOM error would mean you don't have enough memory to keep all the paths to the image files. You can circumvent this by generating an archive and loading the dataset from there. Maybe we can optimize the globbing part in our data files resolution at some point, cc @lhoestq for visibility.", "Hey, memory error is resolved. It was fluke.\r\n\r\nBut there is another issue. Currently `load_dataset(\"imagefolder\", data_dir=\"./path/to/train\",)` takes only `train` as arg to `split` parameter.\r\n\r\nI am creating vaildation dataset using\r\n\r\n```\r\nds_valid = datasets.DatasetDict(valid=load_dataset(\"imagefolder\", data_dir=\"./path/to/valid\",)['train'])\r\n```", "`data_dir=\"path/to/folder\"` is a shorthand syntax fox `data_files={\"train\": \"path/to/folder/**\"}`, so use `data_files` in that case instead:\r\n```python\r\nds = load_dataset(\"imagefolder\", data_files={\"train\": \"path/to/train/**\", \"test\": \"path/to/test/**\", \"valid\": \"path/to/valid/**\"})\r\n```", "And there was another issue. I loaded black and white images (jpeg file). Using load dataset. It reads it as PIL jpeg data format. 
But instead of converting it into a 3-channel tensor, the input to the collator function comes as a single-channel tensor.", "We don't apply any additional preprocessing on top of `PIL.Image.open(image_file)`, so you need to do the conversion yourself:\r\n\r\n```python\r\ndef to_rgb(batch):\r\n batch[\"image\"] = [img.convert(\"RGB\") for img in batch[\"image\"]]\r\n return batch\r\n\r\nds_rgb = ds.map(to_rgb, batched=True)\r\n```\r\n\r\nPlease use our Forum for questions of this kind in the future." ]
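Pulling the pieces of this thread together into one runnable sketch (the paths are placeholders, and the `imagefolder` loader requires a `datasets` build that ships it):

```python
from datasets import load_dataset

# data_dir is shorthand for data_files={"train": "<dir>/**"}; use data_files
# directly to declare several splits:
ds = load_dataset(
    "imagefolder",
    data_files={"train": "path/to/train/**", "valid": "path/to/valid/**"},
)

def to_rgb(batch):
    # Images are decoded with PIL as-is, so grayscale JPEGs stay
    # single-channel unless converted explicitly:
    batch["image"] = [img.convert("RGB") for img in batch["image"]]
    return batch

ds = ds.map(to_rgb, batched=True)
```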
1,164,406,008
3,880
Change the framework switches to the new syntax
closed
2022-03-09T20:29:10
2022-03-15T14:13:28
2022-03-15T14:13:27
https://github.com/huggingface/datasets/pull/3880
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3880", "html_url": "https://github.com/huggingface/datasets/pull/3880", "diff_url": "https://github.com/huggingface/datasets/pull/3880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3880.patch", "merged_at": "2022-03-15T14:13:27" }
sgugger
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3880). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3880). All of your documentation changes will be reflected on that endpoint." ]
1,164,311,612
3,879
SQuAD v2 metric: create README.md
closed
2022-03-09T18:47:56
2022-03-10T16:48:59
2022-03-10T16:48:59
https://github.com/huggingface/datasets/pull/3879
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3879", "html_url": "https://github.com/huggingface/datasets/pull/3879", "diff_url": "https://github.com/huggingface/datasets/pull/3879.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3879.patch", "merged_at": "2022-03-10T16:48:58" }
sashavor
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3879). All of your documentation changes will be reflected on that endpoint." ]
1,164,305,335
3,878
Update cats_vs_dogs size
closed
2022-03-09T18:40:56
2022-09-30T08:47:43
2022-03-10T14:21:23
https://github.com/huggingface/datasets/pull/3878
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3878", "html_url": "https://github.com/huggingface/datasets/pull/3878", "diff_url": "https://github.com/huggingface/datasets/pull/3878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3878.patch", "merged_at": "2022-03-10T14:21:23" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3878). All of your documentation changes will be reflected on that endpoint.", "Maybe `NonMatchingSplitsSizesError` errors should also tell the user to try using a more recent version of the dataset to get the fixes ?", "@lhoestq Good idea. Will open a new PR to improve the error messages of NonMatchingSplitsSizesError, NonMatchingChecksumsError, ...", "It seems there is still a problem. I am using datasets version 2.5.1. \r\nI just typed `ds = load_dataset(\"cats_vs_dogs\")` and get the error below.\r\n\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=3893603, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=3891612, num_examples=23410, dataset_name='cats_vs_dogs')}]\r\n```\r\nIt looks like the dataset still only has 23,410 examples....\r\n", "Thanks for reporting, I opened https://github.com/huggingface/datasets/pull/5047" ]
1,164,146,311
3,877
Align metadata to DCAT/DCAT-AP
open
2022-03-09T16:12:25
2022-03-09T16:33:42
null
https://github.com/huggingface/datasets/issues/3877
null
EmidioStani
false
[]
1,164,045,075
3,876
Fix download_mode in dataset_module_factory
closed
2022-03-09T14:54:33
2022-03-10T08:47:00
2022-03-10T08:46:59
https://github.com/huggingface/datasets/pull/3876
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3876", "html_url": "https://github.com/huggingface/datasets/pull/3876", "diff_url": "https://github.com/huggingface/datasets/pull/3876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3876.patch", "merged_at": "2022-03-10T08:46:59" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3876). All of your documentation changes will be reflected on that endpoint." ]
1,164,029,673
3,875
Module namespace cleanup for v2.0
closed
2022-03-09T14:43:07
2022-03-11T15:42:06
2022-03-11T15:42:05
https://github.com/huggingface/datasets/pull/3875
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3875", "html_url": "https://github.com/huggingface/datasets/pull/3875", "diff_url": "https://github.com/huggingface/datasets/pull/3875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3875.patch", "merged_at": "2022-03-11T15:42:05" }
mariosasko
true
[ "will it solve https://github.com/huggingface/datasets-preview-backend/blob/4c542a74244045929615640ccbba5a902c344c5a/pyproject.toml#L85-L89?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3875). All of your documentation changes will be reflected on that endpoint.", "@severo No, this PR doesn't fix that issue in the current state. We can fix it by adding `__all__` to `datasets/__init__.py` and `datasets/formatting/__init__.py`. However, this would require updating `__all__` for each new function/class definition, which could become cumbersome, and we can't do this dynamically because `mypy` is a static type checker.\r\n\r\n@lhoestq @albertvillanova WDYT?", "Feel free to merge this one if it's good for you :)" ]
1,164,013,511
3,874
add MSE and MAE metrics - V2
closed
2022-03-09T14:30:16
2022-03-09T17:20:42
2022-03-09T17:18:20
https://github.com/huggingface/datasets/pull/3874
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3874", "html_url": "https://github.com/huggingface/datasets/pull/3874", "diff_url": "https://github.com/huggingface/datasets/pull/3874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3874.patch", "merged_at": "2022-03-09T17:18:20" }
dnaveenr
true
[ "@mariosasko New PR here. I'm not sure how to add you as a co-author here. Also I see flake8 tests are failing, any inputs on how to resolve this ?\r\nAlso, let me know if any other changes are required. Thank you.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3874). All of your documentation changes will be reflected on that endpoint.", "Great. Thank you.", "Thanks so much for this 🙏 💯 " ]
1,163,961,578
3,873
Create SQuAD metric README.md
closed
2022-03-09T13:47:08
2022-03-10T16:45:57
2022-03-10T16:45:57
https://github.com/huggingface/datasets/pull/3873
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3873", "html_url": "https://github.com/huggingface/datasets/pull/3873", "diff_url": "https://github.com/huggingface/datasets/pull/3873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3873.patch", "merged_at": "2022-03-10T16:45:57" }
sashavor
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3873). All of your documentation changes will be reflected on that endpoint.", "Oh one last thing I almost forgot, I think I would add a section \"Examples\" with examples of inputs and outputs and in particular: an example giving maximal values, an examples giving minimal values and maybe a standard examples from SQuAD. What do you think?" ]
1,163,853,026
3,872
HTTP error 504 Server Error: Gateway Time-out
closed
2022-03-09T12:03:37
2022-03-15T16:19:50
2022-03-15T16:19:50
https://github.com/huggingface/datasets/issues/3872
null
illiyas-sha
false
[ "is pushing directly with git (and git-lfs) an option for you?", "I have installed git-lfs and doing this push with that\r\n", "yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`?", "Okay. I didnt saved the dataset to my local machine. So, I processed the dataset and pushed it directly to the hub. I think I should try saving those dataset to my local machine by `save_to_disk` and then push it with git command line", "cc @lhoestq @albertvillanova @LysandreJik because maybe I'm giving dumb advice here 😅 ", "`push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to workaround 504 errors.\r\n\r\nRegarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`." ]
1,163,714,113
3,871
add pandas to env command
closed
2022-03-09T09:48:51
2022-03-09T11:21:38
2022-03-09T11:21:37
https://github.com/huggingface/datasets/pull/3871
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3871", "html_url": "https://github.com/huggingface/datasets/pull/3871", "diff_url": "https://github.com/huggingface/datasets/pull/3871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3871.patch", "merged_at": "2022-03-09T11:21:37" }
patrickvonplaten
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3871). All of your documentation changes will be reflected on that endpoint.", "Think failures are unrelated - feel free to merge whenever you want :-)" ]
1,163,633,239
3,870
Add wikitablequestions dataset
closed
2022-03-09T08:27:43
2022-03-14T11:19:24
2022-03-14T11:16:19
https://github.com/huggingface/datasets/pull/3870
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3870", "html_url": "https://github.com/huggingface/datasets/pull/3870", "diff_url": "https://github.com/huggingface/datasets/pull/3870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3870.patch", "merged_at": "2022-03-14T11:16:19" }
SivilTaram
true
[ "@lhoestq Would you mind reviewing it when you're available? Thanks!\r\n", "> Awesome thanks for adding this dataset ! :) The dataset script and dataset cards look pretty good\r\n> \r\n> It looks like your `dummy_data.zip` files are quite big though (>1MB each), do you think we can reduce their sizes ? This way this git repository doesn't become too big\r\n\r\nI have manually reduced the `dummy_data.zip` and its current size is about 54KB. Hope it is fine for you!", "@lhoestq I think the dataset is ready to merge now. Any follow-up question is welcome :-D", "> Thanks ! It looks all good now :)\r\n\r\nAwesome! Thanks for your quick response!" ]
1,163,434,800
3,869
Making the Hub the place for datasets in Portuguese
open
2022-03-09T03:06:18
2022-03-09T09:04:09
null
https://github.com/huggingface/datasets/issues/3869
null
omarespejel
false
[ "Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many datasets, I would suggest to either create an issue for their datasets, or even better, we are trying to push to upload datasets as community datasets instead of adding them to the core library as guided in https://huggingface.co/docs/datasets/share. That would have the additional benefit that the dataset would live under the NILC organization.\r\n\r\n@lhoestq correct me if I'm wrong please 😄 " ]
1,162,914,114
3,868
Ignore duplicate keys if `ignore_verifications=True`
closed
2022-03-08T17:14:56
2022-03-09T13:50:45
2022-03-09T13:50:44
https://github.com/huggingface/datasets/pull/3868
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3868", "html_url": "https://github.com/huggingface/datasets/pull/3868", "diff_url": "https://github.com/huggingface/datasets/pull/3868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3868.patch", "merged_at": "2022-03-09T13:50:44" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3868). All of your documentation changes will be reflected on that endpoint.", "Cool thanks ! Could you add a test please ?" ]
1,162,896,605
3,867
Update for the rename doc-builder -> hf-doc-utils
closed
2022-03-08T16:58:25
2023-09-24T09:54:44
2022-03-08T17:30:45
https://github.com/huggingface/datasets/pull/3867
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3867", "html_url": "https://github.com/huggingface/datasets/pull/3867", "diff_url": "https://github.com/huggingface/datasets/pull/3867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3867.patch", "merged_at": null }
sgugger
true
[ "why utils? it's a builder no?", "~~@julien-c there was a vote 🙂 https://huggingface.slack.com/archives/C021H1P1HKR/p1646405136644739~~\r\n\r\noh I see you already commeented in the thread as well", "Thanks ! It looks all good to me (provided `hf-doc-utils` is the name we keep in the end). I'm fine with this name, and `hf-doc-builder` is also fine IMHO", "ok, this is definitely not a hill I'll die on =) @mishig25 @sgugger " ]
1,162,833,848
3,866
Bring back imgs so that forks don't get broken
closed
2022-03-08T16:01:31
2022-03-08T17:37:02
2022-03-08T17:37:01
https://github.com/huggingface/datasets/pull/3866
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3866", "html_url": "https://github.com/huggingface/datasets/pull/3866", "diff_url": "https://github.com/huggingface/datasets/pull/3866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3866.patch", "merged_at": "2022-03-08T17:37:01" }
mishig25
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3866). All of your documentation changes will be reflected on that endpoint.", "I think we just need to keep `datasets_logo_name.jpg` and `course_banner.png` because they appear in the README.md of the forks of `datasets`. The other images can be removed", "Force pushed those two imgs only" ]
1,162,821,908
3,865
Add logo img
closed
2022-03-08T15:50:59
2023-09-24T09:54:31
2022-03-08T16:01:59
https://github.com/huggingface/datasets/pull/3865
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3865", "html_url": "https://github.com/huggingface/datasets/pull/3865", "diff_url": "https://github.com/huggingface/datasets/pull/3865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3865.patch", "merged_at": null }
mishig25
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3865). All of your documentation changes will be reflected on that endpoint.", "Superceded by https://github.com/huggingface/datasets/pull/3866" ]
1,162,804,942
3,864
Update image dataset tags
closed
2022-03-08T15:36:32
2022-03-08T17:04:47
2022-03-08T17:04:46
https://github.com/huggingface/datasets/pull/3864
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3864", "html_url": "https://github.com/huggingface/datasets/pull/3864", "diff_url": "https://github.com/huggingface/datasets/pull/3864.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3864.patch", "merged_at": "2022-03-08T17:04:46" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3864). All of your documentation changes will be reflected on that endpoint." ]
1,162,802,857
3,863
Update code blocks
closed
2022-03-08T15:34:43
2022-03-09T16:45:30
2022-03-09T16:45:29
https://github.com/huggingface/datasets/pull/3863
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3863", "html_url": "https://github.com/huggingface/datasets/pull/3863", "diff_url": "https://github.com/huggingface/datasets/pull/3863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3863.patch", "merged_at": "2022-03-09T16:45:29" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3863). All of your documentation changes will be reflected on that endpoint." ]
1,162,753,733
3,862
Manipulate columns on IterableDataset (rename columns, cast, etc.)
closed
2022-03-08T14:53:57
2022-03-10T16:40:22
2022-03-10T16:40:21
https://github.com/huggingface/datasets/pull/3862
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3862", "html_url": "https://github.com/huggingface/datasets/pull/3862", "diff_url": "https://github.com/huggingface/datasets/pull/3862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3862.patch", "merged_at": "2022-03-10T16:40:21" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3862). All of your documentation changes will be reflected on that endpoint.", "> IIUC we check if columns are present/not present directly in the yielded examples and not in info.features because info.features can be None (after map, for instance)?\r\n\r\nYes exactly\r\n\r\n> We should develop a solution that ensures info.features is never None. For example, one approach would be to infer them from examples in map and make them promotable from Value(\"null\") to a specific type, in case of None values.\r\n\r\nI agree this would be useful. Though inferring the type requires to start streaming some data, which takes a few seconds (compared to being instantaneous right now).\r\n\r\nLet's discuss this in a new issue maybe ?" ]
1,162,702,044
3,861
big_patent cased version
closed
2022-03-08T14:08:55
2023-04-21T14:32:03
2023-04-21T14:32:03
https://github.com/huggingface/datasets/issues/3861
null
slvcsl
false
[ "To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.\r\n\r\nSee the paper describing the issue here:\r\nhttps://aclanthology.org/2022.gem-1.34/", "Thanks for proposing the addition of the cased version of this dataset and for pinging again recently.\r\n\r\nI have just merged a PR that adds the cased version: https://huggingface.co/datasets/big_patent/discussions/3\r\n\r\nThe cased version (2.1.2) is the default one:\r\n```python\r\nds = load_dataset(\"big_patent\", \"all\")\r\n```\r\n\r\nTo use the 1.0.0 version (lower cased tokenized words), pass both parameters `codes` and `version`:\r\n```python\r\nds = load_dataset(\"big_patent\", codes=\"all\", version=\"1.0.0\")\r\n```\r\n\r\nClosed by: https://huggingface.co/datasets/big_patent/discussions/3" ]
1,162,623,329
3,860
Small doc fixes
closed
2022-03-08T12:55:39
2022-03-08T17:37:13
2022-03-08T17:37:13
https://github.com/huggingface/datasets/pull/3860
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3860", "html_url": "https://github.com/huggingface/datasets/pull/3860", "diff_url": "https://github.com/huggingface/datasets/pull/3860.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3860.patch", "merged_at": "2022-03-08T17:37:13" }
mishig25
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3860). All of your documentation changes will be reflected on that endpoint.", "There are still some `.. code-block:: python` (e.g. see [this](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping)) directives in our codebase, so maybe we can remove those as well as part of this PR." ]
1,162,559,333
3,859
Unable to download big_patent (FileNotFoundError)
closed
2022-03-08T11:47:12
2022-03-08T13:04:09
2022-03-08T13:04:04
https://github.com/huggingface/datasets/issues/3859
null
slvcsl
false
[ "Hi @slvcsl, thanks for reporting.\r\n\r\nYesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.\r\nhttps://pypi.org/project/datasets/#history\r\n\r\nPlease, feel free to update `datasets` library to the latest version: \r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then you should force redownload of the data file to update your local cache: \r\n```python\r\nds = load_dataset(\"big_patent\", \"g\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\n- Note that before the fix, you just downloaded and cached the Google Drive virus scan warning page, instead of the data file\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe already fixed it. See:\r\n- #3787 \r\n" ]
1,162,526,688
3,858
Update index.mdx margins
closed
2022-03-08T11:11:52
2022-03-08T12:57:57
2022-03-08T12:57:56
https://github.com/huggingface/datasets/pull/3858
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3858", "html_url": "https://github.com/huggingface/datasets/pull/3858", "diff_url": "https://github.com/huggingface/datasets/pull/3858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3858.patch", "merged_at": "2022-03-08T12:57:56" }
gary149
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3858). All of your documentation changes will be reflected on that endpoint." ]
1,162,525,353
3,857
Order of dataset changes due to glob.glob.
open
2022-03-08T11:10:30
2022-03-14T11:08:22
null
https://github.com/huggingface/datasets/issues/3857
null
patrickvonplaten
false
[ "I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`" ]
1,162,522,034
3,856
Fix push_to_hub with null images
closed
2022-03-08T11:07:09
2022-03-08T15:22:17
2022-03-08T15:22:16
https://github.com/huggingface/datasets/pull/3856
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3856", "html_url": "https://github.com/huggingface/datasets/pull/3856", "diff_url": "https://github.com/huggingface/datasets/pull/3856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3856.patch", "merged_at": "2022-03-08T15:22:16" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint." ]
1,162,448,589
3,855
Bad error message when loading private dataset
closed
2022-03-08T09:55:17
2022-07-11T15:06:40
2022-07-11T15:06:40
https://github.com/huggingface/datasets/issues/3855
null
patrickvonplaten
false
[ "We raise the error “ FileNotFoundError: can’t find the dataset” mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !", "Resolved via https://github.com/huggingface/datasets/pull/4536" ]
1,162,434,199
3,854
load only England English dataset from Common Voice English dataset
closed
2022-03-08T09:40:52
2024-03-23T12:40:58
2022-03-09T08:13:33
https://github.com/huggingface/datasets/issues/3854
null
amanjaiswal777
false
[ "Hi @amanjaiswal777,\r\n\r\nFirst note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.\r\n\r\nCurrently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation\r\n\r\nFor example, to get their latest Common Voice relase (8.0):\r\n- Go to the dataset page and request access permission (Mozilla Foundation requires this for people willing to use their datasets): https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0\r\n- Looking at the dataset card, you can check that data instances have, among other fields, the ones you are interested in: \"accent\", \"age\",... \r\n- Then you can load their \"en\" language dataset as usual, besides passing your authentication token (more info on auth token here: https://huggingface.co/docs/hub/security)\r\n ```python\r\n from datasets import load_dataset\r\n ds_en = load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True)\r\n ```\r\n- Finally, you can filter only the data instances you are interested in (more info on `filter` here: https://huggingface.co/docs/datasets/process#select-and-filter):\r\n ```python\r\n ds_england_en = ds_en.filter(lambda item: item[\"accent\"] == \"England English\")\r\n ```\r\n\r\nFeel free to reopen this issue if you need further assistance.", "Hey @albertvillanova trying the same approach as you with the common_voice_16_1 dataset. What I'm trying to do is to filter the valencian accent in the catalan subset. Gave me this error and I have everything it asks for decoding mp3:\r\n![image](https://github.com/huggingface/datasets/assets/96977715/7ec02483-e728-4358-9372-ba74ec1b7fd4)\r\n\r\n![image](https://github.com/huggingface/datasets/assets/96977715/c10fcf23-a141-4dba-a88d-89e293acfe67)\r\n\r\n" ]
1,162,386,592
3,853
add ontonotes_conll dataset
closed
2022-03-08T08:53:42
2022-03-15T10:48:02
2022-03-15T10:48:02
https://github.com/huggingface/datasets/pull/3853
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3853", "html_url": "https://github.com/huggingface/datasets/pull/3853", "diff_url": "https://github.com/huggingface/datasets/pull/3853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3853.patch", "merged_at": "2022-03-15T10:48:02" }
richarddwang
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3853). All of your documentation changes will be reflected on that endpoint.", "The CI fail is unrelated to this dataset, merging :)" ]
1,162,252,337
3,852
Redundant 'add dataset' information and dead link.
closed
2022-03-08T05:57:05
2022-03-08T16:54:36
2022-03-08T16:54:36
https://github.com/huggingface/datasets/pull/3852
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3852", "html_url": "https://github.com/huggingface/datasets/pull/3852", "diff_url": "https://github.com/huggingface/datasets/pull/3852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3852.patch", "merged_at": "2022-03-08T16:54:36" }
dnaveenr
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3852). All of your documentation changes will be reflected on that endpoint." ]
1,162,137,998
3,851
Load audio dataset error
closed
2022-03-08T02:16:04
2022-09-27T12:13:55
2022-03-08T11:20:06
https://github.com/huggingface/datasets/issues/3851
null
lemoner20
false
[ "Hi @lemoner20, thanks for reporting.\r\n\r\nI'm sorry but I cannot reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset, load_metric, Audio\r\n ...: raw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\")\r\n ...: print(raw_datasets[0][\"audio\"])\r\nDownloading builder script: 30.2kB [00:00, 13.0MB/s] \r\nDownloading metadata: 38.0kB [00:00, 16.6MB/s] \r\nDownloading and preparing dataset superb/ks (download: 1.45 GiB, generated: 9.64 MiB, post-processed: Unknown size, total: 1.46 GiB) to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.49G/1.49G [00:37<00:00, 39.3MB/s]\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.3M/71.3M [00:01<00:00, 36.1MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:41<00:00, 20.67s/it]\r\nExtracting data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:28<00:00, 14.24s/it]\r\nDataset superb downloaded and prepared to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9. Subsequent calls will reuse this data.\r\n{'path': '.../.cache/huggingface/datasets/downloads/extracted/8571921d3088b48f58f75b2e514815033e1ffbd06aa63fd4603691ac9f1c119f/_background_noise_/doing_the_dishes.wav', 'array': array([ 0. , 0. , 0. , ..., -0.00592041,\r\n -0.00405884, -0.00253296], dtype=float32), 'sampling_rate': 16000}\r\n``` \r\n\r\nWhich version of `datasets` are you using? Could you please fill in the environment info requested in the bug report template? You can run the command `datasets-cli env` and copy-and-paste its output below\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:", "@albertvillanova Thanks for your reply. The environment info below\r\n\r\n## Environment info\r\n- `datasets` version: 1.18.3\r\n- Platform: Linux-4.19.91-007.ali4000.alios7.x86_64-x86_64-with-debian-buster-sid\r\n- Python version: 3.6.12\r\n- PyArrow version: 6.0.1", "Thanks @lemoner20,\r\n\r\nI cannot reproduce your issue in datasets version 1.18.3 either.\r\n\r\nMaybe redownloading the data file may work if you had already cached this dataset previously. Could you please try passing \"force_redownload\"?\r\n```python\r\nraw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\", download_mode=\"force_redownload\")", "Thanks, @albertvillanova,\r\n\r\nI install the python package of **librosa=0.9.1** again, it works now!\r\n\r\n\r\n", "Cool!", "@albertvillanova, you can actually reproduce the error if you reach the cell `common_voice_train[0][\"path\"]` of this [notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=_0kRndSvqaKk). 
Error gets solved after updating the versions of the libraries used in there.", "@jvel07, thanks for reporting and finding a solution.\r\n\r\nMaybe we could tell @patrickvonplaten about the version pinning issue in his notebook.", "Should I update the version of datasets @albertvillanova ? " ]
1,162,126,030
3,850
[feat] Add tqdm arguments
closed
2022-03-08T01:53:25
2022-12-16T05:34:07
2022-12-16T05:34:07
https://github.com/huggingface/datasets/pull/3850
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3850", "html_url": "https://github.com/huggingface/datasets/pull/3850", "diff_url": "https://github.com/huggingface/datasets/pull/3850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3850.patch", "merged_at": null }
penguinwang96825
true
[]
1,162,091,075
3,849
Add "Adversarial GLUE" dataset to datasets library
closed
2022-03-08T00:47:11
2022-03-28T11:17:14
2022-03-28T11:12:04
https://github.com/huggingface/datasets/pull/3849
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3849", "html_url": "https://github.com/huggingface/datasets/pull/3849", "diff_url": "https://github.com/huggingface/datasets/pull/3849.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3849.patch", "merged_at": "2022-03-28T11:12:04" }
jxmorris12
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq can you review when you have some time?", "Hi @lhoestq -- thanks so much for your review! I just added the stuff you requested to the README.md, including an example from the dataset, the table of contents, and lots of section headers with \"More Information Needed\" below. Let me know if there's anything else I need to do!", "Feel free to also merge `master` into your branch to get the latest updates for the tests ;)", "thanks @lhoestq - just made all the updates you requested!" ]
1,162,076,902
3,848
NonMatchingChecksumError when checksum is None
closed
2022-03-08T00:24:12
2022-03-15T14:37:26
2022-03-15T12:28:23
https://github.com/huggingface/datasets/issues/3848
null
jxmorris12
false
[ "Hi @jxmorris12, thanks for reporting.\r\n\r\nThe objective of `verify_checksums` is to check that both checksums are equal. Therefore if one is None and the other is non-None, they are not equal, and the function accordingly raises a NonMatchingChecksumError. That behavior is expected.\r\n\r\nThe question is: how did you generate the expected checksum? Normally, it should not be None. To properly generate it (it is contained in the `dataset_infos.json` file), you should have runned: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md\r\n```shell\r\ndatasets-cli test <your-dataset-folder> --save_infos --all_configs\r\n```\r\n\r\nOn the other hand, you should take into account that the generation of this file is NOT mandatory for personal/community datasets (we only require it for \"canonical\" datasets, i.e., datasets added to our library GitHub repository: https://github.com/huggingface/datasets/tree/master/datasets). Therefore, other option would be just to delete the `dataset_infos.json` file. If that file is not present, the function `verify_checksums` is not executed.\r\n\r\nFinally, you can circumvent the `verify_checksums` function by passing `ignore_verifications=True` to `load_dataset`:\r\n```python\r\nload_dataset(..., ignore_verifications=True)\r\n``` ", "Thanks @albertvillanova!\r\n\r\nThat's fine. I did run that command when I was adding a new dataset. Maybe because the command crashed in the middle, the checksum wasn't stored properly. I don't know where the bug is happening. But either (i) `verify_checksums` should properly handle this edge case, where the passed checksum is None or (ii) the `datasets-cli test` shouldn't generate a corrupted dataset_infos.json file.\r\n\r\nJust a more high-level thing, I was trying to follow the instructions for adding a dataset in the CONTRIBUTING.md, so if running that command isn't even necessary, that should probably be mentioned in the document, right? But that's somewhat of a moot point, since something isn't working quite right internally if I was able to get into this corrupted state in the first place, just by following those instructions.", "Hi @jxmorris12,\r\n\r\nDefinitely, your `dataset_infos.json` was corrupted (and wrongly contains expected None checksum). \r\n\r\nWhile we further investigate how this can happen and fix it, feel free to delete your `dataset_infos.json` file and recreate it with:\r\n```shell\r\ndatasets-cli test <your-dataset-folder> --save_infos --all_configs\r\n```\r\n\r\nAlso note that `verify_checksum` is working as expected: if it receives a None and and a non-None checksums as input pair, it must raise an exception: they are not equal. That is not a bug.", "At a higher level, also note that we are preparing the release of `datasets` version 2.0, and some docs are being updated...\r\n\r\nIn order to add a dataset, I think the most updated instructions are in our official documentation pages: https://huggingface.co/docs/datasets/share", "Thanks for the info. Maybe you can update the contributing.md if it's not up-to-date.", "Hi @jxmorris12, we have discovered the bug why `None` checksums wrongly appeared when generating the `dataset_infos.json` file:\r\n- #3892\r\n\r\nThe fix will be accessible once this PR merged. And we are planning to do our 2.0 release today.\r\n\r\nWe are also working on updating all our docs for our release today.", "Thanks @albertvillanova - congrats on the release!" ]
1,161,856,417
3,847
Datasets' cache not re-used
open
2022-03-07T19:55:15
2025-05-19T11:58:55
null
https://github.com/huggingface/datasets/issues/3847
null
gejinchen
false
[ "<s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.\r\n\r\nTo fix this we can try making the order of the splits deterministic for map.</s>", "Actually this is not because of the order of the splits, but most likely because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer).\r\n\r\nThis is a bit trickier to fix, we can explore fixing this next week maybe", "Sorry didn't have the bandwidth to take care of this yet - will re-assign when I'm diving into it again !", "I had this issue with `run_speech_recognition_ctc.py` for wa2vec2.0 fine-tuning. I made a small change and the hash for the function (which includes tokenisation) is now the same before and after pre-porocessing. With the hash being the same, the caching works as intended.\r\n\r\nBefore:\r\n```\r\n def prepare_dataset(batch):\r\n # load audio\r\n sample = batch[audio_column_name]\r\n\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n # encode targets\r\n additional_kwargs = {}\r\n if phoneme_language is not None:\r\n additional_kwargs[\"phonemizer_lang\"] = phoneme_language\r\n\r\n batch[\"labels\"] = tokenizer(batch[\"target_text\"], **additional_kwargs).input_ids\r\n\r\n return batch\r\n\r\n with training_args.main_process_first(desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n prepare_dataset,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n```\r\nAfter:\r\n```\r\n def prepare_dataset(batch, feature_extractor, tokenizer):\r\n # load audio\r\n sample = batch[audio_column_name]\r\n\r\n inputs = feature_extractor(sample[\"array\"], sampling_rate=sample[\"sampling_rate\"])\r\n batch[\"input_values\"] = inputs.input_values[0]\r\n batch[\"input_length\"] = len(batch[\"input_values\"])\r\n\r\n # encode targets\r\n additional_kwargs = {}\r\n if phoneme_language is not None:\r\n additional_kwargs[\"phonemizer_lang\"] = phoneme_language\r\n\r\n batch[\"labels\"] = tokenizer(batch[\"target_text\"], **additional_kwargs).input_ids\r\n\r\n return batch\r\n\r\n pd = lambda batch: prepare_dataset(batch, feature_extractor, tokenizer)\r\n\r\n with training_args.main_process_first(desc=\"dataset map preprocessing\"):\r\n vectorized_datasets = raw_datasets.map(\r\n pd,\r\n remove_columns=next(iter(raw_datasets.values())).column_names,\r\n num_proc=num_workers,\r\n desc=\"preprocess datasets\",\r\n )\r\n```", "Not sure why the second one would work and not the first one - they're basically the same with respect to hashing. In both cases the function is hashed recursively, and therefore the feature_extractor and the tokenizer are hashed the same way.\r\n\r\nWith which tokenizer or feature extractor are you experiencing this behavior ?\r\n\r\nDo you also experience this ?\r\n> Tokenization for some subsets are repeated at the 2nd and 3rd run. Starting from the 4th run, everything are loaded from cache.", "Thanks ! 
Hopefully this can be useful to others, and also to better understand and improve hashing/caching ", "`tokenizer.save_pretrained(training_args.output_dir)` produces a different tokenizer hash when loaded on restart of the script. When I was debugging before I was terminating the script prior to this command, then rerunning. \r\n\r\nI compared the tokenizer items on the first and second runs, there are two different items:\r\n1st:\r\n```\r\n('_additional_special_tokens', [AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True)])\r\n\r\n...\r\n\r\n('tokens_trie', <transformers.tokenization_utils.Trie object at 0x7f4d6d0ddb38>)\r\n```\r\n\r\n2nd:\r\n```\r\n('_additional_special_tokens', [AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), 
AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=True), AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=True)])\r\n\r\n...\r\n\r\n('tokens_trie', <transformers.tokenization_utils.Trie object at 0x7efc23dcce80>)\r\n```\r\n\r\n On every run of this the special tokens are being added on, and the hash is different on the `tokens_trie`. The increase in the special tokens category could be cleaned, but not sure about the hash for the `tokens_trie`. What might work is that the call for the tokenizer encoding can be translated into a function that strips any unnecessary information out, but that's a guess.\r\n", "Thanks for investigating ! Does that mean that `save_pretrained`() produces non-deterministic tokenizers on disk ? Or is it `from_pretrained()` which is not deterministic given the same files on disk ?\r\n\r\nI think one way to fix this would be to make save/from_pretrained deterministic, or make the pickling of `transformers.tokenization_utils.Trie` objects deterministic (this could be implemented in `transformers`, but maybe let's discuss in an issue in `transformers` before opening a PR)", "Late to the party but everything should be deterministic (afaik at least).\r\n\r\nBut `Trie` is a simple class object, so afaik it's hash function is linked to its `id(self)` so basically where it's stored in memory, so super highly non deterministic. Could that be the issue ?", "> But Trie is a simple class object, so afaik it's hash function is linked to its id(self) so basically where it's stored in memory, so super highly non deterministic. Could that be the issue ?\r\n\r\nWe're computing the hash of the pickle dump of the class so it should be fine, as long as the pickle dump is deterministic", "I've ported wav2vec2.0 fine-tuning into Optimum-Graphcore which is where I found the issue. The majority of the script was copied from the Transformers version to keep it similar, [here is the tokenizer loading section from the source](https://github.com/huggingface/transformers/blob/f0982682bd6fd0b438dda79ec45f3a8fac83a985/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L531).\r\n\r\nIn the last comment I have two loaded tokenizers, one from run 'N' of the script and one from 'N+1'. I think what's happening is that when you add special tokens (e.g. PAD and UNK) another AddedToken object is appended when tokenizer is saved regardless of whether special tokens are there already. \r\n\r\nIf there is a AddedTokens cleanup at load/save this could solve the issue, but then is Trie going to cause hash to be different? I'm not sure. ", "Which Python version are you using ?\r\n\r\nThe trie is basically a big dict of dics, so deterministic nature depends on python version:\r\nhttps://stackoverflow.com/questions/2053021/is-the-order-of-a-python-dictionary-guaranteed-over-iterations\r\n\r\nMaybe the investigation is actually not finding the right culprit though (the memory id is changed, but `datasets` is not using that to compare, so maybe we need to be looking within `datasets` so see where the comparison fails)", "Similar issue found on `BartTokenizer`. 
You can bypass the bug by loading a fresh new tokenizer everytime.\r\n\r\n```\r\n dataset = dataset.map(lambda x: tokenize_func(x, BartTokenizer.from_pretrained(xxx)),\r\n num_proc=num_proc, desc='Tokenize')\r\n```", "Linking in https://github.com/huggingface/datasets/issues/6179#issuecomment-1701244673 with an explanation.", "I got the same problem while using Wav2Vec2CTCTokenizer in a distributed experiment (many processes), and found that the problem was localized in the serialization (pickle dump) of the field `tokenizer.tokens_trie._tokens` (just a python set). I focussed into the set serialization and found it is not deterministic:\r\n\r\n```\r\nfrom datasets.fingerprint import Hasher\r\nfrom pickle import dumps,loads\r\n\r\n# used just once to get a serialized literal\r\n#print(dumps(set(\"abc\")))\r\nserialized = b'\\x80\\x04\\x95\\x11\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8f\\x94(\\x8c\\x01a\\x94\\x8c\\x01c\\x94\\x8c\\x01b\\x94\\x90.'\r\n\r\nmyset = loads(serialized)\r\nprint(f'{myset=} {Hasher.hash(myset)}')\r\nprint(serialized == dumps(myset))\r\n```\r\n\r\nEvery time you run the python script (different processes) you get a random result. @lhoestq does it make any sense?", "OK, I assume python's set is just a hash table implementation that uses internally the hash() function. The problem is that python's hash() is not deterministic. I believe that setting the environment variable PYTHONHASHSEED to a fixed value, you can force it to be deterministic. I tried it (file `set_pickle_dump.py`):\r\n\r\n```\r\n#!/usr/bin/python3\r\n\r\nfrom datasets.fingerprint import Hasher\r\nfrom pickle import dumps,loads\r\n\r\n# used just once to get a serialized literal (with environment variable PYTHONHASHSEED set to 42)\r\n#print(dumps(set(\"abc\")))\r\nserialized = b'\\x80\\x04\\x95\\x11\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8f\\x94(\\x8c\\x01b\\x94\\x8c\\x01c\\x94\\x8c\\x01a\\x94\\x90.'\r\n\r\nmyset = loads(serialized)\r\nprint(f'{myset=} {Hasher.hash(myset)}')\r\nprint(serialized == dumps(myset))\r\n```\r\n\r\nand now every run (`PYTHONHASHSEED=42 ./set_pickle_dump.py`) gets tthe same result. I tried then to test it with the tokenizer (file `test_tokenizer.py`):\r\n\r\n```\r\n#!/usr/bin/python3\r\nfrom transformers import AutoTokenizer\r\nfrom datasets.fingerprint import Hasher\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('model')\r\nprint(f'{type(tokenizer)=}')\r\nprint(f'{Hasher.hash(tokenizer)=}')\r\n```\r\n\r\nexecuted as `PYTHONHASHSEED=42 ./test_tokenizer.py` and now the tokenizer fingerprint is allways the same!\r\n", "Thanks for reporting. I opened a PR here to propose a fix: https://github.com/huggingface/datasets/pull/6318 and doesn't require setting `PYTHONHASHSEED`\r\n\r\nCan you try to install `datasets` from this branch and tell me if it fixes the issue ?", "I patched (*) the file `datasets/utils/py_utils.py` and cache is working propperly now. 
Thanks!\r\n\r\n(*): I am running my experiments inside a docker container that depends on `huggingface/transformers-pytorch-gpu:latest`, so I patched the file instead of rebuilding the container from scratch", "Fixed by #6318.", "The OP issue hasn't been fixed, re-opening", "I think the `Trie()._tokens` of PreTrainedTokenizer needs to be a sorted set so that the results of `hash_bytes(dumps(tokenizer))` are consistent every time", "I believe the issue may be linked to [tokenization_utils.py#L507](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L507), specifically in the line where self.tokens_trie.add(token.content) is called. The function _update_trie appears to modify an unordered set. Consequently, this line:\r\n`value = hash_bytes(dumps(tokenizer.tokens_trie._tokens))`\r\ncan lead to inconsistencies when rerunning the code.\r\n\r\nThis, in turn, results in inconsistent outputs for both `hash_bytes(dumps(function))` at [arrow_dataset.py#L3053](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L3053) and\r\n`hasher.update(transform_args[key])` at [fingerprint.py#L323](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L323)\r\n\r\n```\r\ndataset_kwargs = {\r\n \"shard\": raw_datasets,\r\n \"function\": tokenize_function,\r\n}\r\ntransform = format_transform_for_fingerprint(Dataset._map_single)\r\nkwargs_for_fingerprint = format_kwargs_for_fingerprint(Dataset._map_single, (), dataset_kwargs)\r\nkwargs_for_fingerprint[\"fingerprint_name\"] = \"new_fingerprint\"\r\nnew_fingerprint = update_fingerprint(raw_datasets._fingerprint, transform, kwargs_for_fingerprint)\r\n```\r\n", "Alternatively, does the \"dumps\" function require separate processing for the set?", "We did a fix that does sorting whenever we hash sets. The fix is available on `main` if you want to try it out. We'll do a new release soon :)", "Is there a documentation chapter that discusses in which cases you should expect your dataset preprocessing to be cached, including do's and don'ts for the preprocessing functions? I think the Datasets team does an amazing job at tackling this issue on their side, but it would be great to have some guidelines on the user side as well.\r\n\r\nIn our current project we have two cases (text-to-text classification and summarization), and in one of them the cache is sometimes reused when it's not supposed to be reused, while in the other it's never used at all 😅", "You can find some docs here :) \r\nhttps://huggingface.co/docs/datasets/about_cache", "I still encounter this problem; dataset.map() runs multiple times in different processes...", "Hi ! on which tokenizer ? and which `datasets` and `dill` versions ?" ]
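The set-serialization problem diagnosed at the end of the thread above, and the sort-before-hashing fix that landed on `main`, can be reproduced with a few lines. This is a minimal sketch of the idea, not the patched library code:

```python
from pickle import dumps

tokens = {"<s>", "</s>", "<pad>"}

# A set pickles in iteration order, which depends on string hash
# randomization (PYTHONHASHSEED), so these bytes can differ between
# interpreter runs and invalidate fingerprint-based caching:
unstable = dumps(tokens)

# Sorting first yields a canonical order, so these bytes are identical
# across runs and are safe to feed into a cache fingerprint:
stable = dumps(sorted(tokens))
```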
1,161,810,226
3,846
Update faiss device docstring
closed
2022-03-07T19:06:59
2022-03-07T19:21:23
2022-03-07T19:21:22
https://github.com/huggingface/datasets/pull/3846
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3846", "html_url": "https://github.com/huggingface/datasets/pull/3846", "diff_url": "https://github.com/huggingface/datasets/pull/3846.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3846.patch", "merged_at": "2022-03-07T19:21:22" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3846). All of your documentation changes will be reflected on that endpoint." ]
1,161,739,483
3,845
add RMSE and MAE metrics.
closed
2022-03-07T17:53:24
2022-03-09T16:50:03
2022-03-09T16:50:03
https://github.com/huggingface/datasets/pull/3845
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3845", "html_url": "https://github.com/huggingface/datasets/pull/3845", "diff_url": "https://github.com/huggingface/datasets/pull/3845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3845.patch", "merged_at": null }
dnaveenr
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3845). All of your documentation changes will be reflected on that endpoint.", "@mariosasko I've reopened it here. Please suggest any changes if required. Thank you.", "Thanks for suggestions. :) I have added update the KWARGS_DESCRIPTION for the missing params and also changed RMSE to MSE.\r\nWhile testing, I noticed that when the input is a list of lists, we get an error :\r\n`TypeError: float() argument must be a string or a number, not 'list'`\r\nCould you suggest the datasets.Value() attribute to support both list of floats and list of lists containing floats ?\r\n", "Just add a new config to cover that case. You can do this by replacing the current `features` dict with:\r\n```python\r\nfeatures=datasets.Features(\r\n {\r\n \"predictions\": datasets.Sequence(datasets.Value(\"float\")),\r\n \"references\": datasets.Sequence(datasets.Value(\"float\")),\r\n }\r\n if self.config_name == \"multioutput\"\r\n else {\r\n \"predictions\": datasets.Value(\"float\"),\r\n \"references\": datasets.Value(\"float\"),\r\n }\r\n),\r\n```\r\nFeel free to suggest a better name for the config than `multioutput`", "Also, could you please move the changes to a new branch and open a PR from there (for the 3rd time 😄) because the diff shows changes from unrelated PRs (maybe due to rebasing?).", "Thanks for the input, I have added new config to support multi-dimensional lists and updated the examples as well.\r\n\r\nSure. Will do that and open a new PR for these changes." ]
1,161,686,754
3,844
Add rmse and mae metrics.
closed
2022-03-07T17:06:38
2022-03-07T17:24:32
2022-03-07T17:15:06
https://github.com/huggingface/datasets/pull/3844
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3844", "html_url": "https://github.com/huggingface/datasets/pull/3844", "diff_url": "https://github.com/huggingface/datasets/pull/3844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3844.patch", "merged_at": null }
dnaveenr
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3844). All of your documentation changes will be reflected on that endpoint.", "@dnaveenr This PR is in pretty good shape, so feel free to reopen it." ]
1,161,397,812
3,843
Fix Google Drive URL to avoid Virus scan warning in streaming mode
closed
2022-03-07T13:09:19
2022-03-15T12:30:25
2022-03-15T12:30:23
https://github.com/huggingface/datasets/pull/3843
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3843", "html_url": "https://github.com/huggingface/datasets/pull/3843", "diff_url": "https://github.com/huggingface/datasets/pull/3843.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3843.patch", "merged_at": "2022-03-15T12:30:23" }
mariosasko
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3843). All of your documentation changes will be reflected on that endpoint.", "Cool ! Looks like it breaks `test_streaming_gg_drive_gzipped` for some reason..." ]
1,161,336,483
3,842
Align IterableDataset.shuffle with Dataset.shuffle
closed
2022-03-07T12:10:46
2022-03-07T19:03:43
2022-03-07T19:03:42
https://github.com/huggingface/datasets/pull/3842
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3842", "html_url": "https://github.com/huggingface/datasets/pull/3842", "diff_url": "https://github.com/huggingface/datasets/pull/3842.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3842.patch", "merged_at": "2022-03-07T19:03:42" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3842). All of your documentation changes will be reflected on that endpoint.", "We should also add `generator` as a param to `shuffle` to fully align the APIs, no?", "I added the `generator` argument.\r\n\r\nI had to make a few other adjustments to make it work. In particular when you call `set_epoch()` on a streaming dataset, it updates the underlying random generator by using a new effective seed. The effective seed is generated using the previous generator and the epoch number." ]
1,161,203,842
3,841
Pyright reportPrivateImportUsage when `from datasets import load_dataset`
closed
2022-03-07T10:24:04
2023-02-18T19:14:03
2023-02-13T13:48:41
https://github.com/huggingface/datasets/issues/3841
null
lkhphuc
false
[ "Hi! \r\n\r\nThis issue stems from `datasets` having `py.typed` defined (see https://github.com/microsoft/pyright/discussions/3764#discussioncomment-3282142) - to avoid it, we would either have to remove `py.typed` (added to be compliant with PEP-561) or export the names with `__all__`/`from .submodule import name as name`.\r\n\r\nTransformers is fine as it no longer has `py.typed` (removed in https://github.com/huggingface/transformers/pull/18485)\r\n\r\nWDYT @lhoestq @albertvillanova @polinaeterna \r\n\r\n@sgugger's point makes sense - we should either be \"properly typed\" (have py.typed + mypy tests) or drop `py.typed` as Transformers did (I like this option better).\r\n\r\n(cc @Wauplin since `huggingface_hub` has the same issue.)", "I'm fine with dropping it, but autotrain people won't be happy @SBrandeis ", "> (cc @Wauplin since huggingface_hub has the same issue.)\r\n\r\nHmm maybe we have the same issue but I haven't been able to reproduce something similar to `\"load_dataset\" is not exported from module \"datasets\"` message (using VSCode+Pylance -that is powered by Pyright). `huggingface_hub` contains a `py.typed` file but the package itself is actually typed. We are running `mypy` in our CI tests since ~3 months and so far it seems to be ok. But happy to change if it causes some issues with linters.\r\n\r\nAlso the top-level [`__init__.py`](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py) is quite different in `hfh` than `datasets` (at first glance). We have a section at the bottom to import all high level methods/classes in a `if TYPE_CHECKING` block.", "@Wauplin I only get the error if I use Pyright's CLI tool or the Pyright extension (not sure why, but Pylance also doesn't report this issue on my machine)\r\n\r\n> Also the top-level [`__init__.py`](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py) is quite different in `hfh` than `datasets` (at first glance). We have a section at the bottom to import all high level methods/classes in a `if TYPE_CHECKING` block.\r\n\r\nI tried to fix the issue with `TYPE_CHECKING`, but it still fails if `py.typed` is present.", "@mariosasko thank for the tip. I have been able to reproduce the issue as well. I would be up for including a (huge) static `__all__` variable in the `__init__.py` (since the file is already generated automatically in `hfh`) but honestly I don't think it's worth the hassle. \r\n\r\nI'll delete the `py.typed` file in `huggingface_hub` to be consistent between HF libraries. I opened a PR here: https://github.com/huggingface/huggingface_hub/pull/1329", "I am getting this error in google colab today:\r\n\r\n![image](https://user-images.githubusercontent.com/3464445/219883967-c7193a23-0388-4ba3-b00c-a53883fb6512.png)\r\n\r\nThe code runs just fine too." ]
1,161,183,773
3,840
Pin responses to fix CI for Windows
closed
2022-03-07T10:06:53
2022-03-07T10:12:36
2022-03-07T10:07:24
https://github.com/huggingface/datasets/pull/3840
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3840", "html_url": "https://github.com/huggingface/datasets/pull/3840", "diff_url": "https://github.com/huggingface/datasets/pull/3840.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3840.patch", "merged_at": "2022-03-07T10:07:24" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3840). All of your documentation changes will be reflected on that endpoint." ]
1,161,183,482
3,839
CI is broken for Windows
closed
2022-03-07T10:06:42
2022-05-20T14:13:43
2022-03-07T10:07:24
https://github.com/huggingface/datasets/issues/3839
null
albertvillanova
false
[]
1,161,137,406
3,838
Add a data type for labeled images (image segmentation)
open
2022-03-07T09:38:15
2024-05-29T16:50:55
null
https://github.com/huggingface/datasets/issues/3838
null
severo
false
[]
1,161,109,031
3,837
Release: 1.18.4
closed
2022-03-07T09:13:29
2022-03-07T11:07:35
2022-03-07T11:07:02
https://github.com/huggingface/datasets/pull/3837
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3837", "html_url": "https://github.com/huggingface/datasets/pull/3837", "diff_url": "https://github.com/huggingface/datasets/pull/3837.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3837.patch", "merged_at": null }
albertvillanova
true
[]
1,161,072,531
3,836
Logo float left
closed
2022-03-07T08:38:34
2022-03-07T20:21:11
2022-03-07T09:14:11
https://github.com/huggingface/datasets/pull/3836
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3836", "html_url": "https://github.com/huggingface/datasets/pull/3836", "diff_url": "https://github.com/huggingface/datasets/pull/3836.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3836.patch", "merged_at": "2022-03-07T09:14:11" }
mishig25
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3836). All of your documentation changes will be reflected on that endpoint.", "Weird, the logo doesn't seem to be floating on my side (using Chrome) at https://huggingface.co/docs/datasets/master/en/index", "https://huggingface.co/docs/datasets/index\r\n\r\nThe needed css change from moon-landing just got deployed" ]
1,161,029,205
3,835
The link given on the gigaword does not work
closed
2022-03-07T07:56:42
2022-03-15T12:30:23
2022-03-15T12:30:23
https://github.com/huggingface/datasets/issues/3835
null
martin6336
false
[]
1,160,657,937
3,834
Fix dead dataset scripts creation link.
closed
2022-03-06T16:45:48
2022-03-07T12:12:07
2022-03-07T12:12:07
https://github.com/huggingface/datasets/pull/3834
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3834", "html_url": "https://github.com/huggingface/datasets/pull/3834", "diff_url": "https://github.com/huggingface/datasets/pull/3834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3834.patch", "merged_at": "2022-03-07T12:12:07" }
dnaveenr
true
[]
1,160,543,713
3,833
Small typos in How-to-train tutorial.
closed
2022-03-06T07:49:49
2022-03-07T12:35:33
2022-03-07T12:13:17
https://github.com/huggingface/datasets/pull/3833
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3833", "html_url": "https://github.com/huggingface/datasets/pull/3833", "diff_url": "https://github.com/huggingface/datasets/pull/3833.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3833.patch", "merged_at": "2022-03-07T12:13:17" }
lkhphuc
true
[]
1,160,503,446
3,832
Making Hugging Face the place to go for Graph NNs datasets
open
2022-03-06T03:02:58
2022-03-14T07:45:38
null
https://github.com/huggingface/datasets/issues/3832
null
omarespejel
false
[ "It will be indeed really great to add support to GNN datasets. Big :+1: for this initiative.", "@napoles-uach identifies the [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression). \r\n\r\nAdded to the Tasks in the initial issue.", "Thanks Omar, that is a great collection!", "Great initiative! Let's keep this issue for these 3 datasets, but moving forward maybe let's create a new issue per dataset :rocket: great work @napoles-uach and @omarespejel!" ]
1,160,501,000
3,831
when using to_tf_dataset with shuffle is true, not all completed batches are made
closed
2022-03-06T02:43:50
2022-03-08T15:18:56
2022-03-08T15:18:56
https://github.com/huggingface/datasets/issues/3831
null
greenned
false
[ "Maybe @Rocketknight1 can help here", "Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`.", "@Rocketknight1 Oh, thank you. I didn't get **drop_remainder** Have a nice day!", "No problem!\r\n" ]
1,160,181,404
3,830
Got error when load cnn_dailymail dataset
closed
2022-03-05T01:43:12
2022-03-07T06:53:41
2022-03-07T06:53:41
https://github.com/huggingface/datasets/issues/3830
null
wgong0510
false
[ "Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1 import datasets\r\n 2 \r\n----> 3 train_data = datasets.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\n\r\n5 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)\r\n 1705 ignore_verifications=ignore_verifications,\r\n 1706 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1707 use_auth_token=use_auth_token,\r\n 1708 )\r\n 1709 \r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 593 if not downloaded_from_gcs:\r\n 594 self._download_and_prepare(\r\n--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 596 )\r\n 597 # Sync info\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 659 split_dict = SplitDict(dataset_name=self.name)\r\n 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 662 \r\n 663 # Checksums verification\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _split_generators(self, dl_manager)\r\n 253 def _split_generators(self, dl_manager):\r\n 254 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 256 # Generate shared vocabulary\r\n 257 \r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _subset_filenames(dl_paths, split)\r\n 154 else:\r\n 155 logger.fatal(\"Unsupported split: %s\", split)\r\n--> 156 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 157 dm = _find_files(dl_paths, \"dm\", urls)\r\n 158 return cnn + dm\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _find_files(dl_paths, publisher, url_dict)\r\n 133 else:\r\n 134 logger.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 135 files = sorted(os.listdir(top_dir))\r\n 136 \r\n 137 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n```", "Hi @jon-tow, thanks for reporting. And hi @dynamicwebpaige, thanks for your investigation. \r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. 
See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today (indeed, we were planning to do it last Friday).\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nCC: @lhoestq " ]