| Column | Type | Values |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.83B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–6.09k |
| title | string | lengths 1–290 |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| milestone | dict | |
| comments | int64 | 0–54 |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| closed_at | string | length 20 |
| active_lock_reason | null | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_text | list | |
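The rows below are raw records that follow this schema. As a minimal, hedged sketch (the repository ID `user/github-issues` is a placeholder, not the actual location of this dump), a dataset with these columns could be loaded and filtered with the `datasets` library roughly as follows:

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute the real dataset repository.
ds = load_dataset("user/github-issues", split="train")

# Inspect the column schema (url, title, body, comments_text, ...).
print(ds.features)

# Example: keep only true issues (not pull requests) that are still open.
open_issues = ds.filter(
    lambda row: not row["is_pull_request"] and row["state"] == "open"
)
print(len(open_issues), "open issues")
```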
https://api.github.com/repos/huggingface/datasets/issues/3983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3983/comments
https://api.github.com/repos/huggingface/datasets/issues/3983/events
https://github.com/huggingface/datasets/issues/3983
1,175,759,412
I_kwDODunzps5GFKo0
3,983
Infinitely attempting lock
[]
closed
false
null
1
2022-03-21T18:11:57Z
2022-05-06T16:12:18Z
2022-05-06T16:12:18Z
null
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /dbfs/transformers/tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --log_level debug \ --cache_dir /dbfs/transformers/cache ``` All goes well until acquiring a lock -- ``` 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... ``` and so on. I imagine this has to do with DBFS -- is there a way to tackle this?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3983/timeline
null
completed
null
null
false
[ "Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest vers...
https://api.github.com/repos/huggingface/datasets/issues/1386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1386/comments
https://api.github.com/repos/huggingface/datasets/issues/1386/events
https://github.com/huggingface/datasets/pull/1386
760,365,505
MDExOlB1bGxSZXF1ZXN0NTM1MjA5NDUx
1,386
Add RecipeNLG Dataset (manual download)
[]
closed
false
null
1
2020-12-09T14:13:19Z
2020-12-10T16:58:22Z
2020-12-10T16:58:21Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1386/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1386/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1386.diff", "html_url": "https://github.com/huggingface/datasets/pull/1386", "merged_at": "2020-12-10T16:58:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1386.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1386" }
true
[ "@lhoestq yes. I asked the authors for direct link but unfortunately we need to fill a form (captcha)" ]
https://api.github.com/repos/huggingface/datasets/issues/5244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5244/comments
https://api.github.com/repos/huggingface/datasets/issues/5244/events
https://github.com/huggingface/datasets/issues/5244
1,450,019,225
I_kwDODunzps5WbYmZ
5,244
Allow dataset streaming from a private source when loading a dataset with a dataset loading script
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
5
2022-11-15T16:02:10Z
2022-11-23T14:02:30Z
null
null
### Feature request Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source. It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_manager ### Motivation It is possible to share a dataset hosted on another platform by writing a dataset loading script. It works perfectly for publicly available resources. For resources that require authentication, you can provide a [download_custom](https://huggingface.co/docs/datasets/package_reference/builder_classes#datasets.DownloadManager) method to the download_manager. Unfortunately, this function doesn't work with **dataset streaming**. A solution so as to allow dataset streaming from private sources would be a more flexible _get_authentication_headers_for_url_ function. ### Your contribution Would you be interested in this improvement ? If so I could provide a PR. I've got something working locally, but it's not very clean, I'd need some guidance regarding integration.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5244/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5244/timeline
null
null
null
null
false
[ "Hi ! What kind of private source ? We're exploring adding support for cloud storage and URIs like s3://, gs:// etc. with authentication in the download manager", "Hello! It's a google cloud storage, so gs://, but I'm using it with https.\r\nBeing able to provide a file system like [here](https://huggingface.co/d...
https://api.github.com/repos/huggingface/datasets/issues/5818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5818/comments
https://api.github.com/repos/huggingface/datasets/issues/5818/events
https://github.com/huggingface/datasets/issues/5818
1,695,052,555
I_kwDODunzps5lCHML
5,818
Ability to update a dataset
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
3
2023-05-04T01:08:13Z
2023-05-04T20:43:39Z
null
null
### Feature request The ability to load a dataset, add or change something, and save it back to disk. Maybe it's possible, but I can't work out how to do it, e.g. this fails: ```py import datasets dataset = datasets.load_from_disk("data/test1") dataset = dataset.add_item({"text": "A new item"}) dataset.save_to_disk("data/test1") ``` With the error: ``` PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself. ``` ### Motivation My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again. Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW I can't use `dataset.map()` to do the task because that doesn't work with `num_proc` when adding rows, so is confined to a single process which is too slow. The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing. ### Your contribution na
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5818/timeline
null
null
null
null
false
[ "This [reply](https://discuss.huggingface.co/t/how-do-i-add-things-rows-to-an-already-saved-dataset/27423) from @mariosasko on the forums may be useful :)", "In this case, I think we can avoid the `PermissionError` by unpacking the underlying `ConcatenationTable` and saving only the newly added data blocks (in ne...
https://api.github.com/repos/huggingface/datasets/issues/63
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/63/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/63/comments
https://api.github.com/repos/huggingface/datasets/issues/63/events
https://github.com/huggingface/datasets/pull/63
614,666,365
MDExOlB1bGxSZXF1ZXN0NDE1MTczODU5
63
[Dataset scripts] add all datasets scripts
[]
closed
false
null
0
2020-05-08T10:50:15Z
2020-05-08T17:39:22Z
2020-05-08T11:34:00Z
null
As mentioned, we can have the canonical datasets in the master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datasets. @mariamabarham @lhoestq @thomwolf - what do you think? If this is ok for you, I can sync up the master with the `add_dataset` branch: https://github.com/huggingface/nlp/pull/37 so that master is up to date.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/63/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/63/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/63.diff", "html_url": "https://github.com/huggingface/datasets/pull/63", "merged_at": "2020-05-08T11:34:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/63.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/63" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1573/comments
https://api.github.com/repos/huggingface/datasets/issues/1573/events
https://github.com/huggingface/datasets/pull/1573
767,011,938
MDExOlB1bGxSZXF1ZXN0NTM5ODYyNjcx
1,573
adding dataset for diplomacy detection-2
[]
closed
false
null
0
2020-12-14T23:21:37Z
2020-12-14T23:36:57Z
2020-12-14T23:36:57Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1573/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1573.diff", "html_url": "https://github.com/huggingface/datasets/pull/1573", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1573" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2117/comments
https://api.github.com/repos/huggingface/datasets/issues/2117/events
https://github.com/huggingface/datasets/issues/2117
841,535,283
MDU6SXNzdWU4NDE1MzUyODM=
2,117
load_metric from local "glue.py" meets error 'NoneType' object is not callable
[]
closed
false
null
3
2021-03-26T02:35:22Z
2021-08-25T21:44:05Z
2021-03-26T02:40:26Z
null
actual_task = "mnli" if task == "mnli-mm" else task dataset = load_dataset(path='/home/glue.py', name=actual_task) metric = load_metric(path='/home/glue.py', name=actual_task) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-7ab77a465d81> in <module> 1 actual_task = "mnli" if task == "mnli-mm" else task 2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task) ----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task) ~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) 508 keep_in_memory=keep_in_memory, 509 experiment_id=experiment_id, --> 510 **metric_init_kwargs, 511 ) 512 TypeError: 'NoneType' object is not callable Please help
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2117/timeline
null
completed
null
null
false
[ "@Frankie123421 what was the resolution to this?", "> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric", "thank you!" ]
https://api.github.com/repos/huggingface/datasets/issues/5689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5689/comments
https://api.github.com/repos/huggingface/datasets/issues/5689/events
https://github.com/huggingface/datasets/pull/5689
1,648,956,349
PR_kwDODunzps5NVMuI
5,689
Support streaming Beam datasets from HF GCS preprocessed data
[]
closed
false
null
4
2023-03-31T08:44:24Z
2023-04-12T05:57:55Z
2023-04-12T05:50:31Z
null
This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in the HF Google Cloud Storage: - natural_questions - wiki40b - wikipedia This is done by streaming from the prepared Arrow files in HF Google Cloud Storage. This will fix their corresponding dataset viewers. Related to: - https://github.com/huggingface/datasets-server/pull/988#discussion_r1150767138 Related to: - https://huggingface.co/datasets/natural_questions/discussions/4 - https://huggingface.co/datasets/wiki40b/discussions/2 - https://huggingface.co/datasets/wikipedia/discussions/9 CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5689/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5689.diff", "html_url": "https://github.com/huggingface/datasets/pull/5689", "merged_at": "2023-04-12T05:50:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5689.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5689" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"wikipedia\", \"20220301.en\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nOut[2]: \r\n{'id': '12',\r\n 'url': 'https://en....
https://api.github.com/repos/huggingface/datasets/issues/3122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3122/comments
https://api.github.com/repos/huggingface/datasets/issues/3122/events
https://github.com/huggingface/datasets/issues/3122
1,031,787,509
I_kwDODunzps49f9P1
3,122
OSError with a custom dataset loading script
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
8
2021-10-20T20:08:39Z
2021-11-23T09:55:38Z
2021-11-23T09:55:38Z
null
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory structure, yet I am only getting an error with janes_tag. ## Steps to reproduce the bug ```python dataset = datasets.load_dataset('classla/janes_tag', split='validation') ``` ## Expected results Dataset correctly loaded. ## Actual results Traceback (most recent call last): File "C:/mypath/test.py", line 91, in <module> load_and_print('janes_tag') File "C:/mypath/test.py", line 32, in load_and_print dataset = datasets.load_dataset('classla/{}'.format(ds_name), split='validation') File "C:\mypath\venv\lib\site-packages\datasets\load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 704, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: 'C:\\mypath\\.cache\\huggingface\\datasets\\downloads\\2c9996e44bdc5af9c89bffb9e6d7a3e42fdb2f56bacab45de13b20f3032ea7ca\\data\\train_all.conllup' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.5 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3122/timeline
null
completed
null
null
false
[ "Hi,\r\n\r\nthere is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data`...
https://api.github.com/repos/huggingface/datasets/issues/3491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3491/comments
https://api.github.com/repos/huggingface/datasets/issues/3491/events
https://github.com/huggingface/datasets/issues/3491
1,089,918,018
I_kwDODunzps5A9tRC
3,491
Update version of pib dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
0
2021-12-28T14:03:58Z
2021-12-29T08:42:57Z
2021-12-29T08:42:57Z
null
On the Hub we have v0, while there exists v1.3. Related to bigscience-workshop/data_tooling#130
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3491/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/861/comments
https://api.github.com/repos/huggingface/datasets/issues/861/events
https://github.com/huggingface/datasets/issues/861
744,753,458
MDU6SXNzdWU3NDQ3NTM0NTg=
861
Possible Bug: Small training/dataset file creates gigantic output
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d876e3", "default": true...
closed
false
null
7
2020-11-17T13:48:59Z
2021-03-30T14:04:04Z
2021-03-22T12:04:55Z
null
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/861/timeline
null
completed
null
null
false
[ "The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is w...
https://api.github.com/repos/huggingface/datasets/issues/1143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1143/comments
https://api.github.com/repos/huggingface/datasets/issues/1143/events
https://github.com/huggingface/datasets/pull/1143
757,448,920
MDExOlB1bGxSZXF1ZXN0NTMyODI0NzMx
1,143
Add the Winograd Schema Challenge
[]
closed
false
null
0
2020-12-04T22:26:59Z
2020-12-09T15:11:31Z
2020-12-09T09:32:34Z
null
Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples. - https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html The data format was a bit of a nightmare but I think I got it to a workable format.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1143/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1143/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1143.diff", "html_url": "https://github.com/huggingface/datasets/pull/1143", "merged_at": "2020-12-09T09:32:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/1143.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1143" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5156/comments
https://api.github.com/repos/huggingface/datasets/issues/5156/events
https://github.com/huggingface/datasets/issues/5156
1,421,667,125
I_kwDODunzps5UvOs1
5,156
Unable to download dataset using Azure Data Lake Gen 2
[]
closed
false
null
2
2022-10-25T00:43:18Z
2022-11-17T23:37:09Z
2022-11-17T23:37:08Z
null
### Describe the bug When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is showed: ``` Traceback (most recent call last): File "download_hf_dataset.py", line 143, in <module> main() File "download_hf_dataset.py", line 102, in main builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet") File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/datasets/builder.py", line 671, in download_and_prepare fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/core.py", line 639, in get_fs_token_paths fs = cls(**options) File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/spec.py", line 76, in __call__ obj = super().__call__(*args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'account_name' ``` If I don't pass the storage_options argument (leave it as None), it requires the credentials used in ADL Gen 1: `TypeError: __init__() missing 3 required positional arguments: 'tenant_id', 'client_id', and 'client_secret'` Thus, it is not possible to download a dataset from the cloud using Azure Data Lake (adl) Gen2. ### Steps to reproduce the bug Assuming that you have an account on Azure and at Storage Account that can be used for reproduce: 1. Create a dict with the format to connect to Azure Data Lake Gen 2 ``` storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY) # gen 2 filesystem ``` 2. Create a dataset builder for any HF hosted dataset ``` builder = load_dataset_builder(dataset_name) ``` 3. Try to download the dataset passing the storage_options as an argument ``` save_dir = 'adl://my_save_dir' builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet") ``` ### Expected behavior Not seeing the error mentioned above and being able to download the dataset to the provided path on ADL ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5156/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5156/timeline
null
completed
null
null
false
[ "Hi ! From the `adlfs` docs, there are two filesystems you can use:\r\n> To use the Gen1 filesystem:\r\n> - known_implementations[‘adl’] = {‘class’: ‘adlfs.AzureDatalakeFileSystem’}\r\n> \r\n> To use the Gen2 filesystem:\r\n> - known_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n\r\nIf I'm no...
https://api.github.com/repos/huggingface/datasets/issues/2673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2673/comments
https://api.github.com/repos/huggingface/datasets/issues/2673/events
https://github.com/huggingface/datasets/pull/2673
947,300,008
MDExOlB1bGxSZXF1ZXN0NjkyMzAxMTgw
2,673
Fix potential DuplicatedKeysError in SQuAD
[]
closed
false
null
0
2021-07-19T06:08:00Z
2021-07-19T07:08:03Z
2021-07-19T07:08:03Z
null
DONE: - Fix potential DuplicatedKeysError by ensuring keys are unique. - Align examples in the docs with SQuAD code. We should promote as a good practice that keys be programmatically generated as unique, instead of read from data (which might not be unique).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2673/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2673.diff", "html_url": "https://github.com/huggingface/datasets/pull/2673", "merged_at": "2021-07-19T07:08:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2673.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2673" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2160/comments
https://api.github.com/repos/huggingface/datasets/issues/2160/events
https://github.com/huggingface/datasets/issues/2160
849,052,921
MDU6SXNzdWU4NDkwNTI5MjE=
2,160
data_args.preprocessing_num_workers almost freezes
[]
closed
false
null
2
2021-04-02T07:56:13Z
2021-04-02T10:14:32Z
2021-04-02T10:14:31Z
null
Hi @lhoestq I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py to speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with opus100 corpus but this moves on till a point and then this freezes almost for sometime during tokenization steps and then this is back again, overall to me taking more time than normal case, I appreciate your advice on how I can use this option properly to speed up. thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2160/timeline
null
completed
null
null
false
[ "Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ ...
https://api.github.com/repos/huggingface/datasets/issues/4405
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4405/comments
https://api.github.com/repos/huggingface/datasets/issues/4405/events
https://github.com/huggingface/datasets/issues/4405
1,248,574,087
I_kwDODunzps5Ka7qH
4,405
[TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-05-25T18:56:43Z
2022-06-07T14:27:20Z
2022-06-07T14:27:20Z
null
## Describe the bug I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features. ## Steps to reproduce the bug ```python import os from typing import ( List, Dict, ) from collections import ( defaultdict, ) from dataclasses import ( dataclass, ) from datasets import ( load_dataset, ) @dataclass class ConllConverter: path: str name: str cache_dir: str def __post_init__( self, ): self.dataset = load_dataset( path=self.path, name=self.name, cache_dir=self.cache_dir, ) def convert( self, ): class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature # label_set = list(set([ # label.split("-")[1] if label != "O" else label for label in class_label.names # ])) def prepare_chunk(token, entity): assert len(token) == len(entity) # Sequence length length = len(token) # Variable used entity_chunk = defaultdict(list) idx = flag = 0 # While loop while idx < length: if entity[idx] == "O": flag += 1 idx += 1 else: iob_tp, lab_tp = entity[idx].split("-") assert iob_tp == "B" idx += 1 while idx < length and entity[idx].startswith("I-"): idx += 1 entity_chunk[lab_tp].append(token[flag: idx]) flag = idx entity_chunk = dict(entity_chunk) # for label in label_set: # if label != "O" and label not in entity_chunk.keys(): # entity_chunk[label] = None return entity_chunk def prepare_features( batch: Dict[str, List], ) -> Dict[str, List]: sentence = [ sent for doc_sent in batch["sentences"] for sent in doc_sent ] feature = { "sentence": list(), } for sent in sentence: token = sent["words"] entity = class_label.int2str(sent["named_entities"]) entity_chunk = prepare_chunk(token, entity) sent_feat = { "token": token, "entity": entity, "entity_chunk": entity_chunk, } feature["sentence"].append(sent_feat) return feature column_names = self.dataset.column_names["train"] dataset = self.dataset.map( function=prepare_features, with_indices=False, batched=True, batch_size=3, remove_columns=column_names, num_proc=1, ) dataset.save_to_disk( dataset_dict_path=os.path.join("data", self.path, self.name) ) if __name__ == "__main__": converter = ConllConverter( path="conll2012_ontonotesv5", name="english_v4", cache_dir="cache", ) converter.convert() ``` ## Expected results I want to use the dataset to perform NER task and to change the label list into a {Entity Type: list of spans} format. 
## Actual results <details> <summary>Traceback</summary> ```python Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single writer.write_batch(batch) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), 
length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module> converter.convert() File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert dataset = self.dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map { File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp> k: dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map transformed_shards[index] = async_result.get() File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get raise self._value TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', 
id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu 18.04 - Python version: 3.9.7 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4405/timeline
null
completed
null
null
false
[ "And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform norma...
https://api.github.com/repos/huggingface/datasets/issues/4359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4359/comments
https://api.github.com/repos/huggingface/datasets/issues/4359/events
https://github.com/huggingface/datasets/pull/4359
1,237,149,578
PR_kwDODunzps434Pb6
4,359
Fix Version equality
[]
closed
false
null
1
2022-05-16T13:19:26Z
2022-05-24T16:25:37Z
2022-05-24T16:17:14Z
null
I think `Version` equality should align with other similar cases in Python, like: ```python In [1]: "a" == 5, "a" == None Out[1]: (False, False) In [2]: "a" != 5, "a" != None Out[2]: (True, True) ``` With this PR, we will get: ```python In [3]: Version("1.0.0") == 5, Version("1.0.0") == None Out[3]: (False, False) In [4]: Version("1.0.0") != 5, Version("1.0.0") != None Out[4]: (True, True) ``` Note I found this issue when `doc-builder` tried to compare: ```python if param.default != inspect._empty ``` where `param.default` is an instance of `Version`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4359/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4359/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4359.diff", "html_url": "https://github.com/huggingface/datasets/pull/4359", "merged_at": "2022-05-24T16:17:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/4359.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4359" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/505/comments
https://api.github.com/repos/huggingface/datasets/issues/505/events
https://github.com/huggingface/datasets/pull/505
678,791,400
MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4
505
tmp_file referenced before assignment
[]
closed
false
null
2
2020-08-13T23:27:33Z
2020-08-14T13:42:46Z
2020-08-14T13:42:46Z
null
Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file".
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/505/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/505/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/505.diff", "html_url": "https://github.com/huggingface/datasets/pull/505", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/505.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/505" }
true
[ "Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)", "I'm closing this one as I created the other PR." ]
https://api.github.com/repos/huggingface/datasets/issues/3190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3190/comments
https://api.github.com/repos/huggingface/datasets/issues/3190/events
https://github.com/huggingface/datasets/issues/3190
1,041,153,631
I_kwDODunzps4-Dr5f
3,190
combination of shuffle and filter results in a bug
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2021-11-01T13:07:29Z
2021-11-02T10:50:49Z
2021-11-02T10:50:49Z
null
## Describe the bug Hi, I would like to shuffle a dataset, then filter it based on each existing label. however, the combination of `filter`, `shuffle` seems to results in a bug. In the minimal example below, as you see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any suggestions as a temporary fix is appreciated @lhoestq. Thanks. Best regards Rabeeh ## Steps to reproduce the bug ```python import numpy as np import datasets datasets = datasets.load_dataset('super_glue', 'rte', script_version="master") shuffled_data = datasets["train"].shuffle(seed=42) for label in range(2): print("label ", label) data = shuffled_data.filter(lambda example: int(example['label']) == label) print("length ", len(data), np.unique(data['label'])) ``` ## Expected results Filtering per label, should only return the data with that specific label. ## Actual results As you can see, filtered data per label, has still two labels of [0, 1] ``` label 0 length 1249 [0 1] label 1 length 1241 [0 1] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: linux - Python version: 3.7.11 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3190/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3190/timeline
null
completed
null
null
false
[ "I cannot reproduce this on master and pyarrow==4.0.1.\r\n", "Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13\r\n\r\nCan you try to update `datasets` and try again ?", "Thanks a lot, fixes with 1.13" ]
https://api.github.com/repos/huggingface/datasets/issues/2087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2087/comments
https://api.github.com/repos/huggingface/datasets/issues/2087/events
https://github.com/huggingface/datasets/pull/2087
836,587,392
MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2
2,087
Update metadata if dataset features are modified
[]
closed
false
null
4
2021-03-20T02:05:23Z
2021-04-09T09:25:33Z
2021-04-09T09:25:33Z
null
This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features. Fixes #2083
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2087/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2087/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2087.diff", "html_url": "https://github.com/huggingface/datasets/pull/2087", "merged_at": "2021-04-09T09:25:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2087.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2087" }
true
[ "@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.", "Awesome thank you !\r\nYes this approach with a wrapper is good :)", "@lhoestq Added a test. To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip i...
https://api.github.com/repos/huggingface/datasets/issues/329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/329/comments
https://api.github.com/repos/huggingface/datasets/issues/329/events
https://github.com/huggingface/datasets/issues/329
648,446,979
MDU6SXNzdWU2NDg0NDY5Nzk=
329
[Bug] FileLock dependency incompatible with filesystem
[]
closed
false
null
9
2020-06-30T19:45:31Z
2022-09-08T20:58:37Z
2020-06-30T21:33:06Z
null
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like this: ```bash /fsx ----downloads ----94be...73.lock ----wikitext ----wikitext-2-raw ----wikitext-2-raw-1.0.0.incomplete ``` It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency: ```python open("/fsx/hello.txt").write("hello") # succeeds from filelock import FileLock with FileLock("/fsx/hello.lock"): open("/fsx/hello.txt").write("hello") # hangs indefinitely ``` Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/329/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/329/timeline
null
completed
null
null
false
[ "Hi, can you give details on your environment/os/packages versions/etc?", "Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile th...
https://api.github.com/repos/huggingface/datasets/issues/2638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2638/comments
https://api.github.com/repos/huggingface/datasets/issues/2638/events
https://github.com/huggingface/datasets/pull/2638
943,484,913
MDExOlB1bGxSZXF1ZXN0Njg5MTA5NTg1
2,638
Streaming for the Json loader
[]
closed
false
null
2
2021-07-13T14:37:06Z
2021-07-16T15:59:32Z
2021-07-16T15:59:31Z
null
The builder was not using `open`, so `pyarrow.json.read_json` was downloading the full file before it could start yielding rows. Moreover, `pyarrow.json.read_json` turned out to be ill-suited for streaming: it downloaded too much data and failed if `block_size` was not properly configured (related to #2573). So I switched to using `open`, which is extended to support reading from remote files progressively, and I removed the pyarrow json reader, which was not practical. Instead, I'm using the classical `json.loads` from the standard library.
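For illustration, a minimal sketch of the approach described above (progressive reading plus `json.loads`), assuming a local JSON Lines file at the hypothetical path `data.jsonl`; the loader itself relies on the extended `open` so that the same pattern also works on remote files:

```python
import json

def iter_json_batches(path, batch_size=1000):
    """Yield batches of parsed rows without loading the whole file into memory."""
    batch = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            batch.append(json.loads(line))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:
        yield batch

# Usage: rows become available progressively instead of after a full download.
for rows in iter_json_batches("data.jsonl"):
    ...  # convert `rows` to an Arrow table / yield examples
```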
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2638/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2638.diff", "html_url": "https://github.com/huggingface/datasets/pull/2638", "merged_at": "2021-07-16T15:59:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/2638.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2638" }
true
[ "A note is that I think we should add a few indicator of status (as mentioned by @stas00 in #2649), probably at the (1) downloading, (2) extracting and (3) reading steps. In particular when loading many very large files it's interesting to know a bit where we are in the process.", "I tested locally, and the built...
https://api.github.com/repos/huggingface/datasets/issues/4026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4026/comments
https://api.github.com/repos/huggingface/datasets/issues/4026/events
https://github.com/huggingface/datasets/pull/4026
1,180,968,774
PR_kwDODunzps41Btcm
4,026
Support streaming xtreme dataset for bucc18 config
[]
closed
false
null
1
2022-03-25T16:00:40Z
2022-03-25T16:26:50Z
2022-03-25T16:21:52Z
null
Support streaming xtreme dataset for bucc18 config.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4026/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4026/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4026.diff", "html_url": "https://github.com/huggingface/datasets/pull/4026", "merged_at": "2022-03-25T16:21:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/4026.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4026" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1941/comments
https://api.github.com/repos/huggingface/datasets/issues/1941/events
https://github.com/huggingface/datasets/issues/1941
815,985,167
MDU6SXNzdWU4MTU5ODUxNjc=
1,941
Loading of FAISS index fails for index_name = 'exact'
[]
closed
false
null
3
2021-02-25T01:30:54Z
2021-02-25T14:28:46Z
2021-02-25T14:28:46Z
null
Hi, It looks like loading of FAISS index now fails when using index_name = 'exact'. For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage). Running `transformers==4.3.2` and datasets installed from source on latest `master` branch. ```bash (venv) sergey_mkrtchyan datasets (master) $ python Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration >>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") >>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb) Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4 Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb) 0%| | 0/10 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained return cls( File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__ self.init_retrieval() File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval self.index.init_index() File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index self.dataset = load_dataset( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset datasets = utils.map_nested( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset post_processed = self._post_process(ds, resources_paths) File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process dataset.add_faiss_index("embeddings", custom_index=index) File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index super().add_faiss_index( File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose) File 
"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors self.faiss_index.add(vecs) File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add self.add_c(n, swig_ptr(x)) File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add return _swigfaiss.IndexHNSW_add(self, n, x) RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed >>> ``` The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1941/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1941/timeline
null
completed
null
null
false
[ "Thanks for reporting ! I'm taking a look", "Index training was missing, I fixed it here: https://github.com/huggingface/datasets/commit/f5986c46323583989f6ed1dabaf267854424a521\r\n\r\nCan you try again please ?", "Works great 👍 I just put a minor comment on the commit, I think you meant to pass the `train_siz...
https://api.github.com/repos/huggingface/datasets/issues/3389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3389/comments
https://api.github.com/repos/huggingface/datasets/issues/3389/events
https://github.com/huggingface/datasets/issues/3389
1,072,191,865
I_kwDODunzps4_6Fl5
3,389
Add EDGAR
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
2
2021-12-06T14:06:11Z
2022-10-05T10:40:22Z
null
null
## Adding a Dataset - **Name:** EDGAR Database - **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. Containing millions of company and individual filings, EDGAR benefits investors, corporations, and the U.S. economy overall by increasing the efficiency, transparency, and fairness of the securities markets. The system processes about 3,000 filings per day, serves up 3,000 terabytes of data to the public annually, and accommodates 40,000 new filers per year on average. EDGAR® and EDGARLink® are registered trademarks of the SEC. - **Data:** https://www.sec.gov/os/accessing-edgar-data - **Motivation:** Enabling and improving FSI (Financial Services Industry) datasets to increase ease of use Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3389/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3389/timeline
null
null
null
null
false
[ "cc @juliensimon ", "Datasets are not tracked in this repository anymore. But you can make your own dataset in the huggingface hub" ]
https://api.github.com/repos/huggingface/datasets/issues/4224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4224/comments
https://api.github.com/repos/huggingface/datasets/issues/4224/events
https://github.com/huggingface/datasets/pull/4224
1,216,209,667
PR_kwDODunzps420KX2
4,224
autoeval config
[]
closed
false
null
0
2022-04-26T16:35:19Z
2022-04-26T16:36:45Z
2022-04-26T16:36:45Z
null
add train eval index for autoeval
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4224/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4224/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4224.diff", "html_url": "https://github.com/huggingface/datasets/pull/4224", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4224.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4224" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/486/comments
https://api.github.com/repos/huggingface/datasets/issues/486/events
https://github.com/huggingface/datasets/issues/486
675,649,034
MDU6SXNzdWU2NzU2NDkwMzQ=
486
Bookcorpus data contains pretokenized text
[]
closed
false
null
8
2020-08-09T06:53:24Z
2022-10-04T17:44:33Z
2022-10-04T17:44:33Z
null
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's WordPiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively. On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly to your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK, which fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575
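As an illustration of the detokenization workaround mentioned above, a minimal sketch using stock NLTK (not the author's fork, so some edge cases may remain); the sample sentence is made up:

```python
from nltk.tokenize.treebank import TreebankWordDetokenizer

detok = TreebankWordDetokenizer()

# Pretokenized text as it appears in the dataset (contractions split, Treebank-style quotes).
tokens = "she did n't say `` hello '' to him".split()

# Best-effort reversal of the Treebank tokenization.
print(detok.detokenize(tokens))
# -> she didn't say "hello" to him  (approximately; quote handling can vary)
```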
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/486/timeline
null
completed
null
null
false
[ "Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do...
https://api.github.com/repos/huggingface/datasets/issues/6077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6077/comments
https://api.github.com/repos/huggingface/datasets/issues/6077/events
https://github.com/huggingface/datasets/issues/6077
1,822,486,810
I_kwDODunzps5soPEa
6,077
Mapping gets stuck at 99%
[]
open
false
null
3
2023-07-26T14:00:40Z
2023-07-27T12:19:23Z
null
null
### Describe the bug Hi ! I'm currently working with a large (~150GB) unnormalized dataset at work. The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retreive it. I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation metric for each feature of the entire dataset. I cannot load the entire dataset to RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them on the dataset. The problem lies in the second mapping, as it gets stuck at ~99%. By checking what the process does (using `htop` and `strace`) it seems to be doing a lot of I/O operations, and I'm not sure why. Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me. ### Steps to reproduce the bug I'm able to reproduce the problem using the following scripts: ```python # random_data.py import datasets import torch _VERSION = "1.0.0" class RandomDataset(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo( version=_VERSION, supervised_keys=None, features=datasets.Features( { "positions": datasets.Array2D( shape=(30000, 3), dtype="float32", ), "normals": datasets.Array2D( shape=(30000, 3), dtype="float32", ), "features": datasets.Array2D( shape=(30000, 6), dtype="float32", ), "scalars": datasets.Sequence( feature=datasets.Value("float32"), length=20, ), }, ), ) def _split_generators(self, dl_manager): return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, # type: ignore gen_kwargs={"nb_samples": 1000}, ), datasets.SplitGenerator( name=datasets.Split.TEST, # type: ignore gen_kwargs={"nb_samples": 100}, ), ] def _generate_examples(self, nb_samples: int): for idx in range(nb_samples): yield idx, { "positions": torch.randn(30000, 3), "normals": torch.randn(30000, 3), "features": torch.randn(30000, 6), "scalars": torch.randn(20), } ``` ```python # main.py import datasets import torch def apply_mean_std( dataset: datasets.Dataset, means: dict[str, torch.Tensor], stds: dict[str, torch.Tensor], ) -> dict[str, torch.Tensor]: """Normalize the dataset using the mean and standard deviation of each feature. Args: dataset (`Dataset`): A huggingface dataset. mean (`dict[str, Tensor]`): A dictionary containing the mean of each feature. std (`dict[str, Tensor]`): A dictionary containing the standard deviation of each feature. Returns: dict: A dictionary containing the normalized dataset. 
""" result = {} for key in means.keys(): # extract data from dataset data: torch.Tensor = dataset[key] # type: ignore # extract mean and std from dict mean = means[key] # type: ignore std = stds[key] # type: ignore # normalize data normalized_data = (data - mean) / std result[key] = normalized_data return result # get dataset ds = datasets.load_dataset( path="random_data.py", split="train", ).with_format("torch") # compute mean (along last axis) means = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names} means_sq = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names} for batch in ds.iter(batch_size=8): for key in ds.column_names: data = batch[key] batch_size = data.shape[0] data = data.reshape(-1, data.shape[-1]) means[key] += data.mean(dim=0) / len(ds) * batch_size means_sq[key] += (data**2).mean(dim=0) / len(ds) * batch_size # compute std (along last axis) stds = {key: torch.sqrt(means_sq[key] - means[key] ** 2) for key in ds.column_names} # normalize each feature of the dataset ds_normalized = ds.map( desc="Applying mean/std", # type: ignore function=apply_mean_std, batched=False, fn_kwargs={ "means": means, "stds": stds, }, ) ``` ### Expected behavior Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is really really slow, for example reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange, I'm sure I must be missing something, but I would still expect this to be faster. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6077/timeline
null
null
null
null
false
[ "The `MAX_MAP_BATCH_SIZE = 1_000_000_000` hack is bad as it loads the entire dataset into RAM when performing `.map`. Instead, it's best to use `.iter(batch_size)` to iterate over the data batches and compute `mean` for each column. (`stddev` can be computed in another pass).\r\n\r\nAlso, these arrays are big, so i...
https://api.github.com/repos/huggingface/datasets/issues/5255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5255/comments
https://api.github.com/repos/huggingface/datasets/issues/5255/events
https://github.com/huggingface/datasets/issues/5255
1,452,631,517
I_kwDODunzps5WlWXd
5,255
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
21
2022-11-17T03:22:22Z
2022-12-17T12:20:38Z
2022-12-17T12:20:37Z
null
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well: * [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) * [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) Would be nice to have a dataset for depth estimation. These datasets usually have three things: input image, depth map image, and depth mask (validity mask to indicate if a reading for a pixel is valid or not). Since we already have [semantic segmentation datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:image-segmentation&sort=downloads), I don't think we need any extended utilities to support this addition. Having this dataset would also allow us to author data preprocessing guides for depth estimation, particularly like the ones we have for other tasks ([example](https://huggingface.co/docs/datasets/image_classification)). Ccing @osanseviero @nateraw @NielsRogge Happy to work on adding it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5255/timeline
null
completed
null
null
false
[ "Also cc @mariosasko and @lhoestq ", "Cool ! Let us know if you have questions or if we can help :)\r\n\r\nI guess we'll also have to create the NYU CS Department on the Hub ?", "> I guess we'll also have to create the NYU CS Department on the Hub ?\r\n\r\nYes, you're right! Let me add it to my profile first, a...
https://api.github.com/repos/huggingface/datasets/issues/3800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3800/comments
https://api.github.com/repos/huggingface/datasets/issues/3800/events
https://github.com/huggingface/datasets/pull/3800
1,155,620,761
PR_kwDODunzps4zvkjA
3,800
Added computer vision tasks
[]
closed
false
null
0
2022-03-01T17:37:46Z
2022-03-04T07:15:55Z
2022-03-04T07:15:55Z
null
The previous PR was in my fork, so I thought it'd be easier to do it from a branch. Added computer vision task datasets according to HF tasks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3800/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3800.diff", "html_url": "https://github.com/huggingface/datasets/pull/3800", "merged_at": "2022-03-04T07:15:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/3800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3800" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4025/comments
https://api.github.com/repos/huggingface/datasets/issues/4025/events
https://github.com/huggingface/datasets/issues/4025
1,180,963,105
I_kwDODunzps5GZBEh
4,025
Missing argument in precision/recall
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
1
2022-03-25T15:55:52Z
2022-03-28T09:53:06Z
2022-03-28T09:53:06Z
null
**Is your feature request related to a problem? Please describe.** [`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in the [precision Metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117). The same issue is present for Recall. **Describe the solution you'd like** Support for `**kwargs` or adding a new field for `zero_division`. **Describe alternatives you've considered** I could filter the warnings myself, but that is not ideal. **Additional context** I can make the requested changes if this is approved.
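For context, a minimal sketch of the underlying scikit-learn call that the metric wraps, showing the `zero_division` argument this request is about (the labels are made up):

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]  # no positive predictions, so precision is undefined

# Without zero_division, sklearn emits an UndefinedMetricWarning and returns 0.0.
# Passing zero_division sets the returned value explicitly and silences the warning.
print(precision_score(y_true, y_pred, zero_division=0))
```

Forwarding this argument (or generic `**kwargs`) from the `datasets` metric down to scikit-learn is what the proposed change would add.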
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4025/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4025/timeline
null
completed
null
null
false
[ "Thanks for the suggestion, @Dref360.\r\n\r\nWe are adding that argument. " ]
https://api.github.com/repos/huggingface/datasets/issues/1343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1343/comments
https://api.github.com/repos/huggingface/datasets/issues/1343/events
https://github.com/huggingface/datasets/pull/1343
759,809,999
MDExOlB1bGxSZXF1ZXN0NTM0NzQ4NTE4
1,343
Add LiveQA
[]
closed
false
null
0
2020-12-08T21:52:36Z
2020-12-14T09:40:28Z
2020-12-14T09:40:28Z
null
This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1343/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1343/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1343.diff", "html_url": "https://github.com/huggingface/datasets/pull/1343", "merged_at": "2020-12-14T09:40:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1343.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1343" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4041/comments
https://api.github.com/repos/huggingface/datasets/issues/4041/events
https://github.com/huggingface/datasets/issues/4041
1,183,599,461
I_kwDODunzps5GjEtl
4,041
Add support for IIIF in datasets
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
1
2022-03-28T15:19:25Z
2022-04-05T18:20:53Z
null
null
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred. ## What is [IIIF](https://iiif.io/)? IIIF (International Image Interoperability Framework) > is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions. The tl;dr is that IIIF provides various specifications for implementing useful functionality for: - Institutions to make available images for various use cases - Users to have a consistent way of interacting/requesting these images - For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF). Some institutions that various levels of support IIF include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/ ## IIIF APIs IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/) ### IIIF Image API The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: ```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}``` A concrete example of this: ```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg``` As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return: ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg) We can change the size to request a size of 250 by 250, this is done by changing the size from `full` to `250,250` i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg) We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`. i.e. change the url to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg) A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size ## Why would/could this be useful for datasets? There are a few reasons why support for the IIIF Image API could be useful. 
Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows: - images can be requested in the right size, this prevents having to download/stream large images when the actual desired size is much smaller - can select a subset of an image: it is possible to select a sub-region of an image, this could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc for downstream use. - options for quality, rotation, the format can all be encoded in the URL request. These may become particularly useful when pre-training models on large image datasets where the cost of downloading images with 1600 pixel width when you actually want 240 has a larger impact. ## What could this look like in datasets? I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully, give a sense of possible approaches that match existing `datasets` methods in their approach. ### Use through datasets scripts Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script: ```python ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg") ``` This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script. ### Support through dataset scripts (with some datasets support) This is similar to the above but `datasets` would offer some way of saying this is a iiif URL and then expose the options associated with IIIF images automatically. i.e. if you did something like: ```python features = {"label": ClassLabel(names=['dog','cat']), "url": datasets.IIIFURL()} ``` inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset. ### Other possible integrations Some other possible pseudocode ways that a user could interact with IIIF URLs: The ability to cast to an `IIIFImage` feature type: ``` ds.cast_column('url', IIIFImage, download=False) ``` The ability to specify some options associated with IIIF urls. ``` ds = ds.set_iiif_options(column='url', size="250,250") ``` I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways. ## prerequisite requirements There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support: ### support for handling failed images loaded via a URL (or a specific IIIFImage feature). Working with images via web requests will inevitably return the odd failed request. If these images are then requests and don't return it would be useful to have a `None` returned instead of an error. 
For example, when using `push_to_hub` `datasets` will try and include the image but currently fails with bad URLs. ```python from datasets import Dataset import datasets urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3 urls.append("badurl.com/image.jpg") data = {"url":urls} ds = Dataset.from_dict(data) ds = ds.cast_column('url', datasets.Image()) ds[3]['url'] ``` returns a `FileNotFoundError`, for streaming large datasets of images using their URLs it could be useful to have `None` returned instead. This has implications for the actual training loop i.e. you now need to somehow skip those examples because of this it might not be desirable to support this. ### Caching support Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs. ### Support for Parsing IIIF URLs This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share. ## Why it might not be worthwhile/suitable for datasets There are some reasons that this might not be worth implementing: - currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models. - It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble. - The impact of different approaches to doing image scaling can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images this could have a downstream impact on model performance. think this is something that could be flagged to the end-user in the documentation. This probably also falls into general "gotchas" that probably aren't the `datasets` libraries' role to protect users from. Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets. ## Suggested next steps: I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
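To make the Image API URL scheme above concrete, a minimal sketch (neither the `piffle` library nor the author's own dataclass version) that rebuilds the Stanford example URL from its components:

```python
from dataclasses import dataclass

@dataclass
class IIIFImageURL:
    """Builds {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}."""
    base: str          # scheme + server + prefix
    identifier: str
    region: str = "full"
    size: str = "full"
    rotation: str = "0"
    quality: str = "default"
    fmt: str = "jpg"

    def build(self) -> str:
        return f"{self.base}/{self.identifier}/{self.region}/{self.size}/{self.rotation}/{self.quality}.{self.fmt}"

url = IIIFImageURL(
    base="https://stacks.stanford.edu/image/iiif",
    identifier="hg676jb4964%2F0380_796-44",
    size="!250,250",  # max 250x250 while preserving the aspect ratio
)
print(url.build())
# https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg
```

A `set_iiif_options`-style method would essentially just rewrite the `size`, `fmt`, etc. components of such URLs before the images are requested.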
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4041/timeline
null
null
null
null
false
[ "Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle b...
https://api.github.com/repos/huggingface/datasets/issues/6008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6008/comments
https://api.github.com/repos/huggingface/datasets/issues/6008/events
https://github.com/huggingface/datasets/issues/6008
1,789,869,344
I_kwDODunzps5qrz0g
6,008
Dataset.from_generator consistently freezes at ~1000 rows
[]
closed
false
null
3
2023-07-05T16:06:48Z
2023-07-10T13:46:39Z
2023-07-10T13:46:39Z
null
### Describe the bug Whenever I try to create a dataset that contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset. I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset. Let me know if you have ideas on how to resolve it! ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np def gen(): for row in range(10000): yield {"i": np.random.rand(512, 512, 3)} Dataset.from_generator(gen) # -> 90% of the time gets stuck around 1000 rows ``` ### Expected behavior Should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 12.0.1 - Pandas version: 1.5.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6008/timeline
null
completed
null
null
false
[ "By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Arra...
https://api.github.com/repos/huggingface/datasets/issues/1314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1314/comments
https://api.github.com/repos/huggingface/datasets/issues/1314/events
https://github.com/huggingface/datasets/pull/1314
759,541,937
MDExOlB1bGxSZXF1ZXN0NTM0NTMwMDE5
1,314
Add snips built in intents 2016 12
[]
closed
false
null
3
2020-12-08T15:30:19Z
2020-12-14T09:59:07Z
2020-12-14T09:59:07Z
null
This PR proposes to add the Snips.ai built-in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in the future be added as alternate configurations.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1314/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1314/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1314.diff", "html_url": "https://github.com/huggingface/datasets/pull/1314", "merged_at": "2020-12-14T09:59:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1314.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1314" }
true
[ "It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?\r\n", "Added a fraction of the real data as dummy data.", "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/5754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5754/comments
https://api.github.com/repos/huggingface/datasets/issues/5754/events
https://github.com/huggingface/datasets/pull/5754
1,668,755,035
PR_kwDODunzps5OWozh
5,754
Minor tqdm fixes
[]
closed
false
null
2
2023-04-14T18:15:14Z
2023-04-20T15:27:58Z
2023-04-20T15:21:00Z
null
`GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (missed these bars in https://github.com/huggingface/datasets/pull/5560). Also, this PR modifies the single-proc `save_to_disk` to fix the issue with the TQDM bar not accumulating the progress in the multi-shard setting (again, this bug was introduced by me in the linked PR 😎)
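For context, the pattern this PR moves to looks roughly like the following generic sketch (not the exact code from the PR): one shared bar, used as a context manager so it is always closed, accumulating progress across shards:

```python
from tqdm import tqdm

num_shards, rows_per_shard = 4, 250

with tqdm(total=num_shards * rows_per_shard, unit=" examples", desc="Saving the dataset") as pbar:
    for _ in range(num_shards):
        for _ in range(rows_per_shard):
            pbar.update(1)  # one bar accumulates progress over all shards
```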
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5754/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5754/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5754.diff", "html_url": "https://github.com/huggingface/datasets/pull/5754", "merged_at": "2023-04-20T15:21:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/5754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5754" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/1397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1397/comments
https://api.github.com/repos/huggingface/datasets/issues/1397/events
https://github.com/huggingface/datasets/pull/1397
760,467,501
MDExOlB1bGxSZXF1ZXN0NTM1Mjk0MDgz
1,397
datasets card-creator link added
[]
closed
false
null
0
2020-12-09T16:15:18Z
2020-12-09T16:47:48Z
2020-12-09T16:47:48Z
null
The dataset card creator link has been added. Link: https://huggingface.co/datasets/card-creator/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1397/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1397/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1397.diff", "html_url": "https://github.com/huggingface/datasets/pull/1397", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1397.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1397" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1621/comments
https://api.github.com/repos/huggingface/datasets/issues/1621/events
https://github.com/huggingface/datasets/pull/1621
772,940,417
MDExOlB1bGxSZXF1ZXN0NTQ0MTE4MTAz
1,621
updated dutch_social.py for loading jsonl (lines instead of list) files
[]
closed
false
null
0
2020-12-22T13:18:11Z
2020-12-23T11:51:51Z
2020-12-23T11:51:51Z
null
The data_loader is modified to load files on the fly. Earlier, it was reading the entire file and then processing the records. Please refer to the previous PR #1321.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1621/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1621.diff", "html_url": "https://github.com/huggingface/datasets/pull/1621", "merged_at": "2020-12-23T11:51:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/1621.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1621" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1733/comments
https://api.github.com/repos/huggingface/datasets/issues/1733/events
https://github.com/huggingface/datasets/issues/1733
784,903,002
MDU6SXNzdWU3ODQ5MDMwMDI=
1,733
connection issue with glue, what is the data url for glue?
[]
closed
false
null
1
2021-01-13T08:37:40Z
2021-08-04T18:13:55Z
2021-08-04T18:13:55Z
null
Hi, my code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test whether the issue is on the side of the machines I am working on? Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1733/timeline
null
completed
null
null
false
[ "Hello @juliahane, which config of GLUE causes you trouble?\r\nThe URLs are defined in the dataset script source code: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py" ]
https://api.github.com/repos/huggingface/datasets/issues/1599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1599/comments
https://api.github.com/repos/huggingface/datasets/issues/1599/events
https://github.com/huggingface/datasets/pull/1599
770,431,389
MDExOlB1bGxSZXF1ZXN0NTQyMTgwMzI4
1,599
add Korean Sarcasm Dataset
[]
closed
false
null
0
2020-12-17T22:49:56Z
2021-09-17T16:54:32Z
2020-12-23T17:25:59Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1599/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1599/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1599.diff", "html_url": "https://github.com/huggingface/datasets/pull/1599", "merged_at": "2020-12-23T17:25:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1599.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1599" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5381/comments
https://api.github.com/repos/huggingface/datasets/issues/5381/events
https://github.com/huggingface/datasets/issues/5381
1,504,498,387
I_kwDODunzps5ZrNLT
5,381
Wrong URL for the_pile dataset
[]
closed
false
null
1
2022-12-20T12:40:14Z
2023-02-15T16:24:57Z
2023-02-15T16:24:57Z
null
### Describe the bug When trying to load `the_pile` dataset from the library, I get a `FileNotFound` error. ### Steps to reproduce the bug Steps to reproduce: Run: ``` from datasets import load_dataset dataset = load_dataset("the_pile") ``` I get the output: "name": "FileNotFoundError", "message": "Unable to resolve any data file that matches '['**']' at /storage/store/work/lgrinszt/memorization/the_pile with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']" ### Expected behavior `the_pile` dataset should be dowloaded. ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.27 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5381/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5381/timeline
null
completed
null
null
false
[ "Hi! This error can happen if there is a local file/folder with the same name as the requested dataset. And to avoid it, rename the local file/folder.\r\n\r\nSoon, it will be possible to explicitly request a Hub dataset as follows:https://github.com/huggingface/datasets/issues/5228#issuecomment-1313494020" ]
https://api.github.com/repos/huggingface/datasets/issues/5753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5753/comments
https://api.github.com/repos/huggingface/datasets/issues/5753/events
https://github.com/huggingface/datasets/issues/5753
1,668,659,536
I_kwDODunzps5jdblQ
5,753
[IterableDatasets] Add column followed by interleave datasets gives bogus outputs
[]
closed
false
null
1
2023-04-14T17:32:31Z
2023-04-14T17:45:52Z
2023-04-14T17:36:37Z
null
### Describe the bug If we add a new column to our iterable dataset using the hack described in #5752, when we then interleave datasets the new column is pinned to one value. ### Steps to reproduce the bug What we're going to do here is: 1. Load an iterable dataset in streaming mode (`original_dataset`) 2. Add a new column to this dataset using the hack in #5752 (`modified_dataset_1`) 3. Create another new dataset by adding a column with the same key but different values (`modified_dataset_2`) 4. Interleave our new datasets (`modified_dataset_1` + `modified_dataset_2`) 5. Check the value of our newly added column (`new_column`) ```python from datasets import load_dataset # load an iterable dataset original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # now add a new column to our streaming dataset using our hack from 5752 name = "new_column" column = [f"new dataset 1, row {i}" for i in range(50)] new_features = original_dataset.features.copy() new_features[name] = new_features["file"] # I know that "file" has the right column type to match our new feature def add_column_fn(example, idx): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} modified_dataset_1 = original_dataset.map(add_column_fn, with_indices=True, features=new_features) # now create a second modified dataset using the same trick column = [f"new dataset 2, row {i}" for i in range(50)] def add_column_fn(example, idx): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} modified_dataset_2 = original_dataset.map(add_column_fn, with_indices=True, features=new_features) # interleave these datasets interleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2]) # now check what the value of the added column is for i, sample in enumerate(interleaved_dataset): print(sample["new_column"]) if i == 10: break ``` **Print Output:** ``` new dataset 2, row 0 new dataset 2, row 0 new dataset 2, row 1 new dataset 2, row 1 new dataset 2, row 2 new dataset 2, row 2 new dataset 2, row 3 new dataset 2, row 3 new dataset 2, row 4 new dataset 2, row 4 new dataset 2, row 5 ``` We see that we only get outputs from our second dataset. ### Expected behavior We should interleave between dataset 1 and 2 and increase in row value: ``` new dataset 1, row 0 new dataset 2, row 0 new dataset 1, row 1 new dataset 2, row 1 new dataset 1, row 2 new dataset 2, row 2 ... ``` ### Environment info - datasets version: 2.10.2.dev0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5753/timeline
null
completed
null
null
false
[ "Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new co...
https://api.github.com/repos/huggingface/datasets/issues/5749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5749/comments
https://api.github.com/repos/huggingface/datasets/issues/5749/events
https://github.com/huggingface/datasets/issues/5749
1,668,016,321
I_kwDODunzps5ja-jB
5,749
AttributeError: 'Version' object has no attribute 'match'
[]
closed
false
null
8
2023-04-14T10:48:06Z
2023-06-30T11:31:17Z
2023-04-18T12:57:08Z
null
### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') ### Expected behavior This is error trace: Downloading and preparing dataset visual_genome/region_descriptions_v1.2.0 to C:/Users/Acer/.cache/huggingface/datasets/visual_genome/region_descriptions_v1.2.0/1.2.0/136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') File ~\.conda\envs\aai\Lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:964, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 962 split_dict = SplitDict(dataset_name=self.name) 963 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 964 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 966 # Checksums verification 967 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File 
~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:377, in VisualGenome._split_generators(self, dl_manager) 375 def _split_generators(self, dl_manager): 376 # Download image meta datas. --> 377 image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url) 378 image_metadatas_file = os.path.join( 379 image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url) 380 ) 382 # Download annotations File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:328, in VisualGenomeConfig.image_metadata_url(self) 326 @property 327 def image_metadata_url(self): --> 328 if not self.version.match(_LATEST_VERSIONS["image_metadata"]): 329 logger.warning( 330 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions." 331 ) 332 return f"{_BASE_ANNOTATION_URL}/image_data.json.zip" ### Environment info datasets 2.11.0 python 3.11.3
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5749/timeline
null
completed
null
null
false
[ "I got the same error, and the official website for visual genome is down. Did you solve this problem? ", "I am in the same situation now :( ", "Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.", "The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.",...
https://api.github.com/repos/huggingface/datasets/issues/5690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5690/comments
https://api.github.com/repos/huggingface/datasets/issues/5690/events
https://github.com/huggingface/datasets/issues/5690
1,649,289,883
I_kwDODunzps5iTiqb
5,690
raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
5
2023-03-31T08:22:22Z
2023-07-21T14:21:57Z
2023-07-21T14:21:57Z
null
### Describe the bug rta.sh Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ### Reproduction _No response_ ### Logs ```shell Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ``` ### System info ```shell - huggingface_hub version: 0.13.2 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/appuser/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 1.7.1 - Jinja2: N/A - Graphviz: N/A - Pydot: N/A - Pillow: 9.3.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets - HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5690/timeline
null
completed
null
null
false
[ "Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of ...
https://api.github.com/repos/huggingface/datasets/issues/3778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3778/comments
https://api.github.com/repos/huggingface/datasets/issues/3778/events
https://github.com/huggingface/datasets/issues/3778
1,147,898,946
I_kwDODunzps5Ea4xC
3,778
Not be able to download dataset - "Newsroom"
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
2
2022-02-23T10:15:50Z
2022-02-23T17:05:04Z
2022-02-23T13:26:40Z
null
Hello, I tried to download the **newsroom** dataset but it didn't work for me. It told me to **download it manually**, but the manual download link also didn't work; it only shows an ad or something similar. If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your Google Drive link. It would be a great help! Thanks, Darshan Tank
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3778/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3778/timeline
null
completed
null
null
false
[ "Hi @Darshan2104, thanks for reporting.\r\n\r\nPlease note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.\r\n\r\nApparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.n...
https://api.github.com/repos/huggingface/datasets/issues/4271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4271/comments
https://api.github.com/repos/huggingface/datasets/issues/4271/events
https://github.com/huggingface/datasets/issues/4271
1,224,404,403
I_kwDODunzps5I-u2z
4,271
A typo in docs of datasets.disable_progress_bar
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-05-03T17:44:56Z
2022-05-04T06:58:35Z
2022-05-04T06:58:35Z
null
## Describe the bug In the docs of v2.1.0, in `datasets.disable_progress_bar`, "enable" should be replaced with "disable".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4271/timeline
null
completed
null
null
false
[ "Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)" ]
https://api.github.com/repos/huggingface/datasets/issues/2706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2706/comments
https://api.github.com/repos/huggingface/datasets/issues/2706/events
https://github.com/huggingface/datasets/pull/2706
950,606,561
MDExOlB1bGxSZXF1ZXN0Njk1MTI3ODgz
2,706
Update BibTeX entry
[]
closed
false
null
0
2021-07-22T12:29:29Z
2021-07-22T12:43:00Z
2021-07-22T12:43:00Z
null
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2706/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2706.diff", "html_url": "https://github.com/huggingface/datasets/pull/2706", "merged_at": "2021-07-22T12:43:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/2706.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2706" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2361/comments
https://api.github.com/repos/huggingface/datasets/issues/2361/events
https://github.com/huggingface/datasets/pull/2361
891,982,808
MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4
2,361
Preserve dtype for numpy/torch/tf/jax arrays
[]
closed
false
null
6
2021-05-14T14:45:23Z
2021-08-17T08:30:04Z
2021-08-17T08:30:04Z
null
Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2361/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2361.diff", "html_url": "https://github.com/huggingface/datasets/pull/2361", "merged_at": "2021-08-17T08:30:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/2361.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2361" }
true
[ "Hi @lhoestq, \r\nIt turns out that pyarrow `ListArray` are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible can we convert that `ListArray` output to list inorder for tests to pass? Under the hood it'll maintain the dtype as that of numpy...
https://api.github.com/repos/huggingface/datasets/issues/5383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5383/comments
https://api.github.com/repos/huggingface/datasets/issues/5383/events
https://github.com/huggingface/datasets/issues/5383
1,507,293,968
I_kwDODunzps5Z13sQ
5,383
IterableDataset missing column_names, differs from Dataset interface
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
null
6
2022-12-22T05:27:02Z
2023-03-13T19:03:33Z
2023-03-13T19:03:33Z
null
### Describe the bug The documentation on [Stream](https://huggingface.co/docs/datasets/v1.18.2/stream.html) seems to imply that IterableDataset behaves just like a Dataset. However, examples like ``` dataset.map(augment_data, batched=True, remove_columns=dataset.column_names, ...) ``` will not work because `.column_names` does not exist on IterableDataset. I cannot find any clear explanation on why this is not available, is it an oversight? We do have `iterable_ds.features` available. ### Steps to reproduce the bug See above ### Expected behavior Dataset and IterableDataset would be expected to have the same interface, with any differences noted in the documentation. ### Environment info n/a
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5383/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5383/timeline
null
completed
null
null
false
[ "Another example is that `IterableDataset.map` does not have `fn_kwargs`, among other arguments. It makes it harder to convert code from Dataset to IterableDataset.", "Hi! `fn_kwargs` was added to `IterableDataset.map` in `datasets 2.5.0`, so please update your installation (`pip install -U datasets`) to use it.\...
https://api.github.com/repos/huggingface/datasets/issues/6070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6070/comments
https://api.github.com/repos/huggingface/datasets/issues/6070/events
https://github.com/huggingface/datasets/pull/6070
1,820,836,330
PR_kwDODunzps5WXDLc
6,070
Fix Quickstart notebook link
[]
closed
false
null
3
2023-07-25T17:48:37Z
2023-07-25T18:19:01Z
2023-07-25T18:10:16Z
null
Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6070/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6070.diff", "html_url": "https://github.com/huggingface/datasets/pull/6070", "merged_at": "2023-07-25T18:10:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/6070.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6070" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/3898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3898/comments
https://api.github.com/repos/huggingface/datasets/issues/3898/events
https://github.com/huggingface/datasets/pull/3898
1,166,778,250
PR_kwDODunzps40UWG4
3,898
Create README.md for WER metric
[]
closed
false
null
4
2022-03-11T19:29:09Z
2022-03-15T17:05:00Z
2022-03-15T17:04:59Z
null
Proposing a draft WER metric card, @lhoestq I'm not very certain about "Values from popular papers" -- I don't know ASR very well, what do you think of the examples I found?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3898/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3898/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3898.diff", "html_url": "https://github.com/huggingface/datasets/pull/3898", "merged_at": "2022-03-15T17:04:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/3898.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3898" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3898). All of your documentation changes will be reflected on that endpoint.", "For ASR you can probably ping @patrickvonplaten ", "Ah only noticed now that ` # Values from popular papers` is from a template. @lhoestq @sashav...
https://api.github.com/repos/huggingface/datasets/issues/2972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2972/comments
https://api.github.com/repos/huggingface/datasets/issues/2972/events
https://github.com/huggingface/datasets/issues/2972
1,007,808,714
I_kwDODunzps48EfDK
2,972
OSError: Not enough disk space.
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
5
2021-09-27T07:41:22Z
2022-08-29T23:21:36Z
2021-09-28T06:43:15Z
null
## Describe the bug I'm trying to download the `natural_questions` dataset from the Internet, and I've specified a cache_dir which is located on a mounted disk and has enough disk space. However, even though there is enough space there, the disk space check still reports that the root `/` disk does not have enough space. The file system structure is as below. The root `/` has `115G` of disk space available, and `sda1` is mounted to `/mnt`, which has `1.2T` of disk space available: ``` / /mnt/sda1/path/to/args.dataset_cache_dir ``` ## Steps to reproduce the bug ```python dataset_config = DownloadConfig( cache_dir=os.path.abspath(args.dataset_cache_dir), resume_download=True, ) dataset = load_dataset("natural_questions", download_config=dataset_config) ``` ## Expected results The dataset downloads without an error. ## Actual results The following error is raised: ``` OSError: Not enough disk space. Needed: 134.92 GiB (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Ubuntu 18.04 - Python version: 3.8.10 - PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2972/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2972/timeline
null
completed
null
null
false
[ "Maybe we can change the disk space calculating API from `shutil.disk_usage` to `os.statvfs` in UNIX-like system, which can provide correct results.\r\n```\r\nstatvfs = os.statvfs('path')\r\navail_space_bytes = statvfs.f_frsize * statvfs.f_bavail\r\n```", "Hi @qqaatw, thanks for reporting.\r\n\r\nCould you pleas...
https://api.github.com/repos/huggingface/datasets/issues/2731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2731/comments
https://api.github.com/repos/huggingface/datasets/issues/2731/events
https://github.com/huggingface/datasets/pull/2731
956,087,452
MDExOlB1bGxSZXF1ZXN0Njk5NzQwMjg5
2,731
Adding to_tf_dataset method
[]
closed
false
null
7
2021-07-29T18:10:25Z
2021-09-16T13:50:54Z
2021-09-16T13:50:54Z
null
Oh my **god** do not merge this yet, it's just a draft. I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the whole dataset in memory!), correct shuffling, variable-length batches to reduce compute, and correct support for unusual padding. It achieves that by calling the tokenizer `pad` method in the middle of a TF compute graph via a very hacky call to `tf.py_function`, which is heretical but seems to work. A number of issues need to be resolved before it's ready to merge, though: 1) Is a MixIn the right way to do this? Do other classes besides `arrow_dataset.Dataset` need this method too? 2) Needs an argument to support constant-length batches for TPU training - this is easy to add and I'll do it soon. 3) Needs the user to supply the list of columns to drop from the arrow `Dataset`. Is there some automatic way to get the columns we want, or see which columns were added by the tokenizer? 4) Assumes the label column is always present and always called "label" - this is probably not great, but I'm not sure what the 'correct' thing to do here is.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2731/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2731.diff", "html_url": "https://github.com/huggingface/datasets/pull/2731", "merged_at": "2021-09-16T13:50:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2731" }
true
[ "This seems to be working reasonably well in testing, and performance is way better. `tf.py_function` has been dropped for an input generator, but I moved as much of the code as possible outside the generator to allow TF to compile it correctly. I also avoid `tf.RaggedTensor` at all costs, and do the shuffle in the...
https://api.github.com/repos/huggingface/datasets/issues/3280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3280/comments
https://api.github.com/repos/huggingface/datasets/issues/3280/events
https://github.com/huggingface/datasets/pull/3280
1,054,766,828
PR_kwDODunzps4ulgye
3,280
Fix bookcorpusopen RAM usage
[]
closed
false
null
0
2021-11-16T11:27:52Z
2021-11-17T15:53:28Z
2021-11-16T13:34:30Z
null
Each document is a full book, so the default arrow writer batch size of 10,000 is too big, and it can fill up RAM quickly before flushing the first batch to disk. I changed the batch size to 256 so that at most ~100MB of memory is used. Fix #3167.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3280/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3280.diff", "html_url": "https://github.com/huggingface/datasets/pull/3280", "merged_at": "2021-11-16T13:34:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/3280.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3280" }
true
[]
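A hedged sketch of the knob the fix above adjusts: a dataset script's builder can lower `DEFAULT_WRITER_BATCH_SIZE` so large examples are flushed to disk more often. The builder below is a toy stand-in, not the actual bookcorpusopen script.
```python
# Hedged sketch: a GeneratorBasedBuilder that flushes every 256 examples instead
# of the default 10,000, keeping memory bounded when each example is very large.
import datasets

class BooksBuilder(datasets.GeneratorBasedBuilder):
    DEFAULT_WRITER_BATCH_SIZE = 256  # each example is a whole book

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        for i in range(3):
            yield i, {"text": f"book {i}"}
```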
https://api.github.com/repos/huggingface/datasets/issues/3845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3845/comments
https://api.github.com/repos/huggingface/datasets/issues/3845/events
https://github.com/huggingface/datasets/pull/3845
1,161,739,483
PR_kwDODunzps40DvqX
3,845
add RMSE and MAE metrics.
[]
closed
false
null
6
2022-03-07T17:53:24Z
2022-03-09T16:50:03Z
2022-03-09T16:50:03Z
null
This PR adds RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error) to the metrics API. Both implementations are based on scikit-learn. Feature request: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608). Please suggest any changes if required. Thank you.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3845/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3845.diff", "html_url": "https://github.com/huggingface/datasets/pull/3845", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3845" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3845). All of your documentation changes will be reflected on that endpoint.", "@mariosasko I've reopened it here. Please suggest any changes if required. Thank you.", "Thanks for suggestions. :) I have added update the KWARG...
https://api.github.com/repos/huggingface/datasets/issues/3393
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3393/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3393/comments
https://api.github.com/repos/huggingface/datasets/issues/3393/events
https://github.com/huggingface/datasets/issues/3393
1,073,189,777
I_kwDODunzps4_95OR
3,393
Common Voice Belarusian Dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b",...
open
false
null
0
2021-12-07T10:37:02Z
2021-12-09T15:56:03Z
null
null
## Adding a Dataset - **Name:** *Common Voice Belarusian Dataset* - **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)* - **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)* - **Motivation:** *It has more than 7GB of data, so it will be great to have it in this package so anyone can try to train something for Belarusian language.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3393/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3393/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/5344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5344/comments
https://api.github.com/repos/huggingface/datasets/issues/5344/events
https://github.com/huggingface/datasets/pull/5344
1,485,628,319
PR_kwDODunzps5E2BPN
5,344
Clean up Dataset and DatasetDict
[]
closed
false
null
1
2022-12-09T00:02:08Z
2022-12-13T00:56:07Z
2022-12-13T00:53:02Z
null
This PR cleans up the docstrings for the other half of the methods in `Dataset` and finishes `DatasetDict`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5344/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5344/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5344.diff", "html_url": "https://github.com/huggingface/datasets/pull/5344", "merged_at": "2022-12-13T00:53:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/5344.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5344" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5880/comments
https://api.github.com/repos/huggingface/datasets/issues/5880/events
https://github.com/huggingface/datasets/issues/5880
1,719,090,101
I_kwDODunzps5mdzu1
5,880
load_dataset from s3 file system through streaming can't iterate data
[]
open
false
null
4
2023-05-22T07:40:27Z
2023-05-26T12:52:08Z
null
null
### Describe the bug I have a JSON file in my s3 file system (MinIO). I can use load_dataset to get the file link, but I can't iterate over it. <img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0"> <img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1"> We can change 4 lines to fix this bug; please check whether this is OK. <img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3"> ### Steps to reproduce the bug 1. store a file in your s3 file system 2. use load_dataset to read it through streaming 3. iterate over it ### Expected behavior The file can be iterated over successfully. ### Environment info - `datasets` version: 2.12.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5880/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5880/timeline
null
null
null
null
false
[ "This sounds related to #5281.\r\n\r\nCan you try passing `storage_options=s3_client.storage_options` instead passing it to `use_auth_token=` ?", "I tried `storage_options` before, but it doesn't work, I checked our source code and I found that we even didn't pass this parameter to the following process. if I use...
https://api.github.com/repos/huggingface/datasets/issues/787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/787/comments
https://api.github.com/repos/huggingface/datasets/issues/787/events
https://github.com/huggingface/datasets/pull/787
734,070,162
MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz
787
Adding nli_tr dataset
[]
closed
false
null
1
2020-11-01T21:49:44Z
2020-11-12T19:06:02Z
2020-11-12T19:06:02Z
null
Hello, in this pull request we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented in a full paper at EMNLP 2020 this month. [[arXiv link]](https://arxiv.org/pdf/2004.14963.pdf) The dataset is a neural machine translation of the SNLI and MultiNLI datasets into Turkish, so we followed a format similar to the original datasets hosted on the HuggingFace datasets hub. Following the interface of the GLUE dataset, which exposes multiple datasets through a single entry point, our dataset is designed to be accessed as follows: ``` from datasets import load_dataset multinli_tr = load_dataset("nli_tr", "multinli_tr") snli_tr = load_dataset("nli_tr", "snli_tr") ``` Thanks for your help in reviewing our pull request.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/787/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/787/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/787.diff", "html_url": "https://github.com/huggingface/datasets/pull/787", "merged_at": "2020-11-12T19:06:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/787.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/787" }
true
[ "Thank you @lhoestq for the time you take to review our pull request. We appreciate your help.\r\n\r\nWe've made the changes you described. Hope that it is ready for being merged. Please let me know if you have any additional requests for revisions. " ]
https://api.github.com/repos/huggingface/datasets/issues/4214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4214/comments
https://api.github.com/repos/huggingface/datasets/issues/4214/events
https://github.com/huggingface/datasets/pull/4214
1,214,572,430
PR_kwDODunzps42utC5
4,214
Skip checksum computation in Imagefolder by default
[]
closed
false
null
1
2022-04-25T14:10:41Z
2022-05-03T15:28:32Z
2022-05-03T15:21:29Z
null
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4214/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4214/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4214.diff", "html_url": "https://github.com/huggingface/datasets/pull/4214", "merged_at": "2022-05-03T15:21:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/4214.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4214" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4916/comments
https://api.github.com/repos/huggingface/datasets/issues/4916/events
https://github.com/huggingface/datasets/issues/4916
1,357,076,940
I_kwDODunzps5Q41nM
4,916
Apache Beam unable to write the downloaded wikipedia dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-08-31T09:39:25Z
2022-08-31T10:53:19Z
2022-08-31T10:53:19Z
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. It downloads the file but while saving it in hugging face cache it fails to write. This happens for any available date of any language in wikipedia dump. I had raised another issue earlier #4915 but probably was not that clear and the solution provider misunderstood my problem. Hence raising one more issue. Any help is appreciated. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in 
create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in 
apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File 
"apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", 
line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4916/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4916/timeline
null
completed
null
null
false
[ "See:\r\n- #4915" ]
https://api.github.com/repos/huggingface/datasets/issues/2708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2708/comments
https://api.github.com/repos/huggingface/datasets/issues/2708/events
https://github.com/huggingface/datasets/issues/2708
951,092,660
MDU6SXNzdWU5NTEwOTI2NjA=
2,708
QASC: incomplete training set
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2021-07-22T21:59:44Z
2021-07-23T13:30:07Z
2021-07-23T13:30:07Z
null
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instances)}") for x in instances: print(json.dumps(x)) load_instances('test') load_instances('validation') load_instances('train') ``` ## results For test and validation, we can see the examples in the output (which is good!): ``` split: test - size: 920 {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Anthax", "under water", "uterus", "wombs", "two", "moles", "live", "embryo"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What type of birth do therian mammals have? (A) Anthax (B) under water (C) uterus (D) wombs (E) two (F) moles (G) live (H) embryo", "id": "3C44YUNSI1OBFBB8D36GODNOZN9DPA", "question": "What type of birth do therian mammals have?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Corvidae", "arthropods", "birds", "backbones", "keratin", "Jurassic", "front paws", "Parakeets."]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "By what time had mouse-sized viviparous mammals evolved? (A) Corvidae (B) arthropods (C) birds (D) backbones (E) keratin (F) Jurassic (G) front paws (H) Parakeets.", "id": "3B1NLC6UGZVERVLZFT7OUYQLD1SGPZ", "question": "By what time had mouse-sized viviparous mammals evolved?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Reduced friction", "causes infection", "vital to a good life", "prevents water loss", "camouflage from consumers", "Protection against predators", "spur the growth of the plant", "a smooth surface"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What does a plant's skin do? (A) Reduced friction (B) causes infection (C) vital to a good life (D) prevents water loss (E) camouflage from consumers (F) Protection against predators (G) spur the growth of the plant (H) a smooth surface", "id": "3QRYMNZ7FYGITFVSJET3PS0F4S0NT9", "question": "What does a plant's skin do?"} ... ``` However, only a few instances are loaded for the training split, which is not correct. ## Environment info - `datasets` version: '1.10.2' - Platform: MaxOS - Python version:3.7 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2708/timeline
null
completed
null
null
false
[ "Hi @danyaljj, thanks for reporting.\r\n\r\nUnfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:\r\n```ipython\r\nIn [10]: ds[\"train\"]\r\nOut[10]:\r\nDataset({\r\n features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_quest...
https://api.github.com/repos/huggingface/datasets/issues/5731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
https://api.github.com/repos/huggingface/datasets/issues/5731/events
https://github.com/huggingface/datasets/pull/5731
1,662,012,913
PR_kwDODunzps5N_7Un
5,731
Temporarily pin fsspec
[]
closed
false
null
2
2023-04-11T08:33:15Z
2023-04-11T08:57:45Z
2023-04-11T08:47:55Z
null
Fix #5730.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "html_url": "https://github.com/huggingface/datasets/pull/5731", "merged_at": "2023-04-11T08:47:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/3736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3736/comments
https://api.github.com/repos/huggingface/datasets/issues/3736/events
https://github.com/huggingface/datasets/pull/3736
1,140,134,483
PR_kwDODunzps4y7rMR
3,736
Local paths in common voice
[]
closed
false
null
2
2022-02-16T15:01:29Z
2022-09-21T14:58:38Z
2022-02-22T09:13:43Z
null
Continuation of https://github.com/huggingface/datasets/pull/3664: - pass the `streaming` parameter to _split_generator - update @anton-l's code to use this parameter for `common_voice` - add a comment to explain why we use `download_and_extract` in non-streaming and `iter_archive` in streaming Now the `common_voice` dataset has a local path back in `ds["path"]`, and this field is `None` in streaming mode. cc @patrickvonplaten @anton-l @albertvillanova Fix #3663.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3736/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3736.diff", "html_url": "https://github.com/huggingface/datasets/pull/3736", "merged_at": "2022-02-22T09:13:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3736.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3736" }
true
[ "I just changed to `dl_manager.is_streaming` rather than an additional parameter `streaming` that has to be handled by the DatasetBuilder class - this way the streaming logic doesn't interfere with the base builder's code.\r\n\r\nI think it's better this way, but let me know if you preferred the previous way and I ...
https://api.github.com/repos/huggingface/datasets/issues/5736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5736/comments
https://api.github.com/repos/huggingface/datasets/issues/5736/events
https://github.com/huggingface/datasets/issues/5736
1,662,286,061
I_kwDODunzps5jFHjt
5,736
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
[]
open
false
null
1
2023-04-11T11:29:15Z
2023-04-21T15:27:40Z
null
null
### Describe the bug Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run. ### Steps to reproduce the bug I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1. 1. Set up a script `my_dataset.py` to generate and load an offline dataset. 2. Load it with ```python ds = datasets.load_dataset(path=/path/to/my_dataset.py, name='toy', data_dir=/path/to/my_dataset.py, cache_dir=cache_dir, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, ) ``` It loads fine ``` Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data. ``` 3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error ``` 2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json Traceback (most recent call last): File "<string>", line 2, in <module> File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset builder_instance.download_and_prepare( File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir shutil.rmtree(dirname) File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c' ``` ### Expected behavior Regenerate the dataset from scratch and reload it. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5736/timeline
null
null
null
null
false
[ "Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?" ]
https://api.github.com/repos/huggingface/datasets/issues/5404
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5404/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5404/comments
https://api.github.com/repos/huggingface/datasets/issues/5404/events
https://github.com/huggingface/datasets/issues/5404
1,517,566,331
I_kwDODunzps5adDl7
5,404
Better integration of BIG-bench
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
1
2023-01-03T15:37:57Z
2023-02-09T20:30:26Z
null
null
### Feature request Ideally, it would be nice to have a maintained PyPI package for `bigbench`. ### Motivation We'd like to allow anyone to access, explore and use any task. ### Your contribution @lhoestq has opened an issue in their repo: - https://github.com/google/BIG-bench/issues/906
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5404/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5404/timeline
null
null
null
null
false
[ "Hi, I made my version : https://huggingface.co/datasets/tasksource/bigbench" ]
https://api.github.com/repos/huggingface/datasets/issues/597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/597/comments
https://api.github.com/repos/huggingface/datasets/issues/597/events
https://github.com/huggingface/datasets/issues/597
697,112,029
MDU6SXNzdWU2OTcxMTIwMjk=
597
Indices incorrect with multiprocessing
[]
closed
false
null
2
2020-09-09T19:50:56Z
2020-09-10T11:03:37Z
2020-09-10T11:03:37Z
null
When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2) # [0, 1] # [0, 1] # [0, 1, 2, 3, 4] # [0, 1, 2, 3, 4] ``` As you can see, the subset passed to each thread is indexed from 0 to N which doesn't reflect their positions in `d`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/597/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/597/timeline
null
completed
null
null
false
[ "I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?", "Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we ar...
https://api.github.com/repos/huggingface/datasets/issues/6078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6078/comments
https://api.github.com/repos/huggingface/datasets/issues/6078/events
https://github.com/huggingface/datasets/issues/6078
1,822,501,472
I_kwDODunzps5soSpg
6,078
resume_download with streaming=True
[]
open
false
null
2
2023-07-26T14:08:22Z
2023-07-26T21:10:40Z
null
null
### Describe the bug I used: ``` dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, split="train" ) ``` Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume download from step 1_000_000 without re-streaming all the first 1 million docs of the dataset? `download_config=DownloadConfig(resume_download=True)` seems to not work with streaming=True. ### Steps to reproduce the bug ``` from datasets import load_dataset, DownloadConfig dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, # optional split="train", download_config=DownloadConfig(resume_download=True) ) # interrupt the run and try to relaunch it => this restarts from scratch ``` ### Expected behavior I would expect a parameter to start streaming from a given index in the dataset. ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
null
null
null
null
false
[ "Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ", "Ok thank you for your answer", "I'm closing this as a duplicate of #5380" ]
https://api.github.com/repos/huggingface/datasets/issues/4157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4157/comments
https://api.github.com/repos/huggingface/datasets/issues/4157/events
https://github.com/huggingface/datasets/pull/4157
1,202,239,622
PR_kwDODunzps42H2Wf
4,157
Fix formatting in BLEU metric card
[]
closed
false
null
1
2022-04-12T18:29:51Z
2022-04-13T14:30:25Z
2022-04-13T14:16:34Z
null
Fix #4148
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4157/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4157/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4157.diff", "html_url": "https://github.com/huggingface/datasets/pull/4157", "merged_at": "2022-04-13T14:16:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4157.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4157" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/58
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/58/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/58/comments
https://api.github.com/repos/huggingface/datasets/issues/58/events
https://github.com/huggingface/datasets/pull/58
614,362,308
MDExOlB1bGxSZXF1ZXN0NDE0OTM0NTY4
58
Aborted PR - Fix tests
[]
closed
false
null
1
2020-05-07T21:40:19Z
2020-05-07T21:48:01Z
2020-05-07T21:41:27Z
null
@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look? ```bash (datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ============================================================================= test session starts ============================================================================= platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python cachedir: .pytest_cache rootdir: /Users/thomwolf/Documents/GitHub/datasets plugins: xdist-1.31.0, forked-1.1.3 collected 1 item tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR =================================================================================== ERRORS ==================================================================================== ____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________ file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'> download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True) download_kwargs = {} def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder: r""" Download/extract/cache a dataset to add to the lib from a path or url which can be: - a path to a local directory containing the dataset processing python script - an url to a S3 directory with a dataset processing python script Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks) and using cloudpickle (among other things). 
Return: tuple of the unique id associated to the dataset the local path to the dataset """ if download_config is None: download_config = DownloadConfig(**download_kwargs) download_config.extract_compressed_file = True download_config.force_extract = True > name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py" E AttributeError: module 'tests.test_dataset_common' has no attribute 'split' src/nlp/load.py:169: AttributeError ============================================================================== warnings summary =============================================================================== /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/latest/warnings.html =========================================================================== short test summary info =========================================================================== ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split' ========================================================================= 1 warning, 1 error in 3.63s ========================================================================= ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/58/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/58/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/58.diff", "html_url": "https://github.com/huggingface/datasets/pull/58", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/58.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/58" }
true
[ "Wait I messed up my branch, let me clean this." ]
https://api.github.com/repos/huggingface/datasets/issues/3171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3171/comments
https://api.github.com/repos/huggingface/datasets/issues/3171/events
https://github.com/huggingface/datasets/issues/3171
1,037,728,059
I_kwDODunzps492nk7
3,171
Raise exceptions instead of using assertions for control flow
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
4
2021-10-27T18:26:52Z
2021-12-23T16:40:37Z
2021-12-23T16:40:37Z
null
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcoming change would be replacing assertions with proper exceptions. The only type of assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with the `assert` statements (located under `datasets` and `src/datasets`), so when working on this, to manage the PR size, only modify 4-5 files at most before submitting a PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3171/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3171/timeline
null
completed
null
null
false
[ "Adding the remaining tasks for this issue to help new code contributors. \r\n$ cd src/datasets && ack assert -lc \r\n- [x] commands/convert.py:1\r\n- [x] arrow_reader.py:3\r\n- [x] load.py:7\r\n- [x] utils/py_utils.py:2\r\n- [x] features/features.py:9\r\n- [x] arrow_writer.py:7\r\n- [x] search.py:6\r\n- [x] table...
https://api.github.com/repos/huggingface/datasets/issues/4918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4918/comments
https://api.github.com/repos/huggingface/datasets/issues/4918/events
https://github.com/huggingface/datasets/issues/4918
1,357,242,757
I_kwDODunzps5Q5eGF
4,918
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
2
2022-08-31T12:09:07Z
2022-09-05T21:36:34Z
2022-09-05T16:32:44Z
null
### Link https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines ### Description After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist. ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4918/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4918/timeline
null
completed
null
null
false
[ "Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n<img width=\"1508\" alt=\"Capture d’écran 2022-09-05 à 18 31 22\" src=\"https://user-images.githubusercontent.com/1676121/188489762-0ed86a7e-dfb3-46e8-a125-43b815a2c6f4.p...
https://api.github.com/repos/huggingface/datasets/issues/5842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5842/comments
https://api.github.com/repos/huggingface/datasets/issues/5842/events
https://github.com/huggingface/datasets/issues/5842
1,705,510,602
I_kwDODunzps5lqAbK
5,842
Remove columns in iterable dataset
[]
closed
false
null
3
2023-05-11T03:48:46Z
2023-06-21T16:36:42Z
2023-06-21T16:36:41Z
null
### Feature request Right now, remove_columns() produces a NotImplementedError for iterable style datasets ### Motivation It would be great to have the same functionality irrespective of whether one is using an iterable or a map-style dataset ### Your contribution hope and courage.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5842/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5842/timeline
null
completed
null
null
false
[ "Transferring this issue as it's related to the 🤗 Datasets library ", "Hi @surya-narayanan! Could you provide some code snippet?", "This method has been recently added to the `IterableDataset`, so you need to update the `datasets`' installation (`pip install -U datasets`) to use it." ]
https://api.github.com/repos/huggingface/datasets/issues/2066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2066/comments
https://api.github.com/repos/huggingface/datasets/issues/2066/events
https://github.com/huggingface/datasets/pull/2066
833,480,551
MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz
2,066
Fix docstring rendering of Dataset/DatasetDict.from_csv args
[]
closed
false
null
0
2021-03-17T07:23:10Z
2021-03-17T09:21:21Z
2021-03-17T09:21:21Z
null
Fix the docstring rendering of Dataset/DatasetDict.from_csv args.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2066/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2066.diff", "html_url": "https://github.com/huggingface/datasets/pull/2066", "merged_at": "2021-03-17T09:21:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2066.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2066" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1582/comments
https://api.github.com/repos/huggingface/datasets/issues/1582/events
https://github.com/huggingface/datasets/pull/1582
768,776,617
MDExOlB1bGxSZXF1ZXN0NTQxMTEwODU1
1,582
Adding wiki lingua dataset as new branch
[]
closed
false
null
0
2020-12-16T11:53:07Z
2020-12-17T18:06:46Z
2020-12-17T18:06:45Z
null
Adding the dataset as new branch as advised here: #1470
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1582/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1582/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1582.diff", "html_url": "https://github.com/huggingface/datasets/pull/1582", "merged_at": "2020-12-17T18:06:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1582.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1582" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1716/comments
https://api.github.com/repos/huggingface/datasets/issues/1716/events
https://github.com/huggingface/datasets/pull/1716
782,819,006
MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5
1,716
Add Hatexplain Dataset
[]
closed
false
null
0
2021-01-10T13:30:01Z
2021-01-18T14:21:42Z
2021-01-18T14:21:42Z
null
Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1716/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1716.diff", "html_url": "https://github.com/huggingface/datasets/pull/1716", "merged_at": "2021-01-18T14:21:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/1716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1716" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4882/comments
https://api.github.com/repos/huggingface/datasets/issues/4882/events
https://github.com/huggingface/datasets/pull/4882
1,348,913,665
PR_kwDODunzps49sRtv
4,882
Fix language tags resource file
[]
closed
false
null
1
2022-08-24T06:06:01Z
2022-08-24T13:58:33Z
2022-08-24T13:58:30Z
null
This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08). This PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639 1 or 2 codes; no script/region/variant suffixes). See: - #4753
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4882/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4882.diff", "html_url": "https://github.com/huggingface/datasets/pull/4882", "merged_at": "2022-08-24T13:58:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4882" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4882). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/243/comments
https://api.github.com/repos/huggingface/datasets/issues/243/events
https://github.com/huggingface/datasets/pull/243
631,735,848
MDExOlB1bGxSZXF1ZXN0NDI4NTY2MTEy
243
Specify utf-8 encoding for GLUE
[]
closed
false
null
1
2020-06-05T16:33:00Z
2020-06-17T21:16:06Z
2020-06-08T08:42:01Z
null
#242 This makes the GLUE-MNLI dataset readable on my machine, not sure if it's a Windows-only bug.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/243/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/243/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/243.diff", "html_url": "https://github.com/huggingface/datasets/pull/243", "merged_at": "2020-06-08T08:42:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/243" }
true
[ "Thanks for fixing the encoding :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4762/comments
https://api.github.com/repos/huggingface/datasets/issues/4762/events
https://github.com/huggingface/datasets/pull/4762
1,321,261,733
PR_kwDODunzps48RE56
4,762
Improve features resolution in streaming
[]
closed
false
null
2
2022-07-28T17:28:11Z
2022-09-09T17:17:39Z
2022-09-09T17:15:30Z
null
`IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this and used the order of columns from the data themselves. It was causing some inconsistencies in the dataset viewer as well. I also fixed `interleave_datasets` that was not filling missing columns with None, because it was not using the columns from `IterableDataset._resolve_features` cc @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4762/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4762.diff", "html_url": "https://github.com/huggingface/datasets/pull/4762", "merged_at": "2022-09-09T17:15:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4762.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4762" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Just took your comment into account @mariosasko , let me know if it's good for you now :)" ]
https://api.github.com/repos/huggingface/datasets/issues/2235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2235/comments
https://api.github.com/repos/huggingface/datasets/issues/2235/events
https://github.com/huggingface/datasets/pull/2235
861,040,716
MDExOlB1bGxSZXF1ZXN0NjE3Nzc0NDUw
2,235
Update README.md
[]
closed
false
null
0
2021-04-19T08:21:02Z
2021-04-19T12:49:19Z
2021-04-19T12:49:19Z
null
Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2235/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2235/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2235.diff", "html_url": "https://github.com/huggingface/datasets/pull/2235", "merged_at": "2021-04-19T12:49:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2235.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2235" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2813/comments
https://api.github.com/repos/huggingface/datasets/issues/2813/events
https://github.com/huggingface/datasets/issues/2813
973,470,580
MDU6SXNzdWU5NzM0NzA1ODA=
2,813
Remove compression from xopen
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
1
2021-08-18T09:35:59Z
2021-08-23T15:59:14Z
2021-08-23T15:59:14Z
null
We implemented support for streaming with 2 requirements: - transparent use for the end user: just needs to pass the parameter `streaming=True` - no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming In order to fulfill these requirements, streaming implementation patched some Python functions: - the `open(urlpath)` function was patched with `fsspec.open(urlpath)` - the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip://`), which are required by `fsspec.open` Recently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,... Under the hood, the implementation: - passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)` Some concerns have been raised about passing the parameter `compression` to `fsspec.open`: - https://github.com/huggingface/datasets/pull/2786#discussion_r689550254 - #2811 The main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in `oscar` dataset: ```python gzip.open(open(urlpath ``` While this is true: - it is not natural/usual to call `open` inside `gzip.open` (never seen this before) - indeed, this was recently (2 months ago) coded that way in `datasets` in order to allow streaming support (with previous implementation of streaming) In this particular case, there is a natural fix solution: #2811: - Revert the `open` inside the `gzip.open` (change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath` - Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression="gzip"` Are there other issues apart from this? Note that there is an issue just because the open inside of the gzip.open. There is no issue in the other cases where datasets loading scripts use just - `gzip.open` - `open` (after having called dl_manager.download_and_extract) TODO: - [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic. - For the moment, there are only 3 datasets where we have an `open` inside a `gzip.open`: - oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July) - In the 3 datasets, the only reason to put an open inside a gzip.open was indeed to force supporting streaming - [ ] If this is indeed an issue, which are the possible alternatives? Pros/cons?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2813/timeline
null
completed
null
null
false
[ "After discussing with @lhoestq, a reasonable alternative:\r\n- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: \r\n `bz2::http://domain.org/filename.bz2`\r\n- `xopen` parses the `urlpath` a...
https://api.github.com/repos/huggingface/datasets/issues/1698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1698/comments
https://api.github.com/repos/huggingface/datasets/issues/1698/events
https://github.com/huggingface/datasets/pull/1698
781,152,561
MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3
1,698
Update Coached Conv Pref DatasetCard
[]
closed
false
null
1
2021-01-07T09:07:16Z
2021-01-08T17:04:33Z
2021-01-08T17:04:32Z
null
Update Coached Conversation Preference DatasetCard
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1698/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1698/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1698.diff", "html_url": "https://github.com/huggingface/datasets/pull/1698", "merged_at": "2021-01-08T17:04:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1698.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1698" }
true
[ "Really cool!\r\n\r\nCan you add some task tags for `dialogue-modeling` (under `sequence-modeling`) and `parsing` (under `structured-prediction`)?" ]
https://api.github.com/repos/huggingface/datasets/issues/3423
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3423/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3423/comments
https://api.github.com/repos/huggingface/datasets/issues/3423/events
https://github.com/huggingface/datasets/issues/3423
1,078,049,638
I_kwDODunzps5AQbtm
3,423
Data is duplicated when setting num_workers > 1 with streaming data
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "fef2c0", "default": false, "descrip...
closed
false
null
14
2021-12-13T03:43:17Z
2022-12-14T16:04:22Z
2022-12-14T16:04:22Z
null
## Describe the bug The data is repeated num_works times when we load_dataset with streaming and set num_works > 1 when construct dataloader ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import pandas as pd import numpy as np import os from datasets import load_dataset from torch.utils.data import DataLoader from tqdm import tqdm import shutil NUM_OF_USER = 1000000 NUM_OF_ACTION = 50000 NUM_OF_SEQUENCE = 10000 NUM_OF_FILES = 32 NUM_OF_WORKERS = 16 if __name__ == "__main__": shutil.rmtree("./dataset") for i in range(NUM_OF_FILES): sequence_data = pd.DataFrame( { "imei": np.random.randint(1, NUM_OF_USER, size=NUM_OF_SEQUENCE), "sequence": np.random.randint(1, NUM_OF_ACTION, size=NUM_OF_SEQUENCE) } ) if not os.path.exists("./dataset"): os.makedirs("./dataset") sequence_data.to_csv(f"./dataset/sequence_data_{i}.csv", index=False) dataset = load_dataset("csv", data_files=[os.path.join("./dataset",file) for file in os.listdir("./dataset") if file.endswith(".csv")], split="train", streaming=True).with_format("torch") data_loader = DataLoader(dataset, batch_size=1024, num_workers=NUM_OF_WORKERS) result = pd.DataFrame() for i, batch in tqdm(enumerate(data_loader)): result = pd.concat([result, pd.DataFrame(batch)], axis=0) result.to_csv(f"num_work_{NUM_OF_WORKERS}.csv", index=False) ``` ## Expected results data do not duplicate ## Actual results data duplicate NUM_OF_WORKERS = 16 ![image](https://user-images.githubusercontent.com/16486492/145748707-9d2df25b-2f4f-4d7b-a83e-242be4fc8934.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:datasets==1.14.0 - Platform:transformers==4.11.3 - Python version:3.8 - PyArrow version:
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3423/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3423/timeline
null
completed
null
null
false
[ "Hi ! Thanks for reporting :)\r\n\r\nWhen using a PyTorch's data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.\r\n\r\nWe can probably fix this in `datasets` by checking `torch.utils.data.get_...
https://api.github.com/repos/huggingface/datasets/issues/2894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2894/comments
https://api.github.com/repos/huggingface/datasets/issues/2894/events
https://github.com/huggingface/datasets/pull/2894
993,375,654
MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5
2,894
Fix COUNTER dataset
[]
closed
false
null
0
2021-09-10T16:07:29Z
2021-09-10T16:27:45Z
2021-09-10T16:27:44Z
null
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2894/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2894/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2894.diff", "html_url": "https://github.com/huggingface/datasets/pull/2894", "merged_at": "2021-09-10T16:27:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2894.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2894" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5773/comments
https://api.github.com/repos/huggingface/datasets/issues/5773/events
https://github.com/huggingface/datasets/issues/5773
1,675,984,633
I_kwDODunzps5j5X75
5,773
train_dataset does not implement __len__
[]
open
false
null
9
2023-04-20T04:37:05Z
2023-07-19T20:33:13Z
null
null
When training with data preprocessed by `datasets`, I get the following warning, and it prevents me from setting the number of epochs: `ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5773/timeline
null
null
null
null
false
[ "Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?", "this is a detail error info from transformers:\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\",...
https://api.github.com/repos/huggingface/datasets/issues/3202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3202/comments
https://api.github.com/repos/huggingface/datasets/issues/3202/events
https://github.com/huggingface/datasets/issues/3202
1,043,213,660
I_kwDODunzps4-Li1c
3,202
Add mIoU metric
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
1
2021-11-03T08:42:32Z
2022-06-01T17:39:05Z
2022-06-01T17:39:04Z
null
**Is your feature request related to a problem? Please describe.** Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html). Semantic segmentation (which is the task of labeling every pixel of an image with a corresponding class) is typically evaluated using the mean Intersection over Union (mIoU). Together with the upcoming Image Feature, adding this metric could be very handy when creating example scripts to fine-tune any Transformer-based model on a semantic segmentation dataset. An implementation can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/504965184c3e6bc9ec43af54237129ef21981a5f/mmseg/core/evaluation/metrics.py#L132) for instance.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3202/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3202/timeline
null
completed
null
null
false
[ "Resolved via https://github.com/huggingface/datasets/pull/3745." ]
https://api.github.com/repos/huggingface/datasets/issues/1028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1028/comments
https://api.github.com/repos/huggingface/datasets/issues/1028/events
https://github.com/huggingface/datasets/pull/1028
755,712,854
MDExOlB1bGxSZXF1ZXN0NTMxMzc0MTYw
1,028
Add ASSET dataset for text simplification evaluation
[]
closed
false
null
1
2020-12-03T00:28:29Z
2020-12-17T10:03:06Z
2020-12-03T16:34:37Z
null
Adding the ASSET dataset from https://github.com/facebookresearch/asset One config for the simplification data, one for the human ratings of quality. The README.md borrows from that written by @juand-r
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1028/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1028/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1028.diff", "html_url": "https://github.com/huggingface/datasets/pull/1028", "merged_at": "2020-12-03T16:34:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/1028.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1028" }
true
[ "Nice, thanks @yjernite !!" ]
https://api.github.com/repos/huggingface/datasets/issues/3950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3950/comments
https://api.github.com/repos/huggingface/datasets/issues/3950/events
https://github.com/huggingface/datasets/issues/3950
1,171,560,585
I_kwDODunzps5F1JiJ
3,950
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "descript...
closed
false
null
1
2022-03-16T21:14:11Z
2022-06-10T20:47:26Z
2022-06-10T20:47:26Z
null
## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch") model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error. ## Actual results ``` 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__ return self._get_iterator() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__ w.start() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset' 0%| | 0/1000 [00:00<?, ?it/s] ``` This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset` (Note that you have to do with_format("torch") or you get an exception because the dataset has no len) However, any lambdas etc used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together) Note that if you bypass this crash you get another crash. (I'll file a separate bug). ## Environment info - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3950/timeline
null
completed
null
null
false
[ "Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also t...
https://api.github.com/repos/huggingface/datasets/issues/3650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3650/comments
https://api.github.com/repos/huggingface/datasets/issues/3650/events
https://github.com/huggingface/datasets/pull/3650
1,118,537,429
PR_kwDODunzps4xyr2o
3,650
Allow 'to_json' to run in unordered fashion in order to lower memory footprint
[]
open
false
null
4
2022-01-30T13:23:19Z
2022-07-06T15:19:50Z
null
null
I'm using `to_json(..., num_proc=num_proc, compression='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point, and eventually I see OOM. I'm guessing one process starts to take a long time for a specific batch, and so the other processes keep accumulating their results in memory. In order to flush memory, I propose we optionally use `imap_unordered`. This will prevent one process from blocking the other ones. The reasoning is that indices are rarely relevant, and if one wants to keep an index, one can still create another column and reconstruct it from there.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3650/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3650.diff", "html_url": "https://github.com/huggingface/datasets/pull/3650", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3650.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3650" }
true
[ "Hi @thomasw21, I remember suggesting `imap_unordered` to @lhoestq at that time to speed up `to_json` further but after trying `pool_imap` on multiple datasets (>9GB) , memory utilisation was almost constant and we decided to go ahead with that only. \r\n\r\n1. Did you try this without `gzip`? Because `gzip` featu...
https://api.github.com/repos/huggingface/datasets/issues/3912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3912/comments
https://api.github.com/repos/huggingface/datasets/issues/3912/events
https://github.com/huggingface/datasets/pull/3912
1,168,720,098
PR_kwDODunzps40aekr
3,912
add draft of registering function for pandas
[]
closed
false
null
3
2022-03-14T17:54:29Z
2023-01-24T12:57:35Z
2023-01-24T12:57:10Z
null
This PR adds a register function for `pandas`. It allows pushing `DataFrame` objects directly to the hub and, in return, loading datasets from the hub into a `DataFrame`. The motivation for this integration is to enable the vast number of `pandas` users to easily push `DataFrame`s to the hub. Here is an example: ```python import pandas as pd from datasets import register_pandas register_pandas() # push to hub df = pd.DataFrame.from_dict({"test": [1,2,3]}) df.push_to_hub("my_test") # load from hub df_retrieved = pd.DataFrame.load_from_hub("lvwerra/my_test") ``` It follows a similar philosophy to the `tqdm` [integration](https://github.com/tqdm/tqdm#pandas-integration). Also see [this issue](https://github.com/pandas-dev/pandas/issues/46000) on the `pandas` repository. This is just a rough draft of what such an integration could look like, but I would appreciate some feedback on this: is this something you would like to add to the library, and is this the way to go? cc @lhoestq @albertvillanova @julien-c
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3912/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3912/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/3912.diff", "html_url": "https://github.com/huggingface/datasets/pull/3912", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3912.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3912" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3912). All of your documentation changes will be reflected on that endpoint.", "That's cool ! Though I would expect such an integration to only require `huggingface_hub`, not the full `datasets` library. \r\n Indeed if users wa...
https://api.github.com/repos/huggingface/datasets/issues/797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/797/comments
https://api.github.com/repos/huggingface/datasets/issues/797/events
https://github.com/huggingface/datasets/issues/797
735,420,332
MDU6SXNzdWU3MzU0MjAzMzI=
797
Token classification labels are strings and we don't have the list of labels
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "72f99f", "default": fals...
closed
false
null
4
2020-11-03T15:33:30Z
2022-02-14T15:41:54Z
2022-02-14T15:41:53Z
null
Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the like are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some type that gives easy access to the underlying labels. The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object, which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/797/timeline
null
completed
null
null
false
[ "Indeed. Pinging @stefan-it here if he want to give an expert opinion :)", "Related is https://github.com/huggingface/datasets/pull/636", "Should definitely be a ClassLabel 👍 ", "Already done." ]
https://api.github.com/repos/huggingface/datasets/issues/482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/482/comments
https://api.github.com/repos/huggingface/datasets/issues/482/events
https://github.com/huggingface/datasets/issues/482
674,851,147
MDU6SXNzdWU2NzQ4NTExNDc=
482
Bug: dataset.map() is frozen on ELI5
[]
closed
false
null
8
2020-08-07T08:23:35Z
2023-04-06T09:39:59Z
2020-08-11T23:55:15Z
null
Hi Huggingface Team! Thank you once again for this amazing repo. I have tried to prepare ELI5 for training with T5, based on [this wonderful notebook by Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` **freezes** within the first few hundred examples. By contrast, this works totally fine on SQuAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 produce the frozen process, and trying various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) gives the same result. Reproducible code can be found in [this colab notebook](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQuAD, so the problem is likely due to ELI5 somehow. ---------------------------------------- **More info:** instead of `map`, if I run a `for` loop and apply the function myself, there is no error and it finishes within 10 seconds. However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value pair to the `dataset` object). I also notice that SQuAD texts are quite clean while ELI5 texts contain many special characters; not sure if this is the cause?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/482/timeline
null
completed
null
null
false
[ "This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look", "I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip in...
https://api.github.com/repos/huggingface/datasets/issues/3654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3654/comments
https://api.github.com/repos/huggingface/datasets/issues/3654/events
https://github.com/huggingface/datasets/pull/3654
1,119,717,475
PR_kwDODunzps4x2kiX
3,654
Better TQDM output
[]
closed
false
null
1
2022-01-31T17:22:43Z
2022-02-03T15:55:34Z
2022-02-03T15:55:33Z
null
This PR does the following:
* if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that need to be generated (in `builder.py`)
* fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tqdm/commit/f7722edecc3010cb35cc1c923ac4850a76336f82)
* adds the missing `drop_last_batch` and `with_ranks` params to `DatasetDict.map`
* correctly computes the number of iterations in `map` and the CSV/JSON loader when `batched=True` to fix `tqdm` progress bars
* removes the `bool(logging.get_verbosity() == logging.NOTSET)` condition (i.e. simplifies `bool(logging.get_verbosity() == logging.NOTSET) or not utils.is_progress_bar_enabled()` to `not utils.is_progress_bar_enabled()`) and uses `utils.is_progress_bar_enabled` to check if `tqdm` output is enabled (this comment from @stas00 explains why the `bool(logging.get_verbosity() == logging.NOTSET)` check is problematic: https://github.com/huggingface/transformers/issues/14889#issue-1087318463)

Fix #2630
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3654/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3654.diff", "html_url": "https://github.com/huggingface/datasets/pull/3654", "merged_at": "2022-02-03T15:55:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3654.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3654" }
true
[ "@lhoestq I've created a notebook for you to see the difference: https://colab.research.google.com/drive/1by3EqnoKvC2p-yKW4lPDGOFOZHyGVyeQ?usp=sharing.\r\n\r\nFeel free to suggest better descriptions for the progress bars. \r\n\r\nIf everything looks good, think we can merge." ]
https://api.github.com/repos/huggingface/datasets/issues/5132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5132/comments
https://api.github.com/repos/huggingface/datasets/issues/5132/events
https://github.com/huggingface/datasets/issues/5132
1,413,607,306
I_kwDODunzps5UQe-K
5,132
Deprecate `num_proc` parameter in `DownloadManager.extract`
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
null
5
2022-10-18T17:41:05Z
2022-10-25T15:56:46Z
2022-10-25T15:56:46Z
null
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5132/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5132/timeline
null
completed
null
null
false
[ "I can take this! #self-assign", "#self-assign", "@lazarust i'm already working on this issue :smile: ", "#self-assign", "hey @mariosasko , i made a pr for this issue. Could you please review it." ]
https://api.github.com/repos/huggingface/datasets/issues/6037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6037/comments
https://api.github.com/repos/huggingface/datasets/issues/6037/events
https://github.com/huggingface/datasets/issues/6037
1,805,887,184
I_kwDODunzps5ro6bQ
6,037
Documentation links to examples are broken
[]
closed
false
null
2
2023-07-15T04:54:50Z
2023-07-17T22:35:14Z
2023-07-17T15:10:32Z
null
### Describe the bug

The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data are in csv files)

### Steps to reproduce the bug

Click on links to examples from latest documentation

### Expected behavior

Links should be up to date - it might be more stable to link to https://huggingface.co/datasets/ag_news/blob/main/ag_news.py

### Environment info

dataset v1.2.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6037/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6037/timeline
null
completed
null
null
false
[ "These docs are outdated (version 1.2.1 is over two years old). Please refer to [this](https://huggingface.co/docs/datasets/dataset_script) version instead.\r\n\r\nInitially, we hosted datasets in this repo, but now you can find them [on the HF Hub](https://huggingface.co/datasets) (e.g. the [`ag_news`](https://hug...
https://api.github.com/repos/huggingface/datasets/issues/3351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3351/comments
https://api.github.com/repos/huggingface/datasets/issues/3351/events
https://github.com/huggingface/datasets/pull/3351
1,068,094,873
PR_kwDODunzps4vO5AS
3,351
Add VCTK dataset
[]
closed
false
null
9
2021-12-01T08:13:17Z
2022-02-28T09:22:03Z
2021-12-28T15:05:08Z
null
Fixes #1837.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3351/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3351/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3351.diff", "html_url": "https://github.com/huggingface/datasets/pull/3351", "merged_at": "2021-12-28T15:05:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3351.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3351" }
true
[ "Hello @patrickvonplaten, I hope it's okay to ping you with a (dumb) question!\r\n\r\nI've been trying to get `dl_manager.download_and_extract(_DL_URL)` to work with no avail. I verified that this is a problem on two different machines (lab server, GCP), so I doubt it's an issue with network connectivity. Here is t...
https://api.github.com/repos/huggingface/datasets/issues/5080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5080/comments
https://api.github.com/repos/huggingface/datasets/issues/5080/events
https://github.com/huggingface/datasets/issues/5080
1,398,849,565
I_kwDODunzps5TYMAd
5,080
Use hfh for caching
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
1
2022-10-06T05:51:58Z
2022-10-06T14:26:05Z
null
null
## Is your feature request related to a problem?

As previously discussed in our meeting with @Wauplin and agreed on our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.

## Describe the solution you'd like

Due to the peculiarities of the `datasets` cache, I would propose adopting `hfh` caching system in stages. First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)

Second, we could also use `hfh` caching for data files downloaded from the Hub. Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files

## Additional context

Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache)

The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5080/timeline
null
null
null
null
false
[ "There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)" ]
https://api.github.com/repos/huggingface/datasets/issues/5860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5860/comments
https://api.github.com/repos/huggingface/datasets/issues/5860/events
https://github.com/huggingface/datasets/pull/5860
1,709,727,460
PR_kwDODunzps5QfojD
5,860
Minor tqdm optim
[]
closed
false
null
3
2023-05-15T09:49:37Z
2023-05-17T18:46:46Z
2023-05-17T18:39:35Z
null
Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`. On my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5860/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5860.diff", "html_url": "https://github.com/huggingface/datasets/pull/5860", "merged_at": "2023-05-17T18:39:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/5860.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5860" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/3175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3175/comments
https://api.github.com/repos/huggingface/datasets/issues/3175/events
https://github.com/huggingface/datasets/pull/3175
1,038,945,271
PR_kwDODunzps4t0bXw
3,175
Add docs for `to_tf_dataset`
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
2
2021-10-28T20:55:22Z
2021-11-03T15:39:36Z
2021-11-03T10:07:23Z
null
This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`:
- Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 😅).
- Add an example for loading dataset from multiple zipped CSV files to the Load section.
- Add an example for removing columns for an `IterableDataset`.
- Add graphic for visualizing streaming.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3175/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3175/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3175.diff", "html_url": "https://github.com/huggingface/datasets/pull/3175", "merged_at": "2021-11-03T10:07:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3175.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3175" }
true
[ "This looks great, thank you!", "Thanks !\r\n\r\nFor some reason the new GIF is 6MB, which is a bit heavy for an image on a website. The previous one was around 200KB though which is perfect. For a good experience we usually expect images to be less than 500KB - otherwise for users with poor connection it takes t...
https://api.github.com/repos/huggingface/datasets/issues/4249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4249/comments
https://api.github.com/repos/huggingface/datasets/issues/4249/events
https://github.com/huggingface/datasets/pull/4249
1,218,524,424
PR_kwDODunzps42742y
4,249
Support streaming XGLUE dataset
[]
closed
false
null
1
2022-04-28T10:27:23Z
2022-05-06T08:38:21Z
2022-04-28T16:08:03Z
null
Support streaming XGLUE dataset. Fix #4247. CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4249/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4249/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4249.diff", "html_url": "https://github.com/huggingface/datasets/pull/4249", "merged_at": "2022-04-28T16:08:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/4249.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4249" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]