comments_url: stringlengths, 70–70
timeline_url: stringlengths, 70–70
closed_at: stringlengths, 20–20, nullable (⌀)
performed_via_github_app: null
state_reason: stringclasses, 3 values
node_id: stringlengths, 18–32
state: stringclasses, 2 values
assignees: listlengths, 0–4
draft: bool, 2 classes
number: int64, 1.61k–6.73k
user: dict
title: stringlengths, 1–290
events_url: stringlengths, 68–68
milestone: dict
labels_url: stringlengths, 75–75
created_at: stringlengths, 20–20
active_lock_reason: null
locked: bool, 1 class
assignee: dict
pull_request: dict
id: int64, 771M–2.18B
labels: listlengths, 0–4
url: stringlengths, 61–61
comments: listlengths, 0–30
repository_url: stringclasses, 1 value
author_association: stringclasses, 3 values
body: stringlengths, 0–228k, nullable (⌀)
updated_at: stringlengths, 20–20
html_url: stringlengths, 49–51
reactions: dict
is_pull_request: bool, 2 classes
https://api.github.com/repos/huggingface/datasets/issues/3346/comments
https://api.github.com/repos/huggingface/datasets/issues/3346/timeline
2021-12-14T14:39:05Z
null
completed
I_kwDODunzps4_osbt
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
3,346
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tianjianjiang", "id": 4812544, "login": "tianjianjiang", "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "type": "User", "url": "https://api.github.com/users/tianjianjiang" }
Failed to convert `string` with pyarrow for QED since 1.15.0
https://api.github.com/repos/huggingface/datasets/issues/3346/events
null
https://api.github.com/repos/huggingface/datasets/issues/3346/labels{/name}
2021-11-30T20:11:42Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
null
1,067,632,365
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3346
[ "Scratch that, probably the old and incompatible usage of dataset builder from promptsource.", "Actually, re-opening this issue cause the error persists\r\n\r\n```python\r\n>>> load_dataset(\"qed\")\r\nDownloading and preparing dataset qed/qed (download: 13.43 MiB, generated: 9.70 MiB, post-processed: Unknown size, total: 23.14 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/qed/qed/1.0.0/47d8b6f033393aa520a8402d4baf2d6bdc1b2fbde3dc156e595d2ef34caf7d75...\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 2228.64it/s]\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py\", line 1669, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 594, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 681, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 1083, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 468, in finalize\r\n self.write_examples_on_file()\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 339, in write_examples_on_file\r\n pa_array = 
pa.array(typed_sequence)\r\n File \"pyarrow/array.pxi\", line 229, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 125, in __arrow_array__\r\n out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)\r\n File \"pyarrow/array.pxi\", line 315, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Could not convert 'in' with type str: tried to convert to boolean\r\n```\r\n\r\nEnvironment (datasets and pyarrow):\r\n\r\n```bash\r\n(promptsource) victor_huggingface_co@victor-dev:~/promptsource$ datasets-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 1.16.1\r\n- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.1\r\n```\r\n```bash\r\n(promptsource) victor_huggingface_co@victor-dev:~/promptsource$ pip show pyarrow\r\nName: pyarrow\r\nVersion: 6.0.1\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License, Version 2.0\r\nLocation: /home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages\r\nRequires: numpy\r\nRequired-by: streamlit, datasets\r\n```" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Describe the bug Loading QED was fine until 1.15.0. related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670 Not sure where the root cause is, but here are some candidates: - #3158 - #3120 - #3196 - #2891 ## Steps to reproduce the bug ```python load_dataset("qed") ``` ## Expected results Loading completed. ## Actual results ```shell ArrowInvalid: Could not convert in with type str: tried to convert to boolean Traceback: File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/script_runner.py", line 354, in _run_script exec(code, module.__dict__) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/app.py", line 260, in <module> dataset = get_dataset(dataset_key, str(conf_option.name) if conf_option else None) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 543, in wrapped_func return get_or_create_cached_value() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 527, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/utils.py", line 49, in get_dataset builder_instance.download_and_prepare() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 1106, in _prepare_split num_examples, num_bytes = writer.finalize() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 456, in finalize self.write_examples_on_file() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 325, in write_examples_on_file 
pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 121, in __arrow_array__ out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type) File "pyarrow/array.pxi", line 305, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.0, 1.16.1 - Platform: macOS 1.15.7 or above - Python version: 3.7.12 and 3.9 - PyArrow version: 3.0.0, 5.0.0, 6.0.1
2021-12-14T14:39:05Z
https://github.com/huggingface/datasets/issues/3346
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3346/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3345/comments
https://api.github.com/repos/huggingface/datasets/issues/3345/timeline
2021-12-01T17:53:15Z
null
completed
I_kwDODunzps4_oqIn
closed
[]
null
3,345
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tianjianjiang", "id": 4812544, "login": "tianjianjiang", "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "type": "User", "url": "https://api.github.com/users/tianjianjiang" }
Failed to download species_800 from Google Drive zip file
https://api.github.com/repos/huggingface/datasets/issues/3345/events
null
https://api.github.com/repos/huggingface/datasets/issues/3345/labels{/name}
2021-11-30T20:00:28Z
null
false
null
null
1,067,622,951
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3345
[ "Hi,\r\n\r\nthe dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?", "> Hi,\r\n> \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI have tried that many times with both load_dataset() and a browser almost simultaneously. The browser always works for me while load_dataset() fails.", "@mariosasko \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI've tried yet again just a moment ago. This time I realize that, the step `(... post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976...` and the one after seem unstable. If I want to retry, I will have to delete it (and probably other cache lock files). It **_sometimes_** works.\r\n\r\nBut I didn't try `download_mode=\"force_redownload\"` yet.\r\n\r\nAnyway, I suppose this isn't really a pressing issue for the time being, so I'm going to close this. Thank you.\r\n\r\n" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Describe the bug One can manually download the zip file on Google Drive, but `load_dataset()` cannot. related: #3248 ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> s800 = load_dataset("species_800") ``` ## Expected results species_800 downloaded. ## Actual results ```shell Downloading: 5.68kB [00:00, 1.22MB/s] Downloading: 2.70kB [00:00, 691kB/s] Downloading and preparing dataset species800/species_800 (download: 17.36 MiB, generated: 3.53 MiB, post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976... 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/species_800/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976/species_800.py", line 104, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File 
"/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in map_nested for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in <listcomp> for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ ``` ## Environment info <!-- You can run the command `datasets-cli env` and 
copy-and-paste its output below. --> - `datasets` version: 1.14,0 1.15.0, 1.16.1 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
2021-12-01T17:53:15Z
https://github.com/huggingface/datasets/issues/3345
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3345/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3344/comments
https://api.github.com/repos/huggingface/datasets/issues/3344/timeline
2021-12-01T19:35:32Z
null
null
PR_kwDODunzps4vNJwd
closed
[]
false
3,344
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
Add ArrayXD docs
https://api.github.com/repos/huggingface/datasets/issues/3344/events
null
https://api.github.com/repos/huggingface/datasets/issues/3344/labels{/name}
2021-11-30T18:53:31Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3344.diff", "html_url": "https://github.com/huggingface/datasets/pull/3344", "merged_at": "2021-12-01T19:35:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/3344.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3344" }
1,067,567,603
[]
https://api.github.com/repos/huggingface/datasets/issues/3344
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Documents support for dynamic first dimension in `ArrayXD` from #2891, and explain the `ArrayXD` feature in general. Let me know if I'm missing anything @lhoestq :)
2021-12-01T20:16:03Z
https://github.com/huggingface/datasets/pull/3344
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3344/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3343/comments
https://api.github.com/repos/huggingface/datasets/issues/3343/timeline
2021-12-01T11:27:58Z
null
null
PR_kwDODunzps4vM8yB
closed
[]
false
3,343
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Better error message when download fails
https://api.github.com/repos/huggingface/datasets/issues/3343/events
null
https://api.github.com/repos/huggingface/datasets/issues/3343/labels{/name}
2021-11-30T17:38:50Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3343.diff", "html_url": "https://github.com/huggingface/datasets/pull/3343", "merged_at": "2021-12-01T11:27:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3343.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3343" }
1,067,505,507
[]
https://api.github.com/repos/huggingface/datasets/issues/3343
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails. In particular the error now shows: - the error from the HEAD request if there's one - otherwise the response code of the HEAD request I also added an error to tell users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized). While paying around with this I also fixed a minor issue with the `force_download` parameter that was not always taken into account
2021-12-01T11:27:59Z
https://github.com/huggingface/datasets/pull/3343
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3343/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3342/comments
https://api.github.com/repos/huggingface/datasets/issues/3342/timeline
2021-12-14T14:50:00Z
null
null
PR_kwDODunzps4vM3wh
closed
[]
false
3,342
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tianjianjiang", "id": 4812544, "login": "tianjianjiang", "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "type": "User", "url": "https://api.github.com/users/tianjianjiang" }
Fix ASSET dataset data URLs
https://api.github.com/repos/huggingface/datasets/issues/3342/events
null
https://api.github.com/repos/huggingface/datasets/issues/3342/labels{/name}
2021-11-30T17:13:30Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3342.diff", "html_url": "https://github.com/huggingface/datasets/pull/3342", "merged_at": "2021-12-14T14:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/3342.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3342" }
1,067,481,390
[]
https://api.github.com/repos/huggingface/datasets/issues/3342
[ "> Hi @tianjianjiang, thanks for the fix.\r\n> The links should also be updated in the `dataset_infos.json` file.\r\n> The failing tests are due to the missing tag in the header of the `README.md` file:\r\n\r\nHi @albertvillanova, thank you for the info! My apologies for the messy PR.\r\n" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that.
2021-12-14T14:50:00Z
https://github.com/huggingface/datasets/pull/3342
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3342/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3341/comments
https://api.github.com/repos/huggingface/datasets/issues/3341/timeline
2022-01-26T14:47:37Z
null
completed
I_kwDODunzps4_n_zh
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
3,341
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
Mirror the canonical datasets to the Hugging Face Hub
https://api.github.com/repos/huggingface/datasets/issues/3341/events
null
https://api.github.com/repos/huggingface/datasets/issues/3341/labels{/name}
2021-11-30T16:42:05Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
1,067,449,569
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/3341
[ "I created a GitHub project to keep track of what needs to be done:\r\nhttps://github.com/huggingface/datasets/projects/3\r\n\r\nI also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub", "I understand that the datasets are mirrored on the Hub now, right? Might I close @lhoestq @SBrandeis?" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
- [ ] create a repo on https://hf.co/datasets for every canonical dataset - [ ] on every commit related to a dataset, update the hf.co repo See https://github.com/huggingface/moon-landing/pull/1562 @SBrandeis: I let you edit this description if needed to precise the intent.
2022-01-26T14:47:37Z
https://github.com/huggingface/datasets/issues/3341
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3341/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3340/comments
https://api.github.com/repos/huggingface/datasets/issues/3340/timeline
2021-12-01T11:27:30Z
null
null
PR_kwDODunzps4vMP6Z
closed
[]
false
3,340
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix JSON ClassLabel casting for integers
https://api.github.com/repos/huggingface/datasets/issues/3340/events
null
https://api.github.com/repos/huggingface/datasets/issues/3340/labels{/name}
2021-11-30T14:19:54Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3340.diff", "html_url": "https://github.com/huggingface/datasets/pull/3340", "merged_at": "2021-12-01T11:27:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/3340.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3340" }
1,067,292,636
[]
https://api.github.com/repos/huggingface/datasets/issues/3340
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed currently it tries to convert the strings to integers without even checking if the data are not integers already. For example this currently fails: ```python from datasets import load_dataset, Features, ClassLabel path = "data.json" f = Features({"a": ClassLabel(names=["neg", "pos"])}) d = load_dataset("json", data_files=path, features=f) ``` data.json ```json {"a": 0} {"a": 1} ``` I fixed that by adding a line that checks the type of the JSON data before trying to convert them cc @albertvillanova let me know if it sounds good to you
2021-12-01T11:27:30Z
https://github.com/huggingface/datasets/pull/3340
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3340/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3339/comments
https://api.github.com/repos/huggingface/datasets/issues/3339/timeline
null
null
null
I_kwDODunzps4_k_pN
open
[]
null
3,339
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881" }
to_tf_dataset fails on TPU
https://api.github.com/repos/huggingface/datasets/issues/3339/events
null
https://api.github.com/repos/huggingface/datasets/issues/3339/labels{/name}
2021-11-30T00:50:52Z
null
false
null
null
1,066,662,477
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3339
[ "This might be related to https://github.com/tensorflow/tensorflow/issues/38762 , what do you think @Rocketknight1 ?\r\n> Dataset.from_generator is expected to not work with TPUs as it uses py_function underneath which is incompatible with Cloud TPU 2VM setup. If you would like to read from large datasets, maybe try to materialize it on disk and use TFRecordDataest instead.", "Hi @lhoestq @nbroad1881, I think it's very similar, yes. Unfortunately `to_tf_dataset` uses `tf.numpy_function` which can't be compiled - this is a necessary evil to load from the underlying Arrow dataset. We need to update the notebooks/examples to clarify that this won't work, or to identify a workaround. You may be able to get it to work on an actual cloud TPU VM, but those are quite new and we haven't tested it yet. ", "Thank you for the explanation. I didn't realize the nuances of `tf.numpy_function`. In this scenario, would it be better to use `export(format='tfrecord')` ? It's not quite the same, but for very large datasets that don't fit in memory it looks like it is the only option. I haven't used `export` before, but I do recall reading that there are suggestions for how big and how many tfrecords there should be to not bottleneck the TPU. It might be nice if there were a way for the `export` method to split the files up into appropriate chunk sizes depending on the size of the dataset and the number of devices. And if that is too much, it would be nice to be able to specify the number of files that would be created when using `export`. Well... maybe the user should just do the chunking themselves and call `export` a bunch of times. Whatever the case, you have been helpful. Thanks Tensorflow boy ;-) ", "Yeah, this is something we really should have a proper guide on. I'll make a note to test some things and make a 'TF TPU best practices' notebook at some point, but in the meantime I think your solution of exporting TFRecords will probably work. ", "Also: I knew that tweet would haunt me" ]
https://api.github.com/repos/huggingface/datasets
NONE
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs. ## Steps to reproduce the bug I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=sharing ## Expected results dataset from `to_tf_dataset` works in `model.fit` Right below the first error in the colab I use `tf.data.Dataset.from_tensor_slices` and `model.fit` works just fine. This is the desired outcome. ## Actual results ``` InternalError: 5 root error(s) found. (0) INTERNAL: {{function_node __inference_train_function_30558}} failed to connect to all addresses Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0: :{"created":"@1638231897.932218653","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3151,"referenced_errors":[{"created":"@1638231897.932216754","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":161,"grpc_status":14}]} [[{{node StatefulPartitionedCall}}]] [[MultiDeviceIteratorGetNextFromShard]] Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic. [[RemoteCall]] [[IteratorGetNextAsOptional]] [[tpu_compile_succeeded_assert/_14023832043698465348/_7/_439]] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0 - Tensorflow 2.7.0 - `transformers` 4.12.5
2021-12-02T14:21:27Z
https://github.com/huggingface/datasets/issues/3339
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3339/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3338/comments
https://api.github.com/repos/huggingface/datasets/issues/3338/timeline
2023-05-05T17:18:15Z
null
null
PR_kwDODunzps4vJRFM
closed
[]
true
3,338
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[WIP] Add doctests for tutorials
https://api.github.com/repos/huggingface/datasets/issues/3338/events
null
https://api.github.com/repos/huggingface/datasets/issues/3338/labels{/name}
2021-11-29T18:40:46Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3338.diff", "html_url": "https://github.com/huggingface/datasets/pull/3338", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3338.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3338" }
1,066,371,235
[]
https://api.github.com/repos/huggingface/datasets/issues/3338
[ "I manage to remove the mentions of ellipsis in the code by launching the command as follows:\r\n\r\n```\r\npython -m doctest -v docs/source/load_hub.rst -o=ELLIPSIS\r\n```\r\n\r\nThe way you put your ellipsis will only work on mac, I've adapted it for linux as well with the following:\r\n\r\n```diff\r\n >>> from datasets import load_dataset_builder\r\n >>> dataset_builder = load_dataset_builder('imdb')\r\n- >>> print(dataset_builder.cache_dir) #doctest: +ELLIPSIS\r\n- /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\n+ >>> print(dataset_builder.cache_dir)\r\n+ /.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\n```\r\n\r\nThis passes on my machine:\r\n\r\n```\r\nTrying:\r\n print(dataset_builder.cache_dir)\r\nExpecting:\r\n /.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\nok\r\n```\r\n\r\nI'm getting a last error:\r\n\r\n```py\r\nExpected:\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 3668\r\n })\r\n validation: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 408\r\n })\r\n test: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 1725\r\n })\r\n })\r\nGot:\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 3668\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 408\r\n })\r\n test: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 1725\r\n })\r\n })\r\n```\r\n\r\nBut this is due to `doctest` looking for an exact match and the list having an unordered print order. I wish `doctest` would be a bit more flexible with that." ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown. ### Issues A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle variable outputs with the `ELLIPSIS` directive. When I run doctest on the `load_hub.rst` file, doctest should recognize the expected output from the docstring, and the corresponding code sample in `load_hub.rst` should pass. I am having the same issue with handling tracebacks in the `load_dataset` function. From the docstring: ``` >>> dataset_builder.cache_dir #doctest: +ELLIPSIS /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/... ``` Test result: ``` Failed example: dataset_builder.cache_dir Expected: /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/... Got: /Users/steven/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1 ``` I am able to get the doctest to pass by adding the doctest directives (`ELLIPSIS` and `NORMALIZE_WHITESPACE`) to the code samples in the `rst` file directly. But my understanding is that these directives should also work in the docstrings of the functions. I am running the test from the root of the directory: ``` python -m doctest -v docs/source/load_hub.rst ```
2023-05-05T17:18:20Z
https://github.com/huggingface/datasets/pull/3338
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3338/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3337/comments
https://api.github.com/repos/huggingface/datasets/issues/3337/timeline
2021-12-14T10:28:54Z
null
completed
I_kwDODunzps4_jWxo
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" } ]
null
3,337
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
Typing of Dataset.__getitem__ could be improved.
https://api.github.com/repos/huggingface/datasets/issues/3337/events
null
https://api.github.com/repos/huggingface/datasets/issues/3337/labels{/name}
2021-11-29T16:20:11Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
null
1,066,232,936
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3337
[ "Hi ! Thanks for the suggestion, I didn't know about this decorator.\r\n\r\nIf you are interesting in contributing, feel free to open a pull request to add the overload methods for each typing combination :) To assign you to this issue, you can comment `#self-assign` in this thread.\r\n\r\n`Dataset.__getitem__` is defined right here: https://github.com/huggingface/datasets/blob/e6f1352fe19679de897f3d962e616936a17094f5/src/datasets/arrow_dataset.py#L1840", "#self-assign" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Describe the bug The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload) ## Steps to reproduce the bug Let's have a file `test.py` ```python from typing import List, Dict, Any from datasets import Dataset ds = Dataset.from_dict({ 'a': [1,2,3], 'b': ["1", "2", "3"] }) one_colum: List[str] = ds['a'] some_index: Dict[Any, Any] = ds[1] ``` ## Expected results Running `mypy test.py` should not give any error. ## Actual results ``` test.py:10: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "List[str]") test.py:11: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "Dict[Any, Any]") Found 2 errors in 1 file (checked 1 source file) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.1
2021-12-14T10:28:54Z
https://github.com/huggingface/datasets/issues/3337
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3337/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3336/comments
https://api.github.com/repos/huggingface/datasets/issues/3336/timeline
2023-05-16T18:24:46Z
null
null
PR_kwDODunzps4vIwUE
closed
[]
true
3,336
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays
https://api.github.com/repos/huggingface/datasets/issues/3336/events
null
https://api.github.com/repos/huggingface/datasets/issues/3336/labels{/name}
2021-11-29T15:58:59Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3336.diff", "html_url": "https://github.com/huggingface/datasets/pull/3336", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3336.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3336" }
1,066,208,436
[]
https://api.github.com/repos/huggingface/datasets/issues/3336
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Add support for multiple dynamic dimensions (e.g. `(None, None, 3)` for arbitrary sized images) and `to_pandas()` conversion for dynamic arrays. TODOs: * [ ] Cleaner code * [ ] Formatting issues (if NumPy doesn't allow broadcasting even though dtype is np.object) * [ ] Fix some issues with zero-dim tensors * [ ] Tests
2023-09-24T09:53:52Z
https://github.com/huggingface/datasets/pull/3336
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3336/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3335/comments
https://api.github.com/repos/huggingface/datasets/issues/3335/timeline
2021-12-10T10:30:15Z
null
null
PR_kwDODunzps4vISGy
closed
[]
false
3,335
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
add Speech commands dataset
https://api.github.com/repos/huggingface/datasets/issues/3335/events
null
https://api.github.com/repos/huggingface/datasets/issues/3335/labels{/name}
2021-11-29T13:52:47Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3335.diff", "html_url": "https://github.com/huggingface/datasets/pull/3335", "merged_at": "2021-12-10T10:30:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/3335.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3335" }
1,066,064,126
[]
https://api.github.com/repos/huggingface/datasets/issues/3335
[ "@anton-l ping", "@lhoestq \r\nHi Quentin! Thank you for your feedback and suggestions! πŸ€—\r\n\r\nYes, that was actually what I wanted to do next - I mean the steaming stuff :)\r\nAlso, I need to make some changes to the readme (to account for the updated features set).\r\n\r\nHopefully, I will be done by tomorrow afternoon if that's ok. \r\n", "@lhoestq Hi Quentin!\r\n\r\nI've implemented (hopefully, correctly) the streaming compatibility but the problem with the current approach is that we first need to iterate over the full archive anyway to get the list of filenames for train and validation sets (see [this](https://github.com/huggingface/datasets/pull/3335/files#diff-aeea540d136025e30a842856779e9c6485a5dc6fc9eb7fd6d3be2acd2f49b8e3R186), the same approach is implemented in TFDS version). Only after that, we can generate examples, so we cannot stream the dataset before the first iteration ends and it takes some time. It's probably not the most effective way. \r\n\r\nIf the streaming mode is turned off, this approach (with two iterations) is actually slower than the previous implementation (with archive extraction). \r\n\r\nMy suggestion is to host separate archives for each split prepared in advance. That way there would be no need for iterating over the common archive to collect train and validation filenames. @anton-l suggested to make AWS mirrors for them. I've prepared these archives, for now you can take a look at them [here](https://drive.google.com/drive/folders/1oMrZHzPgHAKprKJuvih91CM8KMSzh_pL?usp=sharing). I simplified their structure a bit so if we switch to using them, the code then should be changed (and simplified) a bit too.\r\n", "Hi ! Thanks for the changes :)\r\n\r\n> My suggestion is to host separate archives for each split prepared in advance. That way there would be no need for iterating over the common archive to collect train and validation filenames. @anton-l suggested to make AWS mirrors for them. I've prepared these archives, for now you can take a look at them here. I simplified their structure a bit so if we switch to using them, the code then should be changed (and simplified) a bit too.\r\n\r\nI agree, I just uploaded them on AWS\r\n\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_train.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_validation.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_test.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_validation.tar.gz\r\n\r\nNote that in the future we can move those files to actual repositories on the Hugging Face Hub, since we are migrating the datasets from this repository to the Hugging Face Hub (as mirrors), to make them more accessible to the community.", "@lhoestq Thank you! Gonna look at this tomorrow :)", "@lhoestq I've modified the code to fit new data format, now it works for v0.01 but doesn't work for v0.02 as the training archive is missing. Could you please create a mirror for that one too? You can find it [here](https://drive.google.com/file/d/1mPjnVMYb-VhPprGlOX8v9TBT1GT-rtcp/view?usp=sharing)\r\n\r\nAnd when it's done I'll need to regenerate all the meta / dummy stuff, and this version will be ready for a review :)", "Here you go :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_train.tar.gz", "FYI I juste merged a fix for the Windows CI error on `master`, feel free to merge `master` again into your branch", "All green ! I had to fix some minor stuff in the CI but it's good now\r\n\r\nNext step is to mark it as ready for review, and I think it's all good so we can merge πŸš€ ", "@lhoestq πŸ€—", ":tada: " ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
closes #3283
2021-12-10T10:37:21Z
https://github.com/huggingface/datasets/pull/3335
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3335/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3334/comments
https://api.github.com/repos/huggingface/datasets/issues/3334/timeline
null
null
null
I_kwDODunzps4_iZ-z
open
[]
null
3,334
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Integrate Polars library
https://api.github.com/repos/huggingface/datasets/issues/3334/events
null
https://api.github.com/repos/huggingface/datasets/issues/3334/labels{/name}
2021-11-29T12:31:54Z
null
false
null
null
1,065,983,923
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/3334
[ "If possible, a neat API could be something like `Dataset.to_polars()`, as well as `Dataset.set_format(\"polars\")`", "Note they use a \"custom\" implementation of Arrow: [Arrow2](https://github.com/jorgecarleitao/arrow2).", "Polars has grown rapidly in popularity over the last year - could you consider integrating the Polars functionality again?\r\n\r\nI don't think the \"custom\" implementation should be a barrier, it still conforms to the Arrow specification ", "Is there some direction regarding this from the HF team @lewtun ? Can conversion from polars to HF dataset be implemented with limited/zero copy? So, something like ``Dataset.from_polars()`` and ``Dataset.to_polars()`` like you mentioned. Happy to contribute if I can get some pointers on how this may be implemented." ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Check potential integration of the Polars library: https://github.com/pola-rs/polars - Benchmark: https://h2oai.github.io/db-benchmark/ CC: @thomwolf @lewtun
2023-09-12T14:53:27Z
https://github.com/huggingface/datasets/issues/3334
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 0, "heart": 7, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 12, "url": "https://api.github.com/repos/huggingface/datasets/issues/3334/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3333/comments
https://api.github.com/repos/huggingface/datasets/issues/3333/timeline
2021-12-01T03:57:48Z
null
completed
I_kwDODunzps4_f-dn
closed
[]
null
3,333
{ "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "events_url": "https://api.github.com/users/PatricYan/events{/privacy}", "followers_url": "https://api.github.com/users/PatricYan/followers", "following_url": "https://api.github.com/users/PatricYan/following{/other_user}", "gists_url": "https://api.github.com/users/PatricYan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PatricYan", "id": 38966558, "login": "PatricYan", "node_id": "MDQ6VXNlcjM4OTY2NTU4", "organizations_url": "https://api.github.com/users/PatricYan/orgs", "received_events_url": "https://api.github.com/users/PatricYan/received_events", "repos_url": "https://api.github.com/users/PatricYan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PatricYan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PatricYan/subscriptions", "type": "User", "url": "https://api.github.com/users/PatricYan" }
load JSON files, get the errors
https://api.github.com/repos/huggingface/datasets/issues/3333/events
null
https://api.github.com/repos/huggingface/datasets/issues/3333/labels{/name}
2021-11-28T14:29:58Z
null
false
null
null
1,065,346,919
[]
https://api.github.com/repos/huggingface/datasets/issues/3333
[ "Hi ! The message you're getting is not an error. It simply says that your JSON dataset is being prepared to a location in `/root/.cache/huggingface/datasets`", "> \r\n\r\nbut I want to load local JSON file by command\r\n`python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\n**squad-retrain-data/train-v2.0.json** is the local JSON file, how to load it and map it to a special structure?", "You can load it with `dataset = datasets.load_dataset('json', data_files=args.dataset)` as you said.\r\nThen if you need to apply additional processing to map it to a special structure, you can use rename columns or use `dataset.map`. For more information, you can check the documentation here: https://huggingface.co/docs/datasets/process.html\r\n\r\nAlso feel free to share your `run.py` code so we can take a look", "```\r\n# Dataset selection\r\n if args.dataset.endswith('.json') or args.dataset.endswith('.jsonl'):\r\n dataset_id = None\r\n # Load from local json/jsonl file\r\n dataset = datasets.load_dataset('json', data_files=args.dataset)\r\n # By default, the \"json\" dataset loader places all examples in the train split,\r\n # so if we want to use a jsonl file for evaluation we need to get the \"train\" split\r\n # from the loaded dataset\r\n eval_split = 'train'\r\n else:\r\n default_datasets = {'qa': ('squad',), 'nli': ('snli',)}\r\n dataset_id = tuple(args.dataset.split(':')) if args.dataset is not None else \\\r\n default_datasets[args.task]\r\n # MNLI has two validation splits (one with matched domains and one with mismatched domains). Most datasets just have one \"validation\" split\r\n eval_split = 'validation_matched' if dataset_id == ('glue', 'mnli') else 'validation'\r\n # Load the raw data\r\n dataset = datasets.load_dataset(*dataset_id)\r\n```\r\n\r\nI want to load JSON squad dataset instead `dataset = datasets.load_dataset('squad')` to retrain the model. \r\n", "If your JSON has the same format as the SQuAD dataset, then you need to pass `field=\"data\"` to `load_dataset`, since the SQuAD format is one big JSON object in which the \"data\" field contains the list of questions and answers.\r\n```python\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n\r\nLet me know if that helps :)\r\n\r\n", "Yes, code works. but the format is not as expected.\r\n```\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n```\r\npython3 run.py --do_train --task qa --dataset squad --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n```\r\npython3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['title', 'paragraphs'],\r\n num_rows: 442\r\n})\r\n\r\nI want the JSON to have the same format as before features. https://github.com/huggingface/datasets/blob/master/datasets/squad_v2/squad_v2.py is the script dealing with **squad** but how can I apply it by using JSON? ", "Ok I see, you have the paragraphs so you just need to process them to extract the questions and answers. I think you can process the SQuAD-like data this way:\r\n```python\r\ndef process_squad(articles):\r\n out = {\r\n \"title\": [],\r\n \"context\": [],\r\n \"question\": [],\r\n \"id\": [],\r\n \"answers\": [],\r\n }\r\n for title, paragraphs in zip(articles[\"title\"], articles[\"paragraphs\"]):\r\n for paragraph in paragraphs:\r\n for qa in paragraph[\"qas\"]:\r\n out[\"title\"].append(title)\r\n out[\"context\"].append(paragraph[\"context\"])\r\n out[\"question\"].append(qa[\"question\"])\r\n out[\"id\"].append(qa[\"id\"])\r\n out[\"answers\"].append({\r\n \"answer_start\": [answer[\"answer_start\"] for answer in qa[\"answers\"]],\r\n \"text\": [answer[\"text\"] for answer in qa[\"answers\"]],\r\n })\r\n return out\r\n\r\ndataset = dataset.map(process_squad, batched=True, remove_columns=[\"paragraphs\"])\r\n```\r\n\r\nI adapted the code from [squad.py](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py). The code takes as input a batch of articles (title + paragraphs) and gets all the questions and answers from the JSON structure.\r\n\r\nThe output is a dataset with `features: ['answers', 'context', 'id', 'question', 'title']`\r\n\r\nLet me know if that helps !\r\n", "Yes, this works. But how to get the training output during training the squad by **Trainer** \r\nfor example https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/trainer_qa.py \r\nI want the training inputs, labels, outputs for every epoch and step to produce the training dynamic graph", "I think you may need to implement your own Trainer, from the `QuestionAnsweringTrainer` for example.\r\nThis way you can have the flexibility of saving all the inputs/output used at each step", "does there have any function to be overwritten to do this?", "> does there have any function to be overwritten to do this?\r\n\r\nok, I overwrote the compute_loss, thank you.", "Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs\r\n\r\n```\r\n*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2106, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 2339, 2001, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} \r\n```\r\n\r\n```\r\n# This function preprocesses a question answering dataset, tokenizing the question and context text\r\n# and finding the right offsets for the answer spans in the tokenized context (to use as labels).\r\n# Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py\r\ndef prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None):\r\n questions = [q.lstrip() for q in examples[\"question\"]]\r\n max_seq_length = tokenizer.model_max_length\r\n # tokenize both questions and the corresponding context\r\n # if the context length is longer than max_length, we split it to several\r\n # chunks of max_length\r\n tokenized_examples = tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n truncation=\"only_second\",\r\n max_length=max_seq_length,\r\n stride=min(max_seq_length // 2, 128),\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\"\r\n )\r\n\r\n # Since one example might give us several features if it has a long context,\r\n # we need a map from a feature to its corresponding example.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position\r\n # in the original context. This will help us compute the start_positions\r\n # and end_positions to get the final answer string.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n # We will label features not containing the answer the index of the CLS token.\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n # from the feature idx to sample idx\r\n sample_index = sample_mapping[i]\r\n # get the answer for a feature\r\n answers = examples[\"answers\"][sample_index]\r\n\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != 1:\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != 1:\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and\r\n offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and \\\r\n offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(\r\n token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n```" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command `!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/` change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html `dataset = datasets.load_dataset('json', data_files=args.dataset)` Errors: `Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264... ` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/730#issuecomment-981095050_
2021-12-01T09:34:31Z
https://github.com/huggingface/datasets/issues/3333
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3333/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3332/comments
https://api.github.com/repos/huggingface/datasets/issues/3332/timeline
2021-11-29T13:34:14Z
null
null
PR_kwDODunzps4vGBig
closed
[]
false
3,332
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Fix error message and add extension fallback
https://api.github.com/repos/huggingface/datasets/issues/3332/events
null
https://api.github.com/repos/huggingface/datasets/issues/3332/labels{/name}
2021-11-28T14:25:29Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3332.diff", "html_url": "https://github.com/huggingface/datasets/pull/3332", "merged_at": "2021-11-29T13:34:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3332.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3332" }
1,065,345,853
[]
https://api.github.com/repos/huggingface/datasets/issues/3332
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust. In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix ordering. Now, we go from the most common to the least common extension and try to map it or return `None`. Fix #3331
2021-11-29T13:34:15Z
https://github.com/huggingface/datasets/pull/3332
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3332/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3331/comments
https://api.github.com/repos/huggingface/datasets/issues/3331/timeline
2021-11-29T13:34:14Z
null
completed
I_kwDODunzps4_ftH4
closed
[]
null
3,331
{ "avatar_url": "https://avatars.githubusercontent.com/u/34032031?v=4", "events_url": "https://api.github.com/users/luozhouyang/events{/privacy}", "followers_url": "https://api.github.com/users/luozhouyang/followers", "following_url": "https://api.github.com/users/luozhouyang/following{/other_user}", "gists_url": "https://api.github.com/users/luozhouyang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luozhouyang", "id": 34032031, "login": "luozhouyang", "node_id": "MDQ6VXNlcjM0MDMyMDMx", "organizations_url": "https://api.github.com/users/luozhouyang/orgs", "received_events_url": "https://api.github.com/users/luozhouyang/received_events", "repos_url": "https://api.github.com/users/luozhouyang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luozhouyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luozhouyang/subscriptions", "type": "User", "url": "https://api.github.com/users/luozhouyang" }
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
https://api.github.com/repos/huggingface/datasets/issues/3331/events
null
https://api.github.com/repos/huggingface/datasets/issues/3331/labels{/name}
2021-11-28T08:54:05Z
null
false
null
null
1,065,275,896
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3331
[ "Hi,\r\n\r\nthe fix was merged and will be available in the next release of `datasets`.\r\nIn the meantime, you can use it by installing `datasets` directly from master as follows:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets) But when I load the dataset, an error raised: ```bash AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("luozhouyang/question-answering-datasets", data_files=["dureader_robust.train.json"]) ``` ## Expected results Load dataset successfully without any error. ## Actual results ```bash Traceback (most recent call last): File "/mnt/home/zhouyang.lzy/github/naivenlp/naivenlp/tests/question_answering_tests/dataset_test.py", line 89, in test_load_dataset_with_hf data_files=["dureader_robust.train.json"], File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1616, in load_dataset **config_kwargs, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1443, in load_dataset_builder path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1157, in dataset_module_factory raise e1 from None File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1144, in dataset_module_factory download_mode=download_mode, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 798, in get_module raise FileNotFoundError(f"No data files or dataset script found in {self.path}") AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: linux - Python version: 3.6.13 - PyArrow version: 6.0.1
2021-11-29T13:49:44Z
https://github.com/huggingface/datasets/issues/3331
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3331/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3330/comments
https://api.github.com/repos/huggingface/datasets/issues/3330/timeline
2021-11-29T11:24:21Z
null
null
PR_kwDODunzps4vFtF7
closed
[]
false
3,330
{ "avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4", "events_url": "https://api.github.com/users/avinashsai/events{/privacy}", "followers_url": "https://api.github.com/users/avinashsai/followers", "following_url": "https://api.github.com/users/avinashsai/following{/other_user}", "gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avinashsai", "id": 22453634, "login": "avinashsai", "node_id": "MDQ6VXNlcjIyNDUzNjM0", "organizations_url": "https://api.github.com/users/avinashsai/orgs", "received_events_url": "https://api.github.com/users/avinashsai/received_events", "repos_url": "https://api.github.com/users/avinashsai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions", "type": "User", "url": "https://api.github.com/users/avinashsai" }
Change TriviaQA license (#3313)
https://api.github.com/repos/huggingface/datasets/issues/3330/events
null
https://api.github.com/repos/huggingface/datasets/issues/3330/labels{/name}
2021-11-28T03:26:45Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3330.diff", "html_url": "https://github.com/huggingface/datasets/pull/3330", "merged_at": "2021-11-29T11:24:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3330.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3330" }
1,065,176,619
[]
https://api.github.com/repos/huggingface/datasets/issues/3330
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Fixes (#3313)
2021-11-29T11:24:21Z
https://github.com/huggingface/datasets/pull/3330
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3330/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3329/comments
https://api.github.com/repos/huggingface/datasets/issues/3329/timeline
2021-11-29T20:40:15Z
null
completed
I_kwDODunzps4_fBcL
closed
[]
null
3,329
{ "avatar_url": "https://avatars.githubusercontent.com/u/52659318?v=4", "events_url": "https://api.github.com/users/josephkready666/events{/privacy}", "followers_url": "https://api.github.com/users/josephkready666/followers", "following_url": "https://api.github.com/users/josephkready666/following{/other_user}", "gists_url": "https://api.github.com/users/josephkready666/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/josephkready666", "id": 52659318, "login": "josephkready666", "node_id": "MDQ6VXNlcjUyNjU5MzE4", "organizations_url": "https://api.github.com/users/josephkready666/orgs", "received_events_url": "https://api.github.com/users/josephkready666/received_events", "repos_url": "https://api.github.com/users/josephkready666/repos", "site_admin": false, "starred_url": "https://api.github.com/users/josephkready666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josephkready666/subscriptions", "type": "User", "url": "https://api.github.com/users/josephkready666" }
Map function: Type error on iter #999
https://api.github.com/repos/huggingface/datasets/issues/3329/events
null
https://api.github.com/repos/huggingface/datasets/issues/3329/labels{/name}
2021-11-27T17:53:05Z
null
false
null
null
1,065,096,971
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3329
[ "Hi, thanks for reporting.\r\n\r\nIt would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error.", "```\r\ndef text_numbers_to_int(text, column=\"\"):\r\n \"\"\"\r\n Convert text numbers to int.\r\n\r\n :param text: text numbers\r\n :return: int\r\n \"\"\"\r\n try:\r\n numbers = find_numbers(text)\r\n if not numbers:\r\n return text\r\n result = \"\"\r\n i, j = 0, 0\r\n while i < len(text):\r\n if j < len(numbers) and i == numbers[j][1]:\r\n n = int(numbers[j][0]) if numbers[j][0] % 1 == 0 else float(numbers[j][0])\r\n result += str(n)\r\n i = numbers[j][2] #end\r\n j += 1\r\n else:\r\n result += text[i]\r\n i += 1\r\n if column:\r\n return{column: result}\r\n else:\r\n return {column: result}\r\n except Exception as e:\r\n print(e)\r\n return {column: result}\r\n```", "Maybe this is because of the `return text` line ? I think it should return a dictionary rather than a string", "Yes that was it, good catch! Thanks" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug Using the map function, it throws a type error on iter #999 Here is the code I am calling: ``` dataset = datasets.load_dataset('squad') dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'}) ``` text_numbers_to_int returns the input text with numbers replaced in the format {'context': text} It happens at ` File "C:\Users\lonek\anaconda3\envs\ai\Lib\site-packages\datasets\arrow_writer.py", line 289, in <listcomp> [row[0][col] for row in self.current_examples], type=col_type, try_type=col_try_type, col=col ` The issue is that the list comprehension expects self.current_examples to be type tuple(dict, str), but for some reason 26 out of 1000 of the sefl.current_examples are type tuple(str, str) Here is an example of what self.current_examples should be ({'context': 'Super Bowl 50 was an...merals 50.'}, '') Here is an example of what self.current_examples are when it throws the error: ('The Panthers used th... Marriott.', '')
2021-11-29T20:40:15Z
https://github.com/huggingface/datasets/issues/3329
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3329/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3328/comments
https://api.github.com/repos/huggingface/datasets/issues/3328/timeline
2021-11-29T13:32:42Z
null
null
PR_kwDODunzps4vFTpW
closed
[]
false
3,328
{ "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NouamaneTazi", "id": 29777165, "login": "NouamaneTazi", "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "type": "User", "url": "https://api.github.com/users/NouamaneTazi" }
Quick fix error formatting
https://api.github.com/repos/huggingface/datasets/issues/3328/events
null
https://api.github.com/repos/huggingface/datasets/issues/3328/labels{/name}
2021-11-27T11:47:48Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3328.diff", "html_url": "https://github.com/huggingface/datasets/pull/3328", "merged_at": "2021-11-29T13:32:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/3328.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3328" }
1,065,015,262
[]
https://api.github.com/repos/huggingface/datasets/issues/3328
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
While working on a dataset, I got the error ``` TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`. ``` This PR should fix the formatting of this error
2021-11-29T13:32:42Z
https://github.com/huggingface/datasets/pull/3328
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3328/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3327/comments
https://api.github.com/repos/huggingface/datasets/issues/3327/timeline
2021-11-26T16:44:11Z
null
completed
I_kwDODunzps4_daow
closed
[]
null
3,327
{ "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eliasws", "id": 19492473, "login": "eliasws", "node_id": "MDQ6VXNlcjE5NDkyNDcz", "organizations_url": "https://api.github.com/users/eliasws/orgs", "received_events_url": "https://api.github.com/users/eliasws/received_events", "repos_url": "https://api.github.com/users/eliasws/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "type": "User", "url": "https://api.github.com/users/eliasws" }
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
https://api.github.com/repos/huggingface/datasets/issues/3327/events
null
https://api.github.com/repos/huggingface/datasets/issues/3327/labels{/name}
2021-11-26T16:26:36Z
null
false
null
null
1,064,675,888
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3327
[ "#3323 " ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Describe the bug Passing a correctly shaped Numpy-Array to get_nearest_examples leads to the Exception "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" Probably the reason for this is a wrongly converted assertion. 1.15.1: `assert len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)` 1.16.1: ``` if len(query.shape) != 1 or (len(query.shape) == 2 and query.shape[0] != 1): raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)") ``` ## Steps to reproduce the bug follow the steps described here: https://huggingface.co/course/chapter5/6?fw=tf ```python question_embedding.shape # (1, 768) scores, samples = embeddings_dataset.get_nearest_examples( "embeddings", question_embedding, k=5 # Error ) # "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" ``` ## Expected results Should work without exception ## Actual results Throws exception ## Environment info - `datasets` version: 1.15.1 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.12 - PyArrow version: 6.0.
2021-11-26T16:44:11Z
https://github.com/huggingface/datasets/issues/3327
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3327/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3326/comments
https://api.github.com/repos/huggingface/datasets/issues/3326/timeline
2021-11-26T16:31:23Z
null
null
PR_kwDODunzps4vEaYG
closed
[]
false
3,326
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix import `datasets` on python 3.10
https://api.github.com/repos/huggingface/datasets/issues/3326/events
null
https://api.github.com/repos/huggingface/datasets/issues/3326/labels{/name}
2021-11-26T16:10:00Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3326.diff", "html_url": "https://github.com/huggingface/datasets/pull/3326", "merged_at": "2021-11-26T16:31:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3326.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3326" }
1,064,664,479
[]
https://api.github.com/repos/huggingface/datasets/issues/3326
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
In python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`. To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators Fix #3324
2021-11-26T16:31:23Z
https://github.com/huggingface/datasets/pull/3326
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3326/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3325/comments
https://api.github.com/repos/huggingface/datasets/issues/3325/timeline
2021-11-26T16:20:36Z
null
null
PR_kwDODunzps4vEaGO
closed
[]
false
3,325
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Update conda dependencies
https://api.github.com/repos/huggingface/datasets/issues/3325/events
null
https://api.github.com/repos/huggingface/datasets/issues/3325/labels{/name}
2021-11-26T16:08:07Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3325.diff", "html_url": "https://github.com/huggingface/datasets/pull/3325", "merged_at": "2021-11-26T16:20:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3325.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3325" }
1,064,663,075
[]
https://api.github.com/repos/huggingface/datasets/issues/3325
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Some dependencies' minimum versions were outdated, for example `pyarrow` and `huggingface_hub`.
2021-11-26T16:20:37Z
https://github.com/huggingface/datasets/pull/3325
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3325/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3324/comments
https://api.github.com/repos/huggingface/datasets/issues/3324/timeline
2021-11-26T16:31:23Z
null
completed
I_kwDODunzps4_dXDc
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
3,324
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Can't import `datasets` in python 3.10
https://api.github.com/repos/huggingface/datasets/issues/3324/events
null
https://api.github.com/repos/huggingface/datasets/issues/3324/labels{/name}
2021-11-26T16:06:14Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
1,064,661,212
[]
https://api.github.com/repos/huggingface/datasets/issues/3324
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
When importing `datasets` I'm getting this error in python 3.10: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module> from .arrow_reader import ArrowReader File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module> from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module> class InMemoryTable(TableBlock): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable def from_pandas(cls, *args, **kwargs): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper out = wraps(arrow_table_method)(method) File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper wrapper.__wrapped__ = wrapped AttributeError: readonly attribute ``` This makes the conda build fail. I'm opening a PR to fix this and do a patch release 1.16.1
2021-11-26T16:31:23Z
https://github.com/huggingface/datasets/issues/3324
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3324/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3323/comments
https://api.github.com/repos/huggingface/datasets/issues/3323/timeline
2021-11-26T16:44:11Z
null
null
PR_kwDODunzps4vEZwq
closed
[]
false
3,323
{ "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eliasws", "id": 19492473, "login": "eliasws", "node_id": "MDQ6VXNlcjE5NDkyNDcz", "organizations_url": "https://api.github.com/users/eliasws/orgs", "received_events_url": "https://api.github.com/users/eliasws/received_events", "repos_url": "https://api.github.com/users/eliasws/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "type": "User", "url": "https://api.github.com/users/eliasws" }
Fix wrongly converted assert
https://api.github.com/repos/huggingface/datasets/issues/3323/events
null
https://api.github.com/repos/huggingface/datasets/issues/3323/labels{/name}
2021-11-26T16:05:39Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3323.diff", "html_url": "https://github.com/huggingface/datasets/pull/3323", "merged_at": "2021-11-26T16:44:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3323.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3323" }
1,064,660,452
[]
https://api.github.com/repos/huggingface/datasets/issues/3323
[ "Closes #3327 " ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Seems like this assertion was replaced by an exception but the condition got wrongly converted.
2021-11-26T16:44:12Z
https://github.com/huggingface/datasets/pull/3323
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3323/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3322/comments
https://api.github.com/repos/huggingface/datasets/issues/3322/timeline
2021-11-29T13:40:06Z
null
null
PR_kwDODunzps4vD1Ct
closed
[]
false
3,322
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Add missing tags to XTREME
https://api.github.com/repos/huggingface/datasets/issues/3322/events
null
https://api.github.com/repos/huggingface/datasets/issues/3322/labels{/name}
2021-11-26T12:37:05Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3322.diff", "html_url": "https://github.com/huggingface/datasets/pull/3322", "merged_at": "2021-11-29T13:40:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/3322.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3322" }
1,064,429,705
[]
https://api.github.com/repos/huggingface/datasets/issues/3322
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Add missing tags to the XTREME benchmark for better discoverability.
2021-11-29T13:40:07Z
https://github.com/huggingface/datasets/pull/3322
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3322/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3321/comments
https://api.github.com/repos/huggingface/datasets/issues/3321/timeline
2021-11-26T10:30:30Z
null
null
PR_kwDODunzps4vCBeI
closed
[]
false
3,321
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Update URL of tatoeba subset of xtreme
https://api.github.com/repos/huggingface/datasets/issues/3321/events
null
https://api.github.com/repos/huggingface/datasets/issues/3321/labels{/name}
2021-11-25T18:42:31Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3321.diff", "html_url": "https://github.com/huggingface/datasets/pull/3321", "merged_at": "2021-11-26T10:30:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3321.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3321" }
1,063,858,386
[]
https://api.github.com/repos/huggingface/datasets/issues/3321
[ "<s>To be more precise: `os.path.join` is replaced on-the-fly by `xjoin` anyway with patching, to extend it to remote files</s>", "Oh actually just ignore what I said: they were used to concatenate URLs, which is not recommended. Let me fix that again by appending using `+`" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows. Fix #3320
2021-11-26T10:30:30Z
https://github.com/huggingface/datasets/pull/3321
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3321/reactions" }
true
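The `os.path.join` pitfall that motivated the switch to `+`/`xjoin` can be shown directly; the URL below mirrors the corrected tatoeba location and is used purely for illustration:

```python
import posixpath

# On Windows, os.path.join inserts a backslash, producing an invalid URL
# such as "https://.../v1\tatoeba.rus-eng.rus". Concatenating with "+"
# (or using posixpath.join) keeps forward slashes on every platform.
base = "https://github.com/facebookresearch/LASER/raw/main/data/tatoeba/v1"
url = base + "/" + "tatoeba.rus-eng.rus"
```

On any platform this yields the same result as `posixpath.join(base, "tatoeba.rus-eng.rus")`, which is why plain concatenation is the safer choice for URL segments.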
https://api.github.com/repos/huggingface/datasets/issues/3320/comments
https://api.github.com/repos/huggingface/datasets/issues/3320/timeline
2021-11-26T10:30:29Z
null
completed
I_kwDODunzps4_ZDXY
closed
[]
null
3,320
{ "avatar_url": "https://avatars.githubusercontent.com/u/65535131?v=4", "events_url": "https://api.github.com/users/mmg10/events{/privacy}", "followers_url": "https://api.github.com/users/mmg10/followers", "following_url": "https://api.github.com/users/mmg10/following{/other_user}", "gists_url": "https://api.github.com/users/mmg10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mmg10", "id": 65535131, "login": "mmg10", "node_id": "MDQ6VXNlcjY1NTM1MTMx", "organizations_url": "https://api.github.com/users/mmg10/orgs", "received_events_url": "https://api.github.com/users/mmg10/received_events", "repos_url": "https://api.github.com/users/mmg10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mmg10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmg10/subscriptions", "type": "User", "url": "https://api.github.com/users/mmg10" }
Can't get tatoeba.rus dataset
https://api.github.com/repos/huggingface/datasets/issues/3320/events
null
https://api.github.com/repos/huggingface/datasets/issues/3320/labels{/name}
2021-11-25T12:31:11Z
null
false
null
null
1,063,531,992
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3320
[]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug It gives an error. > FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus ## Steps to reproduce the bug ```python data=load_dataset("xtreme","tatoeba.rus", split="validation") ``` ## Solution The library tries to access the **master** branch. In the github repo of facebookresearch, it is in the **main** branch.
2021-11-26T10:30:29Z
https://github.com/huggingface/datasets/issues/3320
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3320/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3319/comments
https://api.github.com/repos/huggingface/datasets/issues/3319/timeline
2021-11-25T14:47:46Z
null
null
PR_kwDODunzps4u-xdv
closed
[]
false
3,319
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Add push_to_hub docs
https://api.github.com/repos/huggingface/datasets/issues/3319/events
null
https://api.github.com/repos/huggingface/datasets/issues/3319/labels{/name}
2021-11-24T18:21:11Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3319.diff", "html_url": "https://github.com/huggingface/datasets/pull/3319", "merged_at": "2021-11-25T14:47:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3319.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3319" }
1,062,749,654
[]
https://api.github.com/repos/huggingface/datasets/issues/3319
[ "Looks good to me! :)\r\n\r\nMaybe we can mention that users can also set the `private` argument if they want to keep their dataset private? It would lead nicely into the next section on Privacy.", "Thanks for your comments, I fixed the capitalization for consistency and added an passage to mention the `private` parameter and to have a nice transition to the Privacy section :)\r\n\r\nI also added the login instruction that was missing before the user can actually upload a dataset." ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method. I just added a section in the "Upload a dataset to the Hub" tutorial. I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :)
2021-11-25T14:47:46Z
https://github.com/huggingface/datasets/pull/3319
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3319/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3318/comments
https://api.github.com/repos/huggingface/datasets/issues/3318/timeline
2021-11-24T15:35:04Z
null
null
PR_kwDODunzps4u9m-k
closed
[]
false
3,318
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Finish transition to PyArrow 3.0.0
https://api.github.com/repos/huggingface/datasets/issues/3318/events
null
https://api.github.com/repos/huggingface/datasets/issues/3318/labels{/name}
2021-11-24T12:30:14Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3318.diff", "html_url": "https://github.com/huggingface/datasets/pull/3318", "merged_at": "2021-11-24T15:35:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3318.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3318" }
1,062,369,717
[]
https://api.github.com/repos/huggingface/datasets/issues/3318
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Finish transition to PyArrow 3.0.0 that was started in #3098.
2021-11-24T15:35:05Z
https://github.com/huggingface/datasets/pull/3318
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3318/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3317/comments
https://api.github.com/repos/huggingface/datasets/issues/3317/timeline
2022-01-05T18:31:24Z
null
completed
I_kwDODunzps4_USyf
closed
[]
null
3,317
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje" }
Add desc parameter to Dataset filter method
https://api.github.com/repos/huggingface/datasets/issues/3317/events
null
https://api.github.com/repos/huggingface/datasets/issues/3317/labels{/name}
2021-11-24T11:01:36Z
null
false
null
null
1,062,284,447
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/3317
[ "Hi,\r\n\r\n`Dataset.map` allows more generic transforms compared to `Dataset.filter`, which purpose is very specific (to filter examples based on a condition). That's why I don't think we need the `desc` parameter there for consistency. #3196 has added descriptions to the `Dataset` methods that call `.map` internally, but not for the `filter` method, so we should do that.\r\n\r\nDo you have a description in mind? Maybe `\"Filtering the dataset\"` or `\"Filtering the indices\"`? If yes, feel free to open a PR.", "I'm personally ok with adding the `desc` parameter actually. Let's say you have different filters, it can be nice to differentiate between the different filters when they're running no ?", "@mariosasko the use case is filtering of a dataset prior to tokenization and subsequent training. As the dataset is huge it's just a matter of giving a user (model trainer) some feedback on what's going on. Otherwise, feedback is given for all steps in training preparation and not for filtering and the filtering in my use case lasts about 4-5 minutes. And yes, if there are more filtering stages, as @lhoestq pointed out, it would be nice to give some feedback. I thought desc is there already and got confused when I got the script error. ", "I don't have a strong opinion on that, so having `desc` as a parameter is also OK." ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
**Is your feature request related to a problem? Please describe.** As I was filtering very large datasets, I noticed the filter method doesn't have the desc parameter, which is available in the map method. Why don't we add a desc parameter to the filter method, both for consistency and because it's nice to give some feedback to users during long operations on Datasets? **Describe the solution you'd like** Add a desc parameter to the Dataset filter method **Describe alternatives you've considered** N/A **Additional context** N/A
2022-01-05T18:31:24Z
https://github.com/huggingface/datasets/issues/3317
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3317/reactions" }
false
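A minimal sketch of the requested behavior, using a hypothetical `filter_with_desc` helper (the real change would forward `desc` to the tqdm progress bar exactly as `Dataset.map` already does):

```python
def filter_with_desc(rows, predicate, desc=None):
    # Hypothetical helper sketching the feature request: `desc` would be
    # passed through to the progress bar so long-running filters are labeled.
    if desc is not None:
        print(desc)  # stands in for tqdm(..., desc=desc)
    return [row for row in rows if predicate(row)]

kept = filter_with_desc(range(1, 5), lambda x: x % 2 == 0,
                        desc="Filtering the dataset")
```

With several filtering stages, each call could carry its own label ("Removing empty rows", "Dropping duplicates", ...), which is the differentiation use case raised in the comments.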
https://api.github.com/repos/huggingface/datasets/issues/3316/comments
https://api.github.com/repos/huggingface/datasets/issues/3316/timeline
2022-01-12T14:13:15Z
null
completed
I_kwDODunzps4_T6te
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
3,316
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Add RedCaps dataset
https://api.github.com/repos/huggingface/datasets/issues/3316/events
null
https://api.github.com/repos/huggingface/datasets/issues/3316/labels{/name}
2021-11-24T09:23:02Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
null
1,062,185,822
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
https://api.github.com/repos/huggingface/datasets/issues/3316
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
## Adding a Dataset - **Name:** RedCaps - **Description:** Web-curated image-text data created by the people, for the people - **Paper:** https://arxiv.org/abs/2111.11431 - **Data:** https://redcaps.xyz/ - **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Proposed by @patil-suraj
2022-01-12T14:13:15Z
https://github.com/huggingface/datasets/issues/3316
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3316/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3315/comments
https://api.github.com/repos/huggingface/datasets/issues/3315/timeline
2021-11-25T14:44:31Z
null
null
PR_kwDODunzps4u7WpU
closed
[]
false
3,315
{ "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anton-l", "id": 26864830, "login": "anton-l", "node_id": "MDQ6VXNlcjI2ODY0ODMw", "organizations_url": "https://api.github.com/users/anton-l/orgs", "received_events_url": "https://api.github.com/users/anton-l/received_events", "repos_url": "https://api.github.com/users/anton-l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "type": "User", "url": "https://api.github.com/users/anton-l" }
Removing query params for dynamic URL caching
https://api.github.com/repos/huggingface/datasets/issues/3315/events
null
https://api.github.com/repos/huggingface/datasets/issues/3315/labels{/name}
2021-11-23T20:24:12Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3315.diff", "html_url": "https://github.com/huggingface/datasets/pull/3315", "merged_at": "2021-11-25T14:44:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/3315.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3315" }
1,061,678,452
[]
https://api.github.com/repos/huggingface/datasets/issues/3315
[ "IMO it makes more sense to have `ignore_url_params` as an attribute of `DownloadConfig` to avoid defining a new argument in `DownloadManger`'s methods.", "@mariosasko that would make sense to me too, but it seems like `DownloadConfig` wasn't intended to be modified from a dataset loading script. @lhoestq wdyt?", "We can expose `DownloadConfig` as a property of `DownloadManager`, and then in the script before the download call we could do: `dl_manager.download_config.ignore_url_params = True`. But yes, let's hear what Quentin thinks.", "Oh indeed that's a great idea. This parameter is similar to others like `download_config.use_etag` that defines the behavior of the download and caching, so it's better if we have it there, and expose the `download_config`", "Implemented it via `dl_manager.download_config.ignore_url_params` now, and also added a usage example above :) " ]
https://api.github.com/repos/huggingface/datasets
MEMBER
The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic. Usage example: ```python import datasets class CommonVoice(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo() def _split_generators(self, dl_manager): dl_manager.download_config.ignore_url_params = True HUGE_URL = "https://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-7.0-2021-07-21/cv-corpus-7.0-2021-07-21-ab.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3IU5JYB5K%2F20211125%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20211125T131423Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEL7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDLsZw7Nj0d9h4rgheyKSBJJ6bxo1JdWLXAUhLMrUB8AXfhP8Ge4F8dtjwXmvGJgkIvdMT7P4YOEE1pS3mW8AyKsz7Z7IRVCIGQrOH1AbxGVVcDoCMMswXEOqL3nJFihKLf99%2F6l8iJVZdzftRUNgMhX5Hz0xSIL%2BzRDpH5nYa7C6YpEdOdW81CFVXybx7WUrX13wc8X4ZlUj7zrWcWf5p2VEIU5Utb7YHVi0Y5TQQiZSDoedQl0j4VmMuFkDzoobIO%2BvilgGeE2kIX0E62X423mEGNu4uQV5JsOuLAtv3GVlemsqEH3ZYrXDuxLmnvGj5HfMtySwI4vKv%2BlnnirD29o7hxvtidXiA8JMWhp93aP%2Fw7sod%2BPPbb5EqP%2B4Qb2GJ1myClOKcLEY0cqoy7XWm8NeVljLJojnFJVS5mNFBAzCCTJ%2FidxNsj8fflzkRoAzYaaPBuOTL1dgtZCdslK3FAuEvw0cik7P9A7IYiULV33otSHKMPcVfNHFsWQljs03gDztsIUWxaXvu6ck5vCcGULsHbfe6xoMPm2bR9jtKLONsslPcnzWIf7%2Fch2w%2F%2BjtTCd9IxaH4kytyJ6mIjpV%2FA%2F2h9qeDnDFsCphnMjAzPQn6tqCgTtPcyJ2b8c94ncgUnE4mepx%2FDa%2FanAEsrg9RPdmbdoPswzHn1IClh91IfSN74u95DZUxlPeZrHG5HxVCN3dKO6j%2Ft1xd20L0hEtazDdKOr8%2FYwGMirp8rp%2BII0pYOwQOrYHqH%2FREX2dRJctJtwE86Qj1eU8BAdXuFIkLC4NWXw%3D&X-Amz-Signature=1b8108d29b0e9c2bf6c7246e58ca8d5749a83de0704757ad8e8a44d78194691f&X-Amz-SignedHeaders=host" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) HUGE_URL += "&some_new_or_changed_param=12345" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) dl_manager = datasets.DownloadManager(dataset_name="common_voice") CommonVoice()._split_generators(dl_manager) ``` Output: ``` /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 ```
2021-11-25T14:44:32Z
https://github.com/huggingface/datasets/pull/3315
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3315/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3314/comments
https://api.github.com/repos/huggingface/datasets/issues/3314/timeline
2021-11-24T11:54:13Z
null
null
PR_kwDODunzps4u6mdX
closed
[]
false
3,314
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
Adding arg to pass process rank to `map`
https://api.github.com/repos/huggingface/datasets/issues/3314/events
null
https://api.github.com/repos/huggingface/datasets/issues/3314/labels{/name}
2021-11-23T15:55:21Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3314.diff", "html_url": "https://github.com/huggingface/datasets/pull/3314", "merged_at": "2021-11-24T11:54:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3314.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3314" }
1,061,448,227
[]
https://api.github.com/repos/huggingface/datasets/issues/3314
[ "Some commits seem to be there twice (made the mistake of rebasing because I wasn't sure whether the doc had changed), is this an issue @lhoestq ?" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll add a multi-GPU example to the doc (+ write a bit in the doc for the new arg)
2021-11-24T11:54:13Z
https://github.com/huggingface/datasets/pull/3314
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3314/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3313/comments
https://api.github.com/repos/huggingface/datasets/issues/3313/timeline
2021-11-29T11:24:21Z
null
completed
I_kwDODunzps4_PI8Q
closed
[]
null
3,313
{ "avatar_url": "https://avatars.githubusercontent.com/u/16665267?v=4", "events_url": "https://api.github.com/users/akhilkedia/events{/privacy}", "followers_url": "https://api.github.com/users/akhilkedia/followers", "following_url": "https://api.github.com/users/akhilkedia/following{/other_user}", "gists_url": "https://api.github.com/users/akhilkedia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akhilkedia", "id": 16665267, "login": "akhilkedia", "node_id": "MDQ6VXNlcjE2NjY1MjY3", "organizations_url": "https://api.github.com/users/akhilkedia/orgs", "received_events_url": "https://api.github.com/users/akhilkedia/received_events", "repos_url": "https://api.github.com/users/akhilkedia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akhilkedia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akhilkedia/subscriptions", "type": "User", "url": "https://api.github.com/users/akhilkedia" }
TriviaQA License Mismatch
https://api.github.com/repos/huggingface/datasets/issues/3313/events
null
https://api.github.com/repos/huggingface/datasets/issues/3313/labels{/name}
2021-11-23T08:00:15Z
null
false
null
null
1,060,933,392
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3313
[ "Hi ! You're completely right, this must be mentioned in the dataset card.\r\nIf you're interesting in contributing, feel free to open a pull request to mention this in the `trivia_qa` dataset card in the \"Licensing Information\" section at https://github.com/huggingface/datasets/blob/master/datasets/trivia_qa/README.md" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License Is the License Information on HuggingFace correct?
2021-11-29T11:24:21Z
https://github.com/huggingface/datasets/issues/3313
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3313/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3312/comments
https://api.github.com/repos/huggingface/datasets/issues/3312/timeline
2021-12-02T16:07:47Z
null
null
PR_kwDODunzps4u3duV
closed
[]
false
3,312
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien" }
add bl books genre dataset
https://api.github.com/repos/huggingface/datasets/issues/3312/events
null
https://api.github.com/repos/huggingface/datasets/issues/3312/labels{/name}
2021-11-22T17:54:50Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3312.diff", "html_url": "https://github.com/huggingface/datasets/pull/3312", "merged_at": "2021-12-02T16:07:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/3312.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3312" }
1,060,440,346
[]
https://api.github.com/repos/huggingface/datasets/issues/3312
[ "To fix the CI, feel free to run the `make style` command to format the code.\r\n\r\nThen it also looks like the dummy_data.zip archives are all empty, which makes the tests fail. Can you try regenerating them ? They should have one file inside which is a dummy version of the file at https://bl.iro.bl.uk/downloads/36c7cd20-c8a7-4495-acbe-469b9132c6b1?locale=en", "@lhoestq, thanks for that feedback. \r\n\r\nI should have made most of these changes now. The `--auto_generate` flag wasn't working because the file wasn't downloaded with a `.csv` extension. I used `--match_text_files \"*\"` to get around this. Because there is a lot of data that isn't annotated using the default line number for the dummy data causes the `annotated_raw` and the `title_genre_classifiction` configs to fail because they don't generate any examples β€” bumping the line numbers to `250` fixes this. This does make the dummy data a bit bigger, though. \r\n\r\nThe total directory size for the dataset is now `150kb`. Is this okay, or do you want me to generate the dummy data manually instead? ", "Hi ! yes 150kB is fine :)\r\nFeel free to push your new dummy_data.zip files (I think the current one are still the empty ones)", "@lhoestq I've pushed those dummy files now and added your other suggestions.", "The CI failure is unrelated to this PR, merging :)", "@lhoestq, thanks for all your help with this pull request πŸ˜€" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
First of all thanks for the fantastic library/collection of datasets πŸ€— This pull request adds a dataset of metadata from digitised (mostly 19th Century) books from the British Library The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In addition, a subset of the data includes 'genre' information which can be used for supervised text classification tasks. I hope that this offers easier access to a dataset for doing text classification on GLAM (galleries, libraries, archives and museums) data. I have tried to create three configurations that provide both an 'easy' version of the dataset if you want to use it for training a genre classification model and a more 'raw' version of the data for other potential use cases for the data. I am open to suggestions if this doesn't make sense. Similarly, for some of the arrow datatypes, I have had to fall back to strings since there are missing values for some fields/rows but I may have missed a more elegant way of dealing with it.
2021-12-02T16:10:29Z
https://github.com/huggingface/datasets/pull/3312
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3312/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3311/comments
https://api.github.com/repos/huggingface/datasets/issues/3311/timeline
null
null
null
I_kwDODunzps4_NDx1
open
[]
null
3,311
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
Add WebSRC
https://api.github.com/repos/huggingface/datasets/issues/3311/events
null
https://api.github.com/repos/huggingface/datasets/issues/3311/labels{/name}
2021-11-22T16:58:33Z
null
false
null
null
1,060,387,957
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
https://api.github.com/repos/huggingface/datasets/issues/3311
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** WebSRC - **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata. - **Paper:** https://arxiv.org/abs/2101.09465 - **Data:** https://x-lance.github.io/WebSRC/dashboard.html# - **Motivation:** Currently adding MarkupLM to HuggingFace Transformers, which achieves SOTA on this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-11-22T16:58:33Z
https://github.com/huggingface/datasets/issues/3311
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3311/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3310/comments
https://api.github.com/repos/huggingface/datasets/issues/3310/timeline
2021-11-29T22:22:37Z
null
completed
I_kwDODunzps4_L9A4
closed
[]
null
3,310
{ "avatar_url": "https://avatars.githubusercontent.com/u/31850219?v=4", "events_url": "https://api.github.com/users/Crabzmatic/events{/privacy}", "followers_url": "https://api.github.com/users/Crabzmatic/followers", "following_url": "https://api.github.com/users/Crabzmatic/following{/other_user}", "gists_url": "https://api.github.com/users/Crabzmatic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Crabzmatic", "id": 31850219, "login": "Crabzmatic", "node_id": "MDQ6VXNlcjMxODUwMjE5", "organizations_url": "https://api.github.com/users/Crabzmatic/orgs", "received_events_url": "https://api.github.com/users/Crabzmatic/received_events", "repos_url": "https://api.github.com/users/Crabzmatic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Crabzmatic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Crabzmatic/subscriptions", "type": "User", "url": "https://api.github.com/users/Crabzmatic" }
Fatal error condition occurred in aws-c-io
https://api.github.com/repos/huggingface/datasets/issues/3310/events
null
https://api.github.com/repos/huggingface/datasets/issues/3310/labels{/name}
2021-11-22T12:27:54Z
null
false
null
null
1,060,098,104
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3310
[ "Hi ! Are you having this issue only with this specific dataset, or it also happens with other ones like `squad` ?", "@lhoestq It happens also on `squad`. It successfully downloads the whole dataset and then crashes on: \r\n\r\n```\r\nFatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n```\r\n\r\nI tested it on Ubuntu and its working OK. Didn't test on non-preview version of Windows 11, `Windows-10-10.0.22504-SP0` is a preview version, not sure if this is causing it.", "I see the same error in Windows-10.0.19042 as of a few days ago:\r\n\r\n`Fatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS`\r\n\r\npython 3.8.12 h7840368_2_cpython conda-forge\r\nboto3 1.20.11 pyhd8ed1ab_0 conda-forge\r\nbotocore 1.23.11 pyhd8ed1ab_0 conda-forge\r\n\r\n...but I am not using `datasets` (although I might take a look now that I know about it!)\r\n\r\nThe error has occurred a few times over the last two days, but not consistently enough for me to get it with DEBUG. If there is any interest I can report back here, but it seems not unique to `datasets`.", "I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?", "> I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?\r\n\r\nAgreed, this issue is not likely a bug in datasets, since I get the identical error without datasets installed.", "Will close this issue. Bug in `aws-c-io` shouldn't be in `datasets` repo. Nevertheless, it can be useful to know that it happens. 
Thanks @leehaust @lhoestq ", "I have also had this issue since a few days, when running scripts using PyCharm in particular, but it does not seem to affect the script from running, only reporting this error at the end of the run.", "I also get this issue, It appears after my script has finished running. I get the following error message\r\n```\r\nFatal error condition occurred in /home/conda/feedstock_root/build_artifacts/aws-c-io_1637179816120/work/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n################################################################################\r\nStack trace:\r\n################################################################################\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_backtrace_print+0x59) [0x2aabe0479579]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_fatal_assert+0x48) [0x2aabe04696c8]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x13ad3) [0x2aabe0624ad3]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x113ca) [0x2aabe06223ca]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-crt-cpp.so(_ZN3Aws3Crt2Io15ClientBootstrapD1Ev+0x3a) [0x2aabe041cf5a]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././libaws-cpp-sdk-core.so(+0x5f570) 
[0x2aabe00eb570]\r\n/lib64/libc.so.6(+0x39ce9) [0x2aaaab835ce9]\r\n/lib64/libc.so.6(+0x39d37) [0x2aaaab835d37]\r\n/lib64/libc.so.6(__libc_start_main+0xfc) [0x2aaaab81e55c]\r\npython(+0x1c721d) [0x55555571b21d]\r\nAborted\r\n```\r\nI don't get this issue when running my code in a container, and it seems more relevant to PyArrow but thought a more complete stack trace might be helpful to someone\r\n", "I created an issue on JIRA:\r\nhttps://issues.apache.org/jira/browse/ARROW-15141", "@CallumMcMahon Do you have a small reproducer for this problem on Linux? I can reproduce this on Windows but sadly not with linux.", "Any updates on this issue? I started receiving the same error a few days ago on the amazon reviews", "Hi,\r\n\r\nI also ran into this issue, Windows only. It caused our massive binary to minidump left and right, very annoying.\r\nWhen the program is doing an exit, the destructors in the exit-handlers want to do cleanup, leading to code in event_loop.c, on line 73-ish:\r\n\r\nAWS_FATAL_ASSERT(\r\n aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) ==\r\n AWS_OP_SUCCESS);\r\n\r\nThe fatal_assert end in an abort/minidump.\r\n\r\nDigging through the code, I found that aws_thread_launch in the Windows version (aws-c-common/source/windows/thread.c) has only ONE reason to return anything other than AWS_OP_SUCCESS:\r\n\r\nreturn aws_raise_error(AWS_ERROR_THREAD_INSUFFICIENT_RESOURCE);\r\n\r\non line 263, when CreateThread fails. Our conclusion was that, apparently, Windows dislikes launching a new thread while already handling the exit-handlers. And while I appreciate the the fatal_assert is there in case of problems, the cure here is worse than the problem.\r\n\r\nI \"fixed\" this in our (Windows) environment by (bluntly) removing the AWS_FATAL_ASSERT. 
If Windows cannot start a thread, the program is in deep trouble anyway and the chances of that actually happening are acceptable (to us).\r\nThe exit is going to clean up all resources anyway.\r\n\r\nA neater fix would probably be to detect somehow that the program is actually in the process of exiting and then not bother (on windows, anyway) to start a cleanup thread. Alternatively, try to start the thread but not fatal-assert when it fails during exit. Or perhaps Windows can be convinced somehow to start the thread under these circumstances?\r\n\r\n@xhochy : The problem is Windows-only, the aws_thread_launch has two implementations (posix and windows). The problem is in the windows CreateThread which fails.\r\n", "I also encountered the same problem, but I made an error in the multi gpu training environment on Linux, and the single gpu training environment will not make an error.\r\ni use accelerate package to do multi gpu training.", "> I also get this issue, It appears after my script has finished running. 
I get the following error message\r\n> \r\n> ```\r\n> Fatal error condition occurred in /home/conda/feedstock_root/build_artifacts/aws-c-io_1637179816120/work/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\n> Exiting Application\r\n> ################################################################################\r\n> Stack trace:\r\n> ################################################################################\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_backtrace_print+0x59) [0x2aabe0479579]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_fatal_assert+0x48) [0x2aabe04696c8]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x13ad3) [0x2aabe0624ad3]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x113ca) [0x2aabe06223ca]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-crt-cpp.so(_ZN3Aws3Crt2Io15ClientBootstrapD1Ev+0x3a) [0x2aabe041cf5a]\r\n> /home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././libaws-cpp-sdk-core.so(+0x5f570) [0x2aabe00eb570]\r\n> /lib64/libc.so.6(+0x39ce9) [0x2aaaab835ce9]\r\n> /lib64/libc.so.6(+0x39d37) [0x2aaaab835d37]\r\n> /lib64/libc.so.6(__libc_start_main+0xfc) [0x2aaaab81e55c]\r\n> python(+0x1c721d) [0x55555571b21d]\r\n> Aborted\r\n> ```\r\n> \r\n> I don't get this issue when running my code in 
a container, and it seems more relevant to PyArrow but thought a more complete stack trace might be helpful to someone\r\n\r\nAny updates for your issue because I'm getting the same one ", "Potentially related AWS issue: https://github.com/aws/aws-sdk-cpp/issues/1809\r\n\r\nRan into this issue today while training a BPE tokenizer on a dataset.\r\n\r\nTrain code:\r\n\r\n```python\r\n\"\"\"Train a ByteLevelBPETokenizer based on a given dataset. The dataset must be on the HF Hub.\r\nThis script is adaptated from the Transformers example in https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling\r\n\"\"\"\r\nfrom os import PathLike\r\nfrom pathlib import Path\r\nfrom typing import Sequence, Union\r\n\r\nfrom datasets import load_dataset\r\nfrom tokenizers import ByteLevelBPETokenizer\r\n\r\n\r\ndef train_tokenizer(dataset_name: str = \"oscar\", dataset_config_name: str = \"unshuffled_deduplicated_nl\",\r\n dataset_split: str = \"train\", dataset_textcol: str = \"text\",\r\n vocab_size: int = 50265, min_frequency: int = 2,\r\n special_tokens: Sequence[str] = (\"<s>\", \"<pad>\", \"</s>\", \"<unk>\", \"<mask>\"),\r\n dout: Union[str, PathLike] = \".\"):\r\n # load dataset\r\n dataset = load_dataset(dataset_name, dataset_config_name, split=dataset_split)\r\n # Instantiate tokenizer\r\n tokenizer = ByteLevelBPETokenizer()\r\n\r\n def batch_iterator(batch_size=1024):\r\n for i in range(0, len(dataset), batch_size):\r\n yield dataset[i: i + batch_size][dataset_textcol]\r\n\r\n # Customized training\r\n tokenizer.train_from_iterator(batch_iterator(), vocab_size=vocab_size, min_frequency=min_frequency,\r\n special_tokens=special_tokens)\r\n\r\n # Save to disk\r\n pdout = Path(dout).resolve()\r\n pdout.mkdir(exist_ok=True, parents=True)\r\n tokenizer.save_model(str(pdout))\r\n\r\n\r\ndef main():\r\n import argparse\r\n cparser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter)\r\n\r\n 
cparser.add_argument(\"dataset_name\", help=\"Name of dataset to use for tokenizer training\")\r\n cparser.add_argument(\"--dataset_config_name\", default=None,\r\n help=\"Name of the config to use for tokenizer training\")\r\n cparser.add_argument(\"--dataset_split\", default=None,\r\n help=\"Name of the split to use for tokenizer training (typically 'train')\")\r\n cparser.add_argument(\"--dataset_textcol\", default=\"text\",\r\n help=\"Name of the text column to use for tokenizer training\")\r\n cparser.add_argument(\"--vocab_size\", type=int, default=50265, help=\"Vocabulary size\")\r\n cparser.add_argument(\"--min_frequency\", type=int, default=2, help=\"Minimal frequency of tokens\")\r\n cparser.add_argument(\"--special_tokens\", nargs=\"+\", default=[\"<s>\", \"<pad>\", \"</s>\", \"<unk>\", \"<mask>\"],\r\n help=\"Special tokens to add. Useful for specific training objectives. Note that if you wish\"\r\n \" to use this tokenizer with a default transformers.BartConfig, then make sure that the\"\r\n \" order of at least these special tokens are correct: BOS (0), padding (1), EOS (2)\")\r\n cparser.add_argument(\"--dout\", default=\".\", help=\"Path to directory to save tokenizer.json file\")\r\n\r\n train_tokenizer(**vars(cparser.parse_args()))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nCommand:\r\n\r\n```sh\r\n$WDIR=\"your_tokenizer\"\r\npython prepare_tokenizer.py dbrd --dataset_config_name plain_text --dataset_split unsupervised --dout $WDIR\r\n```\r\n\r\nOutput:\r\n\r\n```\r\nReusing dataset dbrd (cache/datasets/dbrd/plain_text/3.0.0/2b12e31348489dfe586c2d0f40694e5d9f9454c9468457ac9f1b51abf686eeb3)\r\n[00:00:30] Pre-processing sequences β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 0 / 0\r\n[00:00:00] Tokenize words β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 333319 / 333319\r\n[00:01:06] Count pairs β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 333319 / 333319\r\n[00:00:03] Compute merges β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 50004 / 50004\r\n\r\nFatal error condition occurred in 
/opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n################################################################################\r\nStack trace:\r\n################################################################################\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x155106589f06]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x1551065818e5]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x1551064a6e09]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x15510658aa3d]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x1551064a4948]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x15510658aa3d]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x15510645fb46]\r\nvenv/lib/python3.9/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x155105ec446a]\r\n/lib64/libc.so.6(+0x39b0c) [0x1551075b8b0c]\r\n/lib64/libc.so.6(on_exit+0) [0x1551075b8c40]\r\n/lib64/libc.so.6(__libc_start_main+0xfa) [0x1551075a249a]\r\npython(_start+0x2e) [0x4006ce]\r\nAborted (core dumped)\r\n```\r\n\r\nRunning on datasets==2.4.0 and pyarrow==9.0.0 on RHEL 8.\r\n", "There is also a discussion here https://issues.apache.org/jira/browse/ARROW-15141 where it is suggested for conda users to use an older version of aws-sdk-cpp: `aws-sdk-cpp=1.8.186`", "Downgrading pyarrow to 6.0.1 solves the issue for me.\r\n\r\n`pip install pyarrow==6.0.1`", "First of all, I’d never call a downgrade a solution, at most a (very) temporary workaround.\r\nFurthermore: This bug also happens outside pyarrow, I incorporate AWS in a standalone Windows C-program and that crashes during exit.\r\n\r\nFrom: Bo-Ru (Roy) Lu ***@***.***>\r\nSent: Thursday, 15 September 
2022 01:12\r\nTo: huggingface/datasets ***@***.***>\r\nCc: Ruurd Beerstra ***@***.***>; Comment ***@***.***>\r\nSubject: Re: [huggingface/datasets] Fatal error condition occurred in aws-c-io (Issue #3310)\r\n\r\nSent by an external sender. Please be cautious about clicking on links and opening attachments.\r\n--------------------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\nDowngrading pyarrow to 6.0.1 solves the issue.\r\n\r\nβ€”\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/3310#issuecomment-1247390774>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AKYUE3WBCSMHKJOOA2RQELLV6JLSVANCNFSM5IQ3WG7Q>.\r\nYou are receiving this because you commented.Message ID: ***@***.******@***.***>>\r\n", "> First of all, I’d never call a downgrade a solution, at most a (very) temporary workaround.\r\n\r\nVery much so! It looks like an apparent fix for the underlying problem [might](https://github.com/awslabs/aws-c-io/pull/515) have landed, but it sounds like it might still be a bit of a [lift](https://github.com/aws/aws-sdk-cpp/issues/1809#issuecomment-1289859795) to get it into aws-sdk-cpp.\r\n\r\n> Downgrading pyarrow to 6.0.1 solves the issue for me.\r\n\r\nSidenote: On conda-forge side, all recent pyarrow releases (all the way up to v9 and soon v10) have carried the respective pin and will not run into this issue.\r\n\r\n```\r\nconda install -c conda-forge pyarrow\r\n```\r\n\r\n", "For pip people, I confirmed that installing the nightly version of pyarrow also solves this by: `pip install --extra-index-url https://pypi.fury.io/arrow-nightlies/ --prefer-binary --pre pyarrow --upgrade`. 
(See https://arrow.apache.org/docs/python/install.html#installing-nightly-packages)\r\nAny version after https://github.com/apache/arrow/pull/14157 would work fine.", "> Furthermore: This bug also happens outside pyarrow, I incorporate AWS in a standalone Windows C-program and that crashes during exit.\r\n\r\nDo you have a reproducer you could share? I'd like to test if the new versions that supposedly solve this actually do, but we don't have a way to test it...", "Hi,\r\n\r\nNo – sorry. It is part of a massive eco-system which cannot easily be shared.\r\nBut I think the problem was summarized quite clearly: Windows does not allow a CreateThread while doing ExitProcess.\r\nThe cleanup that gets called as part of the exit handler code tries to start a thread, the fatal-assert on that causes the crash, and in windows we get a very big dump file.\r\nThe fix I applied simply removes that fatal assert, that solves the problem for me.\r\nI did not delve into the what the thread was trying to achieve and if that might cause issues when not executed during exit of the process. We did not notice anything of the kind.\r\nHowever, we *did* notice the many, many gigabytes of accumulated dumps of hundreds of processes 😊\r\n\r\nI’ll try and upgrade to the latest AWS version and report my findings, but that will be after I return from a month of vacationing…\r\n\r\n\r\n * Regards – Ruurd Beerstra\r\n\r\n\r\nFrom: h-vetinari ***@***.***>\r\nSent: Friday, 28 October 2022 02:09\r\nTo: huggingface/datasets ***@***.***>\r\nCc: Ruurd Beerstra ***@***.***>; Comment ***@***.***>\r\nSubject: Re: [huggingface/datasets] Fatal error condition occurred in aws-c-io (Issue #3310)\r\n\r\nSent by an external sender. 
Please be cautious about clicking on links and opening attachments.\r\n--------------------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\nFurthermore: This bug also happens outside pyarrow, I incorporate AWS in a standalone Windows C-program and that crashes during exit.\r\n\r\nDo you have a reproducer you could share? I'd like to test if the new versions that supposedly solve this actually do, but we don't have a way to test it...\r\n\r\nβ€”\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/3310#issuecomment-1294251331>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AKYUE3SHHPC5AT7KQ4GDAJDWFMKRTANCNFSM5IQ3WG7Q>.\r\nYou are receiving this because you commented.Message ID: ***@***.******@***.***>>\r\n", "> No – sorry. It is part of a massive eco-system which cannot easily be shared.\r\n\r\nOK, was worth a try...\r\n\r\n> The fix I applied simply removes that fatal assert, that solves the problem for me.\r\n\r\nThis seems to be what https://github.com/awslabs/aws-c-io/pull/515 did upstream.\r\n\r\n> I’ll try and upgrade to the latest AWS version and report my findings, but that will be after I return from a month of vacationing…\r\n\r\ncaution: aws-sdk-cpp hasn't yet upgraded its bundled(?) 
aws-c-io and hence doesn't contain the fix AFAICT", "Hi, I also encountered the same problem, but I made an error on Ubuntu without using `datasets` as @Crabzmatic he wrote.\r\n\r\nAt that time, I find my version of pyarrow is 9.0.0, which is different from as follow:\r\n> https://github.com/huggingface/datasets/issues/3310#issuecomment-1247390774\r\n> Downgrading pyarrow to 6.0.1 solves the issue for me.\r\n> \r\n> `pip install pyarrow==6.0.1`\r\n\r\nAs it happens, I found this error message when I introduced the [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer) of HuggingFace\r\n\r\nFor example, I write following code:\r\n```python\r\nfrom transformers import Trainer\r\nprint('Hugging Face')\r\n```\r\n I get the following error message:\r\n```python\r\nHugging Face\r\nFatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n################################################################################\r\nStack trace:\r\n################################################################################\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x7fa9add1df06]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x7fa9add158e5]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x7fa9adc3ae09]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x7fa9add1ea3d]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x7fa9adc38948]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) 
[0x7fa9add1ea3d]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x7fa9adbf3b46]\r\n/home/ubuntu/anaconda3/envs/pytorch38/lib/python3.8/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x7fa9ad65846a]\r\n/lib/x86_64-linux-gnu/libc.so.6(+0x468d7) [0x7faa2fcfe8d7]\r\n/lib/x86_64-linux-gnu/libc.so.6(on_exit+0) [0x7faa2fcfea90]\r\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfa) [0x7faa2fcdc0ba]\r\n/home/ubuntu/anaconda3/envs/pytorch38/bin/python(+0x1f9ad7) [0x5654571d1ad7]\r\n```\r\nBut, when I remove the `Trainer` module from transformers, **everthing is OK**.\r\n\r\nSo Why ?\r\n\r\n**Environment info**\r\n- Platform: Ubuntu 18\r\n- Python version: 3.8\r\n- PyArrow version: 9.0.0\r\n- transformers: 4.22.1\r\n- simpletransformers: 0.63.9", "> I get the following error message:\r\n\r\nNot sure what's going on, but that shouldn't happen, especially as we're pinning to a version that should avoid this.\r\n\r\nCan you please open an issue https://github.com/conda-forge/arrow-cpp-feedstock, including the requested output of `conda list` & `conda info`?", "pyarrow 10.0.1 was just released in conda-forge, which is the first release where we're building against aws-sdk-cpp 1.9.* again after more than a year. Since we cannot test the failure reported here on our infra, I'd be very grateful if someone could verify that the problem does or doesn't reappear. πŸ™ƒ \r\n\r\n```\r\nconda install -c conda-forge pyarrow=10\r\n```", "> pyarrow 10.0.1 was just released in conda-forge, which is the first release where we're building against aws-sdk-cpp 1.9.* again after more than a year. Since we cannot test the failure reported here on our infra, I'd be very grateful if someone could verify that the problem does or doesn't reappear. πŸ™ƒ\r\n> \r\n> ```\r\n> conda install -c conda-forge pyarrow=10\r\n> ```\r\n\r\nThe problem is gone after I install the new version. 
Thanks!\r\npip install pyarrow==10", "@liuchaoqun, with `pip install pyarrow` you don't get aws-bindings, they're too complicated to package into wheels as far as I know. And even if they're packaged, at the time of the release of pyarrow 10 it would have still been pinned to aws 1.8 for the same reasons as in this issue." ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug Fatal error when using the library ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wikiann', 'en') ``` ## Expected results No fatal errors ## Actual results ``` Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS Exiting Application ``` ## Environment info - `datasets` version: 1.15.2.dev0 - Platform: Windows-10-10.0.22504-SP0 - Python version: 3.8.12 - PyArrow version: 6.0.0
2023-02-08T10:31:05Z
https://github.com/huggingface/datasets/issues/3310
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3310/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3309/comments
https://api.github.com/repos/huggingface/datasets/issues/3309/timeline
2021-11-23T17:00:58Z
null
null
PR_kwDODunzps4u0Xgm
closed
[]
false
3,309
{ "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/borisdayma", "id": 715491, "login": "borisdayma", "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "repos_url": "https://api.github.com/users/borisdayma/repos", "site_admin": false, "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "type": "User", "url": "https://api.github.com/users/borisdayma" }
fix: files counted twice in inferred structure
https://api.github.com/repos/huggingface/datasets/issues/3309/events
null
https://api.github.com/repos/huggingface/datasets/issues/3309/labels{/name}
2021-11-21T21:50:38Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3309.diff", "html_url": "https://github.com/huggingface/datasets/pull/3309", "merged_at": "2021-11-23T17:00:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3309.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3309" }
1,059,496,154
[]
https://api.github.com/repos/huggingface/datasets/issues/3309
[ "I see it creates some errors in the tests.\r\n\r\nAnother solution if needed is to add something like `data_files = list(set(data_files))` after [this line](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/data_files.py#L511)", "Hi ! Thanks for the correction :)\r\n\r\nYour change seems right, let me look at the errors and try to fix this", "Not sure if it's due to this change but IΒ tested `load_dataset('dalle-mini/encoded-vqgan_imagenet_f16_16384', streaming=True)` and the `validation` set is empty.", "So indeed there was an issue with the patterns `*` and `**/*` that would return some files twice. This issue came from the fact that we were not using the right `glob`.\r\n\r\nIndeed we were using `Path.rglob` for local files and `Path.match` for remote files. Since these two methods don't have the same behavior for such patterns, I decided to change that.\r\n\r\nIn particular, we now use `glob.glob` (same as `fsspec` glob) as a reference for data files resolution from patterns. This is the same as dask for example.\r\n\r\n/!\\ Here are some behaviors specific to `glob.glob` that are different from Path.glob, Path.match or fnmatch:\r\n- '*' matches only first level files\r\n- '**/*' matches only at least second level files\r\n\r\nThis way we have a consistent behavior with respect to other python data libraries and there's no overlap anymore between the two patterns.\r\n\r\nSome implementations details:\r\n\r\nTo ensure that we have the same behavior for local files and for files in a remote dataset repository, I decided to use `fsspec` glob for both. This was made possible by implementing the `HfFileSystem` class as a `fsspec` filesystem.\r\n\r\nI pushed those changes directly to your PR - I hope you don't mind. 
I'm still fixing the remaining tests.\r\nPlease let me know if that solves your problem, and then we can merge !", "There's still an issue with fsspec's glob - I'll take a look this afternoon", "I just found out that actually glob.glob and fsspec glob are different haha\r\nglob.glob needs `**/*` and recursive=True to look into deep subdirectories, while fsspec only requires `**`\r\n\r\nI think we can go with fsspec glob for consistency with dask and since it's our main tool for filesystems management", "To recap:\r\n```\r\nWe use fsspec glob as a reference for data files resolution from patterns.\r\nThis is the same as dask for example.\r\n\r\n/!\\ Here are some behaviors specific to fsspec glob that are different from glob.glob, Path.glob, Path.match or fnmatch:\r\n- '*' matches only first level items\r\n- '**' matches all items\r\n- '**/*' matches all at least second level items\r\n\r\nMore generally:\r\n- `*`` matches any character except a forward-slash (to match just the file or directory name)\r\n- `**`` matches any character including a forward-slash /\r\n```", "lol Windows… Maybe `Pathlib` for the tests?\r\n\r\nI tested streaming a repo and it worked perfectly now!" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Files were counted twice in a structure like: ``` my_dataset_local_path/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ β”œβ”€β”€ shard_1.csv β”‚ β”œβ”€β”€ shard_2.csv β”‚ └── shard_3.csv └── valid/ β”œβ”€β”€ shard_0.csv └── shard_1.csv ``` The reason is that they were matching both `*train*/*` and `*train*/**/*`. This PR fixes it. @lhoestq
2021-11-23T17:00:58Z
https://github.com/huggingface/datasets/pull/3309
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3309/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3308/comments
https://api.github.com/repos/huggingface/datasets/issues/3308/timeline
null
null
null
I_kwDODunzps4_IvWZ
open
[]
null
3,308
{ "avatar_url": "https://avatars.githubusercontent.com/u/8587189?v=4", "events_url": "https://api.github.com/users/amitness/events{/privacy}", "followers_url": "https://api.github.com/users/amitness/followers", "following_url": "https://api.github.com/users/amitness/following{/other_user}", "gists_url": "https://api.github.com/users/amitness/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amitness", "id": 8587189, "login": "amitness", "node_id": "MDQ6VXNlcjg1ODcxODk=", "organizations_url": "https://api.github.com/users/amitness/orgs", "received_events_url": "https://api.github.com/users/amitness/received_events", "repos_url": "https://api.github.com/users/amitness/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amitness/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amitness/subscriptions", "type": "User", "url": "https://api.github.com/users/amitness" }
"dataset_infos.json" missing for chr_en and mc4
https://api.github.com/repos/huggingface/datasets/issues/3308/events
null
https://api.github.com/repos/huggingface/datasets/issues/3308/labels{/name}
2021-11-21T00:07:22Z
null
false
null
null
1,059,255,705
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3308
[ "Hi ! Thanks for reporting :) \r\nWe can easily add the metadata for `chr_en` IMO, but for mC4 it will take more time, since it requires to count the number of examples in each language", "No problem. I am trying to do some analysis on the metadata of all available datasets. Is reading `metadata_infos.json` for each dataset the correct way to go? \r\n\r\nI noticed that the same information is also available as special variables inside .py file of each dataset. So, I was wondering if `metadata_infos.json` has been deprecated?\r\n\r\n![image](https://user-images.githubusercontent.com/8587189/142914413-a95a1abf-6f3e-4fbe-96e5-16d3ca39c831.png)\r\n", "The `dataset_infos.json` files have more information and are made to be used to analyze the datasets without having to run/parse the python scripts. Moreover some datasets on the Hugging face don't even have a python script, and for those ones we'll make tools to generate the JSON file automatically :)" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug In the repository, every dataset has its metadata in a file called`dataset_infos.json`. But, this file is missing for two datasets: `chr_en` and `mc4`. ## Steps to reproduce the bug Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/huggingface/datasets/tree/master/datasets/mc4)
2022-01-19T13:55:32Z
https://github.com/huggingface/datasets/issues/3308
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3308/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3307/comments
https://api.github.com/repos/huggingface/datasets/issues/3307/timeline
2021-11-25T14:51:48Z
null
null
PR_kwDODunzps4uzlWa
closed
[]
false
3,307
{ "avatar_url": "https://avatars.githubusercontent.com/u/6201626?v=4", "events_url": "https://api.github.com/users/afaji/events{/privacy}", "followers_url": "https://api.github.com/users/afaji/followers", "following_url": "https://api.github.com/users/afaji/following{/other_user}", "gists_url": "https://api.github.com/users/afaji/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afaji", "id": 6201626, "login": "afaji", "node_id": "MDQ6VXNlcjYyMDE2MjY=", "organizations_url": "https://api.github.com/users/afaji/orgs", "received_events_url": "https://api.github.com/users/afaji/received_events", "repos_url": "https://api.github.com/users/afaji/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afaji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afaji/subscriptions", "type": "User", "url": "https://api.github.com/users/afaji" }
Add IndoNLI dataset
https://api.github.com/repos/huggingface/datasets/issues/3307/events
null
https://api.github.com/repos/huggingface/datasets/issues/3307/labels{/name}
2021-11-20T20:46:03Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3307.diff", "html_url": "https://github.com/huggingface/datasets/pull/3307", "merged_at": "2021-11-25T14:51:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/3307.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3307" }
1,059,226,297
[]
https://api.github.com/repos/huggingface/datasets/issues/3307
[ "@lhoestq thanks for the review! I've modified the labels to follow other NLI datasets.\r\nPlease review my change and let me know if I miss anything." ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR adds IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/
2021-11-25T14:51:48Z
https://github.com/huggingface/datasets/pull/3307
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3307/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3306/comments
https://api.github.com/repos/huggingface/datasets/issues/3306/timeline
2021-12-08T13:02:15Z
null
completed
I_kwDODunzps4_IeTE
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
3,306
{ "avatar_url": "https://avatars.githubusercontent.com/u/38486514?v=4", "events_url": "https://api.github.com/users/function2-llx/events{/privacy}", "followers_url": "https://api.github.com/users/function2-llx/followers", "following_url": "https://api.github.com/users/function2-llx/following{/other_user}", "gists_url": "https://api.github.com/users/function2-llx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/function2-llx", "id": 38486514, "login": "function2-llx", "node_id": "MDQ6VXNlcjM4NDg2NTE0", "organizations_url": "https://api.github.com/users/function2-llx/orgs", "received_events_url": "https://api.github.com/users/function2-llx/received_events", "repos_url": "https://api.github.com/users/function2-llx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/function2-llx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/function2-llx/subscriptions", "type": "User", "url": "https://api.github.com/users/function2-llx" }
nested sequence feature won't encode example if the first item of the outside sequence is an empty list
https://api.github.com/repos/huggingface/datasets/issues/3306/events
null
https://api.github.com/repos/huggingface/datasets/issues/3306/labels{/name}
2021-11-20T16:57:54Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
null
1,059,185,860
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3306
[ "knock knock", "Hi, thanks for reporting! I've linked a PR that should fix the issue.", "I've checked the PR and it looks great, thanks a lot!" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug As the title, nested sequence feature won't encode example if the first item of the outside sequence is an empty list. ## Steps to reproduce the bug ```python from datasets import Features, Sequence, ClassLabel features = Features({ 'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))), }) print(features.encode_batch({ 'x': [ [['a'], ['b']], [[], ['b']], ] })) ``` ## Expected results print `{'x': [[[0], [1]], [[], ['1']]]}` ## Actual results print `{'x': [[[0], [1]], [[], ['b']]]}` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux-5.13.0-21-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.0 ## Additional information I think the issue stems from [here](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/features/features.py#L847-L848).
2021-12-08T13:02:15Z
https://github.com/huggingface/datasets/issues/3306
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3306/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3305/comments
https://api.github.com/repos/huggingface/datasets/issues/3305/timeline
2021-11-22T17:08:13Z
null
null
PR_kwDODunzps4uzZWv
closed
[]
false
3,305
{ "avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4", "events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}", "followers_url": "https://api.github.com/users/Ishan-Kumar2/followers", "following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}", "gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ishan-Kumar2", "id": 46553104, "login": "Ishan-Kumar2", "node_id": "MDQ6VXNlcjQ2NTUzMTA0", "organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs", "received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events", "repos_url": "https://api.github.com/users/Ishan-Kumar2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions", "type": "User", "url": "https://api.github.com/users/Ishan-Kumar2" }
asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``
https://api.github.com/repos/huggingface/datasets/issues/3305/events
null
https://api.github.com/repos/huggingface/datasets/issues/3305/labels{/name}
2021-11-20T14:51:23Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3305.diff", "html_url": "https://github.com/huggingface/datasets/pull/3305", "merged_at": "2021-11-22T17:08:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3305.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3305" }
1,059,161,000
[]
https://api.github.com/repos/huggingface/datasets/issues/3305
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Addresses #3171 Fixes exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py`` and modified tests
2021-11-22T18:24:32Z
https://github.com/huggingface/datasets/pull/3305
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3305/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3304/comments
https://api.github.com/repos/huggingface/datasets/issues/3304/timeline
2021-11-21T07:07:25Z
null
completed
I_kwDODunzps4_IQx-
closed
[]
null
3,304
{ "avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4", "events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}", "followers_url": "https://api.github.com/users/RajkumarGalaxy/followers", "following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}", "gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RajkumarGalaxy", "id": 59993678, "login": "RajkumarGalaxy", "node_id": "MDQ6VXNlcjU5OTkzNjc4", "organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs", "received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events", "repos_url": "https://api.github.com/users/RajkumarGalaxy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions", "type": "User", "url": "https://api.github.com/users/RajkumarGalaxy" }
Dataset object has no attribute `to_tf_dataset`
https://api.github.com/repos/huggingface/datasets/issues/3304/events
null
https://api.github.com/repos/huggingface/datasets/issues/3304/labels{/name}
2021-11-20T12:03:59Z
null
false
null
null
1,059,130,494
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3304
[ "The issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n```\r\n# upgrade transformers and datasets to latest versions\r\n!pip install --upgrade transformers\r\n!pip install --upgrade datasets\r\n```\r\n\r\nRegards!" ]
https://api.github.com/repos/huggingface/datasets
NONE
I am following HuggingFace Course. I am at Fine-tuning a model. Link: https://huggingface.co/course/chapter3/2?fw=tf I use tokenize_function and `map` as mentioned in the course to process data. `# define a tokenize function` `def Tokenize_function(example):` ` return tokenizer(example['sentence'], truncation=True)` `# tokenize entire data` `tokenized_data = raw_data.map(Tokenize_function, batched=True)` I get Dataset object at this point. When I try converting this to a TF dataset object as mentioned in the course, it throws the following error. `# convert to TF dataset` `train_data = tokenized_data["train"].to_tf_dataset( ` ` columns = ['attention_mask', 'input_ids', 'token_type_ids'], ` ` label_cols = ['label'], ` ` shuffle = True, ` ` collate_fn = data_collator, ` ` batch_size = 8 ` `)` Output: `---------------------------------------------------------------------------` `AttributeError Traceback (most recent call last)` `/tmp/ipykernel_42/103099799.py in <module>` ` 1 # convert to TF dataset` `----> 2 train_data = tokenized_data["train"].to_tf_dataset( \` ` 3 columns = ['attention_mask', 'input_ids', 'token_type_ids'], \` ` 4 label_cols = ['label'], \` ` 5 shuffle = True, \` `AttributeError: 'Dataset' object has no attribute 'to_tf_dataset'` When I look for `dir(tokenized_data["train"])`, there is no method or attribute in the name of `to_tf_dataset`. Why do I get this error? And how to clear this? Please help me.
2021-11-21T07:07:25Z
https://github.com/huggingface/datasets/issues/3304
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3304/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3303/comments
https://api.github.com/repos/huggingface/datasets/issues/3303/timeline
2021-11-21T07:05:37Z
null
completed
I_kwDODunzps4_IQmE
closed
[]
null
3,303
{ "avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4", "events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}", "followers_url": "https://api.github.com/users/RajkumarGalaxy/followers", "following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}", "gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RajkumarGalaxy", "id": 59993678, "login": "RajkumarGalaxy", "node_id": "MDQ6VXNlcjU5OTkzNjc4", "organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs", "received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events", "repos_url": "https://api.github.com/users/RajkumarGalaxy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions", "type": "User", "url": "https://api.github.com/users/RajkumarGalaxy" }
DataCollatorWithPadding: TypeError
https://api.github.com/repos/huggingface/datasets/issues/3303/events
null
https://api.github.com/repos/huggingface/datasets/issues/3303/labels{/name}
2021-11-20T11:59:55Z
null
false
null
null
1,059,129,732
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3303
[ "\r\n> \r\n> Input:\r\n> \r\n> ```\r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"tf\")\r\n> ```\r\n> \r\n> Output:\r\n> \r\n> ```\r\n> TypeError Traceback (most recent call last)\r\n> /tmp/ipykernel_42/1563280798.py in <module>\r\n> 1 checkpoint = 'bert-base-uncased'\r\n> 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"pt\")\r\n> TypeError: __init__() got an unexpected keyword argument 'return_tensors'\r\n> ```\r\n> \r\n\r\nThe issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n`# upgrade transformers and datasets to latest versions`\r\n`!pip install --upgrade transformers`\r\n`!pip install --upgrade datasets`\r\n\r\nCheers!" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as following I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a CPU-only-device or a GPU-device. Input: ```checkpoint = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(checkpoint) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") ``` Output: ```--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_42/1563280798.py in <module> 1 checkpoint = 'bert-base-uncased' 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint) ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt") TypeError: __init__() got an unexpected keyword argument 'return_tensors' ``` When I call `help` method, it too confirms that there is no argument `return_tensors`. Input: ``` help(DataCollatorWithPadding.__init__) ``` Output: ``` Help on function __init__ in module transformers.data.data_collator: __init__(self, tokenizer: transformers.tokenization_utils_base.PreTrainedTokenizerBase, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = True, max_length: Union[int, NoneType] = None, pad_to_multiple_of: Union[int, NoneType] = None) -> None ``` But, the source file *[Data Collator - docs](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorwithpadding)* says that there is such an argument. By default, it returns Pytorch tensors while I need TF tensors. Where do I miss? Please help me.
2021-11-21T07:05:37Z
https://github.com/huggingface/datasets/issues/3303
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3303/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3302/comments
https://api.github.com/repos/huggingface/datasets/issues/3302/timeline
2021-11-22T17:04:19Z
null
null
PR_kwDODunzps4uynjc
closed
[]
false
3,302
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
fix old_val typo in f-string
https://api.github.com/repos/huggingface/datasets/issues/3302/events
null
https://api.github.com/repos/huggingface/datasets/issues/3302/labels{/name}
2021-11-19T20:51:08Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3302.diff", "html_url": "https://github.com/huggingface/datasets/pull/3302", "merged_at": "2021-11-22T17:04:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3302.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3302" }
1,058,907,168
[]
https://api.github.com/repos/huggingface/datasets/issues/3302
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR is to correct a typo in #3277 that @Carlosbogo revealed in a comment. Related closed issue: #3257. Sorry about that 😅.
2021-11-25T22:14:43Z
https://github.com/huggingface/datasets/pull/3302
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3302/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3301/comments
https://api.github.com/repos/huggingface/datasets/issues/3301/timeline
2021-11-19T16:49:29Z
null
null
PR_kwDODunzps4uyA9o
closed
[]
false
3,301
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Add wikipedia tags
https://api.github.com/repos/huggingface/datasets/issues/3301/events
null
https://api.github.com/repos/huggingface/datasets/issues/3301/labels{/name}
2021-11-19T16:39:25Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3301.diff", "html_url": "https://github.com/huggingface/datasets/pull/3301", "merged_at": "2021-11-19T16:49:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3301.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3301" }
1,058,718,957
[]
https://api.github.com/repos/huggingface/datasets/issues/3301
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Add the missing tags to the wikipedia dataset card. I also added the missing language codes to our language codes list. This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292
2021-11-19T16:49:30Z
https://github.com/huggingface/datasets/pull/3301
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3301/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3300/comments
https://api.github.com/repos/huggingface/datasets/issues/3300/timeline
2021-12-22T10:57:56Z
null
completed
I_kwDODunzps4_GaHr
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
3,300
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci" }
❓ Dataset loading script from Hugging Face Hub
https://api.github.com/repos/huggingface/datasets/issues/3300/events
null
https://api.github.com/repos/huggingface/datasets/issues/3300/labels{/name}
2021-11-19T15:20:52Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
null
1,058,644,459
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
https://api.github.com/repos/huggingface/datasets/issues/3300
[ "Hi ! In the next version of `datasets`, your train and test splits will be correctly separated (changes from #3027) if you create a dataset repository with only your CSV files.\r\n\r\nAlso it seems that you overwrite the `data_files` and `data_dir` arguments in your code, when you instantiate the AGNewsConfig objects. Those parameters are not necessary since you already know which files you want to load.\r\n\r\nYou can find an example on how to specify which file the dataset has to download in this [example script](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107):\r\n```python\r\n_URLS = {\r\n \"train\": \"train-v1.1.json\", # you can use a URL or a relative path from the python script to your file in the repository\r\n \"dev\": \"dev-v1.1.json\",\r\n}\r\n```\r\n```python\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n\r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": downloaded_files[\"train\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": downloaded_files[\"dev\"]}),\r\n ]\r\n```", "Also I think the viewer will be updated when you fix the dataset script, let me know if it doesn't", "Hi @lhoestq,\r\n\r\nThanks a lot for the super quick answer!\r\n\r\nYour suggestion solves my issue. I am now able to load the dataset properly 🚀 \r\nHowever, the dataviewer is not working yet.\r\n\r\nReally, thanks a lot for your help and consideration!\r\n\r\nBest,\r\nPietro", "Great ! We'll take a look at the viewer to fix it", "@lhoestq I think I am having a related problem.\r\nMy call to load_dataset() looks like this:\r\n\r\n```\r\n datasets = load_dataset(\r\n os.path.abspath(layoutlmft.data.datasets.xfun.__file__),\r\n f\"xfun.{data_args.lang}\",\r\n additional_langs=data_args.additional_langs,\r\n keep_in_memory=True,\r\n )\r\n\r\n```\r\n\r\nMy _split_generation code is:\r\n\r\n```\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n downloaded_file = dl_manager.download_and_extract(\"https://guillaumejaume.github.io/FUNSD/dataset.zip\")\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/training_data/\"}\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TEST, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/testing_data/\"}\r\n ),\r\n ]\r\n\r\n```\r\nHowever I get the error \"TypeError: _generate_examples() got an unexpected keyword argument 'filepath'\"\r\nThe path looks right and I see the data in the path so I think the only problem I have is that it doesn't like the key \"filepath\". However, the documentation (example [here](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107)) seems to show that this is the correct parameter. \r\n\r\nHere is the full stack trace:\r\n\r\n```\r\nDownloading and preparing dataset xfun/xfun.en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/caseygre/.cache/huggingface/datasets/xfun/xfun.en/0.0.0/96b8cb7c57f6f822f0ab37ae3be7b82d84ac57062e774c9361ccf0a4b9ef61cc...\r\nTraceback (most recent call last):\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 975, in _prepare_split\r\n generator = self._generate_examples(**split_generator.gen_kwargs)\r\nTypeError: _generate_examples() got an unexpected keyword argument 'filepath'\r\npython-BaseException\r\n```", "Hi ! The `gen_kwargs` dictionary is passed to `_generate_examples`, so in your case it must be defined this way:\r\n```python\r\ndef _generate_examples(self, filepath):\r\n ...\r\n```\r\n\r\nAnd here is an additional tip: you can use `os.path.join(downloaded_file, \"dataset/testing_data\")` instead of `f\"downloaded_file}/dataset/testing_data/\"` to get compatibility with Windows and streaming.\r\n\r\nIndeed Windows uses a backslash separator, not a slash, and streaming uses chained URLs (like `zip://dataset/testing_data::https://https://guillaumejaume.github.io/FUNSD/dataset.zip` for example)", "Thanks for you quick reply @lhoestq and so sorry for my very delayed response.\r\nWe have gotten around the error another way but I will try to duplicate this when I can. We may have had \"filepaths\" instead of \"filepath\" in our def of _generate_examples() and not noticed the difference. If I find a more useful answer for others I will add to this ticket so they know what the issue was.\r\nNote: we do have our own _generate_examples() defined with the same def as Quentin has. (But one version does have \"filepaths\".)\r\n", "Fixed in the viewer: https://huggingface.co/datasets/pietrolesci/ag_news" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi there, I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do so I have encountered certain problems as detailed below. Issues I have encountered: - Without a loading script, the train and test files are loaded together into a single `dataset.Dataset` -> so I wrote a loading script. Also, I need a loading script otherwise I cannot specify multiple configurations - Once my loading script is working locally, I do not manage to make it work on the hub. In particular, I would like to be able to load the dataset like this ```python load_dataset("pietrolesci/ag_news", name="my_configuration") ``` Apparently, `load_dataset` is able to pick up the loading script from the hub and run it. However, it errors because it is unable to find the files. The structure of my hub repo is the following ``` ag_news.py train.csv test.csv ``` and in the loading script I specify `data_dir=Path(__file__).parent` and `data_files=DataFilesDict({"train": "train.csv", "test": "test.csv"})`. In the documentation I could not find info regarding loading a dataset from the hub using a loading script present on the hub. Any suggestion is very much appreciated. Best, Pietro Link to the hub repo: https://huggingface.co/datasets/pietrolesci/ag_news BONUS: how can I make the data viewer work in this specific case? :)
2021-12-22T10:57:56Z
https://github.com/huggingface/datasets/issues/3300
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3300/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3299/comments
https://api.github.com/repos/huggingface/datasets/issues/3299/timeline
null
null
null
I_kwDODunzps4_F7TF
open
[]
null
3,299
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Add option to find unique elements in nested sequences when calling `Dataset.unique`
https://api.github.com/repos/huggingface/datasets/issues/3299/events
null
https://api.github.com/repos/huggingface/datasets/issues/3299/labels{/name}
2021-11-19T13:16:06Z
null
false
null
null
1,058,518,213
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/3299
[ "Hi @mariosasko!\r\n\r\nHas this been patched into any of the releases?", "Hi! Not yet, would you be interested in contributing a PR? I can give you some pointers if needed. ", "@mariosasko did this ever get implemented? Willing to help if you are still up for it.", "@dcruiz01 No, but here is an example of how to do this with the existing API:\r\n\r\n\r\n```python\r\nds = Dataset.from_dict({\"tokens\": [[\"a\", \"b\"], [\"c\", \"a\"], [\"c\", \"e\"]]})\r\n\r\ndef flatten_tokens(pa_table):\r\n return pa.table([pc.list_flatten(pa_table[\"tokens\"])], [\"flat_tokens\"])\r\n\r\nds = ds.with_format(\"arrow\")\r\nds = ds.map(flatten_tokens, batched=True)\r\nds = ds.with_format(None)\r\n\r\nunique_tokens = ds.unique(\"flat_tokens\")\r\n```\r\n\r\nWhen I think about it, `.unique` on `Sequence(Value(...))` should return unique sequences/arrays, not unique elements of these sequences..." ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
It would be nice to have an option to flatten nested sequences to find unique elements stored in them when calling `Dataset.unique`. ~~Currently, `Dataset.unique` only supports finding unique sequences and not unique elements in that situation.~~
2023-05-19T14:45:40Z
https://github.com/huggingface/datasets/issues/3299
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3299/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3298/comments
https://api.github.com/repos/huggingface/datasets/issues/3298/timeline
2021-12-21T16:24:05Z
null
completed
I_kwDODunzps4_FjXp
closed
[]
null
3,298
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci" }
Agnews dataset viewer is not working
https://api.github.com/repos/huggingface/datasets/issues/3298/events
null
https://api.github.com/repos/huggingface/datasets/issues/3298/labels{/name}
2021-11-19T11:18:59Z
null
false
null
null
1,058,420,201
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
https://api.github.com/repos/huggingface/datasets/issues/3298
[ "Hi ! Thanks for reporting\r\nWe've already fixed the code that generates the preview for this dataset, we'll release the fix soon :)", "Hi @lhoestq, thanks for your feedback!", "Fixed in the viewer.\r\n\r\nhttps://huggingface.co/datasets/ag_news" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Dataset viewer issue for 'ag_news' **Link:** https://huggingface.co/datasets/ag_news Hi there, the `ag_news` dataset viewer is not working. Am I the one who added this dataset? No
2021-12-21T16:24:05Z
https://github.com/huggingface/datasets/issues/3298
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3298/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3297/comments
https://api.github.com/repos/huggingface/datasets/issues/3297/timeline
null
null
null
I_kwDODunzps4_E9Mz
open
[]
null
3,297
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal" }
.map() cache is wrongfully reused - only happens when the mapping function is imported
https://api.github.com/repos/huggingface/datasets/issues/3297/events
null
https://api.github.com/repos/huggingface/datasets/issues/3297/labels{/name}
2021-11-19T08:18:36Z
null
false
null
null
1,058,263,859
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3297
[ "Hi ! Thanks for reporting. Indeed this is a current limitation of the usage we have of `dill` in `datasets`. I'd suggest you use your workaround for now until we find a way to fix this. Maybe functions that are not coming from a module not installed with pip should be dumped completely, rather than only taking their locations into account", "I agree. Sounds like a solution for it would be pretty dirty, even [cloudpickle](https://stackoverflow.com/a/16891169) doesn't help in this case.\r\nIn the meanwhile I think that adding a warning and the workaround somewhere in the documentation can be helpful.", "For anyone interested, I see that with `dill==0.3.6` the workaround I suggested doesn't work anymore.\r\nI opened an issue about it: https://github.com/uqfoundation/dill/issues/572.\r\n\r\n " ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Describe the bug When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified. The reason for this is that `dill`, which is used for creating the fingerprint, [pickles imported functions by reference](https://stackoverflow.com/a/67851411). I guess it is not a widespread case, but it can still lead to unwanted results unnoticeably. ## Steps to reproduce the bug Create files `a.py` and `b.py`: ```python # a.py from datasets import load_dataset def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples if __name__ == "__main__": main() ``` ```python # b.py from datasets import load_dataset from a import mapping_func def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) if __name__ == "__main__": main() ``` Run `python b.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python b.py` again. You'll see that `.map` loads from the cache the result of the previous mapping function. ## Expected results Run `python a.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python a.py` again. You'll see that the dataset is being processed and that there's no reuse of the previous mapping function result. ## Workaround Put the mapping function inside a dummy class as a static method: ```python # a.py class MappingFuncClass: @staticmethod def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples ``` ```python # b.py from datasets import load_dataset from a import MappingFuncClass def main(): squad = load_dataset("squad") squad.map(MappingFuncClass.mapping_func, batched=True) if __name__ == "__main__": main() ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
2023-01-30T12:40:17Z
https://github.com/huggingface/datasets/issues/3297
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3297/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3296/comments
https://api.github.com/repos/huggingface/datasets/issues/3296/timeline
2021-12-06T10:45:04Z
null
null
PR_kwDODunzps4uvlQz
closed
[]
false
3,296
{ "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francisco-perez-sorrosal", "id": 918006, "login": "francisco-perez-sorrosal", "node_id": "MDQ6VXNlcjkxODAwNg==", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "type": "User", "url": "https://api.github.com/users/francisco-perez-sorrosal" }
Fix temporary dataset_path creation for URIs related to remote fs
https://api.github.com/repos/huggingface/datasets/issues/3296/events
null
https://api.github.com/repos/huggingface/datasets/issues/3296/labels{/name}
2021-11-18T23:32:45Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3296.diff", "html_url": "https://github.com/huggingface/datasets/pull/3296", "merged_at": "2021-12-06T10:45:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/3296.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3296" }
1,057,970,638
[]
https://api.github.com/repos/huggingface/datasets/issues/3296
[ "Hi ! Thanks for the fix :) \r\n\r\nI think this should be `extract_path_from_uri` 's responsibility to strip the extra `/` from a badly formatted path like `hdfs:///absolute/path` (or raise an error). Do you think you can simply do the changes in `extract_path_from_uri` ? This way this fix will be available for all the other parts of the lib that need to extract the inner path from an URI of a remote filesystem\r\n\r\nThen we can also keep your test cases but simply apply them to `extract_path_from_uri` instead", "Hi @lhoestq! No problem! Thanks for your interest! :)\r\n\r\nI think stripping the 3rd `/` in `hdfs:///absolute/path` inside `extract_path_from_uri` is not the solution. When I provide `hdfs:///absolute/path` to `extract_path_from_uri` we want `/absolute/path` to be returned, as it does now (at least in the case of URIs with `hdfs` schemas, for `s3` is different as it should start with a bucket name).\r\n\r\nThe problem comes in line 1041 in the original code below:\r\n\r\nhttps://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042\r\n\r\nLets assume the following parameters for line 1041 after `extract_path_from_uri` has removed the `hdfs` schema part and the `://` from `hdfs:///absolute/path`, and `get_temporary_cache_files_directory()` returns `/tmp/a1b2b3c4`, as it is shown below: \r\n\r\n```python\r\nsrc_dataset_path = '/absolute/path'\r\ntmp_dir = '/tmp/a1b2b3c4'\r\ndataset_path = Path(tmp_dir, src_dataset_path)\r\n```\r\n\r\nAfter passing those paths to the `Path` object, `dataset_path` contains only `/absolute/path`; that is, it has lost the temporary directory path. This is because, when two (or more) absolute paths are passed to the `Path` function, only the last one is taken. However, if the contents of those variables are:\r\n\r\n```python\r\nsrc_dataset_path = 'relative/path'\r\ntmp_dir = '/tmp/a1b2b3c4'\r\ndataset_path = Path(tmp_dir, src_dataset_path)\r\n```\r\n\r\nthen `dataset_path` contains `/tmp/a1b2b3c4/relative/path` as expected.\r\n\r\nAbsolute paths are allowed in hdfs URIs, so that's why I added the extra function `build_local_temp_path` in the PR; so in case the second argument is an absolute path, it still will create the correct absolute path by concatenating the temp dir and the path passed by converting it to a relative path (and it also works for windows paths too.) It also allows to add the tests, checking that the main combinations are ok.\r\n\r\nI've checked all the places where the result of `extract_path_from_uri` is used, and as far as I've seen this is the only place where it is concatenated with another possible absolute path, so no need to add `build_local_temp_path` anywhere else. \r\n" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This aims to close #3295
2021-12-06T10:45:04Z
https://github.com/huggingface/datasets/pull/3296
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3296/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3295/comments
https://api.github.com/repos/huggingface/datasets/issues/3295/timeline
2021-12-06T10:45:04Z
null
completed
I_kwDODunzps4_DxxM
closed
[]
null
3,295
{ "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francisco-perez-sorrosal", "id": 918006, "login": "francisco-perez-sorrosal", "node_id": "MDQ6VXNlcjkxODAwNg==", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "type": "User", "url": "https://api.github.com/users/francisco-perez-sorrosal" }
Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk
https://api.github.com/repos/huggingface/datasets/issues/3295/events
null
https://api.github.com/repos/huggingface/datasets/issues/3295/labels{/name}
2021-11-18T23:24:02Z
null
false
null
null
1,057,954,892
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3295
[ "Hi ! Good catch and thanks for opening a PR :)\r\n\r\nI just responded in your PR" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Describe the bug When trying to build a temporary dataset path from a remote URI in this block of code: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042 the result is not the expected one when passing an absolute path in a URI like `hdfs:///absolute/path`. ## Steps to reproduce the bug ```python dataset_path = "hdfs:///absolute/path" src_dataset_path = extract_path_from_uri(dataset_path) tmp_dir = get_temporary_cache_files_directory() dataset_path = Path(tmp_dir, src_dataset_path) print(dataset_path) ``` ## Expected results With the code above, we would expect a value in `dataset_path` similar to: `/tmp/tmpnwxyvao5/absolute/path` ## Actual results However, we get a `dataset_path` value like: `/absolute/path` This is because this line here: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1041 returns the last absolute path when two absolute paths (the one in `tmp_dir` and the one extracted from the URI in `src_dataset_path`) are passed as arguments. ## Environment info - `datasets` version: 1.13.3 - Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
2021-12-06T10:45:04Z
https://github.com/huggingface/datasets/issues/3295
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3295/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3294/comments
https://api.github.com/repos/huggingface/datasets/issues/3294/timeline
null
null
null
I_kwDODunzps4_CBmx
open
[]
null
3,294
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
Add Natural Adversarial Objects dataset
https://api.github.com/repos/huggingface/datasets/issues/3294/events
null
https://api.github.com/repos/huggingface/datasets/issues/3294/labels{/name}
2021-11-18T15:34:44Z
null
false
null
null
1,057,495,473
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
https://api.github.com/repos/huggingface/datasets/issues/3294
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
## Adding a Dataset - **Name:** Natural Adversarial Objects (NAO) - **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. - **Paper:** https://arxiv.org/abs/2111.04204v1 - **Data:** https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8 - **Motivation:** interesting object detection dataset useful for studying misclassifications cc @NielsRogge Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-12-08T12:00:02Z
https://github.com/huggingface/datasets/issues/3294
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3294/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3293/comments
https://api.github.com/repos/huggingface/datasets/issues/3293/timeline
2021-11-18T10:28:04Z
null
null
PR_kwDODunzps4uslLN
closed
[]
false
3,293
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Pin version exclusion for Markdown
https://api.github.com/repos/huggingface/datasets/issues/3293/events
null
https://api.github.com/repos/huggingface/datasets/issues/3293/labels{/name}
2021-11-18T06:56:01Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3293.diff", "html_url": "https://github.com/huggingface/datasets/pull/3293", "merged_at": "2021-11-18T10:28:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3293" }
1,057,004,431
[]
https://api.github.com/repos/huggingface/datasets/issues/3293
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
As Markdown version 3.3.5 has a bug, it is better to exclude it in case the users have it previously installed in their environment. Related to #3289, #3286.
2021-11-18T10:28:05Z
https://github.com/huggingface/datasets/pull/3293
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3293/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3292/comments
https://api.github.com/repos/huggingface/datasets/issues/3292/timeline
2021-11-19T16:49:29Z
null
completed
I_kwDODunzps4-__f6
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
3,292
{ "avatar_url": "https://avatars.githubusercontent.com/u/13541524?v=4", "events_url": "https://api.github.com/users/abhibisht89/events{/privacy}", "followers_url": "https://api.github.com/users/abhibisht89/followers", "following_url": "https://api.github.com/users/abhibisht89/following{/other_user}", "gists_url": "https://api.github.com/users/abhibisht89/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhibisht89", "id": 13541524, "login": "abhibisht89", "node_id": "MDQ6VXNlcjEzNTQxNTI0", "organizations_url": "https://api.github.com/users/abhibisht89/orgs", "received_events_url": "https://api.github.com/users/abhibisht89/received_events", "repos_url": "https://api.github.com/users/abhibisht89/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhibisht89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhibisht89/subscriptions", "type": "User", "url": "https://api.github.com/users/abhibisht89" }
Not able to load 'wikipedia' dataset
https://api.github.com/repos/huggingface/datasets/issues/3292/events
null
https://api.github.com/repos/huggingface/datasets/issues/3292/labels{/name}
2021-11-18T05:41:18Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
1,056,962,554
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3292
[ "Hi ! Indeed it looks like the code snippet on the Hugging face Hub doesn't show the second parameter\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/142649237-45ba55c5-1a64-4c30-8692-2c8120572f92.png)\r\n\r\nThanks for reporting, I'm taking a look\r\n" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug I am following the instruction for loading the wikipedia dataset using datasets. However getting the below error. ## Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset("wikipedia") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 339 "Config name is missing." 340 "\nPlease pick one among the available configs: %s" % list(self.builder_configs.keys()) --> 341 + "\nExample of usage:\n\t`{}`".format(example_of_usage) 342 ) 343 builder_config = self.BUILDER_CONFIGS[0] ValueError: Config name is missing. Please pick one among the available configs: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', 
'20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', 
'20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] Example of usage: `load_dataset('wikipedia', '20200501.aa')` I think the other parameter is missing in the load_dataset function that is not shown in the instruction.
2021-11-19T16:49:29Z
https://github.com/huggingface/datasets/issues/3292
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3292/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3291/comments
https://api.github.com/repos/huggingface/datasets/issues/3291/timeline
2021-11-22T16:40:16Z
null
null
PR_kwDODunzps4urikR
closed
[]
false
3,291
{ "avatar_url": "https://avatars.githubusercontent.com/u/84228424?v=4", "events_url": "https://api.github.com/users/Carlosbogo/events{/privacy}", "followers_url": "https://api.github.com/users/Carlosbogo/followers", "following_url": "https://api.github.com/users/Carlosbogo/following{/other_user}", "gists_url": "https://api.github.com/users/Carlosbogo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Carlosbogo", "id": 84228424, "login": "Carlosbogo", "node_id": "MDQ6VXNlcjg0MjI4NDI0", "organizations_url": "https://api.github.com/users/Carlosbogo/orgs", "received_events_url": "https://api.github.com/users/Carlosbogo/received_events", "repos_url": "https://api.github.com/users/Carlosbogo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Carlosbogo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Carlosbogo/subscriptions", "type": "User", "url": "https://api.github.com/users/Carlosbogo" }
Use f-strings in the dataset scripts
https://api.github.com/repos/huggingface/datasets/issues/3291/events
null
https://api.github.com/repos/huggingface/datasets/issues/3291/labels{/name}
2021-11-17T22:20:19Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3291.diff", "html_url": "https://github.com/huggingface/datasets/pull/3291", "merged_at": "2021-11-22T16:40:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3291.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3291" }
1,056,689,876
[]
https://api.github.com/repos/huggingface/datasets/issues/3291
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Uses f-strings to format the .py files in the datasets folder
2021-11-22T16:40:16Z
https://github.com/huggingface/datasets/pull/3291
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3291/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3290/comments
https://api.github.com/repos/huggingface/datasets/issues/3290/timeline
2021-11-19T15:08:57Z
null
null
PR_kwDODunzps4uqzcv
closed
[]
false
3,290
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Make several audio datasets streamable
https://api.github.com/repos/huggingface/datasets/issues/3290/events
null
https://api.github.com/repos/huggingface/datasets/issues/3290/labels{/name}
2021-11-17T17:43:41Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3290.diff", "html_url": "https://github.com/huggingface/datasets/pull/3290", "merged_at": "2021-11-19T15:08:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/3290.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3290" }
1,056,414,856
[]
https://api.github.com/repos/huggingface/datasets/issues/3290
[ "Reading FLAC (for `librispeech_asr`) works OK for me (`soundfile` version: `0.10.3`):\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets/librispeech_asr/librispeech_asr.py\", \"clean\", streaming=True, split=\"train.100\")\r\n\r\nIn [3]: item = next(iter(ds))\r\n\r\nIn [4]: item.keys()\r\nOut[4]: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\n\r\nIn [5]: item[\"file\"]\r\nOut[5]: '374-180298-0000.flac'\r\n\r\nIn [6]: item[\"audio\"].keys()\r\nOut[6]: dict_keys(['path', 'array', 'sampling_rate'])\r\n\r\nIn [7]: item[\"audio\"][\"sampling_rate\"]\r\nOut[7]: 16000\r\n\r\nIn [8]: item[\"audio\"][\"path\"]\r\nOut[8]: '374-180298-0000.flac'\r\n\r\nIn [9]: item[\"audio\"][\"array\"].shape\r\nOut[9]: (232480,)\r\n```", "Oh cool ! I think this might have come from an issue with my local `soundfile` installation then", "I'll do `multilingual_librispeech` in a separate PR since it requires the data to be in another format (in particular separate the train/dev/test splits in different files)", "@lhoestq @albertvillanova - think it would have been nice to have added a big message at the top stating that this is a breaking change and ping `transformers` people a bit more here." ]
https://api.github.com/repos/huggingface/datasets
MEMBER
<s>Needs https://github.com/huggingface/datasets/pull/3129 to be merged first</s> Make those audio datasets streamable: - [x] common_voice - [x] openslr - [x] vivos - [x] librispeech_asr <s>(still has some issues to read FLAC)</s> *actually it's ok* - [ ] <s>multilingual_librispeech (yet to be converted)</S> *TODO in a separate PR*
2022-02-01T21:00:52Z
https://github.com/huggingface/datasets/pull/3290
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3290/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3289/comments
https://api.github.com/repos/huggingface/datasets/issues/3289/timeline
2021-11-17T16:23:08Z
null
null
PR_kwDODunzps4uqf79
closed
[]
false
3,289
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Unpin markdown for build_docs now that it's fixed
https://api.github.com/repos/huggingface/datasets/issues/3289/events
null
https://api.github.com/repos/huggingface/datasets/issues/3289/labels{/name}
2021-11-17T16:22:53Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3289.diff", "html_url": "https://github.com/huggingface/datasets/pull/3289", "merged_at": "2021-11-17T16:23:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/3289.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3289" }
1,056,323,715
[]
https://api.github.com/repos/huggingface/datasets/issues/3289
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
`markdown`'s bug has been fixed, so this PR reverts #3286
2021-11-17T16:23:09Z
https://github.com/huggingface/datasets/pull/3289
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3289/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3288/comments
https://api.github.com/repos/huggingface/datasets/issues/3288/timeline
2021-11-17T15:41:11Z
null
null
PR_kwDODunzps4up6S5
closed
[]
false
3,288
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Allow datasets with indices table when concatenating along axis=1
https://api.github.com/repos/huggingface/datasets/issues/3288/events
null
https://api.github.com/repos/huggingface/datasets/issues/3288/labels{/name}
2021-11-17T13:41:28Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3288.diff", "html_url": "https://github.com/huggingface/datasets/pull/3288", "merged_at": "2021-11-17T15:41:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3288.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3288" }
1,056,145,703
[]
https://api.github.com/repos/huggingface/datasets/issues/3288
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Calls `flatten_indices` on the datasets with indices table in `concatenate_datasets` to fix issues when concatenating along `axis=1`. cc @lhoestq: I decided to flatten all the datasets instead of flattening all the datasets except the largest one in the end. The latter approach fails on the following example: ```python a = Dataset.from_dict({"a": [10, 20, 30, 40]}) b = Dataset.from_dict({"b": [10, 20, 30, 40, 50, 60]}) # largest dataset a = a.select([1, 2, 3]) b = b.select([1, 2, 3]) concatenate_datasets([a, b], axis=1) # fails at line concat_tables(...) because the real length of b's data is 6 and a's length is 3 after flattening (was 4 before flattening) ``` Also, it requires additional re-ordering of indices to prepare them for working with the indices table of the largest dataset. IMO not worth when we save only one `flatten_indices` call. (feel free to check the code of that approach at https://github.com/huggingface/datasets/commit/6acd10481c70950dcfdbfd2bab0bf0c74ad80bcb if you are interested) Fixes #3273
2021-11-17T15:41:12Z
https://github.com/huggingface/datasets/pull/3288
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3288/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3287/comments
https://api.github.com/repos/huggingface/datasets/issues/3287/timeline
2021-12-01T15:29:07Z
null
null
PR_kwDODunzps4upsWR
closed
[]
false
3,287
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Add The Pile dataset and PubMed Central subset
https://api.github.com/repos/huggingface/datasets/issues/3287/events
null
https://api.github.com/repos/huggingface/datasets/issues/3287/labels{/name}
2021-11-17T12:35:58Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3287.diff", "html_url": "https://github.com/huggingface/datasets/pull/3287", "merged_at": "2021-12-01T15:29:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/3287.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3287" }
1,056,079,724
[]
https://api.github.com/repos/huggingface/datasets/issues/3287
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Add: - The complete final version of The Pile dataset: "all" config - PubMed Central subset of The Pile: "pubmed_central" config Close #1675, close bigscience-workshop/data_tooling#74. CC: @StellaAthena, @lewtun
2021-12-01T15:29:08Z
https://github.com/huggingface/datasets/pull/3287
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3287/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3286/comments
https://api.github.com/repos/huggingface/datasets/issues/3286/timeline
2021-11-17T11:19:19Z
null
null
PR_kwDODunzps4updTK
closed
[]
false
3,286
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix build_docs CI
https://api.github.com/repos/huggingface/datasets/issues/3286/events
null
https://api.github.com/repos/huggingface/datasets/issues/3286/labels{/name}
2021-11-17T11:18:56Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3286.diff", "html_url": "https://github.com/huggingface/datasets/pull/3286", "merged_at": "2021-11-17T11:19:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3286.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3286" }
1,056,008,586
[]
https://api.github.com/repos/huggingface/datasets/issues/3286
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Because of https://github.com/Python-Markdown/markdown/issues/1196 we have to temporarily pin `markdown` to 3.3.4 for the docs to build without issues
2021-11-17T11:19:20Z
https://github.com/huggingface/datasets/pull/3286
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3286/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3285/comments
https://api.github.com/repos/huggingface/datasets/issues/3285/timeline
null
null
null
I_kwDODunzps4-6cEq
open
[]
null
3,285
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
Add IEMOCAP dataset
https://api.github.com/repos/huggingface/datasets/issues/3285/events
null
https://api.github.com/repos/huggingface/datasets/issues/3285/labels{/name}
2021-11-16T22:47:20Z
null
false
null
null
1,055,506,730
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
https://api.github.com/repos/huggingface/datasets/issues/3285
[ "The IEMOCAP dataset is private and available only on request.\r\n```\r\nTo obtain the IEMOCAP data you just need to fill out an electronic release form below.\r\n```\r\n\r\n- [Request form](https://sail.usc.edu/iemocap/release_form.php)\r\n- [License ](https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf)\r\n\r\n\r\n> We do not share the dataset for commercial purposes due to privacy concerns surrounding the participants of the research. The login details will only be emailed to the given academic email address.\r\n\r\nI think it won't be possible to add this dataset to 🤗 datasets.", "Hi @dnaveenr ! We can contact the authors to see if they are interested in hosting the dataset on the Hub. In the meantime, feel free to work on a script with manual download.", "Hi @mariosasko . Thanks for your response. Sure, I will mail them and find out if they're open to this.\r\n\r\nWork on a script with manual download ? This is new to me, any guidelines would be helpful here.\r\n", "> Thanks for your response. Sure, I will mail them and find out if they're open to this.\r\n\r\nIt's best to leave this part to us because we have to explain how login would work and (potentially) set up a custom verification for the dataset.\r\n\r\n> Work on a script with manual download ? This is new to me, any guidelines would be helpful here.\r\n\r\nFor instance, this is one of the scripts with manual download: https://huggingface.co/datasets/arxiv_dataset. Compared to the standard dataset, it has the `manual_download_instructions` attribute and uses `dl_manager.manual_dir` (derived from `load_dataset(..., data_dir=\"path/to/data\")`) to access the dataset's data files.", "> It's best to leave this part to us because we have to explain how login would work and (potentially) set up a custom verification for the dataset.\r\n\r\nYes. That would be perfect. Thanks.\r\n\r\n----\r\nOkay. Thanks for giving a reference. This is helpful. I will go through it.\r\n\r\n", "@mariosasko has this been solved? I would like to use login and custom verification for training on my private dataset.", "@flckv I think the [gating mechanism](https://huggingface.co/docs/hub/datasets-gated) is what you are looking for. ", "@mariosasko Thanks, but no. I would like to keep my HuggingFace Dataset private and train a model on it. Is this possible?" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
## Adding a Dataset - **Name:** IEMOCAP - **Description:** acted, multimodal and multispeaker database - **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf - **Data:** https://sail.usc.edu/iemocap/index.html - **Motivation:** Useful multimodal dataset cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2023-06-10T08:14:52Z
https://github.com/huggingface/datasets/issues/3285
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3285/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3284/comments
https://api.github.com/repos/huggingface/datasets/issues/3284/timeline
null
null
null
I_kwDODunzps4-6bI9
open
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
null
3,284
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
Add VoxLingua107 dataset
https://api.github.com/repos/huggingface/datasets/issues/3284/events
null
https://api.github.com/repos/huggingface/datasets/issues/3284/labels{/name}
2021-11-16T22:44:08Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
null
1,055,502,909
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/3284
[ "#self-assign" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** Nice audio classification dataset cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-12-06T09:49:45Z
https://github.com/huggingface/datasets/issues/3284
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3284/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3283/comments
https://api.github.com/repos/huggingface/datasets/issues/3283/timeline
2021-12-10T10:30:15Z
null
completed
I_kwDODunzps4-6ZbC
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
null
3,283
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
Add Speech Commands dataset
https://api.github.com/repos/huggingface/datasets/issues/3283/events
null
https://api.github.com/repos/huggingface/datasets/issues/3283/labels{/name}
2021-11-16T22:39:56Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
null
1,055,495,874
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/3283
[ "#self-assign" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
## Adding a Dataset - **Name:** Speech commands - **Description:** A Dataset for Limited-Vocabulary Speech Recognition - **Paper:** https://arxiv.org/abs/1804.03209 - **Data:** https://www.tensorflow.org/datasets/catalog/speech_commands, Available: http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz - **Motivation:** Nice dataset for audio classification training cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-12-10T10:30:15Z
https://github.com/huggingface/datasets/issues/3283
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3283/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3282/comments
https://api.github.com/repos/huggingface/datasets/issues/3282/timeline
2022-04-12T11:57:43Z
null
completed
I_kwDODunzps4-4twy
closed
[]
null
3,282
{ "avatar_url": "https://avatars.githubusercontent.com/u/10078549?v=4", "events_url": "https://api.github.com/users/MinionAttack/events{/privacy}", "followers_url": "https://api.github.com/users/MinionAttack/followers", "following_url": "https://api.github.com/users/MinionAttack/following{/other_user}", "gists_url": "https://api.github.com/users/MinionAttack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MinionAttack", "id": 10078549, "login": "MinionAttack", "node_id": "MDQ6VXNlcjEwMDc4NTQ5", "organizations_url": "https://api.github.com/users/MinionAttack/orgs", "received_events_url": "https://api.github.com/users/MinionAttack/received_events", "repos_url": "https://api.github.com/users/MinionAttack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MinionAttack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MinionAttack/subscriptions", "type": "User", "url": "https://api.github.com/users/MinionAttack" }
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
https://api.github.com/repos/huggingface/datasets/issues/3282/events
null
https://api.github.com/repos/huggingface/datasets/issues/3282/labels{/name}
2021-11-16T16:05:19Z
null
false
null
null
1,055,054,898
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
https://api.github.com/repos/huggingface/datasets/issues/3282
[ "Hi ! Thanks for reporting :)\r\nI think this is because the dataset is behind an access page. We can fix the dataset viewer\r\n\r\nIf you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access the dataset.", "Ah ok, I didn't realise about the login page. I'll try `use_auth_token=True` and see if that solves it.\r\n\r\nRegards!", "Hi, \r\n\r\nUsing `use_auth_token=True` and downloading the credentials with `huggingface-cli login` (stored in .huggingface/token) solved the issue.\r\n\r\nShould I leave the issue open until you fix the Dataset viewer issue?", "Cool ! Yes let's keep this issue open until the viewer is fixed - I'll close it when this is fixed. Thanks", "The error I get when trying to load OSCAR 21.09 is this\r\n```\r\nConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n```\r\n\r\nThe URL I get in the browser is this\r\n```\r\nhttps://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n```\r\n\r\nMaybe URL is the issue? (resolve vs blob)", "> The error I get when trying to load OSCAR 21.09 is this\r\n> \r\n> ```\r\n> ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> The URL I get in the browser is this\r\n> \r\n> ```\r\n> https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> Maybe URL is the issue? (resolve vs blob)\r\n\r\nYou need to download your login credentials. See `huggingface-cli login` documentation and when loading the dataset use `use_auth_token=True`:\r\n`\r\nload_dataset(corpus, language, split=None, use_auth_token=True, cache_dir=cache_folder)`", "Fixed.\r\n\r\n<img width=\"1542\" alt=\"Capture d’écran 2022-04-12 à 13 57 24\" src=\"https://user-images.githubusercontent.com/1676121/162957585-af96d19c-f86c-47fe-80c4-2b071083cee4.png\">\r\n" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)* *The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.* ``` raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py ``` Am I the one who added this dataset ? No Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the dataset library.
2022-04-12T11:57:43Z
https://github.com/huggingface/datasets/issues/3282
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3282/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3281/comments
https://api.github.com/repos/huggingface/datasets/issues/3281/timeline
2021-11-18T10:44:04Z
null
null
PR_kwDODunzps4umWZE
closed
[]
false
3,281
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[Datasets] Improve Covost 2
https://api.github.com/repos/huggingface/datasets/issues/3281/events
null
https://api.github.com/repos/huggingface/datasets/issues/3281/labels{/name}
2021-11-16T15:32:19Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3281.diff", "html_url": "https://github.com/huggingface/datasets/pull/3281", "merged_at": "2021-11-18T10:44:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3281" }
1,055,018,876
[]
https://api.github.com/repos/huggingface/datasets/issues/3281
[ "I am trying to use `load_dataset` with the French dataset(common voice corpus 1) which is downloaded from a common voice site and the target language is English (using colab)\r\n\r\nSteps I have followed:\r\n\r\n**1. untar:**\r\n`!tar xvzf fr.tar -C data_dir`\r\n\r\n**2. load data:**\r\n`load_dataset('covost2', 'fr_en', data_dir=\"/content/data_dir\")`\r\n\r\n0 rows are loading as shown below:\r\n```\r\nUsing custom data configuration fr_en-data_dir=%2Fcontent%2Fdata_dir\r\nReusing dataset covost2 (/root/.cache/huggingface/datasets/covost2/fr_en-data_dir=%2Fcontent%2Fdata_dir/1.0.0/bba950aae1ffa5a14b876b7e09c17b44de2c3cf60e7bd5d459640beffc78e35b)\r\n100%\r\n3/3 [00:00<00:00, 54.98it/s]\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nCan you please provide a sample working example code to load the dataset?", "Hi ! I think it only works with the subsets of Common Voice Corpus 4, not Common Voice Corpus 1" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
It's currently quite confusing to understand the manual data download instruction of Covost and not very user-friendly. Currenty the user has to: 1. Go on Common Voice website 2. Find the correct dataset which is **not** mentioned in the error message 3. Download it 4. Untar it 5. Create a language id folder (why? this folder does not exist in the `.tar` downloaded file) 6. pass the folder containing the created language id folder This PR improves this to: 1. Go on Common Voice website 2. Find the correct dataset which **is** mentioned in the error message 3. Download it 4. Untar it 5. pass the untared folder **Note**: This PR is not at all time-critical
2022-01-26T16:17:06Z
https://github.com/huggingface/datasets/pull/3281
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3281/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3280/comments
https://api.github.com/repos/huggingface/datasets/issues/3280/timeline
2021-11-16T13:34:30Z
null
null
PR_kwDODunzps4ulgye
closed
[]
false
3,280
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix bookcorpusopen RAM usage
https://api.github.com/repos/huggingface/datasets/issues/3280/events
null
https://api.github.com/repos/huggingface/datasets/issues/3280/labels{/name}
2021-11-16T11:27:52Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3280.diff", "html_url": "https://github.com/huggingface/datasets/pull/3280", "merged_at": "2021-11-16T13:34:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/3280.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3280" }
1,054,766,828
[]
https://api.github.com/repos/huggingface/datasets/issues/3280
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Each document is a full book, so the default arrow writer batch size of 10,000 is too big, and it can fill up RAM quickly before flushing the first batch on disk. I changed its batch size to 256 to use maximum 100MB of memory Fix #3167.
2021-11-17T15:53:28Z
https://github.com/huggingface/datasets/pull/3280
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3280/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3279/comments
https://api.github.com/repos/huggingface/datasets/issues/3279/timeline
2021-11-16T11:18:02Z
null
null
PR_kwDODunzps4ulVHe
closed
[]
false
3,279
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795788?v=4", "events_url": "https://api.github.com/users/SebastinSanty/events{/privacy}", "followers_url": "https://api.github.com/users/SebastinSanty/followers", "following_url": "https://api.github.com/users/SebastinSanty/following{/other_user}", "gists_url": "https://api.github.com/users/SebastinSanty/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SebastinSanty", "id": 13795788, "login": "SebastinSanty", "node_id": "MDQ6VXNlcjEzNzk1Nzg4", "organizations_url": "https://api.github.com/users/SebastinSanty/orgs", "received_events_url": "https://api.github.com/users/SebastinSanty/received_events", "repos_url": "https://api.github.com/users/SebastinSanty/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SebastinSanty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SebastinSanty/subscriptions", "type": "User", "url": "https://api.github.com/users/SebastinSanty" }
Minor Typo Fix - Precision to Recall
https://api.github.com/repos/huggingface/datasets/issues/3279/events
null
https://api.github.com/repos/huggingface/datasets/issues/3279/labels{/name}
2021-11-16T10:32:22Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3279.diff", "html_url": "https://github.com/huggingface/datasets/pull/3279", "merged_at": "2021-11-16T11:18:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3279.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3279" }
1,054,711,852
[]
https://api.github.com/repos/huggingface/datasets/issues/3279
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
null
2021-11-16T11:18:03Z
https://github.com/huggingface/datasets/pull/3279
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3279/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3278/comments
https://api.github.com/repos/huggingface/datasets/issues/3278/timeline
2021-11-16T11:19:37Z
null
null
PR_kwDODunzps4uj2EQ
closed
[]
false
3,278
{ "avatar_url": "https://avatars.githubusercontent.com/u/2111202?v=4", "events_url": "https://api.github.com/users/wooters/events{/privacy}", "followers_url": "https://api.github.com/users/wooters/followers", "following_url": "https://api.github.com/users/wooters/following{/other_user}", "gists_url": "https://api.github.com/users/wooters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wooters", "id": 2111202, "login": "wooters", "node_id": "MDQ6VXNlcjIxMTEyMDI=", "organizations_url": "https://api.github.com/users/wooters/orgs", "received_events_url": "https://api.github.com/users/wooters/received_events", "repos_url": "https://api.github.com/users/wooters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wooters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wooters/subscriptions", "type": "User", "url": "https://api.github.com/users/wooters" }
Proposed update to the documentation for WER
https://api.github.com/repos/huggingface/datasets/issues/3278/events
null
https://api.github.com/repos/huggingface/datasets/issues/3278/labels{/name}
2021-11-15T23:28:31Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3278.diff", "html_url": "https://github.com/huggingface/datasets/pull/3278", "merged_at": "2021-11-16T11:19:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3278.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3278" }
1,054,249,463
[]
https://api.github.com/repos/huggingface/datasets/issues/3278
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I wanted to submit a minor update to the description of WER for your consideration. Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0: ``` >>> from datasets import load_metric >>> metric = load_metric("wer") >>> metric.compute(predictions=["hello how are you"], references=["hello"]) 3.0 ``` and similarly from the underlying jiwer module's `wer` function: ``` >>> from jiwer import wer >>> wer("hello", "hello how are you") 3.0 ```
2021-11-16T11:19:37Z
https://github.com/huggingface/datasets/pull/3278
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3278/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3277/comments
https://api.github.com/repos/huggingface/datasets/issues/3277/timeline
2021-11-17T16:18:38Z
null
null
PR_kwDODunzps4ujk11
closed
[]
false
3,277
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
f-string formatting
https://api.github.com/repos/huggingface/datasets/issues/3277/events
null
https://api.github.com/repos/huggingface/datasets/issues/3277/labels{/name}
2021-11-15T21:37:05Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3277.diff", "html_url": "https://github.com/huggingface/datasets/pull/3277", "merged_at": "2021-11-17T16:18:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/3277.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3277" }
1,054,122,656
[]
https://api.github.com/repos/huggingface/datasets/issues/3277
[ "Hello @lhoestq, ```make style``` is applied as asked. :)" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
**Fix #3257** Replaced _.format()_ and _%_ with f-strings in the following modules: - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** - [x] **src/Datasets/\*.py** Modules in **_src/Datasets/_**: - [x] **commands** - [x] **features** - [x] **formatting** - [x] **io** - [x] **tasks** - [x] **utils** Module **datasets** will not be edited, as requested by @mariosasko (a correction of the first PR, #3267).
2021-11-19T20:40:08Z
https://github.com/huggingface/datasets/pull/3277
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3277/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3276/comments
https://api.github.com/repos/huggingface/datasets/issues/3276/timeline
2021-11-16T11:21:58Z
null
null
PR_kwDODunzps4uihih
closed
[]
false
3,276
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Update KILT metadata JSON
https://api.github.com/repos/huggingface/datasets/issues/3276/events
null
https://api.github.com/repos/huggingface/datasets/issues/3276/labels{/name}
2021-11-15T15:25:25Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3276.diff", "html_url": "https://github.com/huggingface/datasets/pull/3276", "merged_at": "2021-11-16T11:21:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3276.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3276" }
1,053,793,063
[]
https://api.github.com/repos/huggingface/datasets/issues/3276
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Fix #3265.
2021-11-16T11:21:59Z
https://github.com/huggingface/datasets/pull/3276
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3276/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3275/comments
https://api.github.com/repos/huggingface/datasets/issues/3275/timeline
2021-11-15T14:45:23Z
null
null
PR_kwDODunzps4uiN9t
closed
[]
false
3,275
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Force data files extraction if download_mode='force_redownload'
https://api.github.com/repos/huggingface/datasets/issues/3275/events
null
https://api.github.com/repos/huggingface/datasets/issues/3275/labels{/name}
2021-11-15T14:00:24Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3275.diff", "html_url": "https://github.com/huggingface/datasets/pull/3275", "merged_at": "2021-11-15T14:45:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3275.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3275" }
1,053,698,898
[]
https://api.github.com/repos/huggingface/datasets/issues/3275
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Avoids weird issues when redownloading a dataset due to cached data not being fully updated. With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 can be worked around (though not fully fixed) as follows: ```python dset = load_dataset(..., download_mode="force_redownload") ```
2021-11-15T14:45:23Z
https://github.com/huggingface/datasets/pull/3275
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3275/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3274/comments
https://api.github.com/repos/huggingface/datasets/issues/3274/timeline
2021-11-15T14:43:54Z
null
null
PR_kwDODunzps4uiL8-
closed
[]
false
3,274
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix some contact information formats
https://api.github.com/repos/huggingface/datasets/issues/3274/events
null
https://api.github.com/repos/huggingface/datasets/issues/3274/labels{/name}
2021-11-15T13:50:34Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3274.diff", "html_url": "https://github.com/huggingface/datasets/pull/3274", "merged_at": "2021-11-15T14:43:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/3274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3274" }
1,053,689,140
[]
https://api.github.com/repos/huggingface/datasets/issues/3274
[ "The CI fail are caused by some missing sections or tags, which is unrelated to this PR. Merging !" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As reported in https://github.com/huggingface/datasets/issues/3188, some contact information is not displayed correctly. This PR fixes this for CoNLL-2002 and some other datasets with the same issue.
2021-11-15T14:43:55Z
https://github.com/huggingface/datasets/pull/3274
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3274/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3273/comments
https://api.github.com/repos/huggingface/datasets/issues/3273/timeline
2021-11-17T15:41:11Z
null
completed
I_kwDODunzps4-y_V2
closed
[]
null
3,273
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Respect row ordering when concatenating datasets along axis=1
https://api.github.com/repos/huggingface/datasets/issues/3273/events
null
https://api.github.com/repos/huggingface/datasets/issues/3273/labels{/name}
2021-11-15T11:27:14Z
null
false
null
null
1,053,554,038
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3273
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored. A minimal reproducible example: ```python >>> from datasets import Dataset, concatenate_datasets >>> a = Dataset.from_dict({"a": [30, 20, 10]}) >>> b = Dataset.from_dict({"b": [2, 1, 3]}) >>> d = concatenate_datasets([a.sort("a"), b.sort("b")], axis=1) >>> print(d[:3]) # expected: {'a': [10, 20, 30], 'b': [1, 2, 3]} {'a': [10, 20, 30], 'b': [3, 1, 2]} ``` I've noticed the bug while working on #3195.
2021-11-17T15:41:11Z
https://github.com/huggingface/datasets/issues/3273
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3273/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3272/comments
https://api.github.com/repos/huggingface/datasets/issues/3272/timeline
null
null
null
I_kwDODunzps4-y2K_
open
[ { "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" } ]
null
3,272
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Make iter_archive work with ZIP files
https://api.github.com/repos/huggingface/datasets/issues/3272/events
null
https://api.github.com/repos/huggingface/datasets/issues/3272/labels{/name}
2021-11-15T10:50:42Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
null
1,053,516,479
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/3272
[ "Hello, is this issue open for any contributor ? can I work on it ?\r\n\r\n", "Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue to you by commenting `#self-assign`. Then if you have any question or if I can help, feel free to ping me.\r\n\r\nTo begin with, feel free to take a look at both implementations of `iter_archive` for local downloads and for data streaming:\r\n\r\nIn the `DownloadManager` for local dowloads:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/download_manager.py#L218-L242\r\n\r\nIn the `StreamingDownloadManager` to stream the content of the archive directly from the remote file:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/streaming_download_manager.py#L502-L526\r\n\r\nNotice the call to `xopen` that opens and streams a file given either an URL or a local path :)", "Okay thank you for the information. I will work on this :) ", "#self-assign" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Currently, users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive. It would be nice if it could work with ZIP files too!
2021-11-25T00:08:47Z
https://github.com/huggingface/datasets/issues/3272
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3272/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3271/comments
https://api.github.com/repos/huggingface/datasets/issues/3271/timeline
2021-11-16T11:35:58Z
null
null
PR_kwDODunzps4uhgi1
closed
[]
false
3,271
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Decode audio from remote
https://api.github.com/repos/huggingface/datasets/issues/3271/events
null
https://api.github.com/repos/huggingface/datasets/issues/3271/labels{/name}
2021-11-15T10:25:56Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3271.diff", "html_url": "https://github.com/huggingface/datasets/pull/3271", "merged_at": "2021-11-16T11:35:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3271.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3271" }
1,053,482,919
[]
https://api.github.com/repos/huggingface/datasets/issues/3271
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Currently, the Audio feature type can only decode local audio files, not remote files. To fix this, I replaced `open` in audio.py with our `xopen` function, which is compatible with remote files. cc @albertvillanova @mariosasko
2021-11-16T11:35:58Z
https://github.com/huggingface/datasets/pull/3271
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3271/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3270/comments
https://api.github.com/repos/huggingface/datasets/issues/3270/timeline
2021-11-15T10:27:03Z
null
null
PR_kwDODunzps4uhcxm
closed
[]
false
3,270
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Add os.listdir for streaming
https://api.github.com/repos/huggingface/datasets/issues/3270/events
null
https://api.github.com/repos/huggingface/datasets/issues/3270/labels{/name}
2021-11-15T10:14:04Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3270.diff", "html_url": "https://github.com/huggingface/datasets/pull/3270", "merged_at": "2021-11-15T10:27:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3270.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3270" }
1,053,465,662
[]
https://api.github.com/repos/huggingface/datasets/issues/3270
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
Extend `os.listdir` to support streaming data from remote files. This is often used to navigate remote ZIP files, for example.
2021-11-15T10:27:03Z
https://github.com/huggingface/datasets/pull/3270
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3270/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3269/comments
https://api.github.com/repos/huggingface/datasets/issues/3269/timeline
2022-01-19T13:58:19Z
null
completed
I_kwDODunzps4-xtfR
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
3,269
{ "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZhaofengWu", "id": 11954789, "login": "ZhaofengWu", "node_id": "MDQ6VXNlcjExOTU0Nzg5", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "type": "User", "url": "https://api.github.com/users/ZhaofengWu" }
coqa NonMatchingChecksumError
https://api.github.com/repos/huggingface/datasets/issues/3269/events
null
https://api.github.com/repos/huggingface/datasets/issues/3269/labels{/name}
2021-11-15T05:04:07Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
null
1,053,218,769
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3269
[ "Hi @ZhaofengWu, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.91MB/s]\r\nDownloading: 1.79kB [00:00, 1.79MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 49.0M/49.0M [00:06<00:00, 7.17MB/s]\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 9.09M/9.09M [00:01<00:00, 
6.08MB/s]\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:12<00:00, 6.48s/it]\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 333.26it/s]\r\nDataset coqa downloaded and prepared to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0. 
Subsequent calls will reuse this data.\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 285.49it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 7199\r\n })\r\n validation: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 500\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details about your development environment? You can run the command `datasets-cli env` and copy-and-paste its output:\r\n```\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nIt might be because you are using an old version of `datasets`. Could you please update it (`pip install -U datasets`) and confirm if the problem parsists? 
", "I'm getting the same error in two separate environments:\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.0\r\n```\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.9.5\r\n- PyArrow version: 6.0.0\r\n```", "I'm sorry, but don't get to reproduce the error in the Linux environment.\r\n\r\n@mariosasko @lhoestq can you reproduce it?", "I also can't reproduce the error on Windows/Linux (tested both the master and the `1.15.1` version). ", "Maybe the file had issues during the download ? Could you try to delete your cache and try again ?\r\nBy default the downloads cache is at `~/.cache/huggingface/datasets/downloads`\r\n\r\nAlso can you check if you have a proxy that could prevent the download to succeed ? Are you able to download those files via your browser ?", "I got the same error in a third environment (google cloud) as well. The internet for these three environments are all different so I don't think that's the reason.\r\n```\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-1022-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.0\r\n```\r\nI deleted the entire `~/.cache/huggingface/datasets` on my local mac, and got a different first time error.\r\n```\r\nPython 3.9.5 (default, May 18 2021, 12:31:01) \r\n[Clang 10.0.0 ] :: Anaconda, Inc. 
on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.19MB/s] \r\nDownloading: 1.79kB [00:00, 712kB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 222/222 [00:00<00:00, 1.36MB/s]\r\n 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 1/2 [00:00<00:00, 2.47it/s]Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File 
\"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 675, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/Users/zhaofengw/.cache/huggingface/modules/datasets_modules/datasets/coqa/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0/coqa.py\", line 70, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 284, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 216, in map_nested\r\n mapped = [\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 217, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 152, in _single_map_nested\r\n return function(data_struct)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 295, in cached_path\r\n output_path = get_from_cache(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 594, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\r\n>>> dataset = load_dataset(\"coqa\")\r\nUsing custom 
data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 222/222 [00:00<00:00, 1.38MB/s]\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 
6.26it/s]\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 1087.45it/s]\r\n 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 1/2 [00:45<00:45, 45.60s/it]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 679, in _download_and_prepare\r\n verify_checksums(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 
'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json']\r\n```\r\nI can access the URL using my browser, though I did notice a redirection -- could that have something to do with it?", "Hi @ZhaofengWu, \r\n\r\nWhat about in Google Colab? Can you run this notebook without errors? \r\nhttps://colab.research.google.com/drive/1CCpiiHmtNlfO_4CZ3-fW-TSShr1M0rL4?usp=sharing", "I can run your notebook fine, but if I create one myself, it has that error: https://colab.research.google.com/drive/107GIdhrauPO6ZiFDY7G9S74in4qqI2Kx?usp=sharing.\r\n\r\nIt's so funny -- it's like whenever you guys run it it's fine but whenever I run it it fails, whatever the environment is.", "I guess it must be some connection issue: the data owner may be blocking requests coming from your country or IP range...", "I mean, I don't think google colab sends the connection from my IP. Same applies to google cloud.", "Hello, I am having the same error with @ZhaofengWu first with \"social bias frames\" dataset. As I found this report, I tried also \"coqa\" and it fails as well. \r\n\r\nI test this on Google Colab. \r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.12\r\n- PyArrow version: 3.0.0\r\n```\r\n\r\nThen another environment\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-12.0.1-arm64-arm-64bit\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n```\r\n\r\nI tried the notebook @albertvillanova provided earlier, and it fails...\r\n", "Hi, still not able to reproduce the issue with `coqa`. 
If you still have this issue, could you please run these additional commands ?\r\n```python\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n9090845\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n`95d427588e3733e4ebec55f6938dbba6`\r\n>>> open(path).read(500)\r\n'{\\n \"version\": \"1.0\",\\n \"data\": [\\n {\\n \"source\": \"mctest\",\\n \"id\": \"3dr23u6we5exclen4th8uq9rb42tel\",\\n \"filename\": \"mc160.test.41\",\\n \"story\": \"Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer\\'s horses slept. But Cotton wasn\\'t alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters. 
All of her sisters w'\r\n```\r\n\r\nThis way we can know whether you downloaded a corrupted file or an error file that could cause the `NonMatchingChecksumError` error to happen", "```\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n222\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n'1195812a37c01a4481a4748c85d0c6a9'\r\n>>> open(path).read(500)\r\n'<html>\\n<head><title>503 Service Temporarily Unavailable</title></head>\\n<body bgcolor=\"white\">\\n<center><h1>503 Service Temporarily Unavailable</h1></center>\\n<hr><center>nginx/1.10.3 (Ubuntu)</center>\\n</body>\\n</html>\\n'\r\n```\r\nLooks like there was a server-side error when downloading the dataset? But I don't believe this is a transient error given (a) deleting the cache and re-downloading gives the same error; (b) it happens on multiple platforms with different network configurations; (c) other people are getting this error too, see above. So I'm not sure why it works for some people but not others.", "`wget https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json` does work. So I suspect there might be some problem in `datasets`' networking code? Can you give me some snippet that simulates how `datasets` requests the resource which I can run on my end?", "There is a redirection -- I don't know if that's the cause.", "Ok This is an issue with the server that hosts the data at `https://nlp.stanford.edu/nlp/data` that randomly returns 503 (by trying several times it also happens on my side), hopefully it can be fixed soon. I'll try to reach the people in charge of hosting the data", "Thanks. Also it might help to display a more informative error message?", "You're right. 
I just opened a PR that would show this error if it happens again:\r\n```python\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json (error 503)\r\n```" ]
https://api.github.com/repos/huggingface/datasets
NONE
``` >>> from datasets import load_dataset >>> dataset = load_dataset("coqa") Downloading: 3.82kB [00:00, 1.26MB/s] Downloading: 1.79kB [00:00, 733kB/s] Using custom data configuration default Downloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0... Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 222/222 [00:00<00:00, 1.38MB/s] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 222/222 [00:00<00:00, 1.32MB/s] 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:01<00:00, 1.91it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 1117.44it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise 
NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json'] ```
2022-01-19T13:58:19Z
https://github.com/huggingface/datasets/issues/3269
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3269/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3268/comments
https://api.github.com/repos/huggingface/datasets/issues/3268/timeline
2021-12-21T10:24:51Z
null
completed
I_kwDODunzps4-w2Sp
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
3,268
{ "avatar_url": "https://avatars.githubusercontent.com/u/22389228?v=4", "events_url": "https://api.github.com/users/liliwei25/events{/privacy}", "followers_url": "https://api.github.com/users/liliwei25/followers", "following_url": "https://api.github.com/users/liliwei25/following{/other_user}", "gists_url": "https://api.github.com/users/liliwei25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/liliwei25", "id": 22389228, "login": "liliwei25", "node_id": "MDQ6VXNlcjIyMzg5MjI4", "organizations_url": "https://api.github.com/users/liliwei25/orgs", "received_events_url": "https://api.github.com/users/liliwei25/received_events", "repos_url": "https://api.github.com/users/liliwei25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/liliwei25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liliwei25/subscriptions", "type": "User", "url": "https://api.github.com/users/liliwei25" }
Dataset viewer issue for 'liweili/c4_200m'
https://api.github.com/repos/huggingface/datasets/issues/3268/events
null
https://api.github.com/repos/huggingface/datasets/issues/3268/labels{/name}
2021-11-14T17:18:46Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
null
1,052,992,681
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
https://api.github.com/repos/huggingface/datasets/issues/3268
[ "Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):\r\n```python\r\npath = filepath + \"/*.tsv*\"\r\n```\r\n\r\nYou can fix this by doing this instead:\r\n```python\r\npath = os.path.join(filepath, \"/*.tsv*\")\r\n```\r\n\r\nHere is why:\r\n\r\nLocally you can append `\"/*.tsv*\"` to your local path, however it doesn't work in streaming mode, and the dataset viewer does use the streaming mode.\r\nIn streaming mode, the download and extract part is done lazily. It means that instead of using local paths, it's still passing around URLs and [chained URLs](https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining)\r\n\r\nTherefore in streaming mode, `filepath` is not a local path, but instead is equal to\r\n```python\r\nzip://::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\nThe `zip://` part means that we navigate inside the remote ZIP file.\r\n\r\nYou must use `os.path.join` to navigate inside it and get your TSV files:\r\n```python\r\n>>> os.path.join(filepath, \"/*.tsv*\")\r\nzip://*.tsv*::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\n\r\n`datasets` extends `os.path.join`, `glob.glob`, etc. in your dataset scripts to work with remote files.", "hi @lhoestq ! thanks for the tip! i've updated the line of code but it's still not working. am i doing something else wrong? thank you!", "Hi ! 
Your dataset code is all good now :)\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: d = load_dataset(\"liweili/c4_200m\", streaming=True)\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.79k/2.79k [00:00<00:00, 4.83MB/s]\r\nUsing custom data configuration default\r\n\r\nIn [3]: next(iter(d[\"train\"]))\r\nOut[3]: \r\n{'input': 'Bitcoin is for $7,094 this morning, which CoinDesk says.',\r\n 'output': 'Bitcoin goes for $7,094 this morning, according to CoinDesk.'}\r\n```\r\nThough the viewer doesn't seem to be updated, I'll take a look at what's wrong", "thank you @lhoestq! πŸ˜„ ", "It's working\r\n\r\n<img width=\"1424\" alt=\"Capture d’écran 2021-12-21 aΜ€ 11 24 29\" src=\"https://user-images.githubusercontent.com/1676121/146914238-24bf87c0-c68d-4699-8d6c-fa3065656d1d.png\">\r\n\r\n" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Dataset viewer issue for '*liweili/c4_200m*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)* *Server Error* ``` Status code: 404 Exception: Status404Error Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist. ``` Am I the one who added this dataset ? Yes
2021-12-21T10:25:20Z
https://github.com/huggingface/datasets/issues/3268
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3268/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3267/comments
https://api.github.com/repos/huggingface/datasets/issues/3267/timeline
2021-11-16T14:55:43Z
null
null
PR_kwDODunzps4ufQzB
closed
[]
false
3,267
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
Replacing .format() and % by f-strings
https://api.github.com/repos/huggingface/datasets/issues/3267/events
null
https://api.github.com/repos/huggingface/datasets/issues/3267/labels{/name}
2021-11-13T19:12:02Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3267.diff", "html_url": "https://github.com/huggingface/datasets/pull/3267", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3267.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3267" }
1,052,750,084
[]
https://api.github.com/repos/huggingface/datasets/issues/3267
[ "Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n\r\nFeel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)", "> Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n> \r\n> Feel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)\r\n\r\nThank you for your answer :) , I will open a new PR with the correct changes.", "Hi @lhoestq, I submitted 3 commits in a new PR (#3277) where I did not apply black.\r\n\r\nI can apply the ```make style``` command if asked.", "Cool thanks ! Yes feel free to make sure you have `black==21.4b0` and run `make style`" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
**Fix #3257** Replaced _.format()_ and _%_ by f-strings in the following modules : - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** Will follow in the next PR the modules left : - [ ] **src** Module **datasets** will not be edited as asked by @mariosasko PS : black and isort applied to files
2021-11-16T21:00:26Z
https://github.com/huggingface/datasets/pull/3267
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3267/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3266/comments
https://api.github.com/repos/huggingface/datasets/issues/3266/timeline
2021-12-06T11:16:31Z
null
null
PR_kwDODunzps4ufH94
closed
[]
false
3,266
{ "avatar_url": "https://avatars.githubusercontent.com/u/28014149?v=4", "events_url": "https://api.github.com/users/LashaO/events{/privacy}", "followers_url": "https://api.github.com/users/LashaO/followers", "following_url": "https://api.github.com/users/LashaO/following{/other_user}", "gists_url": "https://api.github.com/users/LashaO/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LashaO", "id": 28014149, "login": "LashaO", "node_id": "MDQ6VXNlcjI4MDE0MTQ5", "organizations_url": "https://api.github.com/users/LashaO/orgs", "received_events_url": "https://api.github.com/users/LashaO/received_events", "repos_url": "https://api.github.com/users/LashaO/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LashaO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LashaO/subscriptions", "type": "User", "url": "https://api.github.com/users/LashaO" }
Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution
https://api.github.com/repos/huggingface/datasets/issues/3266/events
null
https://api.github.com/repos/huggingface/datasets/issues/3266/labels{/name}
2021-11-13T15:01:34Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3266.diff", "html_url": "https://github.com/huggingface/datasets/pull/3266", "merged_at": "2021-12-06T11:16:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/3266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3266" }
1,052,700,155
[]
https://api.github.com/repos/huggingface/datasets/issues/3266
[ "There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?", "Hello @LashaO , I think the errors were caused by `_DATA_FILES` in `definite_pronoun_resolution.py`. Here are details of the test error.\r\n```\r\nself = BuilderConfig(name='plain_text', version=1.0.0, data_dir=None, data_files={'train': 'train.c.txt', 'test': 'test.c.txt'}, description='Plain text import of the Definite Pronoun Resolution Dataset.')\r\n\r\n def __post_init__(self):\r\n # The config name is used to name the cache directory.\r\n invalid_windows_characters = r\"<>:/\\|?*\"\r\n for invalid_char in invalid_windows_characters:\r\n if invalid_char in self.name:\r\n raise InvalidConfigName(\r\n f\"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. \"\r\n f\"They could create issues when creating a directory for this config on Windows filesystem.\"\r\n )\r\n if self.data_files is not None and not isinstance(self.data_files, DataFilesDict):\r\n> raise ValueError(f\"Expected a DataFilesDict in data_files but got {self.data_files}\")\r\nE ValueError: Expected a DataFilesDict in data_files but got {'train': 'train.c.txt', 'test': 'test.c.txt'}\r\n```", "Hi ! 
Thanks for the fixes :)\r\n\r\nInstead of uploading the `definite_pronoun_resolution` data files in this PR, maybe we can just update the URL ?\r\nThe old url was http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt, but now it's https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt (https instead of http)", "Actually the bad certificate creates an issue with the download\r\n```python\r\nimport datasets \r\ndatasets.DownloadManager().download(\"https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\")\r\n# raises: ConnectionError: Couldn't reach https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\r\n```\r\n\r\nLet me see if I can fix that", "I uploaded them to these URLs, feel free to use them instead of having the text files here in the PR :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/train.c.txt\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/test.c.txt", "Thank you for the tips! Having a busy week so anyone willing to commit the suggestions is welcome. Else, I will try to get back to this in a while.", "@LashaO Thanks for working on this. Yes, I'll take over as we already have a request to fix the URL of the Jeopardy! dataset in a separate issue.", "~~Still have to fix the error in the dummy data test of the WikiAuto dataset (so please don't merge).~~ Done! Ready for merging.", "Thank you, Mario!", "The CI failure is only related to missing tags in the dataset cards, merging :)" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
[#3264](https://github.com/huggingface/datasets/issues/3264)
2021-12-06T11:16:31Z
https://github.com/huggingface/datasets/pull/3266
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3266/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3265/comments
https://api.github.com/repos/huggingface/datasets/issues/3265/timeline
2021-11-16T11:21:58Z
null
completed
I_kwDODunzps4-vmq-
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
3,265
{ "avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4", "events_url": "https://api.github.com/users/slyviacassell/events{/privacy}", "followers_url": "https://api.github.com/users/slyviacassell/followers", "following_url": "https://api.github.com/users/slyviacassell/following{/other_user}", "gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slyviacassell", "id": 22296717, "login": "slyviacassell", "node_id": "MDQ6VXNlcjIyMjk2NzE3", "organizations_url": "https://api.github.com/users/slyviacassell/orgs", "received_events_url": "https://api.github.com/users/slyviacassell/received_events", "repos_url": "https://api.github.com/users/slyviacassell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions", "type": "User", "url": "https://api.github.com/users/slyviacassell" }
Checksum error for kilt_task_wow
https://api.github.com/repos/huggingface/datasets/issues/3265/events
null
https://api.github.com/repos/huggingface/datasets/issues/3265/labels{/name}
2021-11-13T12:04:17Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
null
1,052,666,558
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3265
[ "Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I do not think it is a elegant solution.", "Hi @slyviacassell, thanks for reporting.\r\n\r\nYes, there is an issue with the checksum verification. I'm fixing it.\r\n\r\nAnd as you pointed out, in the meantime, you can circumvent the problem by passing `ignore_verifications=True`. " ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug Checksum failed when downloads kilt_tasks_wow. See error output for details. ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('kilt_tasks','wow') ``` ## Expected results Download successful ## Actual results ``` Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a... 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 5121.25it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1527.42it/s] Traceback (most recent call last): File "kilt_wow.py", line 30, in <module> main() File "kilt_wow.py", line 27, in main train, dev, test = 
dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "kilt_wow.py", line 21, in load_dataset return datasets.load_dataset('kilt_tasks','wow') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
2021-11-16T11:23:53Z
https://github.com/huggingface/datasets/issues/3265
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3265/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3264/comments
https://api.github.com/repos/huggingface/datasets/issues/3264/timeline
2022-06-01T17:38:16Z
null
completed
I_kwDODunzps4-vl7Z
closed
[]
null
3,264
{ "avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4", "events_url": "https://api.github.com/users/slyviacassell/events{/privacy}", "followers_url": "https://api.github.com/users/slyviacassell/followers", "following_url": "https://api.github.com/users/slyviacassell/following{/other_user}", "gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slyviacassell", "id": 22296717, "login": "slyviacassell", "node_id": "MDQ6VXNlcjIyMjk2NzE3", "organizations_url": "https://api.github.com/users/slyviacassell/orgs", "received_events_url": "https://api.github.com/users/slyviacassell/received_events", "repos_url": "https://api.github.com/users/slyviacassell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions", "type": "User", "url": "https://api.github.com/users/slyviacassell" }
Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
https://api.github.com/repos/huggingface/datasets/issues/3264/events
null
https://api.github.com/repos/huggingface/datasets/issues/3264/labels{/name}
2021-11-13T11:47:12Z
null
false
null
null
1,052,663,513
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3264
[ "#take\r\nI am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.\r\n\r\nAs for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as the files are <1MB in size total.", "> #take I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy.\r\n> \r\n> As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. Anyone has opinions on whether it is preferable for me to host them somewhere (e.g. personal GDrive account) or upload them to the dataset folder directly and use github raw URLs? The files are <1MB in size.\r\n\r\nI am planning to fix it next few days. But my to-do list is full and I do not have the cache of definite_pronoun_resolution. I am glad that you can take this. Thanks a lot!", "No problem, buddy! Will submit a PR over this weekend." ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug - WikiAuto Manual The original manual datasets with the following downloading URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author. ``` https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy The downloading URL for jeopardy may move from ``` http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` to ``` https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?resourcekey=0-1abK4cJq-mqxFoSg86ieIg ``` - definite_pronoun_resolution The following downloading URL for definite_pronoun_resolution cannot be reached for some reasons. ``` http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('wiki_auto','manual') datasets.load_datasets('jeopardy') datasets.load_datasets('definite_pronoun_resolution') ``` ## Expected results Download successfully ## Actual results - WikiAuto Manual ``` Downloading and preparing dataset wiki_auto/manual (download: 151.65 MiB, generated: 155.97 MiB, post-processed: Unknown size, total: 307.61 MiB) to /root/.cache/huggingface/datasets/wiki_auto/manual/1.0.0/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8... 
0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last): File "wiki_auto.py", line 43, in <module> main() File "wiki_auto.py", line 40, in main train, dev, test = dataset.generate_k_shot_data(k=16, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 24, in generate_k_shot_data dataset = self.load_dataset() File "wiki_auto.py", line 34, in load_dataset return datasets.load_dataset('wiki_auto', 'manual') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wiki_auto/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8/wiki_auto.py", line 193, in _split_generators data_dir = dl_manager.download_and_extract(my_urls) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, 
download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 592, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy ``` Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /root/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "jeopardy.py", line 45, in <module> main() File "jeopardy.py", line 42, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "jeopardy.py", line 36, in load_dataset return datasets.load_dataset("jeopardy") File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return 
self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` - definite_pronoun_resolution ``` Downloading and preparing dataset definite_pronoun_resolution/plain_text (download: 222.12 KiB, generated: 239.12 KiB, post-processed: Unknown size, total: 461.24 KiB) to /root/.cache/huggingface/datasets/definite_pronoun_resolution/plain_text/1.0.0/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff... 
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last): File "definite_pronoun_resolution.py", line 37, in <module> main() File "definite_pronoun_resolution.py", line 34, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "definite_pronoun_resolution.py", line 28, in load_dataset return datasets.load_dataset('definite_pronoun_resolution') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/definite_pronoun_resolution/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff/definite_pronoun_resolution.py", line 76, in _split_generators files = dl_manager.download_and_extract( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File 
"/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
2022-06-01T17:38:16Z
https://github.com/huggingface/datasets/issues/3264
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3264/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3263/comments
https://api.github.com/repos/huggingface/datasets/issues/3263/timeline
2021-11-13T13:31:47Z
null
completed
I_kwDODunzps4-vK1E
closed
[]
null
3,263
{ "avatar_url": "https://avatars.githubusercontent.com/u/90987031?v=4", "events_url": "https://api.github.com/users/FStell01/events{/privacy}", "followers_url": "https://api.github.com/users/FStell01/followers", "following_url": "https://api.github.com/users/FStell01/following{/other_user}", "gists_url": "https://api.github.com/users/FStell01/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FStell01", "id": 90987031, "login": "FStell01", "node_id": "MDQ6VXNlcjkwOTg3MDMx", "organizations_url": "https://api.github.com/users/FStell01/orgs", "received_events_url": "https://api.github.com/users/FStell01/received_events", "repos_url": "https://api.github.com/users/FStell01/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FStell01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FStell01/subscriptions", "type": "User", "url": "https://api.github.com/users/FStell01" }
FET DATA
https://api.github.com/repos/huggingface/datasets/issues/3263/events
null
https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name}
2021-11-13T05:46:06Z
null
false
null
null
1,052,552,516
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
https://api.github.com/repos/huggingface/datasets/issues/3263
[]
https://api.github.com/repos/huggingface/datasets
NONE
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-11-13T13:31:47Z
https://github.com/huggingface/datasets/issues/3263
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3262/comments
https://api.github.com/repos/huggingface/datasets/issues/3262/timeline
2021-11-15T11:08:37Z
null
null
PR_kwDODunzps4uej4t
closed
[]
false
3,262
{ "avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4", "events_url": "https://api.github.com/users/manisnesan/events{/privacy}", "followers_url": "https://api.github.com/users/manisnesan/followers", "following_url": "https://api.github.com/users/manisnesan/following{/other_user}", "gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manisnesan", "id": 153142, "login": "manisnesan", "node_id": "MDQ6VXNlcjE1MzE0Mg==", "organizations_url": "https://api.github.com/users/manisnesan/orgs", "received_events_url": "https://api.github.com/users/manisnesan/received_events", "repos_url": "https://api.github.com/users/manisnesan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions", "type": "User", "url": "https://api.github.com/users/manisnesan" }
asserts replaced with exception for image classification task, csv, json
https://api.github.com/repos/huggingface/datasets/issues/3262/events
null
https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name}
2021-11-12T22:34:59Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3262.diff", "html_url": "https://github.com/huggingface/datasets/pull/3262", "merged_at": "2021-11-15T11:08:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3262.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3262" }
1,052,455,082
[]
https://api.github.com/repos/huggingface/datasets/issues/3262
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171
2021-11-15T11:08:37Z
https://github.com/huggingface/datasets/pull/3262
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3261/comments
https://api.github.com/repos/huggingface/datasets/issues/3261/timeline
2021-12-21T10:24:10Z
null
completed
I_kwDODunzps4-uYgN
closed
[]
null
3,261
{ "avatar_url": "https://avatars.githubusercontent.com/u/37913218?v=4", "events_url": "https://api.github.com/users/lara-martin/events{/privacy}", "followers_url": "https://api.github.com/users/lara-martin/followers", "following_url": "https://api.github.com/users/lara-martin/following{/other_user}", "gists_url": "https://api.github.com/users/lara-martin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lara-martin", "id": 37913218, "login": "lara-martin", "node_id": "MDQ6VXNlcjM3OTEzMjE4", "organizations_url": "https://api.github.com/users/lara-martin/orgs", "received_events_url": "https://api.github.com/users/lara-martin/received_events", "repos_url": "https://api.github.com/users/lara-martin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lara-martin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lara-martin/subscriptions", "type": "User", "url": "https://api.github.com/users/lara-martin" }
Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
https://api.github.com/repos/huggingface/datasets/issues/3261/events
null
https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name}
2021-11-12T19:25:19Z
null
false
null
null
1,052,346,381
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
https://api.github.com/repos/huggingface/datasets/issues/3261
[ "Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(data_dir, \"all-sci-fi-data-train.txt\")\r\nreturn [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": train_filepath,\r\n },\r\n ),\r\n...\r\n])\r\n\r\n# in generate_examples\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n ...\r\n```", "It's working: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/viewer/Scifi_TV_Shows/test\r\n\r\n<img width=\"1494\" alt=\"Capture d’écran 2021-12-21 aΜ€ 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/146914068-f4b7225f-42c5-471d-9c73-2adac722162f.png\">\r\n" ]
https://api.github.com/repos/huggingface/datasets
NONE
## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*' **Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows) I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance! Am I the one who added this dataset? Yes
2021-12-21T10:24:10Z
https://github.com/huggingface/datasets/issues/3261
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3260/comments
https://api.github.com/repos/huggingface/datasets/issues/3260/timeline
2021-11-16T17:55:22Z
null
null
PR_kwDODunzps4ueCIU
closed
[]
false
3,260
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Fix ConnectionError in Scielo dataset
https://api.github.com/repos/huggingface/datasets/issues/3260/events
null
https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name}
2021-11-12T18:02:37Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3260.diff", "html_url": "https://github.com/huggingface/datasets/pull/3260", "merged_at": "2021-11-16T17:55:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/3260.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3260" }
1,052,247,373
[]
https://api.github.com/repos/huggingface/datasets/issues/3260
[ "The CI error is unrelated to the change." ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR: * allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint) * makes the Scielo dataset streamable Fixes #3255.
2021-11-16T18:18:17Z
https://github.com/huggingface/datasets/pull/3260
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3259/comments
https://api.github.com/repos/huggingface/datasets/issues/3259/timeline
2021-11-18T17:19:33Z
null
null
PR_kwDODunzps4ud5W3
closed
[]
false
3,259
{ "avatar_url": "https://avatars.githubusercontent.com/u/1298052?v=4", "events_url": "https://api.github.com/users/jkkummerfeld/events{/privacy}", "followers_url": "https://api.github.com/users/jkkummerfeld/followers", "following_url": "https://api.github.com/users/jkkummerfeld/following{/other_user}", "gists_url": "https://api.github.com/users/jkkummerfeld/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jkkummerfeld", "id": 1298052, "login": "jkkummerfeld", "node_id": "MDQ6VXNlcjEyOTgwNTI=", "organizations_url": "https://api.github.com/users/jkkummerfeld/orgs", "received_events_url": "https://api.github.com/users/jkkummerfeld/received_events", "repos_url": "https://api.github.com/users/jkkummerfeld/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jkkummerfeld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jkkummerfeld/subscriptions", "type": "User", "url": "https://api.github.com/users/jkkummerfeld" }
Updating details of IRC disentanglement data
https://api.github.com/repos/huggingface/datasets/issues/3259/events
null
https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name}
2021-11-12T17:16:58Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3259.diff", "html_url": "https://github.com/huggingface/datasets/pull/3259", "merged_at": "2021-11-18T17:19:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3259.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3259" }
1,052,189,775
[]
https://api.github.com/repos/huggingface/datasets/issues/3259
[ "Thank you for the cleanup!" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation.
2021-11-18T17:19:33Z
https://github.com/huggingface/datasets/pull/3259
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3258/comments
https://api.github.com/repos/huggingface/datasets/issues/3258/timeline
null
null
null
I_kwDODunzps4-tx4j
open
[]
null
3,258
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Reload dataset that was already downloaded with `load_from_disk` from cloud storage
https://api.github.com/repos/huggingface/datasets/issues/3258/events
null
https://api.github.com/repos/huggingface/datasets/issues/3258/labels{/name}
2021-11-12T17:14:59Z
null
false
null
null
1,052,188,195
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/3258
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
`load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once. It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file.
2021-11-12T17:14:59Z
https://github.com/huggingface/datasets/issues/3258
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3257/comments
https://api.github.com/repos/huggingface/datasets/issues/3257/timeline
2021-11-17T16:18:38Z
null
completed
I_kwDODunzps4-tg1d
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" } ]
null
3,257
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Use f-strings for string formatting
https://api.github.com/repos/huggingface/datasets/issues/3257/events
null
https://api.github.com/repos/huggingface/datasets/issues/3257/labels{/name}
2021-11-12T16:02:15Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
null
1,052,118,365
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
https://api.github.com/repos/huggingface/datasets/issues/3257
[ "Hi, I would be glad to help with this. Is there anyone else working on it?", "Hi, I would be glad to work on this too.", "#self-assign", "Hi @Carlosbogo,\r\n\r\nwould you be interested in replacing the `.format` and `%` syntax with f-strings in the modules in the `datasets` directory since @Mehdi2402 has opened a PR that does that for all the other directories?", "Oh I see. I will be glad to help with the `datasets` directory then." ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax. > **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under `datasets/*`.
2021-11-17T16:18:38Z
https://github.com/huggingface/datasets/issues/3257
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3257/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3256/comments
https://api.github.com/repos/huggingface/datasets/issues/3256/timeline
2021-11-12T14:59:32Z
null
null
PR_kwDODunzps4udTqg
closed
[]
false
3,256
{ "avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4", "events_url": "https://api.github.com/users/manisnesan/events{/privacy}", "followers_url": "https://api.github.com/users/manisnesan/followers", "following_url": "https://api.github.com/users/manisnesan/following{/other_user}", "gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manisnesan", "id": 153142, "login": "manisnesan", "node_id": "MDQ6VXNlcjE1MzE0Mg==", "organizations_url": "https://api.github.com/users/manisnesan/orgs", "received_events_url": "https://api.github.com/users/manisnesan/received_events", "repos_url": "https://api.github.com/users/manisnesan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions", "type": "User", "url": "https://api.github.com/users/manisnesan" }
asserts replaced by exception for text classification task with test.
https://api.github.com/repos/huggingface/datasets/issues/3256/events
null
https://api.github.com/repos/huggingface/datasets/issues/3256/labels{/name}
2021-11-12T14:05:36Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3256.diff", "html_url": "https://github.com/huggingface/datasets/pull/3256", "merged_at": "2021-11-12T14:59:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/3256.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3256" }
1,052,000,613
[]
https://api.github.com/repos/huggingface/datasets/issues/3256
[ "Haha it looks like you got the chance of being reviewed twice at the same time and got the same suggestion twice x)\r\nAnyway it's all good now so we can merge !", "Thanks for the feedback. " ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 . I would like to first understand the code contribution workflow. So keeping the change to a single file rather than making too many changes. Once this gets approved, I will look into the rest. Thanks.
2021-11-12T15:09:33Z
https://github.com/huggingface/datasets/pull/3256
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3256/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3255/comments
https://api.github.com/repos/huggingface/datasets/issues/3255/timeline
2021-11-16T17:55:22Z
null
completed
I_kwDODunzps4-sO_Z
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
3,255
{ "avatar_url": "https://avatars.githubusercontent.com/u/2575047?v=4", "events_url": "https://api.github.com/users/WojciechKusa/events{/privacy}", "followers_url": "https://api.github.com/users/WojciechKusa/followers", "following_url": "https://api.github.com/users/WojciechKusa/following{/other_user}", "gists_url": "https://api.github.com/users/WojciechKusa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WojciechKusa", "id": 2575047, "login": "WojciechKusa", "node_id": "MDQ6VXNlcjI1NzUwNDc=", "organizations_url": "https://api.github.com/users/WojciechKusa/orgs", "received_events_url": "https://api.github.com/users/WojciechKusa/received_events", "repos_url": "https://api.github.com/users/WojciechKusa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WojciechKusa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WojciechKusa/subscriptions", "type": "User", "url": "https://api.github.com/users/WojciechKusa" }
SciELO dataset ConnectionError
https://api.github.com/repos/huggingface/datasets/issues/3255/events
null
https://api.github.com/repos/huggingface/datasets/issues/3255/labels{/name}
2021-11-12T09:57:14Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
null
1,051,783,129
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3255
[]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug I get `ConnectionError` when I am trying to load the SciELO dataset. When I try the URL with `requests` I get: ``` >>> requests.head("https://ndownloader.figstatic.com/files/14019287") <Response [302]> ``` And as far as I understand redirections in `datasets` are not supported for downloads. https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45 ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("scielo", "en-es") ``` ## Expected results Download SciELO dataset and load Dataset object ## Actual results ``` Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e... Traceback (most recent call last): File "scielo.py", line 3, in <module> dataset = load_dataset("scielo", "en-es") File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators data_dir = dl_manager.download_and_extract(_URLS[self.config.name]) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File 
"../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.12 - PyArrow version: 6.0.0
2021-11-16T17:55:22Z
https://github.com/huggingface/datasets/issues/3255
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3255/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3254/comments
https://api.github.com/repos/huggingface/datasets/issues/3254/timeline
2021-11-12T10:30:57Z
null
null
PR_kwDODunzps4ubPwR
closed
[]
false
3,254
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Update xcopa dataset (fix checksum issues + add translated data)
https://api.github.com/repos/huggingface/datasets/issues/3254/events
null
https://api.github.com/repos/huggingface/datasets/issues/3254/labels{/name}
2021-11-11T20:51:33Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3254.diff", "html_url": "https://github.com/huggingface/datasets/pull/3254", "merged_at": "2021-11-12T10:30:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/3254.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3254" }
1,051,351,172
[]
https://api.github.com/repos/huggingface/datasets/issues/3254
[ "The CI failures are unrelated to the changes (missing fields in the readme and the CER metric error fixed in #3252)." ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib.
2021-11-12T10:30:58Z
https://github.com/huggingface/datasets/pull/3254
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3254/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3253/comments
https://api.github.com/repos/huggingface/datasets/issues/3253/timeline
2021-12-09T14:26:58Z
null
completed
I_kwDODunzps4-qbOs
closed
[]
null
3,253
{ "avatar_url": "https://avatars.githubusercontent.com/u/69010336?v=4", "events_url": "https://api.github.com/users/pavel-lexyr/events{/privacy}", "followers_url": "https://api.github.com/users/pavel-lexyr/followers", "following_url": "https://api.github.com/users/pavel-lexyr/following{/other_user}", "gists_url": "https://api.github.com/users/pavel-lexyr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pavel-lexyr", "id": 69010336, "login": "pavel-lexyr", "node_id": "MDQ6VXNlcjY5MDEwMzM2", "organizations_url": "https://api.github.com/users/pavel-lexyr/orgs", "received_events_url": "https://api.github.com/users/pavel-lexyr/received_events", "repos_url": "https://api.github.com/users/pavel-lexyr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pavel-lexyr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pavel-lexyr/subscriptions", "type": "User", "url": "https://api.github.com/users/pavel-lexyr" }
`GeneratorBasedBuilder` does not support `None` values
https://api.github.com/repos/huggingface/datasets/issues/3253/events
null
https://api.github.com/repos/huggingface/datasets/issues/3253/labels{/name}
2021-11-11T19:51:21Z
null
false
null
null
1,051,308,972
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3253
[ "Hi,\r\n\r\nthanks for reporting and providing a minimal reproducible example. \r\n\r\nThis line of the PR I've linked in our discussion on the Forum will add support for `None` values:\r\nhttps://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835\r\n\r\nI expect that PR to be merged soon." ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug `GeneratorBasedBuilder` does not support `None` values. ## Steps to reproduce the bug See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction. ## Expected results Dataset is initialized with a `None` value in the `value` column. ## Actual results ``` Traceback (most recent call last): File "main.py", line 3, in <module> datasets.load_dataset("./bad-data") File ".../datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File ".../datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File ".../datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".../datasets/builder.py", line 1103, in _prepare_split example = self.info.features.encode_example(record) File ".../datasets/features/features.py", line 1033, in encode_example return encode_nested_example(self, example) File ".../datasets/features/features.py", line 808, in encode_nested_example return { File ".../datasets/features/features.py", line 809, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File ".../datasets/features/features.py", line 855, in encode_nested_example return schema.encode_example(obj) File ".../datasets/features/features.py", line 299, in encode_example return float(value) TypeError: float() argument must be a string or a number, not 'NoneType' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 6.0.0
2021-12-09T14:26:58Z
https://github.com/huggingface/datasets/issues/3253
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3253/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3252/comments
https://api.github.com/repos/huggingface/datasets/issues/3252/timeline
2021-11-12T14:06:43Z
null
null
PR_kwDODunzps4uagoy
closed
[]
false
3,252
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
Fix failing CER metric test in CI after update
https://api.github.com/repos/huggingface/datasets/issues/3252/events
null
https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name}
2021-11-11T15:57:16Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3252.diff", "html_url": "https://github.com/huggingface/datasets/pull/3252", "merged_at": "2021-11-12T14:06:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3252" }
1,051,124,749
[]
https://api.github.com/repos/huggingface/datasets/issues/3252
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version.
2021-11-12T14:06:44Z
https://github.com/huggingface/datasets/pull/3252
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3250/comments
https://api.github.com/repos/huggingface/datasets/issues/3250/timeline
2022-10-03T09:37:25Z
null
null
PR_kwDODunzps4uYmkr
closed
[]
false
3,250
{ "avatar_url": "https://avatars.githubusercontent.com/u/7088559?v=4", "events_url": "https://api.github.com/users/ssss1029/events{/privacy}", "followers_url": "https://api.github.com/users/ssss1029/followers", "following_url": "https://api.github.com/users/ssss1029/following{/other_user}", "gists_url": "https://api.github.com/users/ssss1029/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ssss1029", "id": 7088559, "login": "ssss1029", "node_id": "MDQ6VXNlcjcwODg1NTk=", "organizations_url": "https://api.github.com/users/ssss1029/orgs", "received_events_url": "https://api.github.com/users/ssss1029/received_events", "repos_url": "https://api.github.com/users/ssss1029/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ssss1029/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ssss1029/subscriptions", "type": "User", "url": "https://api.github.com/users/ssss1029" }
Add ETHICS dataset
https://api.github.com/repos/huggingface/datasets/issues/3250/events
null
https://api.github.com/repos/huggingface/datasets/issues/3250/labels{/name}
2021-11-11T03:45:34Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3250.diff", "html_url": "https://github.com/huggingface/datasets/pull/3250", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3250.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3250" }
1,050,541,348
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
https://api.github.com/repos/huggingface/datasets/issues/3250
[ "Thanks for your contribution, @ssss1029. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
https://api.github.com/repos/huggingface/datasets
NONE
This PR adds the ETHICS dataset, including all 5 sub-datasets. From https://arxiv.org/abs/2008.02275
2022-10-03T09:37:25Z
https://github.com/huggingface/datasets/pull/3250
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3250/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3249/comments
https://api.github.com/repos/huggingface/datasets/issues/3249/timeline
2021-11-12T14:01:31Z
null
null
PR_kwDODunzps4uXeea
closed
[]
false
3,249
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix streaming for id_newspapers_2018
https://api.github.com/repos/huggingface/datasets/issues/3249/events
null
https://api.github.com/repos/huggingface/datasets/issues/3249/labels{/name}
2021-11-10T18:55:30Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3249.diff", "html_url": "https://github.com/huggingface/datasets/pull/3249", "merged_at": "2021-11-12T14:01:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/3249.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3249" }
1,050,193,138
[]
https://api.github.com/repos/huggingface/datasets/issues/3249
[]
https://api.github.com/repos/huggingface/datasets
MEMBER
To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file
2021-11-12T14:01:32Z
https://github.com/huggingface/datasets/pull/3249
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3249/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3248/comments
https://api.github.com/repos/huggingface/datasets/issues/3248/timeline
2021-11-12T17:18:11Z
null
null
PR_kwDODunzps4uXZzU
closed
[]
false
3,248
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Stream from Google Drive and other hosts
https://api.github.com/repos/huggingface/datasets/issues/3248/events
null
https://api.github.com/repos/huggingface/datasets/issues/3248/labels{/name}
2021-11-10T18:32:32Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3248.diff", "html_url": "https://github.com/huggingface/datasets/pull/3248", "merged_at": "2021-11-12T17:18:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/3248.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3248" }
1,050,171,082
[]
https://api.github.com/repos/huggingface/datasets/issues/3248
[ "I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow", "I'm fixing the remaining files based on TAR archives", "THANKS A LOT" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting: - the download URL must be updated to add the confirm token obtained by a HEAD request - it requires using cookies to keep the connection alive - the URL doesn't tell any information about whether the file is compressed or not Therefore I did two things: - I added a step for URL and headers/cookies preparation in the StreamingDownloadManager - I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures) This allows doing fancy things like ```python from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob # zip file containing a train.tsv file url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh" extracted = StreamingDownloadManager().download_and_extract(url) for inner_file in xglob(xjoin(extracted, "*.tsv")): with xopen(inner_file) as f: # streaming starts here for line in f: print(line) ``` This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression.
Here is the full list: ``` amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail, code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans, code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14, gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018, igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa, mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary, poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo, search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner, twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018, wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3 ``` Some of them may not work if the host doesn't support HTTP range requests for example Fix https://github.com/huggingface/datasets/issues/2742 Fix https://github.com/huggingface/datasets/issues/3188
2021-11-30T16:03:43Z
https://github.com/huggingface/datasets/pull/3248
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3248/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/3247/comments
https://api.github.com/repos/huggingface/datasets/issues/3247/timeline
2022-04-10T14:05:57Z
null
completed
I_kwDODunzps4-kSMQ
closed
[]
null
3,247
{ "avatar_url": "https://avatars.githubusercontent.com/u/29249513?v=4", "events_url": "https://api.github.com/users/maxzirps/events{/privacy}", "followers_url": "https://api.github.com/users/maxzirps/followers", "following_url": "https://api.github.com/users/maxzirps/following{/other_user}", "gists_url": "https://api.github.com/users/maxzirps/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxzirps", "id": 29249513, "login": "maxzirps", "node_id": "MDQ6VXNlcjI5MjQ5NTEz", "organizations_url": "https://api.github.com/users/maxzirps/orgs", "received_events_url": "https://api.github.com/users/maxzirps/received_events", "repos_url": "https://api.github.com/users/maxzirps/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxzirps/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxzirps/subscriptions", "type": "User", "url": "https://api.github.com/users/maxzirps" }
Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
https://api.github.com/repos/huggingface/datasets/issues/3247/events
null
https://api.github.com/repos/huggingface/datasets/issues/3247/labels{/name}
2021-11-10T11:17:59Z
null
false
null
null
1,049,699,088
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/3247
[ "Hi,\r\n\r\nthis issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).\r\n\r\n@lhoestq Is this worth opening an issue on Jira? Basically, PyArrow doesn't allow casts that change the order of the struct fields because they treat `pa.struct` as an ordered sequence. Reordering fields manually in Python is probably too slow, so I think this needs to be fixed by them to be usable on our side.", "I agree I would expect PyArrow to be able to handle this, do you want to open the issue @mariosasko ?\r\nAlthough maybe it's possible to fix struct casting on our side without hurting performance too much, if it's simply a matter of reordering the arrays in the StructArray", "Fixed in #3575, so I'm closing this issue." ]
https://api.github.com/repos/huggingface/datasets
NONE
## Describe the bug When trying to create a dataset from a json file of around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` Splitting the big file into smaller ones and then loading it with the `load_dataset` method did not work either. Creating a pandas dataframe from it and then loading it with `Dataset.from_pandas` works. ## Steps to reproduce the bug ```python load_dataset("json", data_files="test.json") ``` test.json ~25MB ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ... ``` working.json ~160bytes ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ``` ## Expected results It should load the dataset from the json file without error. ## Actual results It raises the exception `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` ``` Traceback (most recent call last): File "/Users/m/workspace/xxx/project/main.py", line 60, in <module> dataset = load_dataset("json", data_files="result.json") File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/load.py", line 1627, in load_dataset builder_instance.download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File 
"/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1685, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 630, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 338, in pyarrow.lib.asarray File "pyarrow/table.pxi", line 304, in pyarrow.lib.ChunkedArray.cast File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/pyarrow/compute.py", line 309, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct ``` ## Environment info - `datasets` version: 1.14.0 - Platform: macOS-12.0.1-arm64-arm-64bit - Python version: 3.9.7 - PyArrow version: 6.0.0
2022-04-10T14:05:57Z
https://github.com/huggingface/datasets/issues/3247
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3247/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/3246/comments
https://api.github.com/repos/huggingface/datasets/issues/3246/timeline
2021-11-10T11:10:39Z
null
null
PR_kwDODunzps4uVvaW
closed
[]
false
3,246
{ "avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4", "events_url": "https://api.github.com/users/verbose-void/events{/privacy}", "followers_url": "https://api.github.com/users/verbose-void/followers", "following_url": "https://api.github.com/users/verbose-void/following{/other_user}", "gists_url": "https://api.github.com/users/verbose-void/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/verbose-void", "id": 26421036, "login": "verbose-void", "node_id": "MDQ6VXNlcjI2NDIxMDM2", "organizations_url": "https://api.github.com/users/verbose-void/orgs", "received_events_url": "https://api.github.com/users/verbose-void/received_events", "repos_url": "https://api.github.com/users/verbose-void/repos", "site_admin": false, "starred_url": "https://api.github.com/users/verbose-void/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/verbose-void/subscriptions", "type": "User", "url": "https://api.github.com/users/verbose-void" }
[tiny] fix typo in stream docs
https://api.github.com/repos/huggingface/datasets/issues/3246/events
null
https://api.github.com/repos/huggingface/datasets/issues/3246/labels{/name}
2021-11-10T10:40:02Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/3246.diff", "html_url": "https://github.com/huggingface/datasets/pull/3246", "merged_at": "2021-11-10T11:10:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3246.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3246" }
1,049,662,746
[]
https://api.github.com/repos/huggingface/datasets/issues/3246
[]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
null
2021-11-10T11:10:39Z
https://github.com/huggingface/datasets/pull/3246
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3246/reactions" }
true