url (string, len 58-61) | repository_url (string, 1 class) | labels_url (string, len 72-75) | comments_url (string, len 67-70) | events_url (string, len 65-68) | html_url (string, len 46-51) | id (int64, 599M-1.83B) | node_id (string, len 18-32) | number (int64, 1-6.09k) | title (string, len 1-290) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, len 20) | updated_at (string, len 20) | closed_at (string, len 20, nullable) | active_lock_reason (null) | body (string, len 0-228k, nullable) | reactions (dict) | timeline_url (string, len 67-70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1884/comments | https://api.github.com/repos/huggingface/datasets/issues/1884/events | https://github.com/huggingface/datasets/pull/1884 | 808,755,894 | MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5 | 1,884 | dtype fix when using numpy arrays | [] | closed | false | null | 0 | 2021-02-15T18:55:25Z | 2021-07-30T11:01:18Z | 2021-07-30T11:01:18Z | null | As discussed in #625, this fix lets the user preserve the dtype of a numpy array when converting it to a pyarrow array; the dtype was getting lost due to the numpy array -> list -> pyarrow array conversion. (A short sketch of this dtype loss follows this record.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1884/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884"
} | true | [] |
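A minimal sketch of the dtype loss referenced in the record above (the values and dtype are illustrative assumptions, not taken from the PR):

```python
import numpy as np
import pyarrow as pa

arr = np.array([1, 2, 3], dtype=np.int16)
print(pa.array(arr.tolist()).type)  # int64: the int16 dtype is lost when going through a Python list
print(pa.array(arr).type)           # int16: passing the numpy array directly preserves the dtype
```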
https://api.github.com/repos/huggingface/datasets/issues/5634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5634/comments | https://api.github.com/repos/huggingface/datasets/issues/5634/events | https://github.com/huggingface/datasets/issues/5634 | 1,622,424,174 | I_kwDODunzps5gtDpu | 5,634 | Not all progress bars are showing up when they should for downloading dataset | [] | open | false | null | 2 | 2023-03-13T23:04:18Z | 2023-03-21T01:59:59Z | null | null | ### Describe the bug
While downloading the Rotten Tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear whether the fix solves this issue too.
ipywidgets
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/110427462/224851138-13fee5b7-ab51-4883-b96f-1b9808782e3b.png">
tqdm
<img width="1251" alt="Screen Shot 2023-03-13 at 3 58 59 PM" src="https://user-images.githubusercontent.com/110427462/224851180-5feb7825-9250-4b1e-ad0c-f3172ac1eb78.png">
### Steps to reproduce the bug
1. Run these lines
```python
from datasets import load_dataset
rotten_tomatoes = load_dataset("rotten_tomatoes", split="train")
```
### Expected behavior
All progress bars should be displayed: for the builder script, metadata, README, and the training, validation, and test sets.
### Environment info
requirements.txt
```
aiofiles==22.1.0
aiohttp==3.8.4
aiosignal==1.3.1
aiosqlite==0.18.0
anyio==3.6.2
appnope==0.1.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-generator==1.10
async-timeout==4.0.2
attrs==22.2.0
Babel==2.12.1
backcall==0.2.0
beautifulsoup4==4.11.2
bleach==6.0.0
brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1666764961872/work
certifi==2022.12.7
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1671179414629/work
cfgv==3.3.1
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work
comm==0.1.2
conda==22.9.0
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1669907009957/work
conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1669733752472/work
coverage==7.2.1
cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1669592251328/work
datasets==2.1.0
debugpy==1.6.6
decorator==5.1.1
defusedxml==0.7.1
dill==0.3.6
distlib==0.3.6
distro==1.4.0
entrypoints==0.4
exceptiongroup==1.1.0
executing==1.2.0
fastjsonschema==2.16.3
filelock==3.9.0
flaky==3.7.0
fqdn==1.5.1
frozenlist==1.3.3
fsspec==2023.3.0
huggingface-hub==0.10.1
identify==2.5.18
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work
iniconfig==2.0.0
ipykernel==6.12.1
ipyparallel==8.4.1
ipython==7.32.0
ipython-genutils==0.2.0
ipywidgets==8.0.4
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
json5==0.9.11
jsonpointer==2.3
jsonschema==4.17.3
jupyter-events==0.6.3
jupyter-ydoc==0.2.2
jupyter_client==8.0.3
jupyter_core==5.2.0
jupyter_server==2.4.0
jupyter_server_fileid==0.8.0
jupyter_server_terminals==0.4.4
jupyter_server_ydoc==0.6.1
jupyterlab==3.6.1
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.5
jupyterlab_server==2.20.0
libmambapy @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/libmambapy
mamba @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/mamba
MarkupSafe==2.1.2
matplotlib-inline==0.1.6
mistune==2.0.5
multidict==6.0.4
multiprocess==0.70.14
nbclassic==0.5.3
nbclient==0.7.2
nbconvert==7.2.9
nbformat==5.7.3
nest-asyncio==1.5.6
nodeenv==1.7.0
notebook==6.5.3
notebook_shim==0.2.2
numpy==1.24.2
outcome==1.2.0
packaging==23.0
pandas==1.5.3
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
platformdirs==3.0.0
plotly==5.13.1
pluggy==1.0.0
pre-commit==3.1.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
psutil==5.9.4
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==11.0.0
pycosat @ file:///Users/runner/miniforge3/conda-bld/pycosat_1666836580084/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
Pygments==2.14.0
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1665350324128/work
pyrsistent==0.19.3
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
pytest==7.2.1
pytest-asyncio==0.20.3
pytest-cov==4.0.0
pytest-timeout==2.1.0
python-dateutil==2.8.2
python-json-logger==2.0.7
pytz==2022.7.1
PyYAML==6.0
pyzmq==25.0.0
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1661872987712/work
responses==0.18.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
ruamel-yaml-conda @ file:///Users/runner/miniforge3/conda-bld/ruamel_yaml_1666819760545/work
Send2Trash==1.8.0
simplegeneric==0.8.1
six==1.16.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.4
stack-data==0.6.2
tenacity==8.2.2
terminado==0.17.1
tinycss2==1.2.1
tomli==2.0.1
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
tornado==6.2
tqdm==4.64.1
traitlets==5.8.1
trio==0.22.0
typing_extensions==4.5.0
uri-template==1.2.0
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1669259737463/work
virtualenv==20.19.0
wcwidth==0.2.6
webcolors==1.12
webencodings==0.5.1
websocket-client==1.5.1
widgetsnbextension==4.0.5
xxhash==3.2.0
y-py==0.5.9
yarl==1.8.2
ypy-websocket==0.8.2
zstandard==0.19.0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5634/timeline | null | null | null | null | false | [
"Hi! \r\n\r\nBy default, tqdm has `leave=True` to \"keep all traces of the progress bar upon the termination of iteration\". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.\r\n\r\nI feel like our TQDM bars are noisy, so I think we should always set `l... |
https://api.github.com/repos/huggingface/datasets/issues/5389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5389/comments | https://api.github.com/repos/huggingface/datasets/issues/5389/events | https://github.com/huggingface/datasets/pull/5389 | 1,509,348,626 | PR_kwDODunzps5GHsOo | 5,389 | Fix link in `load_dataset` docstring | [] | closed | false | null | 6 | 2022-12-23T13:26:31Z | 2023-01-25T19:00:43Z | 2023-01-24T16:33:38Z | null | Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5389/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5389",
"merged_at": "2023-01-24T16:33:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5389"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4078/comments | https://api.github.com/repos/huggingface/datasets/issues/4078/events | https://github.com/huggingface/datasets/pull/4078 | 1,189,513,572 | PR_kwDODunzps41eWnl | 4,078 | Fix GithubMetricModuleFactory instantiation with None download_config | [] | closed | false | null | 1 | 2022-04-01T09:26:58Z | 2022-04-01T14:44:51Z | 2022-04-01T14:39:27Z | null | Recent PR:
- #4063
introduced a potential bug if `GithubMetricModuleFactory` is instantiated with None `download_config`.
This PR adds instantiation tests and fixes that potential issue.
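As a rough illustration of the failure mode, a hypothetical, heavily simplified sketch (not the actual patch; the class body is an assumption):

```python
from typing import Optional

from datasets import DownloadConfig

class GithubMetricModuleFactory:  # hypothetical, heavily simplified
    def __init__(self, name: str, download_config: Optional[DownloadConfig] = None):
        # guard: fall back to a default config instead of keeping None around
        self.download_config = download_config if download_config is not None else DownloadConfig()
        self.name = name
```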
CC: @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4078/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4078",
"merged_at": "2022-04-01T14:39:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4078"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5588/comments | https://api.github.com/repos/huggingface/datasets/issues/5588/events | https://github.com/huggingface/datasets/pull/5588 | 1,603,304,766 | PR_kwDODunzps5K8YYz | 5,588 | Flatten dataset on the fly in `save_to_disk` | [] | closed | false | null | 3 | 2023-02-28T15:37:46Z | 2023-02-28T17:28:35Z | 2023-02-28T17:21:17Z | null | Flatten a dataset on the fly in `save_to_disk` instead of doing it with `flatten_indices` to avoid creating an additional cache file.
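To make the context concrete, a minimal sketch (the toy data and output directory are assumptions): `select()` gives the dataset an indices mapping, which previously had to be resolved with `flatten_indices()` before saving.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))}).select(range(5))  # select() adds an indices mapping
# Previously, save_to_disk resolved the mapping via flatten_indices(), writing an
# additional cache file; with this change the rows are flattened on the fly instead.
ds.save_to_disk("out_dir")
```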
(this is one of the sub-tasks in https://github.com/huggingface/datasets/issues/5507) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5588/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5588",
"merged_at": "2023-02-28T17:21:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5588"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/146/comments | https://api.github.com/repos/huggingface/datasets/issues/146/events | https://github.com/huggingface/datasets/pull/146 | 619,564,653 | MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx | 146 | Add BERTScore to metrics | [] | closed | false | null | 0 | 2020-05-16T22:09:39Z | 2020-05-17T22:22:10Z | 2020-05-17T22:22:09Z | null | This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics.
Here is an example of how to use it.
```python
import nlp
bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [['this is an example.', 'this is one example.'], ['apple']]
results = bertscore.compute(predictions, references, lang='en')
print(results)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/146/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/146.diff",
"html_url": "https://github.com/huggingface/datasets/pull/146",
"merged_at": "2020-05-17T22:22:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/146.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/146"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/871/comments | https://api.github.com/repos/huggingface/datasets/issues/871/events | https://github.com/huggingface/datasets/issues/871 | 747,470,136 | MDU6SXNzdWU3NDc0NzAxMzY= | 871 | terminate called after throwing an instance of 'google::protobuf::FatalException' | [] | closed | false | null | 2 | 2020-11-20T12:56:24Z | 2020-12-12T21:16:32Z | 2020-12-12T21:16:32Z | null | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
run_t5_base_eval.sh: line 19: 5795 Aborted | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/871/timeline | null | completed | null | null | false | [
"Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)",
"closing now, figured out this is because the max length of decoder w... |
https://api.github.com/repos/huggingface/datasets/issues/1590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1590/comments | https://api.github.com/repos/huggingface/datasets/issues/1590/events | https://github.com/huggingface/datasets/issues/1590 | 769,242,858 | MDU6SXNzdWU3NjkyNDI4NTg= | 1,590 | Add helper to resolve namespace collision | [] | closed | false | null | 5 | 2020-12-16T20:17:24Z | 2022-06-01T15:32:04Z | 2022-06-01T15:32:04Z | null | Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1590/timeline | null | completed | null | null | false | [
"Do you have an example?",
"I was thinking about using something like [importlib](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly) to over-ride the collision. \r\n\r\n**Reason requested**: I use the [following template](https://github.com/jramapuram/ml_base/) repo where I house a... |
https://api.github.com/repos/huggingface/datasets/issues/557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/557/comments | https://api.github.com/repos/huggingface/datasets/issues/557/events | https://github.com/huggingface/datasets/pull/557 | 690,220,135 | MDExOlB1bGxSZXF1ZXN0NDc3MTQ1NjAx | 557 | Fix a few typos | [] | closed | false | null | 0 | 2020-09-01T15:03:24Z | 2020-09-02T07:39:08Z | 2020-09-02T07:39:07Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/557/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/557/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/557",
"merged_at": "2020-09-02T07:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/557"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/5101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5101/comments | https://api.github.com/repos/huggingface/datasets/issues/5101/events | https://github.com/huggingface/datasets/pull/5101 | 1,404,513,085 | PR_kwDODunzps5AkHJc | 5,101 | Free the "hf" filesystem protocol for `hffs` | [] | closed | false | null | 1 | 2022-10-11T11:57:21Z | 2022-10-12T15:32:59Z | 2022-10-12T15:30:38Z | null | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5101/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5101.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5101",
"merged_at": "2022-10-12T15:30:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5101.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5101"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2109/comments | https://api.github.com/repos/huggingface/datasets/issues/2109/events | https://github.com/huggingface/datasets/pull/2109 | 840,746,598 | MDExOlB1bGxSZXF1ZXN0NjAwNTg1MzM5 | 2,109 | Add more issue templates and customize issue template chooser | [] | closed | false | null | 2 | 2021-03-25T09:41:53Z | 2021-04-19T06:20:11Z | 2021-04-19T06:20:11Z | null | When opening an issue, it is not evident to users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible to users. This is why many users end up choosing the `add-dataset` template instead (as it is more visible) for issues that are not actually requesting the addition of a new dataset.
~~With this PR, the default blank issue template would be as visible as the other templates (such as the `add-dataset` template), thus making it easier for users to choose it.~~
With this PR:
- more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request`
- the issue template chooser is customized, so that it now includes a link to `Discussions` for questions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2109/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2109.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2109",
"merged_at": "2021-04-19T06:20:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2109.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2109"
} | true | [
"If you agree, I could also add a link to [Discussions](https://github.com/huggingface/datasets/discussions) in order to reinforce the use of Discussion to make Questions (instead of Issues).\r\n\r\nI could also add some other templates: Bug, Feature Request,...",
"@theo-m we wrote our same comments at the same t... |
https://api.github.com/repos/huggingface/datasets/issues/1224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1224/comments | https://api.github.com/repos/huggingface/datasets/issues/1224/events | https://github.com/huggingface/datasets/pull/1224 | 758,022,998 | MDExOlB1bGxSZXF1ZXN0NTMzMjY2Njg1 | 1,224 | adding conceptnet5 | [] | closed | false | null | 11 | 2020-12-06T21:06:53Z | 2020-12-09T16:38:16Z | 2020-12-09T14:37:17Z | null | Adding the conceptnet5 and omcs txt files used to create the conceptnet5 dataset. Conceptnet5 is a common-sense dataset. More info can be found here: https://github.com/commonsense/conceptnet5/wiki | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1224/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1224.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1224",
"merged_at": "2020-12-09T14:37:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1224.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1224"
} | true | [
"Thank you. I'll make those changes. but I'm having problems trying to push my changes to my fork\r\n",
"Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n",
"Also, what docstring are you recommending?\r\n",
"> Hi, I've removed the TODO, and added a README.md. How do I push the... |
https://api.github.com/repos/huggingface/datasets/issues/153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/153/comments | https://api.github.com/repos/huggingface/datasets/issues/153/events | https://github.com/huggingface/datasets/issues/153 | 619,972,246 | MDU6SXNzdWU2MTk5NzIyNDY= | 153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | 4 | 2020-05-18T07:24:22Z | 2020-05-18T21:18:16Z | null | null | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessible and not only the generic citation of the meta-dataset itself.
Let's take GLUE as an example:
The configuration has the citation for each dataset included (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)) but it should be copied inside the dataset info so that, when people access `dataset.info.citation` they get both the citation for GLUE and the citation for the specific datasets inside GLUE that they have loaded. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/153/timeline | null | null | null | null | false | [
"As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.",
"Actually, double checki... |
https://api.github.com/repos/huggingface/datasets/issues/3852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3852/comments | https://api.github.com/repos/huggingface/datasets/issues/3852/events | https://github.com/huggingface/datasets/pull/3852 | 1,162,252,337 | PR_kwDODunzps40Fb26 | 3,852 | Redundant add dataset information and dead link. | [] | closed | false | null | 1 | 2022-03-08T05:57:05Z | 2022-03-08T16:54:36Z | 2022-03-08T16:54:36Z | null | > Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation.
The "add a dataset link" gives 404 Error, and the share_dataset link has changed. I feel this information is redundant/deprecated now since we have a more detailed guide for "How to add a dataset?". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3852/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3852",
"merged_at": "2022-03-08T16:54:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3852"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3852). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5763/comments | https://api.github.com/repos/huggingface/datasets/issues/5763/events | https://github.com/huggingface/datasets/pull/5763 | 1,670,476,302 | PR_kwDODunzps5OcMI7 | 5,763 | fix typo: "mow" -> "now" | [] | closed | false | null | 2 | 2023-04-17T06:03:44Z | 2023-04-17T15:01:53Z | 2023-04-17T14:54:46Z | null | I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now." | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5763/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5763",
"merged_at": "2023-04-17T14:54:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5763"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2863/comments | https://api.github.com/repos/huggingface/datasets/issues/2863/events | https://github.com/huggingface/datasets/pull/2863 | 986,156,755 | MDExOlB1bGxSZXF1ZXN0NzI1MzkwMTkx | 2,863 | Update dataset URL | [] | closed | false | null | 1 | 2021-09-02T05:22:18Z | 2021-09-02T08:10:50Z | 2021-09-02T08:10:50Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2863/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2863.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2863",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2863.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2863"
} | true | [
"Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. 😉 "
] |
https://api.github.com/repos/huggingface/datasets/issues/5735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5735/comments | https://api.github.com/repos/huggingface/datasets/issues/5735/events | https://github.com/huggingface/datasets/pull/5735 | 1,662,150,903 | PR_kwDODunzps5OAY3A | 5,735 | Implement sharding on merged iterable datasets | [] | closed | false | null | 11 | 2023-04-11T10:02:25Z | 2023-04-27T16:39:04Z | 2023-04-27T16:32:09Z | null | This PR allows sharding of merged iterable datasets.
Merged iterable datasets, created for instance with the `interleave_datasets` command, are composed of multiple sub-iterables, one for each dataset that has been merged.
With this PR, sharding a merged iterable dataset will result in multiple merged datasets, each composed of sharded sub-iterables, ensuring that there is no duplication of data.
As a result, it is now possible to set any number of workers in the dataloader, as long as it is less than or equal to the lowest number of shards among the datasets. Before, it had to be set to 0.
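A sketch of the usage this unlocks (the dataset names and worker count are illustrative assumptions):

```python
from datasets import load_dataset, interleave_datasets
from torch.utils.data import DataLoader

# two streaming (iterable) datasets, merged into one
d1 = load_dataset("c4", "en", split="train", streaming=True)
d2 = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
merged = interleave_datasets([d1, d2])

# with this PR, num_workers may be > 0, up to the lowest shard count among d1 and d2
loader = DataLoader(merged.with_format("torch"), batch_size=16, num_workers=4)
```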
I previously talked about this issue on the forum [here](https://discuss.huggingface.co/t/interleaving-iterable-dataset-with-num-workers-0/35801) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5735/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5735",
"merged_at": "2023-04-27T16:32:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5735"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable",
"Hi ! \r\nI just tested this ou... |
https://api.github.com/repos/huggingface/datasets/issues/3380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3380/comments | https://api.github.com/repos/huggingface/datasets/issues/3380/events | https://github.com/huggingface/datasets/issues/3380 | 1,071,166,270 | I_kwDODunzps4_2LM- | 3,380 | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | [] | closed | false | null | 0 | 2021-12-04T09:18:33Z | 2022-01-11T12:29:53Z | 2022-01-11T12:29:53Z | null | Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week!
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://hf.co/oss-survey)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3380/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3380/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3081/comments | https://api.github.com/repos/huggingface/datasets/issues/3081/events | https://github.com/huggingface/datasets/pull/3081 | 1,026,383,749 | PR_kwDODunzps4tM1Gy | 3,081 | [Audio datasets] Adapting all audio datasets | [] | closed | false | null | 4 | 2021-10-14T13:13:45Z | 2021-10-15T12:52:03Z | 2021-10-15T12:22:33Z | null | This PR adds the new `Audio(...)` features - see: https://github.com/huggingface/datasets/pull/2324 to the most important audio datasets:
- Librispeech
- Timit
- Common Voice
- AMI
- ... (others I'm forgetting now)
The PR is currently blocked because the following leads to a problem:
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
As soon as it's unblocked, I'll adapt the other audio datasets as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3081/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3081",
"merged_at": "2021-10-15T12:22:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3081"
} | true | [
"@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise",
"@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section",
"Hi @patri... |
https://api.github.com/repos/huggingface/datasets/issues/5163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5163/comments | https://api.github.com/repos/huggingface/datasets/issues/5163/events | https://github.com/huggingface/datasets/pull/5163 | 1,422,540,337 | PR_kwDODunzps5BgQxp | 5,163 | Reduce default max `writer_batch_size` | [] | closed | false | null | 1 | 2022-10-25T14:14:52Z | 2022-10-27T12:19:27Z | 2022-10-27T12:16:47Z | null | Reduce the default writer_batch_size from 10k to 1k examples. Additionally, align the default values of `batch_size` and `writer_batch_size` in `Dataset.cast` with the values from the corresponding docstring. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5163/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5163.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5163",
"merged_at": "2022-10-27T12:16:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5163.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5163"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5974/comments | https://api.github.com/repos/huggingface/datasets/issues/5974/events | https://github.com/huggingface/datasets/pull/5974 | 1,767,981,231 | PR_kwDODunzps5TkXCb | 5,974 | Deprecate `errors` param in favor of `encoding_errors` in text builder | [] | closed | false | null | 3 | 2023-06-21T16:31:38Z | 2023-06-26T10:34:43Z | 2023-06-26T10:27:40Z | null | For consistency with the JSON builder and Pandas | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5974/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5974",
"merged_at": "2023-06-26T10:27:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5974"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4172/comments | https://api.github.com/repos/huggingface/datasets/issues/4172/events | https://github.com/huggingface/datasets/pull/4172 | 1,204,433,160 | PR_kwDODunzps42O7LW | 4,172 | Update assin2 dataset_infos.json | [] | closed | false | null | 1 | 2022-04-14T11:53:06Z | 2022-04-15T14:47:42Z | 2022-04-15T14:41:22Z | null | Following comments in https://github.com/huggingface/datasets/issues/4003, we found that it was outdated and causing an error when loading the dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4172/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4172.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4172",
"merged_at": "2022-04-15T14:41:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4172.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4172"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4735/comments | https://api.github.com/repos/huggingface/datasets/issues/4735/events | https://github.com/huggingface/datasets/pull/4735 | 1,314,501,641 | PR_kwDODunzps477CuP | 4,735 | Pin rouge_score test dependency | [] | closed | false | null | 1 | 2022-07-22T07:18:21Z | 2022-07-22T07:58:14Z | 2022-07-22T07:45:18Z | null | Temporarily pin `rouge_score` (to avoid latest version 0.7.0) until the issue is fixed.
Fix #4734 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4735/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4735",
"merged_at": "2022-07-22T07:45:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4735"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/121/comments | https://api.github.com/repos/huggingface/datasets/issues/121/events | https://github.com/huggingface/datasets/pull/121 | 618,790,040 | MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx | 121 | make style | [] | closed | false | null | 0 | 2020-05-15T08:23:36Z | 2020-05-15T08:25:39Z | 2020-05-15T08:25:38Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/121/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/121",
"merged_at": "2020-05-15T08:25:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/121"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/1458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1458/comments | https://api.github.com/repos/huggingface/datasets/issues/1458/events | https://github.com/huggingface/datasets/pull/1458 | 761,235,962 | MDExOlB1bGxSZXF1ZXN0NTM1OTMyMTA1 | 1,458 | Add id_nergrit_corpus | [] | closed | false | null | 1 | 2020-12-10T13:20:34Z | 2020-12-17T10:45:15Z | 2020-12-17T10:45:15Z | null | Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis.
Recently my PR for id_nergrit_ner was accepted and merged into the main branch. id_nergrit_ner has only one dataset (NER); this new PR renames the dataset from id_nergrit_ner to id_nergrit_corpus and adds the two remaining datasets (Statement Extraction and Sentiment Analysis). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1458/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1458/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1458.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1458",
"merged_at": "2020-12-17T10:45:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1458.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1458"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3458/comments | https://api.github.com/repos/huggingface/datasets/issues/3458/events | https://github.com/huggingface/datasets/pull/3458 | 1,084,926,025 | PR_kwDODunzps4wFiRb | 3,458 | Fix duplicated tag in wikicorpus dataset card | [] | closed | false | null | 1 | 2021-12-20T15:34:16Z | 2021-12-20T16:03:25Z | 2021-12-20T16:03:24Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3458/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3458/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3458.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3458",
"merged_at": "2021-12-20T16:03:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3458.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3458"
} | true | [
"CI is failing just because of empty sections - merging"
] |
https://api.github.com/repos/huggingface/datasets/issues/5662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5662/comments | https://api.github.com/repos/huggingface/datasets/issues/5662/events | https://github.com/huggingface/datasets/pull/5662 | 1,637,140,813 | PR_kwDODunzps5MtvsM | 5,662 | Fix unnecessary dict comprehension | [] | closed | false | null | 3 | 2023-03-23T09:18:58Z | 2023-03-23T09:46:59Z | 2023-03-23T09:37:49Z | null | After the ruff 0.0.258 release, the C416 rule was extended to flag unnecessary dict comprehensions. See:
- https://github.com/charliermarsh/ruff/releases/tag/v0.0.258
- https://github.com/charliermarsh/ruff/pull/3605
This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple values.
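A before/after sketch of the pattern the C416 rule flags (the variable names are illustrative):

```python
pairs = {"a": 1, "b": 2}

# flagged by ruff C416: the comprehension only unpacks and re-packs each key/value pair
copy1 = {k: v for k, v in pairs.items()}

# equivalent and simpler, which is what the rule suggests
copy2 = dict(pairs)
```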
Fix #5661 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5662/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5662",
"merged_at": "2023-03-23T09:37:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5662"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I am merging because the CI error is unrelated.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | re... |
https://api.github.com/repos/huggingface/datasets/issues/1338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1338/comments | https://api.github.com/repos/huggingface/datasets/issues/1338/events | https://github.com/huggingface/datasets/pull/1338 | 759,725,770 | MDExOlB1bGxSZXF1ZXN0NTM0Njc5ODcz | 1,338 | Add GigaFren Dataset | [] | closed | false | null | 1 | 2020-12-08T19:42:04Z | 2020-12-14T10:03:47Z | 2020-12-14T10:03:46Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1338/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1338",
"merged_at": "2020-12-14T10:03:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1338"
} | true | [
"@lhoestq fixed"
] | |
https://api.github.com/repos/huggingface/datasets/issues/1661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1661/comments | https://api.github.com/repos/huggingface/datasets/issues/1661/events | https://github.com/huggingface/datasets/pull/1661 | 775,840,801 | MDExOlB1bGxSZXF1ZXN0NTQ2NDQzNjYx | 1,661 | updated dataset cards | [] | closed | false | null | 0 | 2020-12-29T11:20:40Z | 2020-12-30T17:15:16Z | 2020-12-30T17:15:16Z | null | Added a dataset instance to the card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1661/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1661.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1661",
"merged_at": "2020-12-30T17:15:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1661.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1661"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4072/comments | https://api.github.com/repos/huggingface/datasets/issues/4072/events | https://github.com/huggingface/datasets/pull/4072 | 1,188,266,410 | PR_kwDODunzps41aIUG | 4,072 | Add installation instructions to image_process doc | [] | closed | false | null | 1 | 2022-03-31T15:29:37Z | 2022-03-31T17:05:46Z | 2022-03-31T17:00:19Z | null | This PR adds the installation instructions for the Image feature to the image process doc. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4072",
"merged_at": "2022-03-31T17:00:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4072"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5434/comments | https://api.github.com/repos/huggingface/datasets/issues/5434/events | https://github.com/huggingface/datasets/issues/5434 | 1,536,090,042 | I_kwDODunzps5bjt-6 | 5,434 | sample_dataset module not found | [] | closed | false | null | 3 | 2023-01-17T09:57:54Z | 2023-01-19T13:52:12Z | 2023-01-19T07:55:11Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5434/timeline | null | completed | null | null | false | [
"Hi! Can you describe what the actual error is?",
"working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from t... |
https://api.github.com/repos/huggingface/datasets/issues/5703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5703/comments | https://api.github.com/repos/huggingface/datasets/issues/5703/events | https://github.com/huggingface/datasets/pull/5703 | 1,653,158,955 | PR_kwDODunzps5NjCCV | 5,703 | [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only | [] | closed | false | null | 4 | 2023-04-04T04:37:49Z | 2023-04-20T03:17:37Z | 2023-04-20T03:17:32Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5703/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5703.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5703",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5703.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5703"
} | true | [
"`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).",
"That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impa... |
https://api.github.com/repos/huggingface/datasets/issues/5669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5669/comments | https://api.github.com/repos/huggingface/datasets/issues/5669/events | https://github.com/huggingface/datasets/issues/5669 | 1,638,070,046 | I_kwDODunzps5hovce | 5,669 | Almost identical datasets, huge performance difference | [] | open | false | null | 7 | 2023-03-23T18:20:20Z | 2023-04-09T18:56:23Z | null | null | ### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("beans", split="train")
for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
    pass
```
The above pass over the dataset takes about 1.5 seconds on my computer.
However, if I re-create (almost) the same dataset, the sweep takes a HUGE amount of time: 15 minutes. Steps to reproduce:
```python
def transform(example):
    example["image2"] = cv2.imread(example["image_file_path"])
    return example
dataset2 = dataset.map(transform, remove_columns=["image"])
for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8):
    pass
```
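One rough workaround sketch (the `decode` helper below is hypothetical, not from the report): decode images on access with a transform instead of materializing the raw pixel arrays into the Arrow table with `map`:
```python
import cv2

def decode(batch):
    # runs at access time, so the Arrow table keeps only the small path strings
    batch["image2"] = [cv2.imread(path) for path in batch["image_file_path"]]
    return batch

dataset2 = dataset.with_transform(decode)
```
This keeps per-sample I/O at iteration time but avoids shuffling huge integer lists through the dataloader workers.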
### Expected behavior
Same timings
### Environment info
python==3.10.9
datasets==2.10.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5669/timeline | null | null | null | null | false | [
"Do I miss something here?",
"Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `... |
https://api.github.com/repos/huggingface/datasets/issues/1665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1665/comments | https://api.github.com/repos/huggingface/datasets/issues/1665/events | https://github.com/huggingface/datasets/pull/1665 | 776,431,087 | MDExOlB1bGxSZXF1ZXN0NTQ2OTI1NTgw | 1,665 | Add language to dataset card for Counter dataset. | [] | closed | false | null | 0 | 2020-12-30T12:23:20Z | 2020-12-30T17:20:20Z | 2020-12-30T17:20:20Z | null | Add language. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1665/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1665.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1665",
"merged_at": "2020-12-30T17:20:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1665.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1665"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2465/comments | https://api.github.com/repos/huggingface/datasets/issues/2465/events | https://github.com/huggingface/datasets/pull/2465 | 915,525,071 | MDExOlB1bGxSZXF1ZXN0NjY1MzMxMDMz | 2,465 | adding masahaner dataset | [] | closed | false | null | 3 | 2021-06-08T21:20:25Z | 2021-06-14T14:59:05Z | 2021-06-14T14:59:05Z | null | Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner
@lhoestq, can you please review? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2465/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2465.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2465",
"merged_at": "2021-06-14T14:59:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2465.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2465"
} | true | [
"Thank you for the review. ",
"Thanks a lot for the corrections and comments. \r\n\r\nI have resolved point 2. The make style still throws some errors, please see below\r\n\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets/**/*.py metrics\r\n/bin/sh: 1: black: not found\r\nMakefile:13... |
https://api.github.com/repos/huggingface/datasets/issues/5461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5461/comments | https://api.github.com/repos/huggingface/datasets/issues/5461/events | https://github.com/huggingface/datasets/issues/5461 | 1,555,532,719 | I_kwDODunzps5ct4uv | 5,461 | Discrepancy in `nyu_depth_v2` dataset | [] | open | false | null | 37 | 2023-01-24T19:15:46Z | 2023-02-06T20:52:00Z | null | null | ### Describe the bug
I think there is a discrepancy between the depth maps of the `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and the actual depth maps. Depth values somehow got **discretized/clipped**, resulting in depth maps that differ from the actual ones. Here is a side-by-side comparison:

I tried to find the origin of this issue, but sadly, as I mentioned in tensorflow/datasets/issues/4674, the download link from `fast-depth` doesn't work anymore, so I couldn't verify whether the error originated there or while porting the data from there to HF.
Hi @sayakpaul, as you worked on huggingface/datasets/issues/5255, if you still have access to that data, could you please share it or perhaps check out this issue?
### Steps to reproduce the bug
This [notebook](https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing#scrollTo=UEW7QSh0jf0i) from @sayakpaul could be used to generate depth maps and actual ground truths could be checked from this [dataset](https://www.kaggle.com/datasets/awsaf49/nyuv2-bts-dataset) from BTS repo.
> Note: the BTS dataset has only 36K samples compared to the 50K train-test total. They subsampled the data because adjacent frames look almost the same.
### Expected behavior
Expected depth maps should be smooth rather than discrete/clipped.
### Environment info
- `datasets` version: 2.8.1.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5461/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5461/timeline | null | null | null | null | false | [
"Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/ny... |
https://api.github.com/repos/huggingface/datasets/issues/5099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5099/comments | https://api.github.com/repos/huggingface/datasets/issues/5099/events | https://github.com/huggingface/datasets/issues/5099 | 1,404,370,191 | I_kwDODunzps5TtP0P | 5,099 | datasets doesn't support # in data paths | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | closed | false | null | 9 | 2022-10-11T10:05:32Z | 2022-10-13T13:14:20Z | 2022-10-13T13:14:20Z | null | ## Describe the bug
dataset files with a `#` symbol in their paths aren't read correctly.
## Steps to reproduce the bug
The data in the folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data loads properly.
```python
ds = load_dataset('loubnabnl/bigcode_csharp', split="train", data_files=["data/c#/*"])
```
```
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/27a3166cff4bb18e11919cafa6f169c0f57483de/data/c#/data_0003.jsonl
```
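Until the library URL-encodes these paths itself, a workaround sketch (the `#` is otherwise parsed as a URL fragment delimiter):
```python
from urllib.parse import quote

relative_path = "data/c#/data_0003.jsonl"
print(quote(relative_path, safe="/"))  # data/c%23/data_0003.jsonl
```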
## Environment info
- `datasets` version: 2.5.2
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5099/timeline | null | completed | null | null | false | [
"`datasets` doesn't seem to urlencode the directory names here\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/file_utils.py#L109-L111\r\n\r\nfor example we should have\r\n```python\r\nfrom datasets.utils.file_utils import hf_hub_url\r\n\r\nurl = hf_hu... |
https://api.github.com/repos/huggingface/datasets/issues/1976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1976/comments | https://api.github.com/repos/huggingface/datasets/issues/1976/events | https://github.com/huggingface/datasets/pull/1976 | 820,228,538 | MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4 | 1,976 | Add datasets full offline mode with HF_DATASETS_OFFLINE | [] | closed | false | null | 0 | 2021-03-02T17:26:59Z | 2021-03-03T15:45:31Z | 2021-03-03T15:45:30Z | null | Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939
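A minimal usage sketch (dataset name hypothetical; the variable is typically read when `datasets` is imported, so set it first):
```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# served entirely from the local cache; fails fast instead of retrying the network
ds = load_dataset("squad", split="train")
```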
cc @stas00 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1976/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1976",
"merged_at": "2021-03-03T15:45:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1976"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/488/comments | https://api.github.com/repos/huggingface/datasets/issues/488/events | https://github.com/huggingface/datasets/issues/488 | 676,299,993 | MDU6SXNzdWU2NzYyOTk5OTM= | 488 | issues with downloading datasets for wmt16 and wmt19 | [] | closed | false | null | 3 | 2020-08-10T17:32:51Z | 2022-10-04T17:46:59Z | 2022-10-04T17:46:58Z | null | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]"` on master, as the currently released nlp didn't work (sorry, didn't save the error). I then went back to the released version and now it worked, so it must have been some outdated dependencies that `pip install -e ".[dev]"` fixed.
2. It was downloading at 60 KB/s, so it took almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for.
I tried the same code with `wmt19` in parallel; it took a few seconds to download and only fetched data for the requested pair (but it failed too, see below).
3. My machine crashed, and when I retried I got:
```
Traceback (most recent call last):
  File "./download.py", line 9, in <module>
    dataset = nlp.load_dataset('wmt16', 'ru-en')
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset
    download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare
    with incomplete_dir(self._cache_dir) as tmp_data_dir:
  File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir
    os.makedirs(tmp_dir)
  File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs
    mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'
```
It can't handle resumes, but it doesn't allow a fresh start either. I had to delete the `.incomplete` directory manually.
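For reference, a sketch of one way around a stale `*.incomplete` cache in recent `datasets` releases (the config name is taken from the report):
```python
from datasets import DownloadMode, load_dataset

# forces a clean re-download instead of reusing the half-written cache directory
ds = load_dataset("wmt16", "ru-en", download_mode=DownloadMode.FORCE_REDOWNLOAD)
```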
4. and finally when it downloaded the dataset, it then failed to fetch the metrics:
```
Traceback (most recent call last):
  File "./download.py", line 15, in <module>
    metric = nlp.load_metric('wmt16')
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric
    module_path, hash = prepare_module(path, download_config=download_config, dataset=False)
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path
    local_files_only=download_config.local_files_only,
  File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py
```
5. If I run the same code with `wmt19`, it fails too:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/488/timeline | null | completed | null | null | false | [
"I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.... |
https://api.github.com/repos/huggingface/datasets/issues/1760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1760/comments | https://api.github.com/repos/huggingface/datasets/issues/1760/events | https://github.com/huggingface/datasets/pull/1760 | 791,110,857 | MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0 | 1,760 | More tags | [] | closed | false | null | 2 | 2021-01-21T13:50:10Z | 2021-01-22T09:40:01Z | 2021-01-22T09:40:00Z | null | Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1760/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"merged_at": "2021-01-22T09:40:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1760"
} | true | [
"Conll has `multilingual` but is only tagged as `en`",
"good catch, that was a bad copy paste x)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1634/comments | https://api.github.com/repos/huggingface/datasets/issues/1634/events | https://github.com/huggingface/datasets/issues/1634 | 774,487,934 | MDU6SXNzdWU3NzQ0ODc5MzQ= | 1,634 | Inspecting datasets per category | [] | closed | false | null | 4 | 2020-12-24T15:26:34Z | 2022-10-04T14:57:33Z | 2022-10-04T14:57:33Z | null | Hi
Is there a way I could get all NLI datasets/all QA datasets, to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1634/timeline | null | completed | null | null | false | [
"That's interesting, can you tell me what you think would be useful to access to inspect a dataset?\r\n\r\nYou can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it?",
"Hi @thomwolf \r\nthank you, I was not aware of this, I was looking into the data viewer linked ... |
https://api.github.com/repos/huggingface/datasets/issues/2411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2411/comments | https://api.github.com/repos/huggingface/datasets/issues/2411/events | https://github.com/huggingface/datasets/pull/2411 | 903,671,778 | MDExOlB1bGxSZXF1ZXN0NjU0OTAzNjg2 | 2,411 | Add DOI badge to README | [] | closed | false | null | 0 | 2021-05-27T12:36:47Z | 2021-05-27T13:42:54Z | 2021-05-27T13:42:54Z | null | Once the latest release was published, the DOI badge was automatically generated by Zenodo. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2411/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2411",
"merged_at": "2021-05-27T13:42:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2411"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4737/comments | https://api.github.com/repos/huggingface/datasets/issues/4737/events | https://github.com/huggingface/datasets/issues/4737 | 1,315,011,004 | I_kwDODunzps5OYXm8 | 4,737 | Download error on scene_parse_150 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-07-22T13:28:28Z | 2022-09-01T15:37:11Z | 2022-09-01T15:37:11Z | null | ```
from datasets import load_dataset
dataset = load_dataset("scene_parse_150", "scene_parsing")
FileNotFoundError: Couldn't find file at http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4737/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4737/timeline | null | completed | null | null | false | [
"Hi! The server with the data seems to be down. I've reported this issue (https://github.com/CSAILVision/sceneparsing/issues/34) in the dataset repo. ",
"The URL seems to work now, and therefore the script as well."
] |
https://api.github.com/repos/huggingface/datasets/issues/5065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5065/comments | https://api.github.com/repos/huggingface/datasets/issues/5065/events | https://github.com/huggingface/datasets/pull/5065 | 1,396,003,362 | PR_kwDODunzps5AHxlQ | 5,065 | Ci py3.10 | [] | closed | false | null | 2 | 2022-10-04T10:13:51Z | 2022-11-29T15:28:05Z | 2022-11-29T15:25:26Z | null | Added a CI job for python 3.10
Some dependencies, like Apache Beam, don't work on 3.10, so I removed them from the extras in this case.
I also removed some S3 fixtures that we don't use anymore (and that don't work on 3.10 anyway).
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5065/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5065/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5065.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5065",
"merged_at": "2022-11-29T15:25:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5065.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5065"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Does it sound good to you @albertvillanova ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2074/comments | https://api.github.com/repos/huggingface/datasets/issues/2074/events | https://github.com/huggingface/datasets/pull/2074 | 834,268,463 | MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw | 2,074 | Fix size categories in YAML Tags | [] | closed | false | null | 9 | 2021-03-18T00:02:36Z | 2021-03-23T17:11:10Z | 2021-03-23T17:11:10Z | null | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
import json
import os

from omegaconf import OmegaConf

for dataset in sorted(os.listdir('./datasets/')):
    if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
        infos = {}
        stats = {}
        st = ''
        with open(f'datasets/{dataset}/README.md') as f:
            d = f.read()
        start_dash = d.find('---') + 3
        end_dash = d[start_dash:].find('---') + 3
        rest_text = d[end_dash + 3:]
        try:
            full_yaml = OmegaConf.create(d[start_dash:end_dash])
            readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
        except Exception as e:
            print(e)
            continue
        try:
            with open(f'datasets/{dataset}/dataset_infos.json') as f:
                data = json.load(f)
        except Exception as e:
            print(e)
            continue  # Skip those without infos.
        done_set = set([])
        num_keys = len(data.keys())
        for keys in data:
            # dataset = load_dataset('opus100', f'{dirs}')
            total = 0
            for split in data[keys]['splits']:
                total = total + data[keys]['splits'][split]['num_examples']
            if total < 1000:
                st += "- n<1K" + '\n'
                infos[keys] = ["n<1K"]
            elif total >= 1000 and total < 10000:
                infos[keys] = ["1K<n<10K"]
            elif total >= 10000 and total < 100000:
                infos[keys] = ["10K<n<100K"]
            elif total >= 100000 and total < 1000000:
                infos[keys] = ["100K<n<1M"]
            elif total >= 1000000 and total < 10000000:
                infos[keys] = ["1M<n<10M"]
            elif total >= 10000000 and total < 100000000:
                infos[keys] = ["10M<n<100M"]
            elif total >= 100000000 and total < 1000000000:
                infos[keys] = ["100M<n<1B"]
            elif total >= 1000000000 and total < 10000000000:
                infos[keys] = ["1B<n<10B"]
            elif total >= 10000000000 and total < 100000000000:
                infos[keys] = ["10B<n<100B"]
            elif total >= 100000000000 and total < 1000000000000:
                infos[keys] = ["100B<n<1T"]
            else:
                infos[keys] = ["n>1T"]
            done_set = done_set.union(infos[keys])
        if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
            print('-' * 30)
            print(done_set)
            print(f"Changing Full YAML for {dataset}")
            print(OmegaConf.to_yaml(full_yaml))
            if len(done_set) == 1:
                full_yaml['size_categories'] = list(done_set)
            else:
                full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
            full_yaml_string = OmegaConf.to_yaml(full_yaml)
            print('-' * 30)
            print(full_yaml_string)
            inp = input('Do you wish to continue?(Y/N)')
            if inp == 'Y':
                with open(f'./datasets/{dataset}/README.md', 'w') as f:
                    f.write('---\n')
                    f.write(full_yaml_string)
                    f.write('---')
                    f.write(rest_text)
            else:
                break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where the infos are present for only some of the configs. For example, `ccaligned_multilingual` has only 5 of its several configs present, and the infos only have information about them. Hence, I have skipped a few datasets in the code; if there are more such datasets, I'll ignore them too.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2074/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2074",
"merged_at": "2021-03-23T17:11:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2074"
} | true | [
"> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nWe can also update the task lists here: https://github.com/huggingface/dat... |
https://api.github.com/repos/huggingface/datasets/issues/5197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5197/comments | https://api.github.com/repos/huggingface/datasets/issues/5197/events | https://github.com/huggingface/datasets/pull/5197 | 1,434,676,150 | PR_kwDODunzps5CI0Ac | 5,197 | [zstd] Use max window log size | [] | open | false | null | 2 | 2022-11-03T13:35:58Z | 2022-11-03T13:45:19Z | null | null | ZstdDecompressor has a parameter `max_window_size` to limit max memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed by `zstd --ultra` flags.
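A sketch of the resulting decompressor setup with the python `zstandard` bindings (file name hypothetical):
```python
import zstandard as zstd

# WINDOWLOG_MAX is a log2 value, so the actual byte limit is 2 ** WINDOWLOG_MAX
dctx = zstd.ZstdDecompressor(max_window_size=2 ** zstd.WINDOWLOG_MAX)
with open("shard.jsonl.zst", "rb") as f:
    data = dctx.stream_reader(f).read()
```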
Change `max_window_size` to zstd's maximum window size. Note that `zstd.WINDOWLOG_MAX` is the log2 value of the maximum window size. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5197/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5197",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5197"
} | true | [
"@albertvillanova Please take a review.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5197). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5297/comments | https://api.github.com/repos/huggingface/datasets/issues/5297/events | https://github.com/huggingface/datasets/pull/5297 | 1,464,554,491 | PR_kwDODunzps5DtZjg | 5,297 | Fix xjoin for Windows pathnames | [] | closed | false | null | 1 | 2022-11-25T13:30:17Z | 2022-11-29T08:07:39Z | 2022-11-29T08:05:12Z | null | This PR fixes a bug in `xjoin` function with Windows pathnames.
Fix #5296. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5297/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5297.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5297",
"merged_at": "2022-11-29T08:05:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5297.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5297"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1873/comments | https://api.github.com/repos/huggingface/datasets/issues/1873/events | https://github.com/huggingface/datasets/pull/1873 | 807,750,745 | MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy | 1,873 | add iapp_wiki_qa_squad | [] | closed | false | null | 0 | 2021-02-13T13:34:27Z | 2021-02-16T14:21:58Z | 2021-02-16T14:21:58Z | null | `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.
It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)
to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in
5761/742/739 questions from 1529/191/192 articles. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1873/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1873",
"merged_at": "2021-02-16T14:21:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1873"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2034/comments | https://api.github.com/repos/huggingface/datasets/issues/2034/events | https://github.com/huggingface/datasets/pull/2034 | 829,381,388 | MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw | 2,034 | Fix typo | [] | closed | false | null | 0 | 2021-03-11T17:46:13Z | 2021-03-11T18:06:25Z | 2021-03-11T18:06:25Z | null | Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME ` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2034/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"merged_at": "2021-03-11T18:06:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2034"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5225/comments | https://api.github.com/repos/huggingface/datasets/issues/5225/events | https://github.com/huggingface/datasets/issues/5225 | 1,444,305,183 | I_kwDODunzps5WFlkf | 5,225 | Add video feature | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "008672",
"default": true... | open | false | null | 7 | 2022-11-10T17:36:11Z | 2022-12-02T15:13:15Z | null | null | ### Feature request
Add a `Video` feature to the library so folks can include videos in their datasets.
### Motivation
Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos:
1. Videos, unlike images, can end up being extremely large files
2. Often times when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference
3. Videos have an additional audio stream, which must be accounted for
4. The feature needs to be able to encode/decode videos (with right video settings) from bytes.
### Your contribution
I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dep. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though.
Would love to use this issue as a place to:
- brainstorm ideas on how to do this right
- list ways/examples to work around it for now (one decode-on-access sketch follows this list)
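One common interim workaround, as a hedged sketch (PyAV shown as one possible backend; the `decode` helper and file names are hypothetical): store only file paths in the dataset and decode frames lazily on access.
```python
import av  # PyAV
from datasets import Dataset

ds = Dataset.from_dict({"video_path": ["clip0.mp4", "clip1.mp4"]})

def decode(batch):
    frames = []
    for path in batch["video_path"]:
        with av.open(path) as container:
            # a real pipeline would sample a fixed clip here instead of decoding everything
            frames.append([f.to_ndarray(format="rgb24") for f in container.decode(video=0)])
    batch["frames"] = frames
    return batch

ds = ds.with_transform(decode)  # decoding happens per access, not at map time
```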
CC @sayakpaul @mariosasko @fcakyon | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5225/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5225/timeline | null | null | null | null | false | [
"@NielsRogge @rwightman may have additional requirements regarding this feature.\r\n\r\nWhen adding a new (decodable) type, the hardest part is choosing the right decoding library. What I mean by \"right\" here is that it has all the features we need and is easy to install (with GPU support?).\r\n\r\nSome candidate... |
https://api.github.com/repos/huggingface/datasets/issues/1090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1090/comments | https://api.github.com/repos/huggingface/datasets/issues/1090/events | https://github.com/huggingface/datasets/pull/1090 | 756,825,941 | MDExOlB1bGxSZXF1ZXN0NTMyMzA1OTk1 | 1,090 | add thaisum | [] | closed | false | null | 0 | 2020-12-04T05:54:48Z | 2020-12-04T11:16:06Z | 2020-12-04T11:16:06Z | null | ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists. We evaluate the performance of various existing summarization models on ThaiSum dataset and analyse the characteristic of the dataset to present its difficulties. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1090/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1090.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1090",
"merged_at": "2020-12-04T11:16:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1090.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1090"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1725/comments | https://api.github.com/repos/huggingface/datasets/issues/1725/events | https://github.com/huggingface/datasets/issues/1725 | 784,182,273 | MDU6SXNzdWU3ODQxODIyNzM= | 1,725 | load the local dataset | [] | closed | false | null | 7 | 2021-01-12T12:12:55Z | 2022-06-01T16:00:59Z | 2022-06-01T16:00:59Z | null | your guidebook's example is like
>>> from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is a path...
so what should I do if I want to load a local dataset for model training?
I would be grateful if you could help me with this problem!
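For reference, a minimal sketch with hypothetical file names:
```python
from datasets import load_dataset

data_files = {"train": "train.json", "test": "test.json"}
dataset = load_dataset("json", data_files=data_files)
train_set, test_set = dataset["train"], dataset["test"]
```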
thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1725/timeline | null | completed | null | null | false | [
"You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit’s not possible to understand it and help you with only this information.",
"sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\... |
https://api.github.com/repos/huggingface/datasets/issues/6001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6001/comments | https://api.github.com/repos/huggingface/datasets/issues/6001/events | https://github.com/huggingface/datasets/pull/6001 | 1,782,516,627 | PR_kwDODunzps5UVMMh | 6,001 | Align `column_names` type check with type hint in `sort` | [] | closed | false | null | 3 | 2023-06-30T13:15:50Z | 2023-06-30T14:18:32Z | 2023-06-30T14:11:24Z | null | Fix #5998 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6001/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6001.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6001",
"merged_at": "2023-06-30T14:11:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6001.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6001"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5343/comments | https://api.github.com/repos/huggingface/datasets/issues/5343/events | https://github.com/huggingface/datasets/issues/5343 | 1,485,297,823 | I_kwDODunzps5Yh9if | 5,343 | T5 for Q&A produces truncated sentence | [] | closed | false | null | 0 | 2022-12-08T19:48:46Z | 2022-12-08T19:57:17Z | 2022-12-08T19:57:17Z | null | Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions.
For example, I set max_length, max_input_length, and max_output_length all to 128.
How should I deal with those long answers? I just left them as is, assuming the T5Tokenizer can handle them automatically. I would assume the tokenizer just truncates an answer at the 128th token (or the 127th). Would it be possible to manually split an answer into parts of 128 words each, so that each sub-answer serves as a separate answer to the same question?
Another question is that I get incomplete (truncated) answers when using the fine-tuned model for inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add `</s>` at the end of texts when fine-tuning T5. I followed that, but then got a warning that duplicated `</s>` tokens were found. I am assuming this is because the tokenizer truncates an answer text, so `</s>` is missing from the truncated answer and the end token is never produced in the predicted answer. However, I am not sure. Can anybody point out how to address this issue?
Any suggestions are highly appreciated.
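As a quick sanity check on the truncation question, a hedged sketch (not from the report): the tokenizer reserves room for the final `</s>` when truncating, which can be verified directly.
```python
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-large")
ids = tok("a very long answer " * 200, max_length=128, truncation=True)["input_ids"]
print(len(ids), ids[-1] == tok.eos_token_id)  # 128 True
```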
Below is a code snippet.
```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader
import torch
import numpy as np
import time
from pathlib import Path
from transformers import (
    Adafactor,
    T5ForConditionalGeneration,
    T5Tokenizer,
    get_linear_schedule_with_warmup
)
from torch.utils.data import RandomSampler
from question_answering.utils import *
class T5FineTuner(pl.LightningModule):
    def __init__(self, hyparams):
        super(T5FineTuner, self).__init__()
        self.hyparams = hyparams
        self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path)
        self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path)
        if self.hyparams.freeze_embeds:
            self.freeze_embeds()
        if self.hyparams.freeze_encoder:
            self.freeze_params(self.model.get_encoder())
            # assert_all_frozen()
        self.step_count = 0
        self.output_dir = Path(self.hyparams.output_dir)
        n_observations_per_split = {
            'train': self.hyparams.n_train,
            'validation': self.hyparams.n_val,
            'test': self.hyparams.n_test
        }
        self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()}
        self.em_score_list = []
        self.subset_score_list = []
        data_folder = r'C:\Datasets\MedQuAD-master'
        self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder)

    def freeze_params(self, model):
        for param in model.parameters():
            param.requires_grad = False

    def freeze_embeds(self):
        try:
            self.freeze_params(self.model.model.shared)
            for d in [self.model.model.encoder, self.model.model.decoder]:
                self.freeze_params(d.embed_positions)
                self.freeze_params(d.embed_tokens)
        except AttributeError:
            self.freeze_params(self.model.shared)
            for d in [self.model.encoder, self.model.decoder]:
                self.freeze_params(d.embed_tokens)

    def lmap(self, f, x):
        return list(map(f, x))

    def is_logger(self):
        return self.trainer.proc_rank <= 0

    def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None):
        return self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            labels=labels
        )

    def _step(self, batch):
        labels = batch['target_ids']
        labels[labels[:, :] == self.tokenizer.pad_token_id] = -100
        outputs = self(
            input_ids=batch['source_ids'],
            attention_mask=batch['source_mask'],
            labels=labels,
            decoder_attention_mask=batch['target_mask']
        )
        loss = outputs[0]
        return loss

    def ids_to_clean_text(self, generated_ids):
        gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
        return self.lmap(str.strip, gen_text)
    def _generative_step(self, batch):
        t0 = time.time()
        generated_ids = self.model.generate(
            batch["source_ids"],
            attention_mask=batch["source_mask"],
            use_cache=True,
            decoder_attention_mask=batch['target_mask'],
            max_length=128,
            num_beams=2,
            early_stopping=True
        )
        preds = self.ids_to_clean_text(generated_ids)
        targets = self.ids_to_clean_text(batch["target_ids"])
        gen_time = (time.time() - t0) / batch["source_ids"].shape[0]
        loss = self._step(batch)
        base_metrics = {'val_loss': loss}
        summ_len = np.mean(self.lmap(len, generated_ids))
        base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets)
        em_score, subset_match_score = calculate_scores(preds, targets)
        self.em_score_list.append(em_score)
        self.subset_score_list.append(subset_match_score)
        em_score = torch.tensor(em_score, dtype=torch.float32)
        subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32)
        base_metrics.update(em_score=em_score, subset_match_score=subset_match_score)
        # rouge_results = self.rouge_metric.compute()
        # rouge_dict = self.parse_score(rouge_results)
        return base_metrics

    def training_step(self, batch, batch_idx):
        loss = self._step(batch)
        tensorboard_logs = {'train_loss': loss}
        return {'loss': loss, 'log': tensorboard_logs}

    def training_epoch_end(self, outputs):
        avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean()
        tensorboard_logs = {'avg_train_loss': avg_train_loss}
        # return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs}

    def validation_step(self, batch, batch_idx):
        return self._generative_step(batch)

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        tensorboard_logs = {'val_loss': avg_loss}
        if len(self.em_score_list) <= 2:
            average_em_score = sum(self.em_score_list) / len(self.em_score_list)
            average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list)
        else:
            latest_em_score = self.em_score_list[:-2]
            latest_subset_score = self.subset_score_list[:-2]
            average_em_score = sum(latest_em_score) / len(latest_em_score)
            average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score)
        average_em_score = torch.tensor(average_em_score, dtype=torch.float32)
        average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32)
        tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score)
        self.target_gen = []
        self.prediction_gen = []
        return {
            'avg_val_loss': avg_loss,
            'em_score': average_em_score,
            'subset_match_score': average_subset_match_score,
            'log': tensorboard_logs,
            'progress_bar': tensorboard_logs
        }

    def configure_optimizers(self):
        model = self.model
        no_decay = ["bias", "LayerNorm.weight"]
        optimizer_grouped_parameters = [
            {
                "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
                "weight_decay": self.hyparams.weight_decay,
            },
            {
                "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
                "weight_decay": 0.0,
            },
        ]
        optimizer = Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate, scale_parameter=False,
                              relative_step=False)
        self.opt = optimizer
        return [optimizer]

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None,
                       on_tpu=False, using_native_amp=False, using_lbfgs=False):
        optimizer.step(closure=optimizer_closure)
        optimizer.zero_grad()
        self.lr_scheduler.step()
    def get_tqdm_dict(self):
        tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
        return tqdm_dict

    def train_dataloader(self):
        n_samples = self.n_obs['train']
        train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data, num_samples=n_samples,
                                    args=self.hyparams)
        sampler = RandomSampler(train_dataset)
        dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=self.hyparams.train_batch_size,
                                drop_last=True, num_workers=4)
        # t_total = (
        #     (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu)))
        #     // self.hyparams.gradient_accumulation_steps
        #     * float(self.hyparams.num_train_epochs)
        # )
        t_total = 100000
        scheduler = get_linear_schedule_with_warmup(
            self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total
        )
        self.lr_scheduler = scheduler
        return dataloader

    def val_dataloader(self):
        n_samples = self.n_obs['validation']
        validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data, num_samples=n_samples,
                                         args=self.hyparams)
        sampler = RandomSampler(validation_dataset)
        return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size, sampler=sampler, num_workers=4)

    def test_dataloader(self):
        n_samples = self.n_obs['test']
        test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data, num_samples=n_samples, args=self.hyparams)
        return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4)

    def on_save_checkpoint(self, checkpoint):
        save_path = self.output_dir.joinpath("best_tfmr")
        self.model.config.save_step = self.step_count
        self.model.save_pretrained(save_path)
        self.tokenizer.save_pretrained(save_path)
import os
import argparse
import pytorch_lightning as pl
from question_answering.t5_closed_book import T5FineTuner
if __name__ == '__main__':
    args_dict = dict(
        output_dir="",  # path to save the checkpoints
        model_name_or_path='t5-large',
        tokenizer_name_or_path='t5-large',
        max_input_length=128,
        max_output_length=128,
        freeze_encoder=False,
        freeze_embeds=False,
        learning_rate=1e-5,
        weight_decay=0.0,
        adam_epsilon=1e-8,
        warmup_steps=0,
        train_batch_size=4,
        eval_batch_size=4,
        num_train_epochs=2,
        gradient_accumulation_steps=10,
        n_gpu=1,
        resume_from_checkpoint=None,
        val_check_interval=0.5,
        n_val=4000,
        n_train=-1,
        n_test=-1,
        early_stop_callback=False,
        fp_16=False,
        opt_level='O1',
        max_grad_norm=1.0,
        seed=101,
    )
    args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100,
                      'train_batch_size': 16, 'eval_batch_size': 16, 'learning_rate': 1e-3})
    args = argparse.Namespace(**args_dict)
    checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score", mode="max", save_top_k=1)
    ## If resuming from checkpoint, add an arg resume_from_checkpoint
    train_params = dict(
        accumulate_grad_batches=args.gradient_accumulation_steps,
        gpus=args.n_gpu,
        max_epochs=args.num_train_epochs,
        # early_stop_callback=False,
        precision=16 if args.fp_16 else 32,
        # amp_level=args.opt_level,
        # resume_from_checkpoint=args.resume_from_checkpoint,
        gradient_clip_val=args.max_grad_norm,
        checkpoint_callback=checkpoint_callback,
        val_check_interval=args.val_check_interval,
        # accelerator='dp'
        # logger=wandb_logger,
        # callbacks=[LoggingCallback()],
    )
    model = T5FineTuner(args)
    trainer = pl.Trainer(**train_params)
    trainer.fit(model)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5343/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5343/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1339/comments | https://api.github.com/repos/huggingface/datasets/issues/1339/events | https://github.com/huggingface/datasets/pull/1339 | 759,744,088 | MDExOlB1bGxSZXF1ZXN0NTM0Njk0NDI4 | 1,339 | hate_speech_18 initial commit | [] | closed | false | null | 2 | 2020-12-08T20:10:08Z | 2020-12-12T16:17:32Z | 2020-12-12T16:17:32Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1339/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1339",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1339"
} | true | [
"> Nice thanks !\r\n> \r\n> Can you rename the dataset folder and the dataset script name `hate_speech18` instead of `hate_speech_18` to follow the snake case convention we're using ?\r\n> \r\n> Also it looks like the dummy_data.zip file is quite big (almost 4MB).\r\n> Can you try to reduce its size ?\r\n> \r\n> To... | |
https://api.github.com/repos/huggingface/datasets/issues/6055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6055/comments | https://api.github.com/repos/huggingface/datasets/issues/6055/events | https://github.com/huggingface/datasets/issues/6055 | 1,813,524,145 | I_kwDODunzps5sGC6x | 6,055 | Fix host URL in The Pile datasets | [] | open | false | null | 0 | 2023-07-20T09:08:52Z | 2023-07-20T09:09:37Z | null | null | ### Describe the bug
In #3627 and #5543, you tried to fix the host URL of The Pile datasets, but neither URL is working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x...>, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
### Steps to reproduce the bug
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x...>, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
And
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
### Expected behavior
Downloading as normal.
### Environment info
- `datasets` version: 2.9.0
- Platform: Windows
- Python version: 3.9.13
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6055/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2804/comments | https://api.github.com/repos/huggingface/datasets/issues/2804/events | https://github.com/huggingface/datasets/pull/2804 | 971,353,437 | MDExOlB1bGxSZXF1ZXN0NzEzMTA2NTMw | 2,804 | Add Food-101 | [] | closed | false | null | 0 | 2021-08-16T04:26:15Z | 2021-08-20T14:31:33Z | 2021-08-19T12:48:06Z | null | Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2804/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2804.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2804",
"merged_at": "2021-08-19T12:48:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2804.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2804"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/248/comments | https://api.github.com/repos/huggingface/datasets/issues/248/events | https://github.com/huggingface/datasets/pull/248 | 633,390,427 | MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0 | 248 | add Toronto BooksCorpus | [] | closed | false | null | 11 | 2020-06-07T12:54:56Z | 2020-06-12T08:45:03Z | 2020-06-12T08:45:02Z | null | 1. I knew there is a branch `toronto_books_corpus`
- After I downloaded it, I found it is all non-english, and only have one row.
- It seems that it cites the wrong paper
- according to papar using it, it is called `BooksCorpus` but not `TornotoBooksCorpus`
2. It use a text mirror in google drive
- `bookscorpus.py` include a function `download_file_from_google_drive` , maybe you will want to put it elsewhere.
- text mirror is found in this [comment on the issue](https://github.com/soskek/bookcorpus/issues/24#issuecomment-556024973), and it said to have the same statistics as the one in the paper.
- You may want to download it and put it on your gs in case of it disappears someday.
3. Copyright ?
The paper has said
> **The BookCorpus Dataset.** In order to train our sentence similarity model we collected a corpus of 11,038 books ***from the web***. These are __**free books written by yet unpublished authors**__. We only included books that had more than 20K words in order to filter out perhaps noisier shorter stories. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.
and we have changed the form (it is no longer books), so I don't think it should have that problem. Or we can state that it should be used at one's own risk, or only for academic use. I know @thomwolf knows these things better.
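For reference, such helpers are commonly built around Google Drive's confirmation-token flow. A sketch of what `download_file_from_google_drive` might look like (assumed shape, not necessarily the exact helper bundled in `bookscorpus.py`):
```python
import requests

def download_file_from_google_drive(file_id, destination):
    # Google Drive interposes a confirmation page for large files; the
    # token from that page has to be echoed back in a second request.
    url = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(url, params={"id": file_id}, stream=True)
    token = next((v for k, v in response.cookies.items() if k.startswith("download_warning")), None)
    if token:
        response = session.get(url, params={"id": file_id, "confirm": token}, stream=True)
    with open(destination, "wb") as f:
        for chunk in response.iter_content(chunk_size=32768):
            if chunk:
                f.write(chunk)
```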
This should solve #131 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/248/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/248",
"merged_at": "2020-06-12T08:45:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/248"
} | true | [
"Thanks for adding this one !\r\n\r\nAbout the three points you mentioned:\r\n1. I think the `toronto_books_corpus` branch can be removed @mariamabarham ? \r\n2. You can use the download manager to download from google drive. For you case you can just do something like \r\n```python\r\nURL = \"https://drive.google.... |
https://api.github.com/repos/huggingface/datasets/issues/656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/656/comments | https://api.github.com/repos/huggingface/datasets/issues/656/events | https://github.com/huggingface/datasets/pull/656 | 705,736,319 | MDExOlB1bGxSZXF1ZXN0NDkwNDEwODAz | 656 | Use multiprocess from pathos for multiprocessing | [] | closed | false | null | 4 | 2020-09-21T16:12:19Z | 2020-09-28T14:45:40Z | 2020-09-28T14:45:39Z | null | [Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows to use lambda functions in multiprocessed map.
It was suggested to use it by @kandorm.
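A minimal illustration of why this matters (`multiprocess` mirrors the stdlib `multiprocessing` API, so `Pool` is a drop-in):
```python
import multiprocess

# The stdlib multiprocessing pickles the mapped function and rejects
# lambdas; multiprocess serializes with dill, so this runs fine:
if __name__ == "__main__":
    with multiprocess.Pool(2) as pool:
        print(pool.map(lambda x: x * x, range(5)))  # [0, 1, 4, 9, 16]
```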
We're already using dill which is its only dependency. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/656/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/656.diff",
"html_url": "https://github.com/huggingface/datasets/pull/656",
"merged_at": "2020-09-28T14:45:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/656.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/656"
} | true | [
"We can just install multiprocess actually, I'll change that",
"Just an FYI: I remember that I wanted to try pathos a couple of years back and I ran into issues considering cross-platform; the code would just break on Windows. If I can verify this PR by running CPU tests on Windows, let me know!",
"That's good ... |
https://api.github.com/repos/huggingface/datasets/issues/1432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1432/comments | https://api.github.com/repos/huggingface/datasets/issues/1432/events | https://github.com/huggingface/datasets/pull/1432 | 760,808,449 | MDExOlB1bGxSZXF1ZXN0NTM1NTc3ODk3 | 1,432 | Adding journalists questions dataset | [] | closed | false | null | 2 | 2020-12-10T01:44:47Z | 2020-12-14T13:51:05Z | 2020-12-14T13:51:04Z | null | This is my first dataset to be added to HF. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1432/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1432",
"merged_at": "2020-12-14T13:51:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1432"
} | true | [
"@lhoestq Thanks a lot for checking! I hope I addressed all your comments. ",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/603/comments | https://api.github.com/repos/huggingface/datasets/issues/603/events | https://github.com/huggingface/datasets/pull/603 | 697,758,750 | MDExOlB1bGxSZXF1ZXN0NDgzNjY2ODk5 | 603 | Set scripts version to master | [] | closed | false | null | 0 | 2020-09-10T10:47:44Z | 2020-09-10T11:02:05Z | 2020-09-10T11:02:04Z | null | By default the scripts version is master, so that if the library is installed with
```
pip install git+http://github.com/huggingface/nlp.git
```
or
```
git clone http://github.com/huggingface/nlp.git
pip install -e ./nlp
```
it will use the latest scripts, and not the ones from the previous version. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/603/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/603",
"merged_at": "2020-09-10T11:02:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/603"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2956/comments | https://api.github.com/repos/huggingface/datasets/issues/2956/events | https://github.com/huggingface/datasets/issues/2956 | 1,004,306,367 | I_kwDODunzps473H-_ | 2,956 | Cache problem in the `load_dataset` method for local compressed file(s) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2021-09-22T13:34:32Z | 2021-09-22T13:34:32Z | null | null | ## Describe the bug
Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder, `load_dataset` doesn't detect the change and loads the previous version.
## Steps to reproduce the bug
To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.google.com/drive/11Em_Amoc-aPGhSBIkSHU2AvEh24nVayy?usp=sharing) that shows this behavior.
For this example, I have created a toy dataset at: https://huggingface.co/datasets/SaulLu/toy_struc_dataset
This dataset is composed of two versions:
- v1 on commit `a6beb46` which has a single example `{'id': 1, 'value': {'tag': 'a', 'value': 1}}` in file `train.jsonl.gz`
- v2 on commit `e7935f4` (`main` head) which has a single example `{'attr': 1, 'id': 1, 'value': 'a'}` in file `train.jsonl.gz`
With a terminal, we can start to get the v1 version of the dataset
```bash
git lfs install
git clone https://huggingface.co/datasets/SaulLu/toy_struc_dataset
cd toy_struc_dataset
git checkout a6beb46
```
Then we can load it with python and look at the content:
```python
from datasets import load_dataset
path = "/content/toy_struc_dataset"
dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"})
print(dataset["train"][0])
```
Output
```
{'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1
```
With a terminal, we can now get the v2 version of the dataset
```bash
git checkout main
```
Then we can load it with python and look at the content:
```python
from datasets import load_dataset
path = "/content/toy_struc_dataset"
dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"})
print(dataset["train"][0])
```
Output
```
{'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 (not v2)
```
## Expected results
The last output should have been
```
{"id":1, "value": "a", "attr": 1} # This is the example in v2
```
## Ideas
As discussed offline with Quentin, if the cache hash were sensitive to changes in a compressed file, we would probably not have the problem anymore (a sketch of this idea follows the list below).
This situation leads me to suggest 2 other features:
- to also have a `load_from_cache_file` argument in the `load_dataset` method
- to reorganize the cache so that we can delete the caches related to a dataset (cf issue #ToBeFilledSoon)
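A rough sketch of the first idea, deriving the fingerprint from the compressed file's actual bytes so the cache key changes whenever the archive changes (illustrative only, not the library's fingerprinting code):
```python
import hashlib

def file_fingerprint(path, chunk_size=1 << 20):
    # Hash the raw bytes of the (compressed) file; any change to the
    # archive, even with the same filename, yields a new fingerprint.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```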
And thanks again for this great library :hugs:
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2956/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5784/comments | https://api.github.com/repos/huggingface/datasets/issues/5784/events | https://github.com/huggingface/datasets/pull/5784 | 1,680,950,726 | PR_kwDODunzps5O_G9S | 5,784 | Raise subprocesses traceback when interrupting | [] | closed | false | null | 4 | 2023-04-24T10:34:03Z | 2023-04-26T16:04:42Z | 2023-04-26T15:54:44Z | null | When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing.
To do so, I `.get()` the subprocesses' async results even when the main process is stopped with e.g. `KeyboardInterrupt`. I added a timeout in case a subprocess is hanging or crashed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5784/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5784.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5784",
"merged_at": "2023-04-26T15:54:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5784.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5784"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4146/comments | https://api.github.com/repos/huggingface/datasets/issues/4146/events | https://github.com/huggingface/datasets/issues/4146 | 1,200,215,789 | I_kwDODunzps5Hidbt | 4,146 | SAMSum dataset viewer not working | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-04-11T16:22:57Z | 2022-04-29T16:26:09Z | 2022-04-29T16:26:09Z | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4146/timeline | null | completed | null | null | false | [
"https://huggingface.co/datasets/samsum\r\n\r\n```\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n```",
"Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details abo... |
https://api.github.com/repos/huggingface/datasets/issues/3726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3726/comments | https://api.github.com/repos/huggingface/datasets/issues/3726/events | https://github.com/huggingface/datasets/pull/3726 | 1,138,870,362 | PR_kwDODunzps4y3iSv | 3,726 | Use config pandas version in CSV dataset builder | [] | closed | false | null | 0 | 2022-02-15T15:47:49Z | 2022-02-15T16:55:45Z | 2022-02-15T16:55:44Z | null | Fix #3724. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3726/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3726.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3726",
"merged_at": "2022-02-15T16:55:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3726.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3726"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3396/comments | https://api.github.com/repos/huggingface/datasets/issues/3396/events | https://github.com/huggingface/datasets/issues/3396 | 1,073,467,183 | I_kwDODunzps4_-88v | 3,396 | Install Audio dependencies to support audio decoding | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
},
{
"color": "F... | closed | false | null | 5 | 2021-12-07T15:11:36Z | 2022-04-25T16:12:22Z | 2022-04-25T16:12:01Z | null | ## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'
**Link:** *https://huggingface.co/datasets/openslr*
**Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla*
Error:
```
Status code: 400
Exception: ImportError
Message: To support decoding audio files, please install 'librosa'.
```
Am I the one who added this dataset ? Yes-No
- openslr: No
- projecte-aina/parlament_parla: Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3396/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3396/timeline | null | completed | null | null | false | [
"https://huggingface.co/datasets/projecte-aina/parlament_parla -> works (but we still have to show an audio player)\r\n\r\nhttps://huggingface.co/datasets/openslr -> another issue: `Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/zip:/asr_javanese/data/00/00004fe6aa.flac'`",
... |
https://api.github.com/repos/huggingface/datasets/issues/1846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1846/comments | https://api.github.com/repos/huggingface/datasets/issues/1846/events | https://github.com/huggingface/datasets/pull/1846 | 803,806,380 | MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy | 1,846 | Make DownloadManager downloaded/extracted paths accessible | [] | closed | false | null | 3 | 2021-02-08T18:14:42Z | 2021-02-25T14:10:18Z | 2021-02-25T14:10:18Z | null | Make accessible the file paths downloaded/extracted by DownloadManager.
Close #1831.
The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1846/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1846",
"merged_at": "2021-02-25T14:10:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1846"
} | true | [
"First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...",
"There could ... |
https://api.github.com/repos/huggingface/datasets/issues/2783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2783/comments | https://api.github.com/repos/huggingface/datasets/issues/2783/events | https://github.com/huggingface/datasets/pull/2783 | 965,461,382 | MDExOlB1bGxSZXF1ZXN0NzA3NzcxOTM3 | 2,783 | Add KS task to SUPERB | [] | closed | false | null | 5 | 2021-08-10T22:14:07Z | 2021-08-12T16:45:01Z | 2021-08-11T20:19:17Z | null | Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051).
- [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting)
- [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_commands/dataset.py)
- [TFDS implementation](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/speech_commands.py)
Some notable quirks:
- The dataset is originally single-archive (train+val+test all in one), but the test set has a "canonical" distribution in a separate archive, which is also used here (see `_split_ks_files()`).
- The `_background_noise_`/`_silence_` audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically or subsampling randomly at runtime; a deterministic-slicing sketch follows below)
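For illustration, the deterministic variant can be as simple as cutting the long waveform into fixed-length windows (a sketch with made-up sample rate and clip length; not part of the loading script):
```python
import numpy as np

def slice_deterministically(waveform, sample_rate=16000, clip_seconds=1.0):
    """Split a long waveform into fixed-length, non-overlapping clips."""
    clip_len = int(sample_rate * clip_seconds)
    n_clips = len(waveform) // clip_len
    return [waveform[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

# e.g. a 60 s noise file at 16 kHz becomes 60 one-second clips:
noise = np.zeros(60 * 16000, dtype=np.float32)  # stand-in for a decoded wav
print(len(slice_deterministically(noise)))  # 60
```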
Related to #2619. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2783/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2783",
"merged_at": "2021-08-11T20:19:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2783"
} | true | [
"thanks a lot for implementing this @anton-l !!\r\n\r\ni won't have time to review this while i'm away, so happy for @albertvillanova and @patrickvonplaten to decide when to merge :)",
"@albertvillanova thanks! Everything should be ready now :)",
"> The _background_noise_/_silence_ audio files are much longer t... |
https://api.github.com/repos/huggingface/datasets/issues/2817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2817/comments | https://api.github.com/repos/huggingface/datasets/issues/2817/events | https://github.com/huggingface/datasets/pull/2817 | 974,486,051 | MDExOlB1bGxSZXF1ZXN0NzE1NzgzMDQ3 | 2,817 | Rename The Pile subsets | [] | closed | false | null | 2 | 2021-08-19T09:56:22Z | 2021-08-23T16:24:10Z | 2021-08-23T16:24:09Z | null | After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names.
I'm doing the changes for the subsets that @richarddwang added:
- [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801
- [x] stack_exchange -> the_pile_stack_exchange https://github.com/huggingface/datasets/pull/2803
- [x] openwebtext2 -> the_pile_openwebtext2 https://github.com/huggingface/datasets/pull/2802
For consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think.
(we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2817/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2817.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2817",
"merged_at": "2021-08-23T16:24:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2817.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2817"
} | true | [
"Sounds good. Should we also have a “the_pile” dataset with the subsets as configuration?",
"I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https://the-eye.eu/public/AI/pile/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subset... |
https://api.github.com/repos/huggingface/datasets/issues/4082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4082/comments | https://api.github.com/repos/huggingface/datasets/issues/4082/events | https://github.com/huggingface/datasets/pull/4082 | 1,189,965,845 | PR_kwDODunzps41f3fb | 4,082 | Add chrF(++) Metric Card | [] | closed | false | null | 1 | 2022-04-01T15:32:12Z | 2022-04-12T20:43:55Z | 2022-04-12T20:38:06Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4082/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4082",
"merged_at": "2022-04-12T20:38:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4082"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2774/comments | https://api.github.com/repos/huggingface/datasets/issues/2774/events | https://github.com/huggingface/datasets/pull/2774 | 963,932,199 | MDExOlB1bGxSZXF1ZXN0NzA2NDY2MDc0 | 2,774 | Prevent .map from using multiprocessing when loading from cache | [] | closed | false | null | 6 | 2021-08-09T12:11:38Z | 2021-09-09T10:20:28Z | 2021-09-09T10:20:28Z | null | ## Context
In our setup, we use a different configuration for training vs. preprocessing datasets. Usually we are able to obtain a high number of CPUs for preprocessing, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently, if we use `num_proc={whatever the preprocessing value was}` we load from cache, but we get:
```
Traceback (most recent call last):
File "lib/python3.8/site-packages/multiprocess/pool.py", line 131, in worker
put((job, i, result))
File "lib/python3.8/site-packages/multiprocess/queues.py", line 371, in put
self._writer.send_bytes(obj)
File "lib/python3.8/site-packages/multiprocess/connection.py", line 203, in send_bytes
self._send_bytes(m[offset:offset + size])
File "lib/python3.8/site-packages/multiprocess/connection.py", line 414, in _send_bytes
self._send(header + buf)
File "lib/python3.8/site-packages/multiprocess/connection.py", line 371, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
```
Our current guess is that we're spawning too many processes compared to the number of CPUs available, and it's running OOM. Also, we're loading this in a DDP setting, which means that for each GPU I need to spawn a high number of processes to match the preprocessing fingerprint.
Instead what we suggest:
- Allow loading shards sequentially, sharing the same fingerprint as the multiprocessed one, in order to leverage multiprocessing when we actually generate the cache, and skip it when loading from cache.
## Current issues
~I'm having a hard time making fingerprints match. For some reason, the multiprocessing and the sequential versions generate two different hashes.~
**EDIT**: Turns out multiprocessing and sequential have different `transform` value for fingerprinting (check `fingerprint_transform`) when running `_map_single`:
- sequential : `datasets.arrow_dataset.Dataset._map_single`
- multiprocessing: `datasets.arrow_dataset._map_single`
This discrepancy is caused by multiprocessing pickling the transformer function: it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qualname__` isn't handled correctly in multiprocessing, but replacing `__qualname__` with `__name__` fixes the issue (illustrated below).
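A quick stand-alone illustration of the two attributes, using a hypothetical class:
```python
class Dataset:
    def _map_single(self):
        ...

f = Dataset._map_single
print(f.__qualname__)  # 'Dataset._map_single' (keeps the class hierarchy)
print(f.__name__)      # '_map_single' (bare name only)
```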
## What was done
~We try to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~
I couldn't find a nice way to obtain the `cached_file_name` and check that they all exist before deciding whether to use the multiprocessing flow. Instead I expose an optional boolean `sequential` in the `map` method.
## TODO
- [x] Check that the multiprocessed version and the sequential version output the same output
- [x] Check that sequential can load multiprocessed
- [x] Check that multiprocessed can load sequential
## Test
```python
from datasets import load_dataset
from multiprocessing import Pool
import random
def process(batch, rng):
    length = len(batch["text"])
    return {**batch, "processed_text": [f"PROCESSED {rng.random()}" for _ in range(length)]}
dataset = load_dataset("stas/openwebtext-10k", split="train")
print(dataset.column_names)
print(type(dataset))
rng = random.Random(42)
dataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng})
# This one should be loaded from cache
rng = random.Random(42)
dataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}, sequential=True)
# Just to check that the random generator was correct
print(dataset1[-1]["processed_text"])
print(dataset2[-1]["processed_text"])
```
## Other solutions
I chose to load everything sequentially, but we can probably find a way to load shards in parallel using another number of workers (essentially this would be an argument not used for fingerprinting, allowing us to load `m` shards using `n` processes, which would be very useful when the same dataset has to be loaded on two different setups and we still want to leverage the cache).
Also, we could use an env variable similar to `TOKENIZERS_PARALLELISM`, as this seems generally setup-related (though this changes slightly if we use multiprocessing).
cc @lhoestq (since I had asked you previously on `num_proc` being used for fingerprinting). Don't know if this is acceptable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2774/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2774.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2774",
"merged_at": "2021-09-09T10:20:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2774.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2774"
} | true | [
"I'm guessing tests are failling, because this was pushed before https://github.com/huggingface/datasets/pull/2779 was merged? cc @albertvillanova ",
"Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r... |
https://api.github.com/repos/huggingface/datasets/issues/5452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5452/comments | https://api.github.com/repos/huggingface/datasets/issues/5452/events | https://github.com/huggingface/datasets/pull/5452 | 1,552,655,939 | PR_kwDODunzps5ITcA3 | 5,452 | Swap log messages for symbolic/hard links in tar extractor | [] | closed | false | null | 2 | 2023-01-23T07:53:38Z | 2023-01-23T09:40:55Z | 2023-01-23T08:31:17Z | null | The log messages do not match their if-condition. This PR swaps them.
Found while investigating:
- #5441
CC: @lhoestq | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5452/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5452/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5452.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5452",
"merged_at": "2023-01-23T08:31:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5452.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5452"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3735/comments | https://api.github.com/repos/huggingface/datasets/issues/3735/events | https://github.com/huggingface/datasets/issues/3735 | 1,140,087,891 | I_kwDODunzps5D9FxT | 3,735 | Performance of `datasets` at scale | [] | open | false | null | 5 | 2022-02-16T14:23:32Z | 2022-03-15T09:15:29Z | null | null | # Performance of `datasets` at 1TB scale
## What is this?
During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library.
## Dataset
The dataset is a 1.1TB extract from GitHub with 120M code files and is stored as 5000 `.json.gz` files. The goal of the preprocessing is to remove duplicates and filter files based on their stats. While the calculation of the hashes for deduplication and the stats for filtering can be parallelized, the filtering itself runs in a single process. After processing, the files are pushed to the hub.
## Machine
The experiment was run on a `m1` machine on GCP with 96 CPU cores and 1.3TB RAM.
## Performance breakdown
- Loading the data **3.5h** (_30sec_ from cache)
- **1h57min** single core loading (not sure what is going on here, corresponds to second progress bar)
- **1h10min** multi core json reading
- **20min** remaining time before and after the two main processes mentioned above
- Process the data **2h** (_20min_ from cache)
- **20min** Getting ready for processing
- **40min** Hashing and files stats (96 workers)
- **58min** Deduplication filtering (single worker)
- Save parquet files **5h**
- Saving 1000 parquet files (16 workers)
- Push to hub **37min**
- **34min** git add
- **3min** git push (several hours with `Repository.git_push()`)
## Conclusion
It appears that loading and saving the data are the main bottlenecks at that scale (**8.5h**), whereas processing (**2h**) and pushing the data to the hub (**0.5h**) are relatively fast. To optimize performance at this scale, it would make sense to consider such an end-to-end example and target the bottlenecks, which seem to be loading from and saving to disk. The processing itself seems to run relatively fast.
## Notes
- map operation on a 1TB dataset with 96 workers requires >1TB RAM
- map operation does not maintain 100% CPU utilization with 96 workers
- sometimes when the script crashes, all the data files have a corresponding `*.lock` file in the data folder (or multiple, e.g. `*.lock.lock` when it happened several times). This causes the cache **not** to be triggered (which is significant at that scale) - I guess because there are new data files
- parallelizing `to_parquet` decreased the saving time from 17h to 5h; however, adding more workers at this point had almost no effect (see the sketch after this list). Not sure if this is:
 a) a bug in my parallelization logic,
 b) an i/o limit to load data from disk to memory, or
 c) an i/o limit to write from memory to disk.
- Using `Repository.git_push()` was much slower than using command line `git-lfs` - 10-20MB/s vs. 300MB/s! The `Dataset.push_to_hub()` function is even slower as it only uploads one file at a time with only a few MB/s, whereas `Repository.git_push()` pushes files in parallel (each at a similar speed).
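For reference, a minimal sketch of the shard-wise parallel `to_parquet` mentioned in the notes (paths and shard/worker counts are made up; it assumes a Linux fork start method so workers inherit `ds`):
```python
from multiprocessing import Pool

from datasets import load_from_disk

NUM_SHARDS, NUM_WORKERS = 1000, 16

def save_shard(index):
    shard = ds.shard(num_shards=NUM_SHARDS, index=index, contiguous=True)
    shard.to_parquet(f"out/data-{index:05d}.parquet")

if __name__ == "__main__":
    ds = load_from_disk("big_dataset")  # hypothetical path
    with Pool(NUM_WORKERS) as pool:  # workers inherit `ds` via fork
        pool.map(save_shard, range(NUM_SHARDS))
```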
cc @lhoestq @julien-c @LysandreJik @SBrandeis
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 11,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 15,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3735/timeline | null | null | null | null | false | [
"> using command line git-lfs - [...] 300MB/s!\r\n\r\nwhich server location did you upload from?",
"From GCP region `us-central1-a`.",
"The most surprising part to me is the saving time. Wondering if it could be due to compression (`ParquetWriter` uses SNAPPY compression by default; it can be turned off with `... |
https://api.github.com/repos/huggingface/datasets/issues/1779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/events | https://github.com/huggingface/datasets/pull/1779 | 793,539,703 | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | 1,779 | Ignore definition line number of functions for caching | [] | closed | false | null | 0 | 2021-01-25T16:42:29Z | 2021-01-26T10:20:20Z | 2021-01-26T10:20:19Z | null | As noticed in #1718 , when a function used for processing with `map` is moved inside its python file, then the change of line number causes the caching mechanism to consider it as a different function. Therefore in this case, it recomputes everything.
This is because we were not ignoring the line number definition for such functions (even though we're doing it for lambda functions).
For example this code currently prints False:
```python
from datasets.fingerprint import Hasher

# define once
def foo(x):
    return x

h = Hasher.hash(foo)

# define a second time elsewhere
def foo(x):
    return x

print(h == Hasher.hash(foo))
```
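For reference, the general idea of hashing a function while deliberately skipping `co_firstlineno` can be sketched like this (illustrative only, not the library's actual hasher):
```python
import hashlib

def hash_ignoring_lineno(func):
    code = func.__code__
    # co_firstlineno (and co_filename) are deliberately excluded, so
    # moving the function around in its file does not change the hash.
    stable = (code.co_code, code.co_names, code.co_varnames, code.co_consts)
    return hashlib.sha256(repr(stable).encode()).hexdigest()

def bar(x):
    return x

h = hash_ignoring_lineno(bar)

def bar(x):  # same body, different line number
    return x

print(h == hash_ignoring_lineno(bar))  # True
```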
I changed this by ignoring the line number for all functions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"merged_at": "2021-01-26T10:20:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1779"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5114/comments | https://api.github.com/repos/huggingface/datasets/issues/5114/events | https://github.com/huggingface/datasets/issues/5114 | 1,409,236,738 | I_kwDODunzps5T_z8C | 5,114 | load_from_disk with remote filesystem fails due to a wrong temporary local folder path | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 2 | 2022-10-14T11:54:53Z | 2022-11-19T07:13:10Z | null | null | ## Describe the bug
The `load_from_disk` function fails when using a remote filesystem because of wrong temporary-path generation in the `load_from_disk` method of `arrow_dataset.py`:
```python
if is_remote_filesystem(fs):
    src_dataset_path = extract_path_from_uri(dataset_path)
    dataset_path = Dataset._build_local_temp_path(src_dataset_path)
    fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
```
If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`
Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice `train` appears twice)
Instead of downloading the remote folder itself, we should download the files inside the folder for the path to be right:
```python
fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True)
```
## Steps to reproduce the bug
```python
import gcsfs
from datasets import load_from_disk

fs = gcsfs.GCSFileSystem(**storage_options)  # storage_options defined beforehand
dataset = load_from_disk("common_voice_processed")  # loading a local dataset previously saved locally, works fine
dataset.save_to_disk(output_dir, fs=fs)  # works fine
dataset = load_from_disk(output_dir, fs=fs)  # crashes
```
## Expected results
The dataset is loaded
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.6.1.dev0
- Platform: mac os monterey 12.5.1
- Python version: 3.8.13
- PyArrow version: pyarrow==9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5114/timeline | null | null | null | null | false | [
"Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.",
"What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1866/comments | https://api.github.com/repos/huggingface/datasets/issues/1866/events | https://github.com/huggingface/datasets/pull/1866 | 807,017,816 | MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1 | 1,866 | Add dataset for Financial PhraseBank | [] | closed | false | null | 1 | 2021-02-12T07:30:56Z | 2021-02-17T14:22:36Z | 2021-02-17T14:22:36Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1866/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1866",
"merged_at": "2021-02-17T14:22:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1866"
} | true | [
"Thanks for the feedback. All accepted and metadata regenerated."
] | |
https://api.github.com/repos/huggingface/datasets/issues/2406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2406/comments | https://api.github.com/repos/huggingface/datasets/issues/2406/events | https://github.com/huggingface/datasets/issues/2406 | 902,643,844 | MDU6SXNzdWU5MDI2NDM4NDQ= | 2,406 | Add guide on using task templates to documentation | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-05-26T16:28:26Z | 2022-10-05T17:07:00Z | 2022-10-05T17:07:00Z | null | Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2406/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2021/comments | https://api.github.com/repos/huggingface/datasets/issues/2021/events | https://github.com/huggingface/datasets/issues/2021 | 826,988,016 | MDU6SXNzdWU4MjY5ODgwMTY= | 2,021 | Interactively doing save_to_disk and load_from_disk corrupts the datasets object? | [] | closed | false | null | 1 | 2021-03-10T02:48:34Z | 2021-03-13T10:07:41Z | 2021-03-13T10:07:41Z | null | dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that will save to `/tmp/huggingface/datasets`?
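For reference, the `datasets` 1.x API of the time exposed a global switch for this (hedged sketch; later versions renamed it to `disable_caching`, and the on-disk location is controlled by a separate environment variable):
```python
import datasets

# Global off-switch for the fingerprint cache (datasets 1.x API; later
# versions renamed this to datasets.disable_caching()):
datasets.set_caching_enabled(False)

# The cache directory itself is controlled separately, e.g.:
#   HF_DATASETS_CACHE=/some/other/dir python my_script.py
```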
I have a feeling there is a serious issue with caching. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2021/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching."
] |
https://api.github.com/repos/huggingface/datasets/issues/468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/468/comments | https://api.github.com/repos/huggingface/datasets/issues/468/events | https://github.com/huggingface/datasets/issues/468 | 671,622,441 | MDU6SXNzdWU2NzE2MjI0NDE= | 468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | [] | closed | false | null | 5 | 2020-08-02T14:05:10Z | 2020-08-20T08:16:08Z | 2020-08-20T08:16:08Z | null | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-5-1d61f439b843> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
528 ignore_verifications = ignore_verifications or save_infos
529 # Download/copy dataset processing script
--> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
531
532 # Get dataset builder class from the processing script
/usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs)
265
266 # Download external imports if needed
--> 267 imports = get_imports(local_path)
268 local_imports = []
269 library_imports = []
/usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path)
156 lines = []
157 with open(file_path, mode="r") as f:
--> 158 lines.extend(f.readlines())
159
160 logger.info("Checking %s for additional imports.", file_path)
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128)
```
## Steps to reproduce
Install from nlp's master branch
```python
pip install git+https://github.com/huggingface/nlp.git
```
then run
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
```
## OS / platform details
- `nlp` version: latest from master
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
## Proposed solution
Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding:
```
# old
with open(filepath) as f:
# new
with open(filepath, encoding='utf-8') as f:
```
or raise a warning that suggests setting the locale explicitly, e.g.
```python
import locale
locale.setlocale(locale.LC_ALL, 'C.UTF-8')
```
I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/468/timeline | null | completed | null | null | false | [
"Indeed. Solution 1 is the simplest.\r\n\r\nThis is actually a recurring problem.\r\nI think we should scan all the datasets with regexpr to fix the use of `open()` without encodings.\r\nAnd probably add a test in the CI to forbid using this in the future.",
"I'm happy to tackle the broader problem - will open a ... |
https://api.github.com/repos/huggingface/datasets/issues/1137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1137/comments | https://api.github.com/repos/huggingface/datasets/issues/1137/events | https://github.com/huggingface/datasets/pull/1137 | 757,358,145 | MDExOlB1bGxSZXF1ZXN0NTMyNzQ4NDAx | 1,137 | add wmt mlqe 2020 shared task | [] | closed | false | null | 1 | 2020-12-04T19:45:34Z | 2020-12-06T19:59:44Z | 2020-12-06T19:53:46Z | null | First commit for Shared task 1 (wmt_mlqw_task1) of WMT20 MLQE (quality estimation of machine translation)
Note that I copied the tags in the README for only one (of the 7 configurations): `en-de`.
There is one configuration for each pair of languages. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1137/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1137",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1137"
} | true | [
"re-created in #1218 because this was too messy"
] |
https://api.github.com/repos/huggingface/datasets/issues/748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/748/comments | https://api.github.com/repos/huggingface/datasets/issues/748/events | https://github.com/huggingface/datasets/pull/748 | 726,196,589 | MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3 | 748 | New version of CompGuessWhat?! with refined annotations | [] | closed | false | null | 1 | 2020-10-21T06:55:41Z | 2020-10-21T08:52:42Z | 2020-10-21T08:46:19Z | null | This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/748/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/748",
"merged_at": "2020-10-21T08:46:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/748"
} | true | [
"No worries. Always happy to help and thanks for your support in fixing the issue :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1720/comments | https://api.github.com/repos/huggingface/datasets/issues/1720/events | https://github.com/huggingface/datasets/pull/1720 | 783,721,833 | MDExOlB1bGxSZXF1ZXN0NTUzMDM0MzYx | 1,720 | Adding the NorNE dataset for NER | [] | closed | false | null | 13 | 2021-01-11T21:34:13Z | 2021-03-31T14:23:49Z | 2021-03-31T14:13:17Z | null | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1720/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1720.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1720",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1720.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1720"
} | true | [
"Quick question, @lhoestq. In this specific dataset, two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. However, I have not found an easy... |
https://api.github.com/repos/huggingface/datasets/issues/1857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1857/comments | https://api.github.com/repos/huggingface/datasets/issues/1857/events | https://github.com/huggingface/datasets/issues/1857 | 805,391,107 | MDU6SXNzdWU4MDUzOTExMDc= | 1,857 | Unable to upload "community provided" dataset - 400 Client Error | [] | closed | false | null | 1 | 2021-02-10T10:39:01Z | 2021-08-03T05:06:13Z | 2021-08-03T05:06:13Z | null | Hi,
I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:
```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username
About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username
Proceed? [Y/n] Y
Uploading... This might take a while if files are large
400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign
huggingface.co migrated to a new model hosting system.
You need to upgrade to transformers v3.5+ to upload new models.
More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you!
```
I'm using the latest releases of datasets and transformers. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1857/timeline | null | completed | null | null | false | [
"Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c ma... |
https://api.github.com/repos/huggingface/datasets/issues/3282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3282/comments | https://api.github.com/repos/huggingface/datasets/issues/3282/events | https://github.com/huggingface/datasets/issues/3282 | 1,055,054,898 | I_kwDODunzps4-4twy | 3,282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 7 | 2021-11-16T16:05:19Z | 2022-04-12T11:57:43Z | 2022-04-12T11:57:43Z | null | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The datasets library cannot download any language from the oscar-corpus/OSCAR-2109 dataset, although I can access the file by entering the URL in my browser.*
```
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
```
Am I the one who added this dataset? No
Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the dataset library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3282/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting :)\r\nI think this is because the dataset is behind an access page. We can fix the dataset viewer\r\n\r\nIf you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access... |
https://api.github.com/repos/huggingface/datasets/issues/3173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3173/comments | https://api.github.com/repos/huggingface/datasets/issues/3173/events | https://github.com/huggingface/datasets/pull/3173 | 1,038,404,300 | PR_kwDODunzps4typcA | 3,173 | Fix issue with filelock filename being too long on encrypted filesystems | [] | closed | false | null | 0 | 2021-10-28T11:28:57Z | 2021-10-29T09:42:24Z | 2021-10-29T09:42:24Z | null | Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs.
Fix #2924
cc: @lmmx | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3173/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3173/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3173",
"merged_at": "2021-10-29T09:42:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3173"
} | true | [] |
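The idea behind this fix — query the filesystem for its actual name-length limit instead of assuming 255 bytes — can be sketched as follows. This is a simplified illustration of the approach, not the merged code:

```python
# Hedged sketch of the idea behind the fix: ask the filesystem for its real
# filename-length limit (eCryptfs allows far fewer than 255 bytes).
import os

def max_filename_length(directory: str) -> int:
    try:
        return os.pathconf(directory, "PC_NAME_MAX")  # Unix-like systems only
    except (AttributeError, OSError, ValueError):
        return 255  # conservative fallback, e.g. on Windows

# A lock filename longer than this limit can then be truncated or hashed.
```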
https://api.github.com/repos/huggingface/datasets/issues/666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/666/comments | https://api.github.com/repos/huggingface/datasets/issues/666/events | https://github.com/huggingface/datasets/issues/666 | 707,608,578 | MDU6SXNzdWU3MDc2MDg1Nzg= | 666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | [] | closed | false | null | 1 | 2020-09-23T19:02:25Z | 2020-10-27T15:19:25Z | 2020-10-27T15:19:25Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/666/timeline | null | completed | null | null | false | [
"No they are other similar copies but they are not provided by the official Bert models authors."
] | |
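For anyone wanting the closest available stand-ins to BERT's pretraining corpora, both datasets mentioned in the question load directly. A sketch — the Wikipedia dump date shown is one of the preprocessed configs:

```python
# Hedged sketch: loading the two look-alike corpora discussed above.
from datasets import load_dataset

bookcorpus = load_dataset("bookcorpus", split="train")
wiki = load_dataset("wikipedia", "20200501.en", split="train")  # a preprocessed dump config
```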
https://api.github.com/repos/huggingface/datasets/issues/3824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3824/comments | https://api.github.com/repos/huggingface/datasets/issues/3824/events | https://github.com/huggingface/datasets/pull/3824 | 1,159,574,186 | PR_kwDODunzps4z85SO | 3,824 | Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute` | [] | closed | false | null | 1 | 2022-03-04T12:04:40Z | 2022-03-04T18:04:22Z | 2022-03-04T18:04:21Z | null | Fix #3818 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3824/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3824",
"merged_at": "2022-03-04T18:04:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3824"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3824). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/2666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2666/comments | https://api.github.com/repos/huggingface/datasets/issues/2666/events | https://github.com/huggingface/datasets/pull/2666 | 946,825,140 | MDExOlB1bGxSZXF1ZXN0NjkxOTMzMDM1 | 2,666 | Adds CodeClippy dataset [WIP] | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 2 | 2021-07-17T13:32:04Z | 2023-07-26T23:06:01Z | 2022-10-03T09:37:35Z | null | CodeClippy is an open-source code dataset scraped from GitHub during the Flax/JAX community week
https://the-eye.eu/public/AI/training_data/code_clippy_data/ | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2666/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2666.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2666",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2666.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2666"
} | true | [
"Thanks for your contribution, @arampacha. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if... |
https://api.github.com/repos/huggingface/datasets/issues/1585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1585/comments | https://api.github.com/repos/huggingface/datasets/issues/1585/events | https://github.com/huggingface/datasets/issues/1585 | 768,831,171 | MDU6SXNzdWU3Njg4MzExNzE= | 1,585 | FileNotFoundError for `amazon_polarity` | [] | closed | false | null | 1 | 2020-12-16T12:51:05Z | 2020-12-16T16:02:56Z | 2020-12-16T16:02:56Z | null | Version: `datasets==v1.1.3`
### Reproduction
```python
from datasets import load_dataset
data = load_dataset("amazon_polarity")
```
crashes with
```bash
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py
```
and
```bash
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py
```
and
```bash
FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1585/timeline | null | completed | null | null | false | [
"Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`"
... |
https://api.github.com/repos/huggingface/datasets/issues/4457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4457/comments | https://api.github.com/repos/huggingface/datasets/issues/4457/events | https://github.com/huggingface/datasets/pull/4457 | 1,263,531,911 | PR_kwDODunzps45QZCU | 4,457 | First draft of the docs for TF + Datasets | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 4 | 2022-06-07T16:06:48Z | 2022-06-14T16:08:41Z | 2022-06-14T15:59:08Z | null | I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4457/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4457/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4457.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4457",
"merged_at": "2022-06-14T15:59:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4457.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4457"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some links are still missing I think :)",
"This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestio... |
https://api.github.com/repos/huggingface/datasets/issues/106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/106/comments | https://api.github.com/repos/huggingface/datasets/issues/106/events | https://github.com/huggingface/datasets/pull/106 | 618,361,418 | MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3 | 106 | Add data dir test command | [] | closed | false | null | 1 | 2020-05-14T16:18:39Z | 2020-05-14T16:49:11Z | 2020-05-14T16:49:10Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/106/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/106",
"merged_at": "2020-05-14T16:49:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/106"
} | true | [
"Nice - I think we can merge this. I will update the checksums for `wikihow` then as well"
] | |
https://api.github.com/repos/huggingface/datasets/issues/2322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2322/comments | https://api.github.com/repos/huggingface/datasets/issues/2322/events | https://github.com/huggingface/datasets/issues/2322 | 876,383,853 | MDU6SXNzdWU4NzYzODM4NTM= | 2,322 | Calls to map are not cached. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2021-05-05T12:11:27Z | 2021-06-08T19:10:02Z | 2021-06-08T19:08:21Z | null | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])
return samples
# first call
x = sst.map(foo, batched=True, with_indices=True, num_proc=2)
print('\n'*3, "#" * 30, '\n'*3)
# second call
y = sst.map(foo, batched=True, with_indices=True, num_proc=2)
# print version
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
## Actual results
This code prints the following output for me:
```bash
No config specified, defaulting to: sst/default
Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff)
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
#0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s]
##############################
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
#0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s]
- Datasets: 1.6.1
- Python: 3.8.3 (default, May 19 2020, 18:47:26)
[GCC 7.3.0]
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
```
## Expected results
Caching should work.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2322/timeline | null | completed | null | null | false | [
"I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] ... |
https://api.github.com/repos/huggingface/datasets/issues/6061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6061/comments | https://api.github.com/repos/huggingface/datasets/issues/6061/events | https://github.com/huggingface/datasets/pull/6061 | 1,818,337,136 | PR_kwDODunzps5WOi79 | 6,061 | Dill 3.7 support | [] | closed | false | null | 5 | 2023-07-24T12:33:58Z | 2023-07-24T14:13:20Z | 2023-07-24T14:04:36Z | null | Adds support for dill 3.7. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6061/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6061.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6061",
"merged_at": "2023-07-24T14:04:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6061.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6061"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/605/comments | https://api.github.com/repos/huggingface/datasets/issues/605/events | https://github.com/huggingface/datasets/pull/605 | 697,887,401 | MDExOlB1bGxSZXF1ZXN0NDgzNzg1Mjc1 | 605 | [Datasets] Transmit format to children | [] | closed | false | null | 1 | 2020-09-10T12:30:18Z | 2020-09-10T16:15:21Z | 2020-09-10T16:15:21Z | null | Transmit format to children obtained when processing a dataset.
Added a test.
When concatenating datasets, if the formats are disparate, the concatenated dataset has a format reset to defaults. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/605/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/605",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/605"
} | true | [
"Closing as #607 was merged"
] |
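The behavior this PR (and #607, which superseded it) describes can be sketched as follows — a hedged illustration of the expected format inheritance, not the merged implementation:

```python
# Hedged sketch of the described behavior: a child dataset produced by
# processing (e.g. .map) inherits its parent's format settings.
from datasets import load_dataset

ds = load_dataset("sst", split="validation")
ds.set_format(type="numpy", columns=["label"])

child = ds.map(lambda example: example)  # identity map
print(child.format["type"])              # expected: "numpy" once the format is transmitted
```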
https://api.github.com/repos/huggingface/datasets/issues/4165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4165/comments | https://api.github.com/repos/huggingface/datasets/issues/4165/events | https://github.com/huggingface/datasets/pull/4165 | 1,203,730,187 | PR_kwDODunzps42MubF | 4,165 | Fix google bleu typos, examples | [] | closed | false | null | 1 | 2022-04-13T19:59:54Z | 2022-05-03T12:23:52Z | 2022-05-03T12:16:44Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4165/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4165.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4165",
"merged_at": "2022-05-03T12:16:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4165.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4165"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2081/comments | https://api.github.com/repos/huggingface/datasets/issues/2081/events | https://github.com/huggingface/datasets/pull/2081 | 835,112,968 | MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4 | 2,081 | Fix docstrings issues | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 0 | 2021-03-18T18:11:01Z | 2021-04-07T14:37:43Z | 2021-04-07T14:37:43Z | null | Fix docstring issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2081/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2081",
"merged_at": "2021-04-07T14:37:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2081"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2996/comments | https://api.github.com/repos/huggingface/datasets/issues/2996/events | https://github.com/huggingface/datasets/pull/2996 | 1,013,266,373 | PR_kwDODunzps4sjrP6 | 2,996 | Remove all query parameters when extracting protocol | [] | closed | false | null | 4 | 2021-10-01T12:05:34Z | 2021-10-04T08:48:13Z | 2021-10-04T08:48:13Z | null | Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`,... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2996/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2996.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2996",
"merged_at": "2021-10-04T08:48:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2996.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2996"
} | true | [
"Beware of cases like: `http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip` or `gzip://bg-cs.xml::https://opus.nlpl.eu/download.php?f=Europarl/v8/xml/bg-cs.xml.gz`. I see these URLs in the errors (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading?collection=@hugging... |
https://api.github.com/repos/huggingface/datasets/issues/4698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4698/comments | https://api.github.com/repos/huggingface/datasets/issues/4698/events | https://github.com/huggingface/datasets/pull/4698 | 1,307,539,585 | PR_kwDODunzps47i9gN | 4,698 | Enable streaming dataset to use the "all" split | [] | open | false | null | 9 | 2022-07-18T07:47:39Z | 2023-01-19T10:11:38Z | null | null | Fixes #4637 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4698/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4698",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4698"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4698). All of your documentation changes will be reflected on that endpoint.",
"@albertvillanova \r\nAdding the validation split causes these two `assert_called_once` assertions to fail with `AssertionError: Expected 'ArrowWrit... |
https://api.github.com/repos/huggingface/datasets/issues/1140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1140/comments | https://api.github.com/repos/huggingface/datasets/issues/1140/events | https://github.com/huggingface/datasets/pull/1140 | 757,399,142 | MDExOlB1bGxSZXF1ZXN0NTMyNzgyODc0 | 1,140 | Add Urdu Sentiment Corpus (USC). | [] | closed | false | null | 2 | 2020-12-04T20:55:27Z | 2020-12-07T03:27:23Z | 2020-12-07T03:27:23Z | null | Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1140/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1140",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1140"
} | true | [
"@lhoestq have made the suggested changes in the README file.",
"@lhoestq Created a new PR #1231 with only the relevant files.\r\nclosing this one :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5231/comments | https://api.github.com/repos/huggingface/datasets/issues/5231/events | https://github.com/huggingface/datasets/issues/5231 | 1,445,883,267 | I_kwDODunzps5WLm2D | 5,231 | Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly | [] | closed | false | null | 1 | 2022-11-11T18:54:36Z | 2022-11-11T20:42:29Z | 2022-11-11T18:59:50Z | null | I have a Dataset with two Features defined as follows:
```
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
```
On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of shape (batch_size, 3, 224, 224), for example.
However, if I `dataset.set_format(type='torch', columns=['image', 'bbox'])`, these columns are cast to lists of tensors and lose the batch dimension completely (the list length becomes the size-3 dimension).
I'm currently digging through datasets formatting code to try and find out why, but was curious if someone knew an immediate solution for this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5231/timeline | null | completed | null | null | false | [
"In case others find this, the problem was not with set_format, but my usages of `to_pandas()` and `from_pandas()` which I was using during dataset splitting; somewhere in the chain of converting to and from pandas the `Array2D/Array3D` types get converted to series of `Sequence()` types"
] |
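The pitfall identified in that follow-up comment can be sketched as below — a hedged illustration of how a pandas round-trip can degrade fixed-shape features, and of the built-in split helper that avoids it:

```python
# Hedged sketch of the pitfall found above: a pandas round-trip can silently
# degrade fixed-shape Array2D/Array3D features into nested Sequence types.
import numpy as np
from datasets import Array2D, Dataset, Features

features = Features({"bbox": Array2D(dtype="int64", shape=(2, 4))})
ds = Dataset.from_dict({"bbox": [np.zeros((2, 4), dtype="int64")] * 4}, features=features)

round_tripped = Dataset.from_pandas(ds.to_pandas())
print(ds.features["bbox"])             # Array2D(shape=(2, 4), dtype='int64')
print(round_tripped.features["bbox"])  # likely inferred as nested Sequence types

# Built-in splitting avoids the round-trip and keeps the features intact:
splits = ds.train_test_split(test_size=0.25)
print(splits["train"].features["bbox"])
```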