**Schema** (one row per GitHub issue or pull request in `huggingface/datasets`; string columns report min–max lengths, class columns report the number of distinct values):

| Column | Type | Values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 class (always https://api.github.com/repos/huggingface/datasets) |
| labels_url | string | lengths 72–75 (url + `/labels{/name}`) |
| comments_url | string | lengths 67–70 (url + `/comments`) |
| events_url | string | lengths 65–68 (url + `/events`) |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.83B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–6.09k |
| title | string | lengths 1–290 |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| milestone | dict | |
| comments | int64 | 0–54 |
| created_at | string | lengths 20–20 |
| updated_at | string | lengths 20–20 |
| closed_at | string | lengths 20–20 |
| active_lock_reason | null | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 (url + `/timeline`) |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_text | list | |
**Issue #5263: Save a dataset in a determined number of shards**

url: https://api.github.com/repos/huggingface/datasets/issues/5263 · html_url: https://github.com/huggingface/datasets/issues/5263
id: 1455252626 · node_id: I_kwDODunzps5WvWSS
labels: enhancement (a2eeef, "New feature or request") · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 0 · created: 2022-11-18T14:43:54Z · updated: 2022-12-14T18:22:59Z · closed: 2022-12-14T18:22:59Z · reactions: all 0

body: This is useful to distribute the shards to training nodes. This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process.

comments_text: []
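A minimal sketch of the requested behavior, assuming the `num_shards` and `num_proc` parameters that later landed in `Dataset.save_to_disk` (dataset name and values here are illustrative):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# Write the dataset as a fixed number of shards, one per training node,
# using several processes to speed up serialization.
ds.save_to_disk("my_dataset", num_shards=8, num_proc=4)
```

Each training node can then be assigned its own shard, which is the distribution pattern the issue motivates.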
**PR #1105: add xquad_r dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/1105 · html_url: https://github.com/huggingface/datasets/pull/1105
id: 757024162 · node_id: MDExOlB1bGxSZXF1ZXN0NTMyNDY4NDIw
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: null · is_pull_request: true
comments: 2 · created: 2020-12-04T11:19:35Z · updated: 2020-12-04T16:37:00Z · closed: 2020-12-04T16:37:00Z · reactions: all 0

comments_text: [ "looks like this PR includes changes in many files than the ones for xquad_r, could you create a new branch and a new PR ?", "Sure, I will close this then.\r\n" ]
**PR #2828: Add code-mixed Kannada Hope speech dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/2828 · html_url: https://github.com/huggingface/datasets/pull/2828
id: 977181517 · node_id: MDExOlB1bGxSZXF1ZXN0NzE3OTYwODg3
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: null · is_pull_request: true
comments: 0 · created: 2021-08-23T15:55:09Z · updated: 2021-10-01T17:21:03Z · closed: 2021-10-01T17:21:03Z · reactions: all 0

body:

## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*

comments_text: []
**PR #4250: Bump PyArrow Version to 6**

url: https://api.github.com/repos/huggingface/datasets/issues/4250 · html_url: https://github.com/huggingface/datasets/pull/4250
id: 1219093830 · node_id: PR_kwDODunzps429yjN
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-05-04T09:29:46Z · is_pull_request: true
comments: 4 · created: 2022-04-28T18:10:50Z · updated: 2022-05-04T09:36:52Z · closed: 2022-05-04T09:29:46Z · reactions: all 0

body: Fixes #4152. This PR updates the PyArrow version to 6 in `setup.py` and in the CI job files `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml`. This fixes the ArrayND error that exists in PyArrow 5.

comments_text: [ "_The documentation is not available anymore as the PR was closed or merged._", "Updated meta.yaml as well. Thanks.", "I'm OK with bumping PyArrow to version 6 to match the version in Colab, but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.", "> but ma...
**Issue #3135: Make inspect.get_dataset_config_names always return a non-empty list of configs**

url: https://api.github.com/repos/huggingface/datasets/issues/3135 · html_url: https://github.com/huggingface/datasets/issues/3135
id: 1033294299 · node_id: I_kwDODunzps49ltHb
labels: enhancement (a2eeef, "New feature or request"), plus a second label truncated in the dump (color E5583E) · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 2 · created: 2021-10-22T08:02:50Z · updated: 2021-10-28T05:44:49Z · closed: 2021-10-28T05:44:49Z · reactions: all 0

body:

**Is your feature request related to a problem? Please describe.**
Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to.

**Describe the solution you'd like**
In that sense, `inspect.get_dataset_config_names` should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`).
https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161

comments_text: [ "Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?", "Yes, maybe t...
**PR #1428: Add twi wordsim353**

url: https://api.github.com/repos/huggingface/datasets/issues/1428 · html_url: https://github.com/huggingface/datasets/pull/1428
id: 760736726 · node_id: MDExOlB1bGxSZXF1ZXN0NTM1NTE4MzIy
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2020-12-11T13:57:32Z · is_pull_request: true
comments: 0 · created: 2020-12-09T22:59:19Z · updated: 2020-12-11T13:57:32Z · closed: 2020-12-11T13:57:32Z · reactions: all 0

body: Add twi WordSim 353

comments_text: []
**PR #3471: Fix Tashkeela dataset to yield stripped text**

url: https://api.github.com/repos/huggingface/datasets/issues/3471 · html_url: https://github.com/huggingface/datasets/pull/3471
id: 1086588074 · node_id: PR_kwDODunzps4wLAk6
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2021-12-22T10:12:07Z · is_pull_request: true
comments: 0 · created: 2021-12-22T08:41:30Z · updated: 2021-12-22T10:12:08Z · closed: 2021-12-22T10:12:07Z · reactions: all 0

body:

This PR:
- Yields stripped text
- Fixes the path for Windows
- Adds license
- Adds more info in the dataset card

Close bigscience-workshop/data_tooling#279

comments_text: []
**Issue #5832: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased**

url: https://api.github.com/repos/huggingface/datasets/issues/5832 · html_url: https://github.com/huggingface/datasets/issues/5832
id: 1702135336 · node_id: I_kwDODunzps5ldIYo
labels: [] · state: closed · locked: false · milestone: null · is_pull_request: false
comments: 1 · created: 2023-05-09T14:14:59Z · updated: 2023-05-09T14:25:59Z · closed: 2023-05-09T14:25:59Z

body:
### Describe the bug

Running the [Bert-Large-Cased](https://huggingface.co/bert-large-cased) model causes `HTTPError`, with the following traceback:

```
HTTPError                                 Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')

~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
   1646     # At this point pretrained_model_name_or_path is either a directory or a model identifier name
   1647     fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648         pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
   1649     )
   1650     additional_files_names = {

~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
   3406     """
   3407     # Inspect all files from the repo/folder.
-> 3408     all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
   3409     tokenizer_files_map = {}
   3410     for file_name in all_files:

~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
   1685         token = None
   1686     model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687         path_or_repo, revision=revision, token=token
   1688     )
   1689     return [f.rfilename for f in model_info.siblings]

~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
    246     )
    247     r = requests.get(path, headers=headers)
--> 248     r.raise_for_status()
    249     d = r.json()
    250     return ModelInfo(**d)

~/miniconda3/envs/cmd-chall/lib/python3.8/site-packages/requests/models.py in raise_for_status(self)
    951
    952     if http_error_msg:
--> 953         raise HTTPError(http_error_msg, response=self)
    954
    955     def close(self):

HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```

I have also tried running in offline mode, as [discussed here](https://huggingface.co/docs/transformers/installation#offline-mode):

```
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1
```

### Steps to reproduce the bug

1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`

### Expected behavior

Run without the HTTP error.
### Environment info | # Name | Version | Build | Channel | | |--------------------|------------|-----------------------------|---------|---| | _libgcc_mutex | 0.1 | main | | | | _openmp_mutex | 4.5 | 1_gnu | | | | _pytorch_select | 0.1 | cpu_0 | | | | appdirs | 1.4.4 | pypi_0 | pypi | | | backcall | 0.2.0 | pypi_0 | pypi | | | blas | 1.0 | mkl | | | | bzip2 | 1.0.8 | h7b6447c_0 | | | | ca-certificates | 2021.7.5 | h06a4308_1 | | | | certifi | 2021.5.30 | py37h06a4308_0 | | | | cffi | 1.14.6 | py37h400218f_0 | | | | charset-normalizer | 2.0.3 | pypi_0 | pypi | | | click | 8.0.1 | pypi_0 | pypi | | | colorama | 0.4.4 | pypi_0 | pypi | | | cudatoolkit | 11.1.74 | h6bb024c_0 | nvidia | | | cycler | 0.11.0 | pypi_0 | pypi | | | decorator | 5.0.9 | pypi_0 | pypi | | | docker-pycreds | 0.4.0 | pypi_0 | pypi | | | docopt | 0.6.2 | pypi_0 | pypi | | | dominate | 2.6.0 | pypi_0 | pypi | | | ffmpeg | 4.3 | hf484d3e_0 | pytorch | | | filelock | 3.0.12 | pypi_0 | pypi | | | fonttools | 4.38.0 | pypi_0 | pypi | | | freetype | 2.10.4 | h5ab3b9f_0 | | | | gitdb | 4.0.7 | pypi_0 | pypi | | | gitpython | 3.1.18 | pypi_0 | pypi | | | gmp | 6.2.1 | h2531618_2 | | | | gnutls | 3.6.15 | he1e5248_0 | | | | huggingface-hub | 0.0.12 | pypi_0 | pypi | | | humanize | 3.10.0 | pypi_0 | pypi | | | idna | 3.2 | pypi_0 | pypi | | | importlib-metadata | 4.6.1 | pypi_0 | pypi | | | intel-openmp | 2019.4 | 243 | | | | ipdb | 0.13.9 | pypi_0 | pypi | | | ipython | 7.25.0 | pypi_0 | pypi | | | ipython-genutils | 0.2.0 | pypi_0 | pypi | | | jedi | 0.18.0 | pypi_0 | pypi | | | joblib | 1.0.1 | pypi_0 | pypi | | | jpeg | 9b | h024ee3a_2 | | | | jsonpickle | 1.5.2 | pypi_0 | pypi | | | kiwisolver | 1.4.4 | pypi_0 | pypi | | | lame | 3.100 | h7b6447c_0 | | | | lcms2 | 2.12 | h3be6417_0 | | | | ld_impl_linux-64 | 2.35.1 | h7274673_9 | | | | libffi | 3.3 | he6710b0_2 | | | | libgcc-ng | 9.3.0 | h5101ec6_17 | | | | libgomp | 9.3.0 | h5101ec6_17 | | | | libiconv | 1.15 | h63c8f33_5 | | | | libidn2 | 2.3.2 | h7f8727e_0 | | | | libmklml | 2019.0.5 | 0 | | | | libpng | 1.6.37 | hbc83047_0 | | | | libstdcxx-ng | 9.3.0 | hd4cf53a_17 | | | | libtasn1 | 4.16.0 | h27cfd23_0 | | | | libtiff | 4.2.0 | h85742a9_0 | | | | libunistring | 0.9.10 | h27cfd23_0 | | | | libuv | 1.40.0 | h7b6447c_0 | | | | libwebp-base | 1.2.0 | h27cfd23_0 | | | | lz4-c | 1.9.3 | h2531618_0 | | | | matplotlib | 3.5.3 | pypi_0 | pypi | | | matplotlib-inline | 0.1.2 | pypi_0 | pypi | | | mergedeep | 1.3.4 | pypi_0 | pypi | | | mkl | 2020.2 | 256 | | | | mkl-service | 2.3.0 | py37he8ac12f_0 | | | | mkl_fft | 1.3.0 | py37h54f3939_0 | | | | mkl_random | 1.1.1 | py37h0573a6f_0 | | | | msgpack | 1.0.2 | pypi_0 | pypi | | | munch | 2.5.0 | pypi_0 | pypi | | | ncurses | 6.2 | he6710b0_1 | | | | nettle | 3.7.3 | hbbd107a_1 | | | | ninja | 1.10.2 | hff7bd54_1 | | | | nltk | 3.8.1 | pypi_0 | pypi | | | numpy | 1.19.2 | py37h54aff64_0 | | | | numpy-base | 1.19.2 | py37hfa32c7d_0 | | | | olefile | 0.46 | py37_0 | | | | openh264 | 2.1.0 | hd408876_0 | | | | openjpeg | 2.3.0 | h05c96fa_1 | | | | openssl | 1.1.1k | h27cfd23_0 | | | | packaging | 21.0 | pypi_0 | pypi | | | pandas | 1.3.1 | pypi_0 | pypi | | | parso | 0.8.2 | pypi_0 | pypi | | | pathtools | 0.1.2 | pypi_0 | pypi | | | pexpect | 4.8.0 | pypi_0 | pypi | | | pickleshare | 0.7.5 | pypi_0 | pypi | | | pillow | 8.3.1 | py37h2c7a002_0 | | | | pip | 21.1.3 | py37h06a4308_0 | | | | prompt-toolkit | 3.0.19 | pypi_0 | pypi | | | protobuf | 4.21.12 | pypi_0 | pypi | | | psutil | 5.8.0 | pypi_0 | pypi | | | ptyprocess | 0.7.0 | 
pypi_0 | pypi | | | py-cpuinfo | 8.0.0 | pypi_0 | pypi | | | pycparser | 2.20 | py_2 | | | | pygments | 2.9.0 | pypi_0 | pypi | | | pyparsing | 2.4.7 | pypi_0 | pypi | | | python | 3.7.10 | h12debd9_4 | | | | python-dateutil | 2.8.2 | pypi_0 | pypi | | | pytorch | 1.9.0 | py3.7_cuda11.1_cudnn8.0.5_0 | pytorch | | | pytz | 2021.1 | pypi_0 | pypi | | | pyyaml | 5.4.1 | pypi_0 | pypi | | | readline | 8.1 | h27cfd23_0 | | | | regex | 2022.10.31 | pypi_0 | pypi | | | requests | 2.26.0 | pypi_0 | pypi | | | sacred | 0.8.2 | pypi_0 | pypi | | | sacremoses | 0.0.45 | pypi_0 | pypi | | | scikit-learn | 0.24.2 | pypi_0 | pypi | | | scipy | 1.7.0 | pypi_0 | pypi | | | sentry-sdk | 1.15.0 | pypi_0 | pypi | | | setproctitle | 1.3.2 | pypi_0 | pypi | | | setuptools | 52.0.0 | py37h06a4308_0 | | | | six | 1.16.0 | pyhd3eb1b0_0 | | | | smmap | 4.0.0 | pypi_0 | pypi | | | sqlite | 3.36.0 | hc218d9a_0 | | | | threadpoolctl | 2.2.0 | pypi_0 | pypi | | | tk | 8.6.10 | hbc83047_0 | | | | tokenizers | 0.10.3 | pypi_0 | pypi | | | toml | 0.10.2 | pypi_0 | pypi | | | torchaudio | 0.9.0 | py37 | pytorch | | | torchvision | 0.10.0 | py37_cu111 | pytorch | | | tqdm | 4.61.2 | pypi_0 | pypi | | | traitlets | 5.0.5 | pypi_0 | pypi | | | transformers | 4.9.1 | pypi_0 | pypi | | | typing-extensions | 3.10.0.0 | hd3eb1b0_0 | | | | typing_extensions | 3.10.0.0 | pyh06a4308_0 | | | | urllib3 | 1.26.14 | pypi_0 | pypi | | | wandb | 0.13.10 | pypi_0 | pypi | | | wcwidth | 0.2.5 | pypi_0 | pypi | | | wheel | 0.36.2 | pyhd3eb1b0_0 | | | | wrapt | 1.12.1 | pypi_0 | pypi | | | xz | 5.2.5 | h7b6447c_0 | | | | zipp | 3.5.0 | pypi_0 | pypi | | | zlib | 1.2.11 | h7b6447c_3 | | | | zstd | 1.4.9 | haebb681_0 | | |
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5832/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5832/timeline
null
completed
null
null
false
[ "moved to https://github.com/huggingface/transformers/issues/23233" ]
**PR #6018: test1**

url: https://api.github.com/repos/huggingface/datasets/issues/6018 · html_url: https://github.com/huggingface/datasets/pull/6018
id: 1799411999 · node_id: PR_kwDODunzps5VOmKY
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: null · is_pull_request: true
comments: 1 · created: 2023-07-11T17:25:49Z · updated: 2023-07-20T10:11:41Z · closed: 2023-07-20T10:11:41Z · reactions: all 0

body: null

comments_text: [ "We no longer host datasets in this repo. You should use the HF Hub instead." ]
**Issue #3881: How to use Image folder**

url: https://api.github.com/repos/huggingface/datasets/issues/3881 · html_url: https://github.com/huggingface/datasets/issues/3881
id: 1164452005 · node_id: I_kwDODunzps5FaCCl
labels: question (d876e3, "Further information is requested") · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 8 · created: 2022-03-09T21:18:52Z · updated: 2022-03-11T08:45:52Z · closed: 2022-03-11T08:45:52Z · reactions: all 0

body:

Ran this code:

```
load_dataset("imagefolder", data_dir="./my-dataset")
```

`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing:

```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")

/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
   1684     revision=revision,
   1685     use_auth_token=use_auth_token,
-> 1686     **config_kwargs,
   1687 )
   1688

/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
   1511     download_config.use_auth_token = use_auth_token
   1512     dataset_module = dataset_module_factory(
-> 1513         path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
   1514     )
   1515

/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
   1200     f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1201     f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202     ) from None
   1203     raise e1 from None
   1204 else:

FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```

comments_text: [ "Even this from docs throw same error\r\n```\r\ndataset = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\n\r\n```", "Hi @INF800,\r\n\r\nPlease note that the `imagefolder` feature enhanc...
**Issue #3448: JSONDecodeError with HuggingFace dataset viewer**

url: https://api.github.com/repos/huggingface/datasets/issues/3448 · html_url: https://github.com/huggingface/datasets/issues/3448
id: 1083231080 · node_id: I_kwDODunzps5AkMto
labels: dataset-viewer (E5583E, "Related to the dataset viewer on huggingface.co") · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 3 · created: 2021-12-17T12:52:41Z · updated: 2022-02-24T09:10:26Z · closed: 2022-02-24T09:10:26Z · reactions: all 0

body:

## Dataset viewer issue for 'pubmed_neg'

**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg

I am getting the error:

```
Status code:   400
Exception:     JSONDecodeError
Message:       Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
```

I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue.

Am I the one who added this dataset? Yes

comments_text: [ "Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?", "Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\n...
**PR #1097: Add MSRA NER labels**

url: https://api.github.com/repos/huggingface/datasets/issues/1097 · html_url: https://github.com/huggingface/datasets/pull/1097
id: 756955729 · node_id: MDExOlB1bGxSZXF1ZXN0NTMyNDExNzQ4
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2020-12-04T13:31:58Z · is_pull_request: true
comments: 0 · created: 2020-12-04T09:38:16Z · updated: 2020-12-04T13:31:59Z · closed: 2020-12-04T13:31:58Z · reactions: all 0

body: Fixes #940

comments_text: []
**Issue #707: Requirements should specify pyarrow<1**

url: https://api.github.com/repos/huggingface/datasets/issues/707 · html_url: https://github.com/huggingface/datasets/issues/707
id: 713954666 · node_id: MDU6SXNzdWU3MTM5NTQ2NjY=
labels: [] · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 7 · created: 2020-10-02T23:39:39Z · updated: 2020-12-04T08:22:39Z · closed: 2020-10-04T20:50:28Z · reactions: all 0

body:

I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error:

```
module 'pyarrow' has no attribute 'PyExtensionType'
```

I traced it back to datasets having installed PyArrow 1.0.1, but there's no pinning in the setup file:
https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68

Downgrading by installing `pip install "pyarrow<1"` resolved the issue.

comments_text: [ "Hello @mathcass I would want to work on this issue. May I do the same? ", "@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.", "Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish o...
**PR #2112: Support for legal NLP datasets (EURLEX and ECtHR cases)**

url: https://api.github.com/repos/huggingface/datasets/issues/2112 · html_url: https://github.com/huggingface/datasets/pull/2112
id: 841098008 · node_id: MDExOlB1bGxSZXF1ZXN0NjAwODgyMjA0
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: null · is_pull_request: true
comments: 0 · created: 2021-03-25T16:24:17Z · updated: 2021-03-25T18:39:31Z · closed: 2021-03-25T18:34:31Z · reactions: all 0

body:

Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)

comments_text: []
**PR #3899: Add exact match metric**

url: https://api.github.com/repos/huggingface/datasets/issues/3899 · html_url: https://github.com/huggingface/datasets/pull/3899
id: 1166931812 · node_id: PR_kwDODunzps40UzR3
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-03-21T16:05:34Z · is_pull_request: true
comments: 1 · created: 2022-03-11T22:21:40Z · updated: 2022-03-21T16:10:03Z · closed: 2022-03-21T16:05:35Z · reactions: all 0

body: Adding the exact match metric and its metric card. Note: some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into and work on fixing the failed tests when I'm back online after the weekend.

comments_text: [ "_The documentation is not available anymore as the PR was closed or merged._" ]
**PR #4208: Add CMU MoCap Dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/4208 · html_url: https://github.com/huggingface/datasets/pull/4208
id: 1213716426 · node_id: PR_kwDODunzps42r7bW
labels: dataset contribution (0e8a16, "Contribution to a dataset script") · state: closed · locked: false · milestone: null · draft: false · merged_at: null · is_pull_request: true
comments: 11 · created: 2022-04-24T17:31:08Z · updated: 2022-10-03T09:38:24Z · closed: 2022-10-03T09:36:30Z · reactions: all 0

body:

Resolves #3457 (Dataset Request: Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457)).

This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow-ups, so I ended up crawling the website to get the category, subcategory and description information. Some of the subjects do not have a category/subcategory/description either. I am using a subject-to-categories/subcategories/description map (metadata file).

Currently the loading of the dataset works for the "asf/amc" and "avi" formats, since they have a single download link. But "c3d" and "mpg" have multiple download links (part archives), and dl_manager.download_and_extract() extracts the files to multiple paths. Is there a way to extract these multiple archives into one folder? Any other way to go about this? Any suggestions/inputs on this would be helpful. Thank you.

comments_text: [ "_The documentation is not available anymore as the PR was closed or merged._", "- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple...
**PR #1815: Add CCAligned Multilingual Dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/1815 · html_url: https://github.com/huggingface/datasets/pull/1815
id: 800610017 · node_id: MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2021-03-01T10:36:21Z · is_pull_request: true
comments: 7 · created: 2021-02-03T18:59:52Z · updated: 2021-03-01T12:33:03Z · closed: 2021-03-01T10:36:21Z · reactions: all 0

body:

Hello, I'm trying to add the [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756.

This dataset has two types - Document-Pairs and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to download one particular language and not all. To provide this feature, `load_dataset`'s `**config_kwargs` should allow arbitrary keyword args, in this case `language_code`. This will be needed before the dataset is downloaded and extracted. I'm expecting the usage to be something like `load_dataset('ccaligned_multilingual', 'documents', language_code='en_XX-af_ZA')`. Of course, at a later stage we can provide just two-character language codes. This also has an issue where one language has multiple files (`my_MM` and `my_MM_zaw` on the link), but before that the required functionality must be added to `load_dataset`.

It would be great if someone could either tell me an alternative way to do this, or point me to where changes need to be made, if any, apart from the `BuilderConfig` definition. Additionally, I believe the tests will also have to be modified if this change is made, since it would not be possible to test for arbitrary keyword arguments. A decent way to go about this would be to provide all the options in a list/dictionary for `language_code` and use that to test the arguments. In essence, this is similar to the pre-trained checkpoint dictionary in `transformers`. That means writing dataset-specific tests, or adding something new to the dataset generation script to make it easier for everyone to add keyword arguments without having to worry about the tests.

Thanks, Gunjan. Requesting @lhoestq / @yjernite to review.

comments_text: [ "Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For example the [bible_para](https://github.com/huggi...
**PR #3711: Fix the error of _load_table_data function in msr_sqa dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/3711 · html_url: https://github.com/huggingface/datasets/pull/3711
id: 1134050545 · node_id: PR_kwDODunzps4ymmlK
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: null · is_pull_request: true
comments: 0 · created: 2022-02-12T13:20:53Z · updated: 2022-02-12T13:30:43Z · closed: 2022-02-12T13:30:43Z · reactions: all 0

body: The `_load_table_data` function from the last version is wrong: it is incorrect to use a comma to split each row.

comments_text: []
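The bug the PR describes is easy to see with a naive split; a short sketch of the difference (illustrative, not the actual patch):

```python
import csv

row = 'id,"a cell, with a comma",value'

# A naive comma split breaks quoted fields apart:
print(row.split(","))           # ['id', '"a cell', ' with a comma"', 'value']

# A CSV reader respects the quoting:
print(next(csv.reader([row])))  # ['id', 'a cell, with a comma', 'value']
```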
**PR #4391: Refactor column mappings for question answering datasets**

url: https://api.github.com/repos/huggingface/datasets/issues/4391 · html_url: https://github.com/huggingface/datasets/pull/4391
id: 1244839185 · node_id: PR_kwDODunzps44RpGv
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-05-24T12:48:48Z · is_pull_request: true
comments: 5 · created: 2022-05-23T09:13:14Z · updated: 2022-05-24T12:57:00Z · closed: 2022-05-24T12:48:48Z · reactions: all 0

body: This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain. As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR. cc @sashavor

comments_text: [ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a per...
**PR #4065: Create metric card for METEOR**

url: https://api.github.com/repos/huggingface/datasets/issues/4065 · html_url: https://github.com/huggingface/datasets/pull/4065
id: 1186722478 · node_id: PR_kwDODunzps41U5rq
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-03-31T17:07:50Z · is_pull_request: true
comments: 1 · created: 2022-03-30T16:40:30Z · updated: 2022-03-31T17:12:10Z · closed: 2022-03-31T17:07:50Z · reactions: all 0

body: Proposing a metric card for METEOR

comments_text: [ "_The documentation is not available anymore as the PR was closed or merged._" ]
**Issue #5378: The dataset "the_pile", subset "enron_emails", load_dataset() failure**

url: https://api.github.com/repos/huggingface/datasets/issues/5378 · html_url: https://github.com/huggingface/datasets/issues/5378
id: 1503887508 · node_id: I_kwDODunzps5Zo4CU
labels: [] · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 1 · created: 2022-12-20T02:19:13Z · updated: 2022-12-20T07:52:54Z · closed: 2022-12-20T07:52:54Z · reactions: all 0

body:

### Describe the bug

Running `datasets.load_dataset("the_pile", "enron_emails")` fails:

![image](https://user-images.githubusercontent.com/52023469/208565302-cfab7b89-0b97-4fa6-a5ba-c11b0b629b1a.png)

### Steps to reproduce the bug

Run the code below in the Python CLI:

```
>>> import datasets
>>> datasets.load_dataset("the_pile", "enron_emails")
```

### Expected behavior

The dataset "the_pile", "enron_emails" loads successfully.

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.4.3

comments_text: [ "Thanks for reporting @shaoyuta. We are investigating it.\r\n\r\nWe are transferring the issue to \"the_pile\" Community tab on the Hub: https://huggingface.co/datasets/the_pile/discussions/4" ]
**Issue #832: [GEM] add WikiAuto text simplification dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/832 · html_url: https://github.com/huggingface/datasets/issues/832
id: 740077228 · node_id: MDU6SXNzdWU3NDAwNzcyMjg=
labels: dataset request (e99695, "Requesting to add a new dataset") · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 0 · created: 2020-11-10T16:53:23Z · updated: 2020-12-03T13:38:08Z · closed: 2020-12-03T13:38:08Z · reactions: all 0

body:

## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf
- **Data:** https://github.com/chaojiang06/wiki-auto
- **Motivation:** Included in the GEM shared task

Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).

comments_text: []
**PR #4188: Support streaming cnn_dailymail dataset**

url: https://api.github.com/repos/huggingface/datasets/issues/4188 · html_url: https://github.com/huggingface/datasets/pull/4188
id: 1209740957 · node_id: PR_kwDODunzps42fpMv
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-04-20T15:52:49Z · is_pull_request: true
comments: 2 · created: 2022-04-20T14:04:36Z · updated: 2022-05-11T13:39:06Z · closed: 2022-04-20T15:52:49Z · reactions: all 0

body: Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo

comments_text: [ "_The documentation is not available anymore as the PR was closed or merged._", "Did you run the `datasets-cli` command before merging to make sure you generate all the examples ?" ]
**PR #3539: Research wording for nc licenses**

url: https://api.github.com/repos/huggingface/datasets/issues/3539 · html_url: https://github.com/huggingface/datasets/pull/3539
id: 1094813242 · node_id: PR_kwDODunzps4wlXU4
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-01-06T18:58:19Z · is_pull_request: true
comments: 1 · created: 2022-01-05T23:01:38Z · updated: 2022-01-06T18:58:20Z · closed: 2022-01-06T18:58:19Z · reactions: all 0

body: null

comments_text: [ "The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non commercial use of this PR. Merging" ]
**PR #1802: add github of contributors**

url: https://api.github.com/repos/huggingface/datasets/issues/1802 · html_url: https://github.com/huggingface/datasets/pull/1802
id: 797924468 · node_id: MDExOlB1bGxSZXF1ZXN0NTY0ODE4NDIy
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2021-02-03T10:06:30Z · is_pull_request: true
comments: 3 · created: 2021-02-01T03:49:19Z · updated: 2021-02-03T10:09:52Z · closed: 2021-02-03T10:06:30Z · reactions: all 0

body: This PR will add contributors' GitHub ids at the end of every dataset card.

comments_text: [ "@lhoestq Can you confirm if this format is fine? I will update cards based on your feedback.", "On HuggingFace side we also have a mapping of hf user => github user (GitHub info used to be required when signing up until not long ago – cc @gary149 @beurkinger) so we can also add a link to HF profile", "All the ...
**Issue #836: load_dataset with 'csv' is not working, while the same file is loading with 'text' mode or with pandas**

url: https://api.github.com/repos/huggingface/datasets/issues/836 · html_url: https://github.com/huggingface/datasets/issues/836
id: 740187613 · node_id: MDU6SXNzdWU3NDAxODc2MTM=
labels: dataset bug (2edb81, "A bug in a dataset script provided in the library") · state: closed (completed) · locked: false · milestone: null · is_pull_request: false
comments: 8 · created: 2020-11-10T19:35:40Z · updated: 2021-11-24T16:59:19Z · closed: 2020-11-19T17:35:38Z · reactions: all 0

body:

Hi All, I am trying to load a custom dataset, and I am loading a single file to make sure the file is loading correctly:

```
dataset = load_dataset('csv', data_files=files)
```

When I run it I get:

```
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to cache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4...
```

and then this error:

```
6a4ac4/csv.py in _generate_tables(self, files)
     78     def _generate_tables(self, files):
     79         for i, file in enumerate(files):
---> 80             pa_table = pac.read_csv(
     81                 file,
     82                 read_options=self.config.pa_read_options,

~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```

The size of the file is 3.5 GB. With smaller files I do not have an issue. When I load it with the 'text' parser I can see all the data, but it is not what I need. There is no issue reading the file with pandas. Any idea what could be the issue?

When I run a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size). Any ideas?

comments_text: [ "Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?", "Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5", "I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nTh...
**PR #113: Adding docstrings and some doc**

url: https://api.github.com/repos/huggingface/datasets/issues/113 · html_url: https://github.com/huggingface/datasets/pull/113
id: 618590562 · node_id: MDExOlB1bGxSZXF1ZXN0NDE4MjkxNjIx
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2020-05-14T23:22:44Z · is_pull_request: true
comments: 0 · created: 2020-05-14T23:14:41Z · updated: 2020-05-14T23:22:45Z · closed: 2020-05-14T23:22:44Z · reactions: all 0

body: Some doc

comments_text: []
**Issue #4120: Representing dictionaries (json) objects as features**

url: https://api.github.com/repos/huggingface/datasets/issues/4120 · html_url: https://github.com/huggingface/datasets/issues/4120
id: 1195887430 · node_id: I_kwDODunzps5HR8tG
labels: enhancement (a2eeef, "New feature or request") · state: open · locked: false · milestone: null · closed_at: null · is_pull_request: false
comments: 0 · created: 2022-04-07T11:07:41Z · updated: 2022-04-07T11:07:41Z · reactions: +1: 1 (total 1)

body:

In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and which may differ between samples), originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442). For instance:

```
sample1 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
}}
sample2 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
    "c": {"id": 2, "text": "text3"},
}}
sample3 = {"nps": {
    "a": {"id": 0, "text": "text1"},
    "b": {"id": 1, "text": "text2"},
    "c": {"id": 2, "text": "text3"},
    "d": {"id": 3, "text": "text4"},
}}
```

The `nps` field cannot be represented as a Feature while maintaining its original structure. @lhoestq suggested adding JSON as a new feature type, which would solve this problem. It seems like an alternative solution would be to change the original data format, which isn't an optimal solution in my case. Moreover, JSON is a common structure that will likely be useful in future datasets as well.

comments_text: []
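Until a JSON feature type exists, one workaround sketch is to store the variable-key mapping as a JSON-encoded string column (the sample data below comes from the issue's example; the approach itself is an assumption, not the library's answer):

```python
import json
from datasets import Dataset, Features, Value

samples = [
    {"nps": {"a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}}},
    {"nps": {"a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"},
             "c": {"id": 2, "text": "text3"}}},
]

# Encode each dict to a string so the schema is a plain Value("string").
ds = Dataset.from_dict(
    {"nps": [json.dumps(s["nps"]) for s in samples]},
    features=Features({"nps": Value("string")}),
)

print(json.loads(ds[0]["nps"])["b"]["text"])  # "text2", decoded on access
```

This trades schema awareness for flexibility, which is exactly the tension the issue raises.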
**PR #5282: Release: 2.7.1**

url: https://api.github.com/repos/huggingface/datasets/issues/5282 · html_url: https://github.com/huggingface/datasets/pull/5282
id: 1460238928 · node_id: PR_kwDODunzps5Det2_
labels: [] · state: closed · locked: false · milestone: null · draft: false · merged_at: 2022-11-22T17:21:27Z · is_pull_request: true
comments: 0 · created: 2022-11-22T16:58:54Z · updated: 2022-11-22T17:21:28Z · closed: 2022-11-22T17:21:27Z · reactions: all 0

body: null

comments_text: []
https://api.github.com/repos/huggingface/datasets/issues/1647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1647/comments
https://api.github.com/repos/huggingface/datasets/issues/1647/events
https://github.com/huggingface/datasets/issues/1647
775,525,799
MDU6SXNzdWU3NzU1MjU3OTk=
1,647
NarrativeQA fails to load with `load_dataset`
[]
closed
false
null
3
2020-12-28T18:16:09Z
2021-01-05T12:05:08Z
2021-01-03T17:58:05Z
null
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/narrativeqa/narrativeqa.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/narrativeqa/narrativeqa.py Workaround: after manually copying the `narrativeqa.py` builder into my local directory with curl https://raw.githubusercontent.com/huggingface/datasets/master/datasets/narrativeqa/narrativeqa.py -o narrativeqa.py and loading the dataset as `load_dataset('narrativeqa.py')`, everything works fine. I'm on datasets v1.1.3 using Python 3.6.10.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1647/timeline
null
completed
null
null
false
[ "Hi @eric-mitchell,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip i...
https://api.github.com/repos/huggingface/datasets/issues/5432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5432/comments
https://api.github.com/repos/huggingface/datasets/issues/5432/events
https://github.com/huggingface/datasets/pull/5432
1,535,893,019
PR_kwDODunzps5HhEA8
5,432
Fix CI benchmarks by temporarily pinning Docker image version
[]
closed
false
null
2
2023-01-17T07:15:31Z
2023-01-17T08:58:22Z
2023-01-17T08:51:17Z
null
This PR fixes CI benchmarks, by temporarily pinning Docker image version, instead of "latest" tag. It also updates deprecated `cml-send-comment` command and using `cml comment create` instead. Fix #5431.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5432/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5432/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5432.diff", "html_url": "https://github.com/huggingface/datasets/pull/5432", "merged_at": "2023-01-17T08:51:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5432.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5432" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/3671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3671/comments
https://api.github.com/repos/huggingface/datasets/issues/3671/events
https://github.com/huggingface/datasets/issues/3671
1,122,864,253
I_kwDODunzps5C7Yx9
3,671
Give an estimate of the dataset size in DatasetInfo
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
0
2022-02-03T09:47:10Z
2022-02-03T09:47:10Z
null
null
**Is your feature request related to a problem? Please describe.** Currently, only some of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would like to get this information, or an estimate, for all the datasets. **Describe the solution you'd like** - get access to the git information for the dataset files hosted on the hub - look at the [`Content-Length`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length) for the files served by HTTP
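For files already hosted on the Hub, a rough estimate is available today via `huggingface_hub` (a sketch, assuming the repository's per-file metadata is populated; "squad" is just an example repo id):

```python
from huggingface_hub import HfApi

api = HfApi()
# files_metadata=True asks the Hub to include per-file sizes.
info = api.dataset_info("squad", files_metadata=True)
total_bytes = sum(f.size or 0 for f in info.siblings)
print(f"approx. repository size: {total_bytes / 1e6:.1f} MB")
```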
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3671/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3193/comments
https://api.github.com/repos/huggingface/datasets/issues/3193/events
https://github.com/huggingface/datasets/issues/3193
1,041,971,117
I_kwDODunzps4-Gzet
3,193
Update link to datasets-tagging app
[]
closed
false
null
0
2021-11-02T07:39:59Z
2021-11-08T10:36:22Z
2021-11-08T10:36:22Z
null
Once datasets-tagging has been transferred to Spaces: - huggingface/datasets-tagging#22 We should update the link in Datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3193/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3193/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/1688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1688/comments
https://api.github.com/repos/huggingface/datasets/issues/1688/events
https://github.com/huggingface/datasets/pull/1688
779,029,685
MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0
1,688
Fix DaNE last example
[]
closed
false
null
0
2021-01-05T13:29:37Z
2021-01-05T14:00:15Z
2021-01-05T14:00:13Z
null
The last example from the DaNE dataset is empty. Fix #1686
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1688/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1688.diff", "html_url": "https://github.com/huggingface/datasets/pull/1688", "merged_at": "2021-01-05T14:00:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/1688.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1688" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2522/comments
https://api.github.com/repos/huggingface/datasets/issues/2522/events
https://github.com/huggingface/datasets/issues/2522
925,334,379
MDU6SXNzdWU5MjUzMzQzNzk=
2,522
Documentation Mistakes in Dataset: emotion
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2021-06-19T07:08:57Z
2023-01-02T12:04:58Z
2023-01-02T12:04:58Z
null
As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. For more detailed information, please refer to the paper. But when we view the data, there are only 6 emotions: anger, fear, joy, sadness, surprise, and trust.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2522/timeline
null
completed
null
null
false
[ "Hi,\r\n\r\nthis issue has been already reported in the dataset repo (https://github.com/dair-ai/emotion_dataset/issues/2), so this is a bug on their side.", "The documentation has another bug in the dataset card [here](https://huggingface.co/datasets/emotion). \r\n\r\nIn the dataset summary **six** emotions are ...
https://api.github.com/repos/huggingface/datasets/issues/3615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3615/comments
https://api.github.com/repos/huggingface/datasets/issues/3615/events
https://github.com/huggingface/datasets/issues/3615
1,111,576,876
I_kwDODunzps5CQVEs
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2022-01-22T14:12:59Z
2022-02-04T14:05:21Z
2022-02-04T14:05:21Z
null
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3615/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3615/timeline
null
completed
null
null
false
[ "@albertvillanova let me know if there is anything I can do to help with this. I had a quick look at the code again and though I could try the following changes:\r\n- use `download` instead of `download_and_extract`\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bn...
https://api.github.com/repos/huggingface/datasets/issues/4436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4436/comments
https://api.github.com/repos/huggingface/datasets/issues/4436/events
https://github.com/huggingface/datasets/pull/4436
1,257,758,834
PR_kwDODunzps449FsU
4,436
Fix directory names for LDC data in timit_asr dataset
[]
closed
false
null
1
2022-06-02T06:45:04Z
2022-06-02T09:32:56Z
2022-06-02T09:24:27Z
null
Related to: - #4422
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4436/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4436/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4436.diff", "html_url": "https://github.com/huggingface/datasets/pull/4436", "merged_at": "2022-06-02T09:24:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/4436.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4436" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5102/comments
https://api.github.com/repos/huggingface/datasets/issues/5102/events
https://github.com/huggingface/datasets/issues/5102
1,404,746,554
I_kwDODunzps5Turs6
5,102
Error in create a dataset from a Python generator
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "descript...
closed
false
null
2
2022-10-11T14:28:58Z
2022-10-12T11:31:56Z
2022-10-12T11:31:56Z
null
## Describe the bug In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in. ```Python >>> from datasets import Dataset >>> def my_gen(): ... for i in range(1, 4): ... yield {"a": i} >>> dataset = Dataset.from_generator(my_dict) ```
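For reference, the corrected form of the example (matching the fix later noted in the comments) simply passes the generator function itself:

```python
from datasets import Dataset

def my_gen():
    for i in range(1, 4):
        yield {"a": i}

# Pass the generator function, not an undefined `my_dict`.
dataset = Dataset.from_generator(my_gen)
```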
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5102/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5102/timeline
null
completed
null
null
false
[ "Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.", "Can I work on this one?" ]
https://api.github.com/repos/huggingface/datasets/issues/1801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1801/comments
https://api.github.com/repos/huggingface/datasets/issues/1801/events
https://github.com/huggingface/datasets/pull/1801
797,814,275
MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw
1,801
[GEM] Updated the source link of the data to update correct tokenized version.
[]
closed
false
null
2
2021-01-31T21:17:19Z
2021-02-02T13:17:38Z
2021-02-02T13:17:28Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1801/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1801/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1801.diff", "html_url": "https://github.com/huggingface/datasets/pull/1801", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1801.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1801" }
true
[ "@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ", "Closed by https://github.com/huggingface/datasets/pull/1807" ]
https://api.github.com/repos/huggingface/datasets/issues/3938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3938/comments
https://api.github.com/repos/huggingface/datasets/issues/3938/events
https://github.com/huggingface/datasets/pull/3938
1,170,875,417
PR_kwDODunzps40hnjM
3,938
Avoid info log messages from transformers in FrugalScore metric
[]
closed
false
null
1
2022-03-16T11:11:29Z
2022-03-17T08:37:25Z
2022-03-17T08:37:24Z
null
Fix #3928.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3938/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3938/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3938.diff", "html_url": "https://github.com/huggingface/datasets/pull/3938", "merged_at": "2022-03-17T08:37:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/3938.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3938" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3938). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/907/comments
https://api.github.com/repos/huggingface/datasets/issues/907/events
https://github.com/huggingface/datasets/pull/907
752,422,351
MDExOlB1bGxSZXF1ZXN0NTI4NzQ4ODMx
907
Remove os.path.join from all URLs
[]
closed
false
null
0
2020-11-27T18:55:30Z
2020-11-29T22:48:20Z
2020-11-29T22:48:19Z
null
Remove `os.path.join` from all URLs in dataset scripts.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/907/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/907/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/907.diff", "html_url": "https://github.com/huggingface/datasets/pull/907", "merged_at": "2020-11-29T22:48:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/907.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/907" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5990/comments
https://api.github.com/repos/huggingface/datasets/issues/5990/events
https://github.com/huggingface/datasets/issues/5990
1,774,389,854
I_kwDODunzps5pwwpe
5,990
Pushing a large dataset on the hub consistently hangs
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
42
2023-06-10T14:46:47Z
2023-07-24T18:40:06Z
null
null
### Describe the bug Once I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catch this happening and kill a process and restart, it seems to be extremely time wasting -- so I came to you to report this and to seek help. I already tried installing hf_transfer, but it doesn't support Byte file uploads so I uninstalled it. ### Reproduction ```python import multiprocessing as mp import pathlib from math import ceil import datasets import numpy as np from tqdm.auto import tqdm from tali.data.data import select_subtitles_between_timestamps from tali.utils import load_json tali_dataset_dir = "/data/" if __name__ == "__main__": full_dataset = datasets.load_dataset( "Antreas/TALI", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir ) def data_generator(set_name, percentage: float = 1.0): dataset = full_dataset[set_name] for item in tqdm(dataset): video_list = item["youtube_content_video"] video_list = np.random.choice( video_list, int(ceil(len(video_list) * percentage)) ) if len(video_list) == 0: continue captions = item["youtube_subtitle_text"] captions = select_subtitles_between_timestamps( subtitle_dict=load_json( captions.replace( "/data/", tali_dataset_dir, ) ), starting_timestamp=0, ending_timestamp=100000000, ) for video_path in video_list: temp_path = video_path.replace("/data/", tali_dataset_dir) video_path_actual: pathlib.Path = pathlib.Path(temp_path) if video_path_actual.exists(): item["youtube_content_video"] = open(video_path_actual, "rb").read() item["youtube_subtitle_text"] = captions yield item train_generator = lambda: data_generator("train", percentage=0.1) val_generator = lambda: data_generator("val") test_generator = lambda: data_generator("test") train_data = datasets.Dataset.from_generator( train_generator, num_proc=mp.cpu_count(), writer_batch_size=5000, cache_dir=tali_dataset_dir, ) val_data = datasets.Dataset.from_generator( val_generator, writer_batch_size=5000, num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir, ) test_data = datasets.Dataset.from_generator( test_generator, writer_batch_size=5000, num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir, ) dataset = datasets.DatasetDict( { "train": train_data, "val": val_data, "test": test_data, } ) succesful_competion = False while not succesful_competion: try: dataset.push_to_hub(repo_id="Antreas/TALI-small", max_shard_size="5GB") succesful_competion = True except Exception as e: print(e) ``` ### Logs ```shell Pushing dataset shards to the dataset hub: 33%|██████████████████████████████████████▎ | 7/21 [24:33<49:06, 210.45s/it] Error while uploading 'data/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub. Pushing split train to the Hub. Resuming upload of the dataset shards. Pushing dataset shards to the dataset hub: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [42:10<00:00, 55.01s/it] Pushing split val to the Hub. Resuming upload of the dataset shards. 
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.55ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.51s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.39ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:30<00:00, 30.19s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.28ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:24<00:00, 24.08s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.42ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.97s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.49ba/s] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.54ba/s^ Upload 1 LFS files: 0%| | 0/1 [04:42<?, ?it/s] Pushing dataset shards to the dataset hub: 52%|████████████████████████████████████████████████████████████▏ | 11/21 [17:23<15:48, 94.82s/it] That's where it got stuck ``` ### System info ```shell - huggingface_hub version: 0.15.1 - Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /root/.cache/huggingface/token - Has saved token ?: True - Who am I ?: Antreas - Configured git credential helpers: store - FastAI: N/A - Tensorflow: N/A - Torch: 2.1.0.dev20230606+cu121 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 9.5.0 - hf_transfer: N/A - gradio: N/A - numpy: 1.24.3 - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets - HF_TOKEN_PATH: /root/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5990/timeline
null
null
null
null
false
[ "Hi @AntreasAntoniou , sorry to know you are facing this issue. To help debugging it, could you tell me:\r\n- What is the total dataset size?\r\n- Is it always failing on the same shard or is the hanging problem happening randomly?\r\n- Were you able to save the dataset as parquet locally? This would help us determ...
https://api.github.com/repos/huggingface/datasets/issues/1449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1449/comments
https://api.github.com/repos/huggingface/datasets/issues/1449/events
https://github.com/huggingface/datasets/pull/1449
761,083,210
MDExOlB1bGxSZXF1ZXN0NTM1ODA0MzEy
1,449
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC) [PROPER]
[]
closed
false
null
2
2020-12-10T09:51:08Z
2020-12-11T17:07:46Z
2020-12-11T17:07:46Z
null
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC) - **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data - **Paper:** https://www.aclweb.org/anthology/W19-4406/ - **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP. ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1449/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1449.diff", "html_url": "https://github.com/huggingface/datasets/pull/1449", "merged_at": "2020-12-11T17:07:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/1449.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1449" }
true
[ "linter your code with flake8 and also run the commands present in Makefile for proper formatting \r\n", "merging since the CI is fixed on master" ]
https://api.github.com/repos/huggingface/datasets/issues/5185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5185/comments
https://api.github.com/repos/huggingface/datasets/issues/5185/events
https://github.com/huggingface/datasets/issues/5185
1,432,021,611
I_kwDODunzps5VWupr
5,185
Allow passing a subset of output features to Dataset.map
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
0
2022-11-01T20:07:20Z
2022-11-01T20:07:34Z
null
null
### Feature request Currently, map does one of two things to the features (if I'm not mistaken): * when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise * when you pass a full specification of features, output features are set to this However, sometimes you want to pass just some of the output types, particularly when the first of these modes produces an incorrect type. This currently crashes. ### Motivation To give a little background: this problem appears in converting labels to ids, where the labels happen to be floats rather than strings. Consider the following use of map to convert from float to int ```python data = Dataset.from_dict({'y':[1.0,2.0,3.0]}) mapped = data.map(lambda r: {'y': int(r['y'])}) mapped['y'] # is floats, not ints ``` The result is a float again, since after the mapping operation it forces the old datatypes back on the data. Passing `features=Features({"y": Value(dtype="int64")})` to map works in principle, but then extending it a little to e.g. ```python def format_data(r): return {**tokenizer(r["text"]), "y": int(r["y"])} data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]}) mapped = data.map( format_data, features=Features({'y': Value(dtype="int64")}), remove_columns=["text"], ) ``` results in a crash in dataset internals, as it expects either all or no output features to be specified. Of course one can pass a full feature specification, but this becomes tokenizer-specific and very awkward. ### Your contribution I've looked at `write_batch` and particularly `col_type = features[col] if features else None`, but checking for `col in features` here makes it fail elsewhere, and the structure makes it hard to understand how and why. I do not think I would have the time myself to get to the bottom of this anytime soon.
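Until a partial feature specification is supported, one workaround sketch is to cast the single offending column after mapping; this sidesteps the crash rather than fixing the underlying limitation:

```python
from datasets import Dataset, Value

data = Dataset.from_dict({"y": [1.0, 2.0, 3.0]})
mapped = data.map(lambda r: {"y": int(r["y"])})

# Cast only the one column instead of spelling out every feature.
mapped = mapped.cast_column("y", Value("int64"))
print(mapped.features["y"])  # Value(dtype='int64')
```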
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5185/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4558/comments
https://api.github.com/repos/huggingface/datasets/issues/4558/events
https://github.com/huggingface/datasets/pull/4558
1,283,479,650
PR_kwDODunzps46THl_
4,558
Add evaluation metadata to wmt14
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
2
2022-06-24T09:08:54Z
2022-09-23T09:36:50Z
2022-09-23T09:36:50Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4558/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4558/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4558.diff", "html_url": "https://github.com/huggingface/datasets/pull/4558", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4558.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4558" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4558). All of your documentation changes will be reflected on that endpoint.", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
https://api.github.com/repos/huggingface/datasets/issues/5275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5275/comments
https://api.github.com/repos/huggingface/datasets/issues/5275/events
https://github.com/huggingface/datasets/issues/5275
1,459,358,919
I_kwDODunzps5W_AzH
5,275
YAML integer keys are not preserved Hub server-side
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
13
2022-11-22T08:14:47Z
2023-01-26T10:52:35Z
2023-01-26T10:40:21Z
null
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563): - YAML integer keys are not preserved server-side: they are transformed to strings - See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files - Original: ```yaml class_label: names: 0: B-long 1: B-short ``` - Returned by the server: ```yaml class_label: names: '0': B-long '1': B-short ``` - They are planning to enforce only string keys - Other projects already use interger-transformed-to string keys: e.g. `transformers` models `id2label`: https://huggingface.co/roberta-large-mnli/blob/main/config.json ```yaml "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" } ``` On the other hand, at `datasets` we are currently using YAML integer keys for `dataset_info` `class_label`. Please note (thanks @lhoestq for pointing out) that previous versions (2.6 and 2.7) of `datasets` need being patched: ```python In [18]: Features._from_yaml_list([{'dtype': {'class_label': {'names': {'0': 'neg', '1': 'pos'}}}, 'name': 'label'}]) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-18-974f07eea526> in <module> ----> 1 Features._from_yaml_list(ry) ~/Desktop/hf/nlp/src/datasets/features/features.py in _from_yaml_list(cls, yaml_data) 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") 1744 -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) 1746 1747 def encode_example(self, example): ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") ~/Desktop/hf/nlp/src/datasets/features/features.py in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1734 return {"_type": snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} ~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] ~/Desktop/hf/nlp/src/datasets/features/features.py in unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 
TypeError: can only concatenate str (not "int") to str ``` TODO: - [x] Remove YAML integer keys from `dataset_info` metadata - [x] Make a patch release for affected `datasets` versions: 2.6 and 2.7 - [x] Communicate on the fix - [x] Wait for adoption - [x] Bulk edit the Hub to fix this in all canonical datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5275/timeline
null
completed
null
null
false
[ "@huggingface/datasets if you agree, I can make the bulk edit on the Hub to fix integer keys into strings.", "Ok for me, and we can merge (internal) https://github.com/huggingface/moon-landing/pull/4609", "FYI there are still 2k+ weekly users on `datasets` 2.6.1 which doesn't support the string label format for...
https://api.github.com/repos/huggingface/datasets/issues/991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/991/comments
https://api.github.com/repos/huggingface/datasets/issues/991/events
https://github.com/huggingface/datasets/pull/991
755,117,902
MDExOlB1bGxSZXF1ZXN0NTMwODkyMDk0
991
Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets)
[]
closed
false
null
0
2020-12-02T09:52:19Z
2020-12-03T11:01:26Z
2020-12-03T11:01:26Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/991/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/991/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/991.diff", "html_url": "https://github.com/huggingface/datasets/pull/991", "merged_at": "2020-12-03T11:01:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/991.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/991" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1400/comments
https://api.github.com/repos/huggingface/datasets/issues/1400/events
https://github.com/huggingface/datasets/pull/1400
760,514,215
MDExOlB1bGxSZXF1ZXN0NTM1MzMzMDYz
1,400
Add European Union Education and Culture Translation Memory (EAC-TM) dataset
[]
closed
false
null
0
2020-12-09T17:14:52Z
2020-12-14T13:06:48Z
2020-12-14T13:06:47Z
null
Adding the EAC Translation Memory dataset : https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1400/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1400/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1400.diff", "html_url": "https://github.com/huggingface/datasets/pull/1400", "merged_at": "2020-12-14T13:06:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1400.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1400" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4134/comments
https://api.github.com/repos/huggingface/datasets/issues/4134/events
https://github.com/huggingface/datasets/issues/4134
1,197,937,146
I_kwDODunzps5HZxH6
4,134
ELI5 supporting documents
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
1
2022-04-08T23:36:27Z
2022-04-13T13:52:46Z
null
null
If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4134/timeline
null
null
null
null
false
[ "Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;)" ]
https://api.github.com/repos/huggingface/datasets/issues/5644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5644/comments
https://api.github.com/repos/huggingface/datasets/issues/5644/events
https://github.com/huggingface/datasets/pull/5644
1,626,204,046
PR_kwDODunzps5MJHUi
5,644
Allow direct cast from binary to Audio/Image
[]
closed
false
null
3
2023-03-15T20:02:54Z
2023-03-16T14:20:44Z
2023-03-16T14:12:55Z
null
To address https://github.com/huggingface/datasets/discussions/5593.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5644/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5644.diff", "html_url": "https://github.com/huggingface/datasets/pull/5644", "merged_at": "2023-03-16T14:12:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5644.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5644" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/1524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1524/comments
https://api.github.com/repos/huggingface/datasets/issues/1524/events
https://github.com/huggingface/datasets/pull/1524
764,521,672
MDExOlB1bGxSZXF1ZXN0NTM4NTQ2MjI0
1,524
ADD: swahili dataset for language modeling
[]
closed
false
null
0
2020-12-12T22:47:18Z
2020-12-17T16:37:16Z
2020-12-17T16:37:16Z
null
Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1524/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1524.diff", "html_url": "https://github.com/huggingface/datasets/pull/1524", "merged_at": "2020-12-17T16:37:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/1524.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1524" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5418/comments
https://api.github.com/repos/huggingface/datasets/issues/5418/events
https://github.com/huggingface/datasets/issues/5418
1,530,111,184
I_kwDODunzps5bM6TQ
5,418
Add ProgressBar for `to_parquet`
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
4
2023-01-12T05:06:20Z
2023-01-24T18:18:24Z
2023-01-24T18:18:24Z
null
### Feature request Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works. ### Motivation Without a progress bar, it's a bit frustrating not to know how long a dataset will take to write to file, or whether it's stuck. ### Your contribution Sure, I can help if needed
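In the meantime, a manual sketch with `pyarrow` and `tqdm` can show per-batch progress (the stand-in dataset, batch size, and output path are placeholders; decodable features such as images or audio may need extra handling):

```python
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import Dataset
from tqdm.auto import tqdm

dataset = Dataset.from_dict({"a": list(range(100_000))})  # stand-in dataset
batch_size = 10_000
schema = dataset.features.arrow_schema

with pq.ParquetWriter("out.parquet", schema) as writer:
    for start in tqdm(range(0, len(dataset), batch_size)):
        batch = dataset[start : start + batch_size]  # plain dict of lists
        writer.write_table(pa.Table.from_pydict(batch, schema=schema))
```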
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5418/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5418/timeline
null
completed
null
null
false
[ "Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!", "@albertvillanova I’m happy to make a quick PR for the feature! let me know ", "That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review", ...
https://api.github.com/repos/huggingface/datasets/issues/621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/621/comments
https://api.github.com/repos/huggingface/datasets/issues/621/events
https://github.com/huggingface/datasets/pull/621
700,171,097
MDExOlB1bGxSZXF1ZXN0NDg1ODQ3ODYz
621
[docs] Index: The native emoji looks kinda ugly in large size
[]
closed
false
null
0
2020-09-12T09:48:40Z
2020-09-15T06:20:03Z
2020-09-15T06:20:02Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/621/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/621.diff", "html_url": "https://github.com/huggingface/datasets/pull/621", "merged_at": "2020-09-15T06:20:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/621.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/621" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5959/comments
https://api.github.com/repos/huggingface/datasets/issues/5959/events
https://github.com/huggingface/datasets/issues/5959
1,757,397,507
I_kwDODunzps5ov8ID
5,959
read metric glue.py from local file
[]
closed
false
null
1
2023-06-14T17:59:35Z
2023-06-14T18:04:16Z
2023-06-14T18:04:16Z
null
### Describe the bug Currently, the server is off-line. I am using the glue metric from the local file downloaded from the hub. I downloaded/cached datasets using `load_dataset('glue','sst2', cache_dir='/xxx')` to cache them, and then in off-line mode I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx')`. I can successfully reuse cached datasets. My problem is with load_metric. When I run `load_metric('xxx/glue_metric.py','sst2',cache_dir='/xxx')`, it returns ` File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper return deprecated_function(*args, **kwargs) File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric metric = metric_cls( TypeError: 'NoneType' object is not callable` Thanks in advance for the help! ### Steps to reproduce the bug N/A ### Expected behavior N/A ### Environment info `datasets == 2.12.0`
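For completeness, a sketch of the working offline call from the resolution below, via the `evaluate` library (the local script path follows the reporter's placeholder, and "sst2" is assumed to be the intended GLUE config name):

```python
import evaluate

# Load the metric script from a local file while offline.
metric = evaluate.load("glue_metric.py", "sst2")
print(metric.compute(predictions=[0, 1], references=[0, 1]))
```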
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5959/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5959/timeline
null
completed
null
null
false
[ "Sorry, I solve this by call `evaluate.load('glue_metric.py','sst-2')`\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/1927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1927/comments
https://api.github.com/repos/huggingface/datasets/issues/1927/events
https://github.com/huggingface/datasets/pull/1927
813,768,935
MDExOlB1bGxSZXF1ZXN0NTc3ODYxODM5
1,927
Update dataset card of wino_bias
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
1
2021-02-22T18:51:34Z
2022-09-23T13:35:09Z
2022-09-23T13:35:08Z
null
Updated the info for the wino_bias dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1927/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1927/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1927.diff", "html_url": "https://github.com/huggingface/datasets/pull/1927", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1927.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1927" }
true
[ "Thanks @JieyuZhao.\r\n\r\nI think this PR was superseded by your other PRs:\r\n- #1930\r\n- #2152 \r\n\r\nI'm closing this." ]
https://api.github.com/repos/huggingface/datasets/issues/523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/523/comments
https://api.github.com/repos/huggingface/datasets/issues/523/events
https://github.com/huggingface/datasets/pull/523
682,573,232
MDExOlB1bGxSZXF1ZXN0NDcwNzkxMjA1
523
Speed up Tokenization by optimizing cast_to_python_objects
[]
closed
false
null
1
2020-08-20T09:42:02Z
2020-08-24T08:54:15Z
2020-08-24T08:54:14Z
null
I changed how `cast_to_python_objects` works to make it faster. It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively. To avoid iterating over possibly long lists, it first checks whether the first element that is not None has to be cast. If the first element needs to be cast, then all the elements of the list will be cast; otherwise they'll stay the same. This trick makes it possible to cast objects that contain tokenizer outputs without iterating over every single token, for example. Speed improvement: ```python import transformers import nlp tok = transformers.BertTokenizerFast.from_pretrained("bert-base-uncased") txt = ["a " * 512] * 1000 dataset = nlp.Dataset.from_dict({"txt": txt}) # Tokenization using .map is now faster. Previously it was taking 3.5s %time _ = dataset.map(lambda x: tok(x["txt"]), batched=True, load_from_cache_file=False) # 450ms # for comparison %time _ = tok(txt) # 280ms ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/523/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/523/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/523.diff", "html_url": "https://github.com/huggingface/datasets/pull/523", "merged_at": "2020-08-24T08:54:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/523.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/523" }
true
[ "I took your comments into account and added tests for `cast_to_python_objects`" ]
https://api.github.com/repos/huggingface/datasets/issues/2620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2620/comments
https://api.github.com/repos/huggingface/datasets/issues/2620/events
https://github.com/huggingface/datasets/pull/2620
940,893,389
MDExOlB1bGxSZXF1ZXN0Njg2ODk3MDky
2,620
Add speech processing tasks
[]
closed
false
null
2
2021-07-09T16:07:29Z
2021-07-12T18:32:59Z
2021-07-12T17:32:02Z
null
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2620/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2620/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2620.diff", "html_url": "https://github.com/huggingface/datasets/pull/2620", "merged_at": "2021-07-12T17:32:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/2620.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2620" }
true
[ "Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?", "> Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?\r\n\r\nYes there's a few - I'll fix them tomorrow :)" ]
https://api.github.com/repos/huggingface/datasets/issues/52
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/52/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/52/comments
https://api.github.com/repos/huggingface/datasets/issues/52/events
https://github.com/huggingface/datasets/pull/52
613,339,071
MDExOlB1bGxSZXF1ZXN0NDE0MTEyMDAy
52
allow dummy folder structure to handle dict of lists
[]
closed
false
null
0
2020-05-06T13:54:35Z
2020-05-06T13:55:19Z
2020-05-06T13:55:18Z
null
`esnli.py` needs that extension of the dummy data testing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/52/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/52/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/52.diff", "html_url": "https://github.com/huggingface/datasets/pull/52", "merged_at": "2020-05-06T13:55:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/52.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/52" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1144/comments
https://api.github.com/repos/huggingface/datasets/issues/1144/events
https://github.com/huggingface/datasets/pull/1144
757,452,831
MDExOlB1bGxSZXF1ZXN0NTMyODI3OTI4
1,144
Add JFLEG
[]
closed
false
null
2
2020-12-04T22:36:38Z
2020-12-06T18:16:04Z
2020-12-06T18:16:04Z
null
This PR adds [JFLEG ](https://www.aclweb.org/anthology/E17-2037/), an English grammatical error correction benchmark. The tests were successful on real data, although it would be great if I could get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target sentences**. The original dataset comprises files in a flat structure, labelled by split and then by source/target (e.g., dev.src, dev.ref0, ..., dev.ref3). I'm not sure of the best way to add this. I imagine I could treat each distinct source-target pair as its own split, but having so many copies of the source sentence feels redundant, and it would make it less convenient for end-users who might want to access multiple gold standard targets simultaneously.
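A minimal sketch of an alternative layout, keeping one source sentence per row with all four references collected in a `Sequence` feature; the field names here are illustrative, not a confirmed schema:

```python
import datasets

# Sketch only: "sentence" and "corrections" are hypothetical field names.
features = datasets.Features(
    {
        "sentence": datasets.Value("string"),  # the source sentence
        "corrections": datasets.Sequence(datasets.Value("string")),  # ref0..ref3
    }
)
```

This avoids duplicating the source sentence across splits and lets end-users access all gold standard targets at once.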
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1144/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1144/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1144.diff", "html_url": "https://github.com/huggingface/datasets/pull/1144", "merged_at": "2020-12-06T18:16:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/1144.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1144" }
true
[ "Hi @j-chim ! You're right it does feel redundant: your option works better, but I'd even suggest having the references in a Sequence feature, which you can declare as:\r\n```\r\n\t features=datasets.Features(\r\n {\r\n \"sentence\": datasets.Value(\"string\"),\r\n ...
https://api.github.com/repos/huggingface/datasets/issues/4276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4276/comments
https://api.github.com/repos/huggingface/datasets/issues/4276/events
https://github.com/huggingface/datasets/issues/4276
1,224,949,252
I_kwDODunzps5JAz4E
4,276
OpenBookQA has missing and inconsistent field names
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
11
2022-05-04T05:51:52Z
2022-10-11T17:11:53Z
2022-10-05T13:50:03Z
null
## Describe the bug The OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. Unflatten the question_stem field back into [question][stem] to match the original format. 2. Add the missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanScore'], - 'clarity': row['clarity'], - 'turkIdAnonymized': row['turkIdAnonymized'] 3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Expected results The structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4276/timeline
null
completed
null
null
false
[ "Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ", "Ok, awesome @albertvillanova How about #4275 ?", "On the other hand, I am not sure if we should always preserve the original nested structure. I think we shoul...
https://api.github.com/repos/huggingface/datasets/issues/1322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1322/comments
https://api.github.com/repos/huggingface/datasets/issues/1322/events
https://github.com/huggingface/datasets/pull/1322
759,576,003
MDExOlB1bGxSZXF1ZXN0NTM0NTU3Njg3
1,322
add indonlu benchmark datasets
[]
closed
false
null
0
2020-12-08T16:10:58Z
2020-12-13T02:11:27Z
2020-12-13T01:54:28Z
null
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1322/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1322/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1322.diff", "html_url": "https://github.com/huggingface/datasets/pull/1322", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1322.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1322" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3431/comments
https://api.github.com/repos/huggingface/datasets/issues/3431/events
https://github.com/huggingface/datasets/issues/3431
1,079,866,083
I_kwDODunzps5AXXLj
3,431
Unable to resolve any data file after loading once
[]
closed
false
null
2
2021-12-14T15:02:15Z
2022-12-11T10:53:04Z
2022-02-24T09:13:52Z
null
When I rerun my program, this error occurs: "Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']". How can I deal with this problem? Thanks. My code is below. ![image](https://user-images.githubusercontent.com/84694183/146023446-d75fdec8-65c1-484f-80d8-6c20ff5e994b.png)
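A minimal sketch of the likely fix, assuming the goal is to reload `wiki_dpr` from the Hugging Face Hub: pass the dataset name rather than the local cache path, so the cached files are reused automatically (the config name below is illustrative):

```python
from datasets import load_dataset

# Pass the Hub dataset name, not the cache directory; the second run then
# reuses the files cached under HF_DATASETS_CACHE instead of failing to
# resolve data files inside the cache folder.
ds = load_dataset("wiki_dpr", "psgs_w100.nq.exact")  # config name illustrative
```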
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3431/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3431/timeline
null
completed
null
null
false
[ "Hi ! `load_dataset` accepts as input either a local dataset directory or a dataset name from the Hugging Face Hub.\r\n\r\nSo here you are getting this error the second time because it tries to load the local `wiki_dpr` directory, instead of `wiki_dpr` from the Hub. It doesn't work since it's a **cache** directory,...
https://api.github.com/repos/huggingface/datasets/issues/3526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3526/comments
https://api.github.com/repos/huggingface/datasets/issues/3526/events
https://github.com/huggingface/datasets/pull/3526
1,093,833,446
PR_kwDODunzps4wiMaQ
3,526
Update license to bookcorpus dataset card
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
2
2022-01-04T23:25:23Z
2022-09-30T10:23:38Z
2022-09-30T10:21:20Z
null
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3526/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3526/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3526.diff", "html_url": "https://github.com/huggingface/datasets/pull/3526", "merged_at": "2022-09-30T10:21:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/3526.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3526" }
true
[ "The smashwords ToS apply for this dataset, we did the same for https://github.com/huggingface/datasets/pull/3525", "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/165/comments
https://api.github.com/repos/huggingface/datasets/issues/165/events
https://github.com/huggingface/datasets/issues/165
620,758,221
MDU6SXNzdWU2MjA3NTgyMjE=
165
ANLI
[]
closed
false
null
0
2020-05-19T07:50:57Z
2020-05-20T12:23:07Z
2020-05-20T12:23:07Z
null
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART". The current naming will confuse people :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/165/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/165/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/713/comments
https://api.github.com/repos/huggingface/datasets/issues/713/events
https://github.com/huggingface/datasets/pull/713
714,475,732
MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy
713
Fix reading text files with carriage return symbols
[]
closed
false
null
1
2020-10-05T03:07:03Z
2020-10-09T05:58:25Z
2020-10-05T13:49:29Z
null
The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`). It fails with the following error message: ``` ... File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. ``` ___ I figured out that pandas uses those symbols as line terminators, and this eventually causes the error. Explicitly specifying the `lineterminator` fixes the issue and everything works fine. Please consider this PR, as this seems to be a common issue worth solving.
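A minimal, self-contained sketch of the behavior this PR fixes, shown directly with pandas (the data here is constructed in memory for illustration):

```python
import io
import pandas as pd

# A field whose text contains "\r": with an explicit lineterminator, the
# carriage return stays inside the field instead of being treated as a row
# break, which is what triggers the ParserError quoted above on real files.
data = io.StringIO("first line with \r inside\nsecond line\n")
df = pd.read_csv(data, names=["text"], header=None, lineterminator="\n")
print(df["text"].tolist())  # ['first line with \r inside', 'second line']
```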
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/713/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/713.diff", "html_url": "https://github.com/huggingface/datasets/pull/713", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/713.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/713" }
true
[ "Discussed in #622, fixed in #715. Closing the issue. Thanks @lhoestq, it works now! 👍 " ]
https://api.github.com/repos/huggingface/datasets/issues/2006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2006/comments
https://api.github.com/repos/huggingface/datasets/issues/2006/events
https://github.com/huggingface/datasets/pull/2006
824,457,794
MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2
2,006
Don't gitignore dvc.lock
[]
closed
false
null
0
2021-03-08T11:13:08Z
2021-03-08T11:28:35Z
2021-03-08T11:28:34Z
null
The benchmarks runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of ``` ERROR: 'dvc.lock' is git-ignored. ``` I removed the dvc.lock file from the gitignore to fix that
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2006/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2006.diff", "html_url": "https://github.com/huggingface/datasets/pull/2006", "merged_at": "2021-03-08T11:28:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2006.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2006" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3958/comments
https://api.github.com/repos/huggingface/datasets/issues/3958/events
https://github.com/huggingface/datasets/pull/3958
1,172,657,981
PR_kwDODunzps40nQU2
3,958
Update Wikipedia metadata
[]
closed
false
null
2
2022-03-17T17:50:05Z
2022-03-21T12:26:48Z
2022-03-21T12:26:47Z
null
This PR updates: - dataset card - metadata JSON
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3958/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3958/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3958.diff", "html_url": "https://github.com/huggingface/datasets/pull/3958", "merged_at": "2022-03-21T12:26:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/3958.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3958" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3958). All of your documentation changes will be reflected on that endpoint.", "Once this last PR validated, I can take care of the integration of all the wikipedia update branch into master, @lhoestq. " ]
https://api.github.com/repos/huggingface/datasets/issues/328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/328/comments
https://api.github.com/repos/huggingface/datasets/issues/328/events
https://github.com/huggingface/datasets/issues/328
648,326,841
MDU6SXNzdWU2NDgzMjY4NDE=
328
Fork dataset
[]
closed
false
null
5
2020-06-30T16:42:53Z
2020-07-06T21:43:59Z
2020-07-06T21:43:59Z
null
We have a multi-task learning model training setup that I'm trying to convert to the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training a NER and a Relations prediction head. Is there a good way to "fork" a dataset (a sketch of one option follows below), e.g. 1. text + json -> Dataset1 1. Dataset1 -> DatasetNER 1. Dataset1 -> DatasetREL or 1. text + json -> Dataset1 1. Dataset1 -> DatasetNER 1. Dataset1 + DatasetNER -> DatasetREL
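A minimal sketch of the first flow using the current API (called `nlp` at the time of this issue, `datasets` today); all paths and field names below are hypothetical:

```python
from datasets import load_dataset

# Step 1: parse the raw text + json annotations once.
base = load_dataset("json", data_files="annotations.json")["train"]

# Steps 2-3: fork two task-specific views without re-parsing the raw data.
ner = base.map(
    lambda ex: {"tokens": ex["tokens"], "ner_tags": ex["entities"]},
    remove_columns=base.column_names,
)
rel = base.map(
    lambda ex: {"tokens": ex["tokens"], "relations": ex["relations"]},
    remove_columns=base.column_names,
)
```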
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/328/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/328/timeline
null
completed
null
null
false
[ "To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for exa...
https://api.github.com/repos/huggingface/datasets/issues/3947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3947/comments
https://api.github.com/repos/huggingface/datasets/issues/3947/events
https://github.com/huggingface/datasets/pull/3947
1,171,452,854
PR_kwDODunzps40jfLq
3,947
BLEU metric card
[]
closed
false
null
2
2022-03-16T19:20:07Z
2022-03-29T14:59:50Z
2022-03-29T14:54:14Z
null
Add BLEU metric card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3947/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3947/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3947.diff", "html_url": "https://github.com/huggingface/datasets/pull/3947", "merged_at": "2022-03-29T14:54:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3947.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3947" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Some thoughts:\r\n- For values, e.g. \"Defaults to False\", I would put False in code: `False`. Same for : \"Defaults to `4`.\"\r\n- I would put the following remark in \"Limitations\": \r\n> \"BLEU's output is always a number betwee...
https://api.github.com/repos/huggingface/datasets/issues/1704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1704/comments
https://api.github.com/repos/huggingface/datasets/issues/1704/events
https://github.com/huggingface/datasets/pull/1704
781,402,757
MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1
1,704
Update XSUM Factuality DatasetCard
[]
closed
false
null
0
2021-01-07T15:37:14Z
2021-01-12T13:30:04Z
2021-01-12T13:30:04Z
null
Update XSUM Factuality DatasetCard
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1704/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1704.diff", "html_url": "https://github.com/huggingface/datasets/pull/1704", "merged_at": "2021-01-12T13:30:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/1704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1704" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4093/comments
https://api.github.com/repos/huggingface/datasets/issues/4093/events
https://github.com/huggingface/datasets/issues/4093
1,192,523,161
I_kwDODunzps5HFHWZ
4,093
elena-soare/crawled-ecommerce: missing dataset
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
3
2022-04-05T02:25:19Z
2022-04-12T09:34:53Z
2022-04-12T09:34:53Z
null
elena-soare/crawled-ecommerce **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4093/timeline
null
completed
null
null
false
[ "It's a bug! Thanks for reporting, I'm looking at it.", "By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.", "Fixed. See https://huggi...
https://api.github.com/repos/huggingface/datasets/issues/1837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1837/comments
https://api.github.com/repos/huggingface/datasets/issues/1837/events
https://github.com/huggingface/datasets/issues/1837
803,555,650
MDU6SXNzdWU4MDM1NTU2NTA=
1,837
Add VCTK
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b",...
closed
false
null
2
2021-02-08T13:15:28Z
2021-12-28T15:05:08Z
2021-12-28T15:05:08Z
null
## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.* - **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443 - **Data:** https://datashare.ed.ac.uk/handle/10283/3443 - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1837/timeline
null
completed
null
null
false
[ "@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me k...
https://api.github.com/repos/huggingface/datasets/issues/6081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6081/comments
https://api.github.com/repos/huggingface/datasets/issues/6081/events
https://github.com/huggingface/datasets/pull/6081
1,824,486,278
PR_kwDODunzps5WjU0k
6,081
Deprecate `Dataset.export`
[]
open
false
null
1
2023-07-27T14:22:18Z
2023-07-27T14:27:56Z
null
null
Deprecate `Dataset.export` that generates a TFRecord file from a dataset as this method is undocumented, and the usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) tutorial (on which this method is based) to write TFRecord files instead.
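A minimal sketch of the suggested replacement workflow, following the official TFRecord tutorial; the feature names and toy examples below are illustrative:

```python
import tensorflow as tf

# Toy examples standing in for rows of a datasets.Dataset.
examples = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]

def serialize_example(text: str, label: int) -> bytes:
    # Build a tf.train.Example, as in the official TFRecord tutorial.
    feature = {
        "text": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[text.encode("utf-8")])
        ),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(
        features=tf.train.Features(feature=feature)
    ).SerializeToString()

with tf.io.TFRecordWriter("dataset.tfrecord") as writer:
    for ex in examples:
        writer.write(serialize_example(ex["text"], ex["label"]))
```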
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6081/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6081/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6081.diff", "html_url": "https://github.com/huggingface/datasets/pull/6081", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6081.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6081" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/341/comments
https://api.github.com/repos/huggingface/datasets/issues/341/events
https://github.com/huggingface/datasets/pull/341
650,611,969
MDExOlB1bGxSZXF1ZXN0NDQ0MDcwMjEx
341
add fever dataset
[]
closed
false
null
0
2020-07-03T13:53:07Z
2020-07-06T13:03:48Z
2020-07-06T13:03:47Z
null
This PR adds the FEVER dataset https://fever.ai/ introduced in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/341/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/341/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/341.diff", "html_url": "https://github.com/huggingface/datasets/pull/341", "merged_at": "2020-07-06T13:03:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/341.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/341" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/544/comments
https://api.github.com/repos/huggingface/datasets/issues/544/events
https://github.com/huggingface/datasets/pull/544
689,062,519
MDExOlB1bGxSZXF1ZXN0NDc2MTc4MDM2
544
[Distributed] Fix load_dataset error when multiprocessing + add test
[]
closed
false
null
0
2020-08-31T09:30:10Z
2020-08-31T11:15:11Z
2020-08-31T11:15:10Z
null
Fix #543 + add test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/544/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/544/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/544.diff", "html_url": "https://github.com/huggingface/datasets/pull/544", "merged_at": "2020-08-31T11:15:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/544.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/544" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3099/comments
https://api.github.com/repos/huggingface/datasets/issues/3099/events
https://github.com/huggingface/datasets/issues/3099
1,028,338,078
I_kwDODunzps49SzGe
3,099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
6
2021-10-17T14:17:47Z
2021-11-09T16:42:29Z
2021-11-09T16:42:28Z
null
## Describe the bug Installing with `pip install datasets` or `conda install -c huggingface -c conda-forge datasets` succeeds, but `datasets` cannot be imported afterwards. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-fbe7981e6e21> in <module> 1 import torch 2 import transformers ----> 3 from datasets import load_dataset 4 5 dataset = load_dataset("sst", "default") ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/__init__.py in <module> 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ---> 37 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder 38 from .combine import interleave_datasets 39 from .dataset_dict import DatasetDict, IterableDatasetDict ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/builder.py in <module> 42 ) 43 from .arrow_writer import ArrowWriter, BeamWriter ---> 44 from .data_files import DataFilesDict, _sanitize_patterns 45 from .dataset_dict import DatasetDict, IterableDatasetDict 46 from .fingerprint import Hasher ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/data_files.py in <module> 118 119 def _exec_patterns_in_dataset_repository( --> 120 dataset_info: huggingface_hub.hf_api.DatasetInfo, 121 patterns: List[str], 122 allowed_extensions: Optional[list] = None, AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-11.3.1-arm64-arm-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3099/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3099/timeline
null
completed
null
null
false
[ "Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tre...
https://api.github.com/repos/huggingface/datasets/issues/5057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5057/comments
https://api.github.com/repos/huggingface/datasets/issues/5057/events
https://github.com/huggingface/datasets/pull/5057
1,394,827,216
PR_kwDODunzps5AD4c6
5,057
Support `converters` in `CsvBuilder`
[]
closed
false
null
1
2022-10-03T14:23:21Z
2022-10-04T11:19:28Z
2022-10-04T11:17:32Z
null
Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
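A minimal sketch of how the new parameter can be used, assuming a CSV column that stores stringified Python lists (as in the linked forum thread); the file and column names are illustrative:

```python
import ast
from datasets import load_dataset

# `converters` is forwarded to pandas.read_csv, so each raw string in the
# hypothetical "tags" column is parsed back into a Python list on load.
ds = load_dataset(
    "csv",
    data_files="data.csv",
    converters={"tags": ast.literal_eval},
)
```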
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5057/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5057/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5057.diff", "html_url": "https://github.com/huggingface/datasets/pull/5057", "merged_at": "2022-10-04T11:17:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/5057.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5057" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4206/comments
https://api.github.com/repos/huggingface/datasets/issues/4206/events
https://github.com/huggingface/datasets/pull/4206
1,212,715,581
PR_kwDODunzps42pJQW
4,206
Add Nerval Metric
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
closed
false
null
1
2022-04-22T19:45:00Z
2023-07-11T09:34:56Z
2023-07-11T09:34:55Z
null
This PR adds readme.md and ner_val.py to metrics. Nerval is a Python package that helps evaluate NER models. It creates a classification report and a confusion matrix at the entity level.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4206/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4206.diff", "html_url": "https://github.com/huggingface/datasets/pull/4206", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4206.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4206" }
true
[ "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
https://api.github.com/repos/huggingface/datasets/issues/1461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1461/comments
https://api.github.com/repos/huggingface/datasets/issues/1461/events
https://github.com/huggingface/datasets/pull/1461
761,415,420
MDExOlB1bGxSZXF1ZXN0NTM2MDgzODY5
1,461
Adding NewsQA dataset
[]
closed
false
null
6
2020-12-10T17:01:10Z
2020-12-17T18:29:03Z
2020-12-17T18:27:36Z
null
Since the dataset has legal restrictions on circulating the original data, it has to be manually downloaded by the user and then loaded into the library.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1461/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1461/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1461.diff", "html_url": "https://github.com/huggingface/datasets/pull/1461", "merged_at": "2020-12-17T18:27:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/1461.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1461" }
true
[ "Generate the dummy dataset then regenerate the dataset_info.json file, ", "> Generate the dummy dataset then regenerate the dataset_info.json file,\r\n\r\nThe pytest scripts do not accept manual directory inputs for the data provided manually. This is why the tests fail. ", "don't use the --auto-generate argum...
https://api.github.com/repos/huggingface/datasets/issues/1190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1190/comments
https://api.github.com/repos/huggingface/datasets/issues/1190/events
https://github.com/huggingface/datasets/pull/1190
757,833,698
MDExOlB1bGxSZXF1ZXN0NTMzMTMwNTM0
1,190
Add Fake News Detection in Filipino dataset
[]
closed
false
null
2
2020-12-06T03:12:15Z
2020-12-07T15:39:27Z
2020-12-07T15:39:27Z
null
This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpora in Filipino. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html Link to the dataset/repo: https://github.com/jcblaisecruz02/Tagalog-fake-news
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1190/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1190/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1190.diff", "html_url": "https://github.com/huggingface/datasets/pull/1190", "merged_at": "2020-12-07T15:39:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/1190.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1190" }
true
[ "Hi! I'm the author of this paper (surprised to see our datasets have been added already).\r\n\r\nThat paper link only leads to the conference index, here's a link to the actual paper: https://www.aclweb.org/anthology/2020.lrec-1.316/\r\n\r\nWould it be fine if I also edited your gsheet entry to reflect this change...
https://api.github.com/repos/huggingface/datasets/issues/2794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2794/comments
https://api.github.com/repos/huggingface/datasets/issues/2794/events
https://github.com/huggingface/datasets/issues/2794
969,728,545
MDU6SXNzdWU5Njk3Mjg1NDU=
2,794
Warnings and documentation about pickling incorrect
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
0
2021-08-12T23:09:13Z
2021-08-12T23:09:31Z
null
null
## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L262 Docs: > For a transform to be hashable, it needs to be pickleable using dill or pickle. > – [docs](https://huggingface.co/docs/datasets/processing.html#fingerprinting) For my code, `pickle` works, but `dill` fails. The `dill` failure has already been reported in https://github.com/huggingface/datasets/issues/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any backing off to `pickle`. This implies that it's not the case that either `dill` **or** `pickle` can work, but that `dill` must work if it is installed. I think this is more accurate wording, since it is installed and used by default: https://github.com/huggingface/datasets/blob/c93525dc291346e54212567fa72d7d607befe937/setup.py#L83 ... and the hashing will fail if it fails. ### Enhancement I think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https://github.com/huggingface/datasets/issues/2516#issuecomment-865173139: ```python from datasets.fingerprint import Hasher Hasher.hash(my_object) ``` I think add this to the docs will help future users quickly debug any hashing troubles of their own :-) ## Steps to reproduce the bug `dill` but not `pickle` hashing failure in https://github.com/huggingface/datasets/issues/2643 ## Expected results If either `dill` or `pickle` can successfully hash, the hashing will succeed. ## Actual results If `dill` or `pickle` cannot hash, the hashing fails. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2794/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2794/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/1002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1002/comments
https://api.github.com/repos/huggingface/datasets/issues/1002/events
https://github.com/huggingface/datasets/pull/1002
755,309,758
MDExOlB1bGxSZXF1ZXN0NTMxMDQ1MDIx
1,002
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
[]
closed
false
null
2
2020-12-02T14:13:17Z
2020-12-07T16:58:03Z
2020-12-03T13:14:33Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1002/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1002/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1002.diff", "html_url": "https://github.com/huggingface/datasets/pull/1002", "merged_at": "2020-12-03T13:14:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1002.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1002" }
true
[ "Could you fix the dummy data before we merge ?\r\nLooks like the dummy `train.csv` is missing", "Thanks @Narsil @lhoestq for adding MeDAL :)" ]
https://api.github.com/repos/huggingface/datasets/issues/2345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2345/comments
https://api.github.com/repos/huggingface/datasets/issues/2345/events
https://github.com/huggingface/datasets/issues/2345
886,586,872
MDU6SXNzdWU4ODY1ODY4NzI=
2,345
[Question] How to move and reuse preprocessed dataset?
[]
closed
false
null
4
2021-05-11T09:09:17Z
2021-06-11T04:39:11Z
2021-06-11T04:39:11Z
null
Hi, I am training a GPT-2 model from scratch using run_clm.py. I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess). I tried to copy path_to_cache_dir/datasets to new_cache_dir/datasets and set export HF_DATASETS_CACHE="new_cache_dir/", but the program still re-preprocesses the whole dataset without loading the cache. I also tried torch.save(lm_datasets, fw), but the saved file is only 14M. What is the proper way to do this?
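A minimal sketch of one standard way to do this with the `datasets` API: persist the processed dataset explicitly rather than copying the cache directory. The paths are illustrative, and `lm_datasets` is the object from the question:

```python
from datasets import load_from_disk

# On the original machine, after preprocessing:
lm_datasets.save_to_disk("processed/clm")

# On the new machine (or after moving the directory):
lm_datasets = load_from_disk("processed/clm")
```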
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2345/timeline
null
completed
null
null
false
[ "@lhoestq @LysandreJik", "<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n", "Also note that for the caching to work, you must re...
https://api.github.com/repos/huggingface/datasets/issues/1970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1970/comments
https://api.github.com/repos/huggingface/datasets/issues/1970/events
https://github.com/huggingface/datasets/pull/1970
819,500,620
MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw
1,970
Fixing the URL filtering for bad MLSUM examples in GEM
[]
closed
false
null
0
2021-03-02T01:22:58Z
2021-03-02T03:19:06Z
2021-03-02T02:01:33Z
null
This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r cc @sebastianGehrmann
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1970/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1970.diff", "html_url": "https://github.com/huggingface/datasets/pull/1970", "merged_at": "2021-03-02T02:01:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1970.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1970" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1050/comments
https://api.github.com/repos/huggingface/datasets/issues/1050/events
https://github.com/huggingface/datasets/pull/1050
756,166,728
MDExOlB1bGxSZXF1ZXN0NTMxNzU1MDQ3
1,050
Add GoEmotions
[]
closed
false
null
1
2020-12-03T12:49:53Z
2020-12-03T17:37:45Z
2020-12-03T17:30:08Z
null
Adds the GoEmotions dataset, a nice emotion classification dataset with 27 (multi-)label annotations on reddit comments. Includes both a large raw version and a narrowed version with predefined train/test/val splits, which I've included as separate configs with the latter as a default. - Webpage/repo: https://github.com/google-research/google-research/tree/master/goemotions - Paper: https://arxiv.org/abs/2005.00547
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1050/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1050/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1050.diff", "html_url": "https://github.com/huggingface/datasets/pull/1050", "merged_at": "2020-12-03T17:30:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/1050.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1050" }
true
[ "Whoops, didn't mean for that to be merged yet (my bad). I'm reaching out to the authors since we'd like their feedback on the best way to have the `author` field anonymized or removed. Will send a patch once they get back to me." ]
https://api.github.com/repos/huggingface/datasets/issues/3463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3463/comments
https://api.github.com/repos/huggingface/datasets/issues/3463/events
https://github.com/huggingface/datasets/pull/3463
1,085,078,795
PR_kwDODunzps4wGB4P
3,463
Update swahili_news dataset
[]
closed
false
null
0
2021-12-20T18:20:20Z
2021-12-21T06:24:03Z
2021-12-21T06:24:02Z
null
Update the dataset with the latest version of the data files. Fix #3462. Close bigscience-workshop/data_tooling#107
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3463/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3463.diff", "html_url": "https://github.com/huggingface/datasets/pull/3463", "merged_at": "2021-12-21T06:24:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/3463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3463" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4198/comments
https://api.github.com/repos/huggingface/datasets/issues/4198/events
https://github.com/huggingface/datasets/issues/4198
1,211,456,559
I_kwDODunzps5INVwv
4,198
There is no dataset
[]
closed
false
null
0
2022-04-21T19:19:26Z
2022-05-03T11:29:05Z
2022-04-22T06:12:25Z
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4198/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4319/comments
https://api.github.com/repos/huggingface/datasets/issues/4319/events
https://github.com/huggingface/datasets/pull/4319
1,232,982,023
PR_kwDODunzps43q0UY
4,319
Adding eval metadata for ade v2
[]
closed
false
null
1
2022-05-11T17:36:20Z
2022-05-12T13:29:51Z
2022-05-12T13:22:19Z
null
Adding metadata to allow evaluation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4319/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4319/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4319.diff", "html_url": "https://github.com/huggingface/datasets/pull/4319", "merged_at": "2022-05-12T13:22:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/4319.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4319" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/26
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/26/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/26/comments
https://api.github.com/repos/huggingface/datasets/issues/26/events
https://github.com/huggingface/datasets/pull/26
610,226,047
MDExOlB1bGxSZXF1ZXN0NDExNzA2NjA2
26
[Tests] Clean tests
[]
closed
false
null
0
2020-04-30T16:38:29Z
2020-04-30T20:12:04Z
2020-04-30T20:12:03Z
null
the abseil testing library (https://abseil.io/docs/python/quickstart.html) is better than the one I had before, so I decided to switch to that and changed the `setup.py` config file. Abseil has more support and a cleaner API for parametrized testing I think. I added a list of all dataset scripts that are currently on AWS, but will replace that once the API is integrated into this lib. One can now easily test for just a single function for a single dataset with: `tests/test_dataset_common.py::DatasetTest::test_load_dataset_wikipedia` NOTE: This PR is rebased on PR #29 so should be merged after.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/26/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/26/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/26.diff", "html_url": "https://github.com/huggingface/datasets/pull/26", "merged_at": "2020-04-30T20:12:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/26.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/26" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2428/comments
https://api.github.com/repos/huggingface/datasets/issues/2428/events
https://github.com/huggingface/datasets/pull/2428
907,169,746
MDExOlB1bGxSZXF1ZXN0NjU4MDU2MjI3
2,428
Add copyright info for wiki_lingua dataset
[]
closed
false
null
3
2021-05-31T07:22:52Z
2021-06-04T10:22:33Z
2021-06-04T10:22:33Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2428/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2428/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2428.diff", "html_url": "https://github.com/huggingface/datasets/pull/2428", "merged_at": "2021-06-04T10:22:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2428.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2428" }
true
[ "Build fails but this change should not be the reason...", "rebased on master", "rebased on master" ]
https://api.github.com/repos/huggingface/datasets/issues/421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/421/comments
https://api.github.com/repos/huggingface/datasets/issues/421/events
https://github.com/huggingface/datasets/pull/421
662,213,864
MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1
421
Style change
[]
closed
false
null
3
2020-07-20T20:08:29Z
2020-07-22T16:08:40Z
2020-07-22T16:08:39Z
null
Ran `make quality` and `make style` on the scripts.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/421/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/421/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/421.diff", "html_url": "https://github.com/huggingface/datasets/pull/421", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/421.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/421" }
true
[ "What about the other PR #419 ?", "Oh this is the PR where I ran make quality and make style and some previous files from master were changed", "Oh right ! Let me fix the style myself if you don't mind" ]
https://api.github.com/repos/huggingface/datasets/issues/5888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5888/comments
https://api.github.com/repos/huggingface/datasets/issues/5888/events
https://github.com/huggingface/datasets/issues/5888
1,722,290,363
I_kwDODunzps5mqBC7
5,888
A way to upload and visualize .mp4 files (millions of them) as part of a dataset
[]
open
false
null
9
2023-05-22T18:05:26Z
2023-06-23T03:37:16Z
null
null
**Is your feature request related to a problem? Please describe.** I recently chose Hugging Face Hub as the home for a large multi-modal dataset I've been building: https://huggingface.co/datasets/Antreas/TALI It combines images, text, audio and video. Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files, hence I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs and that it would take ages, so I resorted to using 7z to pack them all up. But then I had a new problem: my dataset had a size of 1.9TB, and trying to upload such a large file with the default huggingface_hub API always resulted in timeouts etc. So I decided to split the large files into chunks of 5GB each and reupload. Eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing -- and furthermore the hub is unable to visualize things. **Describe the solution you'd like** A native way to upload large datasets that include .mp4 or other video types. **Describe alternatives you've considered** Already explained earlier. **Additional context** https://huggingface.co/datasets/Antreas/TALI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5888/timeline
null
null
null
null
false
[ "Hi! \r\n\r\nYou want to use `push_to_hub` (creates Parquet files) instead of `save_to_disk` (creates Arrow files) when creating a Hub dataset. Parquet is designed for long-term storage and takes less space than the Arrow format, and, most importantly, `load_dataset` can parse it, which should fix the viewer. \r\n\...
https://api.github.com/repos/huggingface/datasets/issues/703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/703/comments
https://api.github.com/repos/huggingface/datasets/issues/703/events
https://github.com/huggingface/datasets/pull/703
713,559,718
MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5
703
Add hotpot QA
[]
closed
false
null
5
2020-10-02T11:44:28Z
2020-10-02T12:54:41Z
2020-10-02T12:54:41Z
null
Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/703/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/703.diff", "html_url": "https://github.com/huggingface/datasets/pull/703", "merged_at": "2020-10-02T12:54:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/703.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/703" }
true
[ "Awesome :) \r\n\r\nDon't pay attention to the RemoteDatasetTest error, I'm fixing it right now", "You can rebase from master to fix the CI test :)", "If we're lucky we can even include this dataset in today's release", "Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `...
https://api.github.com/repos/huggingface/datasets/issues/5425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5425/comments
https://api.github.com/repos/huggingface/datasets/issues/5425/events
https://github.com/huggingface/datasets/issues/5425
1,534,581,850
I_kwDODunzps5bd9xa
5,425
Sort on multiple keys with datasets.Dataset.sort()
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
null
10
2023-01-16T09:22:26Z
2023-02-24T16:15:11Z
2023-02-24T16:15:11Z
null
### Feature request

From a discussion on the forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1

`sort()` does not preserve ordering, and it supports neither sorting on multiple columns nor a key function. The suggested solution:

> ... having something similar to pandas and be able to specify multiple columns for sorting. We’re already using pandas under the hood to do the sorting in datasets.

The suggested workaround:

> convert your dataset to pandas and use `df.sort_values()`

### Motivation

Order-preserving sorting is very handy when one needs to sort on two columns A and B, so that whenever A is equal for two or more rows, those rows stay sorted by B. Having a parameter for this in 🤗datasets would be cleaner than round-tripping through pandas, and it wouldn't add much complexity to the library.

Alternatives:
- the possibility to specify multiple sort keys with decreasing priority (the suggested solution),
- the ability to provide a key function for sorting, so that the sorting criteria can be specified manually.

### Your contribution

I'll be happy to contribute by submitting a PR; it will be documented in `CONTRIBUTING.md`. I'd love to hear thoughts on this, if anyone has anything to add.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5425/timeline
null
completed
null
null
false
[ "Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multipl...
https://api.github.com/repos/huggingface/datasets/issues/3553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3553/comments
https://api.github.com/repos/huggingface/datasets/issues/3553/events
https://github.com/huggingface/datasets/issues/3553
1,097,252,275
I_kwDODunzps5BZr2z
3,553
set_format("np") no longer works for Image data
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
5
2022-01-09T17:18:13Z
2022-10-14T12:03:55Z
2022-10-14T12:03:54Z
null
## Describe the bug

`dataset.set_format("np")` no longer works for image data. Previously you could load MNIST like this:

```python
dataset = load_dataset("mnist")
dataset.set_format("np")
X_train = dataset["train"]["image"][..., None]  # <== No longer a numpy array
```

but now it doesn't work: `set_format("np")` seems to have no effect, and the dataset just returns a list of PIL images instead of numpy arrays as requested.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3553/timeline
null
completed
null
null
false
[ "A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]", "This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndat...
https://api.github.com/repos/huggingface/datasets/issues/96
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/96/comments
https://api.github.com/repos/huggingface/datasets/issues/96/events
https://github.com/huggingface/datasets/pull/96
617,739,521
MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4
96
lm1b
[]
closed
false
null
1
2020-05-13T20:38:44Z
2020-05-14T14:13:30Z
2020-05-14T14:13:29Z
null
Add lm1b dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/96/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/96/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/96.diff", "html_url": "https://github.com/huggingface/datasets/pull/96", "merged_at": "2020-05-14T14:13:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/96.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/96" }
true
[ "I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..." ]
https://api.github.com/repos/huggingface/datasets/issues/4227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4227/comments
https://api.github.com/repos/huggingface/datasets/issues/4227/events
https://github.com/huggingface/datasets/pull/4227
1,216,455,316
PR_kwDODunzps420-mc
4,227
Add f1 metric card, update docstring in py file
[]
closed
false
null
1
2022-04-26T20:41:03Z
2022-05-03T12:50:23Z
2022-05-03T12:43:33Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4227/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4227/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4227.diff", "html_url": "https://github.com/huggingface/datasets/pull/4227", "merged_at": "2022-05-03T12:43:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4227.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4227" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5266/comments
https://api.github.com/repos/huggingface/datasets/issues/5266/events
https://github.com/huggingface/datasets/pull/5266
1,455,281,310
PR_kwDODunzps5DN9BT
5,266
Specify arguments as keywords in librosa.resample to avoid future errors
[]
closed
false
null
1
2022-11-18T14:58:47Z
2022-11-21T15:45:02Z
2022-11-21T15:41:57Z
null
Fixes a warning and a future deprecation from `librosa.resample`:

```
FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error
  array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best")
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5266/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5266.diff", "html_url": "https://github.com/huggingface/datasets/pull/5266", "merged_at": "2022-11-21T15:41:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5266" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1764/comments
https://api.github.com/repos/huggingface/datasets/issues/1764/events
https://github.com/huggingface/datasets/issues/1764
791,486,860
MDU6SXNzdWU3OTE0ODY4NjA=
1,764
Connection Issues
[]
closed
false
null
1
2021-01-21T20:56:09Z
2021-01-21T21:00:19Z
2021-01-21T21:00:02Z
null
Today, I am getting connection issues while loading a dataset and the metric.

```
Traceback (most recent call last):
  File "src/train.py", line 180, in <module>
    train_dataset, dev_dataset, test_dataset = create_race_dataset()
  File "src/train.py", line 130, in create_race_dataset
    train_dataset = load_dataset("race", "all", split="train")
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
    max_retries=download_config.max_retries,
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/race/race.py
```

Or

```
Traceback (most recent call last):
  File "src/train.py", line 105, in <module>
    rouge = datasets.load_metric("rouge")
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 500, in load_metric
    dataset=False,
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
    max_retries=download_config.max_retries,
  File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/metrics/rouge/rouge.py
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1764/timeline
null
completed
null
null
false
[ "Academic WIFI was blocking." ]
https://api.github.com/repos/huggingface/datasets/issues/6054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6054/comments
https://api.github.com/repos/huggingface/datasets/issues/6054/events
https://github.com/huggingface/datasets/issues/6054
1,813,271,304
I_kwDODunzps5sFFMI
6,054
Multi-processed `Dataset.map` slows down a lot when `import torch`
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
1
2023-07-20T06:36:14Z
2023-07-21T15:19:37Z
2023-07-21T15:19:37Z
null
### Describe the bug

When using `Dataset.map` with `num_proc > 1`, throughput drops a lot if I add `import torch` at the start of the script, even though I don't use it. I'm not sure whether this is specific to `torch` or whether importing any other "large" package causes the same result; `import lightning` also slows it down, by the way.

Below are the progress bars of `Dataset.map`; the only difference between them is with or without `import torch`, but the speed varies by a factor of 6-7.

- without `import torch` ![image](https://github.com/huggingface/datasets/assets/47121592/0233055a-ced4-424a-9f0f-32a2afd802c2)
- with `import torch` ![image](https://github.com/huggingface/datasets/assets/47121592/463eafb7-b81e-4eb9-91ca-fd7fe20f3d59)

### Steps to reproduce the bug

Below is the code I used, but I don't think the dataset or the mapping function has much to do with the phenomenon.

```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer

# import torch
# import lightning


def rearrange_datapoints(
    batch,
    tokenizer,
    sequence_length,
):
    # Concatenate tokenized examples and re-chunk them into
    # fixed-length sequences, padding the final remainder.
    datapoints = []
    input_ids = []
    for x in batch['input_ids']:
        input_ids += x
        while len(input_ids) >= sequence_length:
            datapoint = input_ids[:sequence_length]
            datapoints.append(datapoint)
            input_ids[:sequence_length] = []

    if input_ids:
        paddings = [-1] * (sequence_length - len(input_ids))
        datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
        datapoints.append(datapoint)

    batch['input_ids'] = datapoints
    return batch


if __name__ == '__main__':
    disable_caching()

    tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
    dataset = load_from_disk('...')
    dataset = dataset.map(
        rearrange_datapoints,
        fn_kwargs=dict(
            tokenizer=tokenizer,
            sequence_length=2048,
        ),
        batched=True,
        num_proc=8,
    )
```

### Expected behavior

`Dataset.map` with multiple processes should run at the same speed with and without `import torch`.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6054/timeline
null
completed
null
null
false
[ "A duplicate of https://github.com/huggingface/datasets/issues/5929" ]