| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
1,087,352,041
3,475
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
## Describe the bug See title. I don't think this is intentional and they probably should be removed. If they stay, the dataset description should at least be updated to make it clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There are others too (e.g. index 2888), but those two are easy to find like that. ## Expected results English movie reviews only. ## Actual results Example of a Spanish movie review (4173): > "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
open
https://github.com/huggingface/datasets/issues/3475
2021-12-23T03:56:43
2021-12-24T00:23:03
null
{ "login": "puzzler10", "id": 17426779, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
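A note on #3475 above: the quoted example at index 4173 actually reads as Portuguese rather than Spanish, so the underlying problem is non-English reviews in general. Below is a minimal sketch for locating such rows; it assumes the third-party `langdetect` package and the dataset's `text` column, and is only meant as a starting point for inspection.

```python
from datasets import load_dataset
from langdetect import detect


def looks_english(text):
    try:
        return detect(text) == "en"
    except Exception:  # very short strings can fail language detection
        return True


ds = load_dataset("rotten_tomatoes", split="train")
# Candidate non-English rows; detection on short reviews is noisy, so verify manually.
candidates = [i for i, text in enumerate(ds["text"]) if not looks_english(text)]
print(candidates[:20])
```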
1,086,945,384
3,474
Decode images when iterating
If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned. This PR enables image decoding in `Dataset.__iter__` Close https://github.com/huggingface/datasets/issues/3473
closed
https://github.com/huggingface/datasets/pull/3474
2021-12-22T15:34:49
2023-09-24T09:54:04
2021-12-28T16:08:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,086,937,610
3,473
Iterating over a vision dataset doesn't decode the images
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes first_image = next(iter(mnist))["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails ``` ## Expected results The image should be decoded, as a PIL Image ## Actual results We get a dictionary ``` {'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None} ``` ## Environment info - `datasets` version: 1.17.1.dev0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyArrow version: 6.0.0 The bug also exists in 1.17.0 ## Investigation I think the issue is that decoding is disabled in `__iter__`: https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661 Do you remember why it was disabled in the first place @albertvillanova ? Also cc @mariosasko @NielsRogge
closed
https://github.com/huggingface/datasets/issues/3473
2021-12-22T15:26:32
2021-12-27T14:13:21
2021-12-23T15:21:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
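For #3473 above, before the fix in #3474, one way to work around the undecoded dicts during iteration was to decode the raw bytes manually. This is only a workaround sketch, not the library's fix:

```python
import io

import PIL.Image
from datasets import load_dataset

mnist = load_dataset("mnist", split="train")
for example in mnist:
    img = example["image"]
    # On affected versions, __iter__ yields {"bytes": ..., "path": ...} instead of a PIL image.
    if isinstance(img, dict) and img.get("bytes") is not None:
        img = PIL.Image.open(io.BytesIO(img["bytes"]))
    assert isinstance(img, PIL.Image.Image)
    break
```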
1,086,908,508
3,472
Fix `str(Path(...))` conversion in streaming on Linux
Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets.
closed
https://github.com/huggingface/datasets/pull/3472
2021-12-22T15:06:03
2021-12-22T16:52:53
2021-12-22T16:52:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,086,588,074
3,471
Fix Tashkeela dataset to yield stripped text
This PR: - Yields stripped text - Fix path for Windows - Adds license - Adds more info in dataset card Close bigscience-workshop/data_tooling#279
closed
https://github.com/huggingface/datasets/pull/3471
2021-12-22T08:41:30
2021-12-22T10:12:08
2021-12-22T10:12:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,086,049,888
3,470
Fix rendering of docs
Minor fix in docs. Currently, `ClassLabel` docstring rendering is not right.
closed
https://github.com/huggingface/datasets/pull/3470
2021-12-21T17:17:01
2021-12-22T09:23:47
2021-12-22T09:23:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,085,882,664
3,469
Fix METEOR missing NLTK's omw-1.4
NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work. This should fix the CI on master
closed
https://github.com/huggingface/datasets/pull/3469
2021-12-21T14:19:11
2021-12-21T14:52:28
2021-12-21T14:49:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,085,871,301
3,468
Add COCO dataset
This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection. Some notes: * the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here * I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`) * this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427 TODOs: - [x] dataset card - [ ] dummy data cc @merveenoyan Closes #2526
closed
https://github.com/huggingface/datasets/pull/3468
2021-12-21T14:07:50
2023-09-24T09:33:31
2022-10-03T09:36:08
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,085,870,665
3,467
Push dataset infos.json to Hub
When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394). This PR fixes this by also pushing a `dataset_infos.json` file to the Hub that stores the feature types. Other minor changes: - renamed the `___` separator to `--`, since `___` is now disallowed in a name in the back-end. I tested this feature with datasets like conll2003 that have feature types like `ClassLabel` that were previously lost. Close https://github.com/huggingface/datasets/issues/3394 I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes
closed
https://github.com/huggingface/datasets/pull/3467
2021-12-21T14:07:13
2021-12-21T17:00:10
2021-12-21T17:00:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,085,722,837
3,466
Add CRASS dataset
Added crass dataset
closed
https://github.com/huggingface/datasets/pull/3466
2021-12-21T11:17:22
2022-10-03T09:37:06
2022-10-03T09:37:06
{ "login": "apergo-ai", "id": 68908804, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,085,400,432
3,465
Unable to load 'cnn_dailymail' dataset
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expected results Expecting to load 'cnn_dailymail' dataset. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/3465
2021-12-21T03:32:21
2024-06-12T14:41:17
2022-02-17T14:13:57
{ "login": "talha1503", "id": 42352729, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,085,399,097
3,464
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
## Describe the bug Using the latest datasets (datasets-1.16.1-py3-none-any.whl), I process my own multilingual dataset with the following code; the dataset has 306,000 rows in total and the max_length of each sentence is 256: ![image](https://user-images.githubusercontent.com/30341159/146865779-3d25d011-1f42-4026-9e1b-76f6e1d172e9.png) Then I get this error: ![image](https://user-images.githubusercontent.com/30341159/146865844-e60a404c-5f3a-403c-b2f1-acd943b5cdb8.png) I have seen the issue in #2134 and #2150, so I don't understand why the latest repo still can't deal with a big dataset. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux docker - Python version: 3.6
open
https://github.com/huggingface/datasets/issues/3464
2021-12-21T03:29:01
2022-11-21T19:55:11
null
{ "login": "koukoulala", "id": 30341159, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,085,078,795
3,463
Update swahili_news dataset
Update dataset with the latest version of the data files. Fix #3462. Close bigscience-workshop/data_tooling#107
closed
https://github.com/huggingface/datasets/pull/3463
2021-12-20T18:20:20
2021-12-21T06:24:03
2021-12-21T06:24:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,085,049,661
3,462
Update swahili_news dataset
Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203. ## Adding a Dataset - **Name:** swahili_news Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Related to: - bigscience-workshop/data_tooling#107
closed
https://github.com/huggingface/datasets/issues/3462
2021-12-20T17:44:01
2021-12-21T06:24:02
2021-12-21T06:24:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,085,007,346
3,461
Fix links in metrics description
Remove Markdown syntax for links in metrics description, as it is not properly rendered. Related to #3437.
closed
https://github.com/huggingface/datasets/pull/3461
2021-12-20T16:56:19
2021-12-20T17:14:52
2021-12-20T17:14:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,085,002,469
3,460
Don't encode lists as strings when using `Value("string")`
Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error. This PR fixes this and should fix the issue with WER showing low values if the input format is not right.
closed
https://github.com/huggingface/datasets/pull/3460
2021-12-20T16:50:49
2023-09-25T10:28:30
2023-09-25T09:20:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,084,969,672
3,459
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
## Describe the bug When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset. The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is. However, if you then use a dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner. https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter Effectively, it looks like the original set of _indices were discared and overwritten by the set created during the filter operation. I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflect the map transformation applied to the starting _indices. ## Steps to reproduce the bug ```python dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print("initial 10 elements") print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) print("filtered 10 elements looking for label 0") print(dataset['label']) # -> [1, 1, 1, 1, 1, 1] ``` ## Actual results ``` $ python indices_bug.py initial 10 elements [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] filtered 10 elements looking for label 0 [1, 1, 1, 1, 1, 1] ``` This code block first shuffles the dataset (to get a mix of label 0 and label 1). Then it selects just the first 10 elements (the number of elements does not matter, 10 is just easy to visualize). The important part is that you select some subset of the dataset. Finally, a filter is applied to pull out just the elements with `label == 0`. The bug is that you cannot combine any dataset operation which sets the dataset._indices with filter. In this case I have 2, shuffle and subset. If you just use a single dataset._indices operation (in this case shuffle) the bug still shows up. The shuffle sets the dataset._indices and then filter uses those indices in the map, then overwrites dataset._indices with the filter results. ```python dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Expected results In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set. If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected. ## Environment info Here are the commands required to rebuild the conda environment from scratch. ``` # create a virtual environment conda create -n dataset_indices python=3.8 -y # activate the virtual environment conda activate dataset_indices # install huggingface datasets conda install datasets ``` <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 3.0.0 ### Full Conda Environment ``` $ conda env export name: dasaset_indices channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - abseil-cpp=20210324.2=h2531618_0 - aiohttp=3.8.1=py38h7f8727e_0 - aiosignal=1.2.0=pyhd3eb1b0_0 - arrow-cpp=3.0.0=py38h6b21186_4 - attrs=21.2.0=pyhd3eb1b0_0 - aws-c-common=0.4.57=he6710b0_1 - aws-c-event-stream=0.1.6=h2531618_5 - aws-checksums=0.1.9=he6710b0_0 - aws-sdk-cpp=1.8.185=hce553d0_0 - bcj-cffi=0.5.1=py38h295c915_0 - blas=1.0=mkl - boost-cpp=1.73.0=h27cfd23_11 - bottleneck=1.3.2=py38heb32a55_1 - brotli=1.0.9=he6710b0_2 - brotli-python=1.0.9=py38heb0550a_2 - brotlicffi=1.0.9.2=py38h295c915_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.10.26=h06a4308_2 - certifi=2021.10.8=py38h06a4308_0 - cffi=1.14.6=py38h400218f_0 - conllu=4.4.1=pyhd3eb1b0_0 - cryptography=36.0.0=py38h9ce1e76_0 - dataclasses=0.8=pyh6d0b6a4_7 - dill=0.3.4=pyhd3eb1b0_0 - double-conversion=3.1.5=he6710b0_1 - et_xmlfile=1.1.0=py38h06a4308_0 - filelock=3.4.0=pyhd3eb1b0_0 - frozenlist=1.2.0=py38h7f8727e_0 - gflags=2.2.2=he6710b0_0 - glog=0.5.0=h2531618_0 - gmp=6.2.1=h2531618_2 - grpc-cpp=1.39.0=hae934f6_5 - huggingface_hub=0.0.17=pyhd3eb1b0_0 - icu=58.2=he6710b0_3 - idna=3.3=pyhd3eb1b0_0 - importlib-metadata=4.8.2=py38h06a4308_0 - importlib_metadata=4.8.2=hd3eb1b0_0 - intel-openmp=2021.4.0=h06a4308_3561 - krb5=1.19.2=hac12032_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libboost=1.73.0=h3ff78a5_11 - libcurl=7.80.0=h0b77cf5_0 - libedit=3.1.20210910=h7f8727e_0 - libev=4.33=h7f8727e_1 - libevent=2.1.8=h1ba5d50_1 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libnghttp2=1.46.0=hce63b2e_0 - libprotobuf=3.17.2=h4ff587b_1 - libssh2=1.9.0=h1ba5d50_1 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libthrift=0.14.2=hcc01f38_0 - libxml2=2.9.12=h03d6c58_0 - libxslt=1.1.34=hc22bd24_0 - lxml=4.6.3=py38h9120a33_0 - lz4-c=1.9.3=h295c915_1 - mkl=2021.4.0=h06a4308_640 - mkl-service=2.4.0=py38h7f8727e_0 - mkl_fft=1.3.1=py38hd3c417c_0 - mkl_random=1.2.2=py38h51133e4_0 - multiprocess=0.70.12.2=py38h7f8727e_0 - multivolumefile=0.2.3=pyhd3eb1b0_0 - ncurses=6.3=h7f8727e_2 - numexpr=2.7.3=py38h22e1b3c_1 - numpy=1.21.2=py38h20f2e39_0 - numpy-base=1.21.2=py38h79a1101_0 - openpyxl=3.0.9=pyhd3eb1b0_0 - openssl=1.1.1l=h7f8727e_0 - orc=1.6.9=ha97a36c_3 - packaging=21.3=pyhd3eb1b0_0 - pip=21.2.4=py38h06a4308_0 - py7zr=0.16.1=pyhd3eb1b0_1 - pycparser=2.21=pyhd3eb1b0_0 - pycryptodomex=3.10.1=py38h27cfd23_1 - pyopenssl=21.0.0=pyhd3eb1b0_1 - pyparsing=3.0.4=pyhd3eb1b0_0 - pyppmd=0.16.1=py38h295c915_0 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.12=h12debd9_0 - python-dateutil=2.8.2=pyhd3eb1b0_0 - python-xxhash=2.0.2=py38h7f8727e_0 - pyzstd=0.14.4=py38h7f8727e_3 - re2=2020.11.01=h2531618_1 - readline=8.1=h27cfd23_0 - requests=2.26.0=pyhd3eb1b0_0 - setuptools=58.0.4=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - snappy=1.1.8=he6710b0_0 - sqlite=3.36.0=hc218d9a_0 - texttable=1.6.4=pyhd3eb1b0_0 - tk=8.6.11=h1ccaba5_0 - typing_extensions=3.10.0.2=pyh06a4308_0 - uriparser=0.9.3=he6710b0_1 - utf8proc=2.6.1=h27cfd23_0 - wheel=0.37.0=pyhd3eb1b0_1 - xxhash=0.8.0=h7f8727e_3 - xz=5.2.5=h7b6447c_0 - zipp=3.6.0=pyhd3eb1b0_0 - zlib=1.2.11=h7f8727e_4 - zstd=1.4.9=haebb681_0 - pip: - async-timeout==4.0.2 - charset-normalizer==2.0.9 - datasets==1.16.1 - fsspec==2021.11.1 - huggingface-hub==0.2.1 - 
multidict==5.2.0 - pandas==1.3.5 - pyarrow==6.0.1 - pytz==2021.3 - pyyaml==6.0 - tqdm==4.62.3 - typing-extensions==4.0.1 - urllib3==1.26.7 - yarl==1.7.2 ```
closed
https://github.com/huggingface/datasets/issues/3459
2021-12-20T16:16:49
2021-12-20T16:34:57
2021-12-20T16:34:57
{ "login": "mmajurski", "id": 9354454, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,084,926,025
3,458
Fix duplicated tag in wikicorpus dataset card
null
closed
https://github.com/huggingface/datasets/pull/3458
2021-12-20T15:34:16
2021-12-20T16:03:25
2021-12-20T16:03:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,084,862,121
3,457
Add CMU Graphics Lab Motion Capture dataset
## Adding a Dataset - **Name:** CMU Graphics Lab Motion Capture database - **Description:** The database contains free motions which you can download and use. - **Data:** http://mocap.cs.cmu.edu/ - **Motivation:** Nice motion capture dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/3457
2021-12-20T14:34:39
2022-03-16T16:53:09
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,084,687,973
3,456
[WER] Better error message for wer
Currently we have the following problem when using the WER. When the input format to the WER metric is wrong, instead of throwing an error message a word-error-rate is computed which is incorrect. E.g. when doing the following: ```python from datasets import load_metric wer = load_metric("wer") target_str = ["hello this is nice", "hello the weather is bloomy"] pred_str = [["hello it's nice"], ["hello it's the weather"]] print("Wrong:", wer.compute(predictions=pred_str, references=target_str)) print("Correct", wer.compute(predictions=[x[0] for x in pred_str], references=target_str)) ``` We get: ``` Wrong: 1.0 Correct 0.5555555555555556 ``` meaning that we get a word-error rate for incorrectly passed input formats. We should raise an error here instead so that people don't spend hours fixing a model when it's their incorrect evaluation-metric input format that is the real cause of a low WER.
closed
https://github.com/huggingface/datasets/pull/3456
2021-12-20T11:38:40
2021-12-20T16:53:37
2021-12-20T16:53:36
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
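To illustrate the kind of check requested in #3456 above, here is a minimal sketch of input validation for WER-style metrics; it is not the actual implementation in `datasets`:

```python
def check_wer_inputs(predictions, references):
    """Raise instead of silently computing a misleading score on badly typed input."""
    for name, seqs in (("predictions", predictions), ("references", references)):
        for item in seqs:
            if not isinstance(item, str):
                raise ValueError(
                    f"Each element of '{name}' must be a string, "
                    f"got {type(item).__name__}: {item!r}"
                )


check_wer_inputs(["hello it's nice"], ["hello this is nice"])      # ok
# check_wer_inputs([["hello it's nice"]], ["hello this is nice"])  # would raise ValueError
```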
1,084,599,650
3,455
Easier information editing
**Is your feature request related to a problem? Please describe.** It requires a lot of effort to improve a datasheet. **Describe the solution you'd like** A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit this code directly from the site, without cloning, branching, makefile etc.). **Describe alternatives you've considered** The current UX is to go through the 8 contribution steps when one just wishes to change a line, a typo, etc.
closed
https://github.com/huggingface/datasets/issues/3455
2021-12-20T10:10:43
2023-07-25T15:36:14
2023-07-25T15:36:14
{ "login": "borgr", "id": 6416600, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
1,084,519,107
3,454
Fix iter_archive generator
This PR: - Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs - Fixes bugs in `iter_archive` introduced in: - #3443 Fix #3453.
closed
https://github.com/huggingface/datasets/pull/3454
2021-12-20T08:50:15
2021-12-20T10:05:00
2021-12-20T10:04:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,084,515,911
3,453
ValueError while iter_archive
## Describe the bug After the merge of: - #3443 the method `iter_archive` throws a ValueError: ``` ValueError: read of closed file ``` ## Steps to reproduce the bug ```python for path, file in dl_manager.iter_archive(archive_path): pass ```
closed
https://github.com/huggingface/datasets/issues/3453
2021-12-20T08:46:18
2021-12-20T10:04:59
2021-12-20T10:04:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,083,803,178
3,452
why the stratify option is omitted from test_train_split function?
Why is the stratify option omitted from the `train_test_split` function? Is there any other way to implement stratification while splitting the dataset? It is an important point to consider when splitting a dataset.
closed
https://github.com/huggingface/datasets/issues/3452
2021-12-18T10:37:47
2022-05-25T20:43:51
2022-05-25T20:43:51
{ "login": "j-sieger", "id": 9985334, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good second issue", "color": "BDE59C" } ]
false
[]
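Until stratification is supported natively for #3452 above (recent `datasets` releases expose a `stratify_by_column` argument on `train_test_split`, which requires a `ClassLabel` column), a common workaround is to compute stratified indices with scikit-learn and then `select`. A sketch, assuming scikit-learn is installed:

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

dataset = load_dataset("imdb", split="train")

# Stratify on the label column, then map the index lists back onto the Dataset.
train_idx, test_idx = train_test_split(
    list(range(len(dataset))),
    test_size=0.2,
    stratify=dataset["label"],
    random_state=42,
)
train_ds = dataset.select(train_idx)
test_ds = dataset.select(test_idx)
```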
1,083,459,137
3,451
[Staging] Update dataset repos automatically on the Hub
Let's have a script that updates the dataset repositories on staging for now. This way we can make sure it works fine before going in prod. Related to https://github.com/huggingface/datasets/issues/3341 The script runs on each commit on `master`. It checks the datasets that were changed, and it pushes the changes to the corresponding repositories on the Hub. If there's a new dataset, then a new repository is created. If the commit is a new release of `datasets`, it also pushes the tag to all the repositories.
closed
https://github.com/huggingface/datasets/pull/3451
2021-12-17T17:12:11
2021-12-21T10:25:46
2021-12-20T14:09:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,083,450,158
3,450
Unexpected behavior doing Split + Filter
## Describe the bug I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter') ## Steps to reproduce the bug ``` from datasets import Dataset import pandas as pd dic = {'x': [1,2,3,4,5,6,7,8,9], 'y':['q','w','e','r','t','y','u','i','o']} df = pd.DataFrame.from_dict(dic) dataset = Dataset.from_pandas(df) split_dataset = dataset.train_test_split(test_size=0.5, shuffle=False, seed=42) train_dataset = split_dataset["train"] eval_dataset = split_dataset["test"] eval_dataset_2 = eval_dataset.filter(lambda example: example['x'] % 2 == 0) print( eval_dataset['x']) print(eval_dataset_2['x']) ``` One observes that elements in eval_dataset2 are actually coming from the training dataset... ## Expected results The expected results would be that the filtered eval dataset would only contain elements from the original eval dataset. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows 10 - Python version: 3.7 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/3450
2021-12-17T17:00:39
2023-07-25T15:38:47
2023-07-25T15:38:47
{ "login": "jbrachat", "id": 26432605, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,083,373,018
3,449
Add `__add__()`, `__iadd__()` and similar to `Dataset` class
**Is your feature request related to a problem? Please describe.** No. **Describe the solution you'd like** I would like to be able to concatenate datasets as follows: ```python >>> dataset["train"] += dataset["validation"] ``` ... instead of using `concatenate_datasets()`: ```python >>> raw_datasets["train"] = concatenate_datasets([raw_datasets["train"], raw_datasets["validation"]]) >>> del raw_datasets["validation"] ``` **Describe alternatives you've considered** Well, I have considered `concatenate_datasets()` 😀 **Additional context** N.a.
closed
https://github.com/huggingface/datasets/issues/3449
2021-12-17T15:29:11
2024-02-29T16:47:56
2023-07-25T15:33:56
{ "login": "sgraaf", "id": 8904453, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
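For #3449 above, the requested behaviour can already be emulated with `concatenate_datasets`; the snippet below is only an illustration of how an `__add__` could delegate to it, not a committed API:

```python
from datasets import Dataset, concatenate_datasets


class AddableDataset(Dataset):
    """Illustration only: '+' as sugar over concatenate_datasets."""

    def __add__(self, other):
        return concatenate_datasets([self, other])


# Equivalent with the existing API:
train = Dataset.from_dict({"x": [1, 2]})
validation = Dataset.from_dict({"x": [3]})
merged = concatenate_datasets([train, validation])
print(merged["x"])  # [1, 2, 3]
```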
1,083,231,080
3,448
JSONDecodeError with HuggingFace dataset viewer
## Dataset viewer issue for 'pubmed_neg' **Link:** https://huggingface.co/datasets/IGESML/pubmed_neg I am getting the error: Status code: 400 Exception: JSONDecodeError Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202) I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue. Am I the one who added this dataset ? Yes
closed
https://github.com/huggingface/datasets/issues/3448
2021-12-17T12:52:41
2022-02-24T09:10:26
2022-02-24T09:10:26
{ "login": "kathrynchapman", "id": 57716109, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
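For JSON parse failures like the one in #3448 above, the reported position (line 61, column 2) can be pinpointed with the standard library alone. A small sketch; the file name is a placeholder for whichever JSON file the loader rejects:

```python
import json

path = "dataset_infos.json"  # hypothetical: replace with the file reported by the viewer
with open(path, encoding="utf-8") as f:
    try:
        json.load(f)
    except json.JSONDecodeError as err:
        print(f"{path}: {err.msg} at line {err.lineno}, column {err.colno} (char {err.pos})")
```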
1,082,539,790
3,447
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
## Describe the bug According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON, despite I have run the program once and cached all data into the same --cache_dir. "Downloading" is not an issue when running with local disk, but crashes often with cloud storage because (1) multiply GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes, due to storage throttling. 99% of times, when the main process releases FileLocker, the file is not actually ready for access in cloud storage and thus triggers "FileNotFound" errors for all other processes. Well, another way to resolve the problem is to investigate super reliable cloud storage, but that's out of scope here. ## Steps to reproduce the bug ``` export HF_DATASETS_OFFLINE=1 python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2 ``` ## Expected results datasets should stop all "downloading" behavior but reuse the cached JSON configuration. I think the problem here is part of the cache directory path, "default-471372bed4b51b53", is randomly generated, and it could change if some parameters changed. And I didn't find a way to use a fixed path to ensure datasets to reuse cached data every time. ## Actual results The logging shows datasets are still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426". ``` 12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53 12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426) Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 17623.13it/s] 12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min 12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min 100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1206.99it/s] 12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums. 12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train 12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation 12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes. Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data. 100%|█████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 53.54it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux - Python version: 3.8.10 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3447
2021-12-16T18:51:13
2022-02-17T14:16:27
2022-02-17T14:16:27
{ "login": "dunalduck0", "id": 51274745, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
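One way to sidestep the hashed cache directory mentioned in #3447 above is to prepare the JSON dataset once and reload it from a fixed path with `save_to_disk` / `load_from_disk`. This is a workaround sketch using the reporter's file names, not a fix for the offline-mode behaviour itself:

```python
import os

from datasets import load_dataset, load_from_disk

prepared_dir = "datacache/trainpy.v2_prepared"

if not os.path.isdir(prepared_dir):
    # Run this once (ideally from a single process) while downloads are allowed.
    ds = load_dataset(
        "json",
        data_files={"train": "trainpy.v2.train.json", "validation": "trainpy.v2.eval.json"},
    )
    ds.save_to_disk(prepared_dir)

# All later runs (and all ranks) read from the fixed, pre-built location.
ds = load_from_disk(prepared_dir)
```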
1,082,414,229
3,446
Remove redundant local path information in audio/image datasets
Remove the redundant path information in the audio/image dataset as discussed in https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828 TODOs: * [ ] merge https://github.com/huggingface/datasets/pull/3430 * [ ] merge https://github.com/huggingface/datasets/pull/3364 * [ ] re-generate the info files of the updated audio datasets cc: @patrickvonplaten @anton-l @nateraw (I expect this to break the audio/vision examples in Transformers; after this change you'll be able to access underlying paths as follows `dset = dset.cast_column("audio", Audio(..., decode=False)); path = dset[0]["audio"]`)
closed
https://github.com/huggingface/datasets/pull/3446
2021-12-16T16:35:15
2023-09-24T10:09:30
2023-09-24T10:09:27
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,082,370,968
3,445
question
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3445
2021-12-16T15:57:00
2022-01-03T10:09:00
2022-01-03T10:09:00
{ "login": "BAKAYOKO0232", "id": 38075175, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,082,078,961
3,444
Align the Dataset and IterableDataset processing API
## Intro items marked like <s>this</s> are done already :) Currently the two classes have two distinct API for processing: ### The `.map()` method Both have those parameters in common: function, batched, batch_size - IterableDataset is missing those parameters: <s>with_indices</s>, with_rank, <s>input_columns</s>, <s>drop_last_batch</s>, <s>remove_columns</s>, features, disable_nullable, fn_kwargs, num_proc - Dataset also has additional parameters that are exclusive, due to caching: keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, suffix_template, new_fingerprint - <s>There is also an important difference in terms of behavior: **Dataset.map adds new columns** (with dict.update) BUT **IterableDataset discards previous columns** (it overwrites the dict) IMO the two methods should have the same behavior. This would be an important breaking change though.</s> - Dataset.map is eager while IterableDataset.map is lazy ### The `.shuffle()` method - <s>Both have an optional seed parameter, but IterableDataset requires a mandatory parameter buffer_size to control the size of the local buffer used for approximate shuffling.</s> - <s>IterableDataset is missing the parameter generator</s> - Also Dataset has exclusive parameters due to caching: keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint ### The `.with_format()` method - <s>IterableDataset only supports "torch" (it misses tf, jax, pandas, arrow)</s> and is missing the parameters: columns, output_all_columns and format_kwargs - other methods like `set_format`, `reset_format` or `formatted_as` are also missing ### Other methods - Both have the same `remove_columns` method - IterableDataset is missing: <s>cast</s>, <s>cast_column</s>, <s>filter</s>, <s>rename_column</s>, <s>rename_columns</s>, class_encode_column, flatten, train_test_split, <s>shard</s> - Some other methods are missing but we can discuss them: set_transform, formatted_as, with_transform - And others don't really make sense for an iterable dataset: select, sort, <s>add_column</s>, add_item - Dataset is missing skip and take, that IterableDataset implements. ## Questions I think it would be nice to be able to switch between streaming and regular dataset easily, without changing the processing code significantly. 1. What should be aligned and what shouldn't between those two APIs ? IMO the minimum is to align the main processing methods. It would mean aligning breaking the current `Iterable.map` to have the same behavior as `Dataset.map` (add columns with dict.update), and add multiprocessing as well as the missing parameters. DONE ✅ It would also mean implementing the missing methods: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard. WIP 🟠 2. What are the breaking changes for IterableDataset ? The main breaking change would be the change of behavior of `IterableDataset.map`, because currently it discards all the previous columns instead of keeping them. DONE ✅ 3. Shall we also do some changes for regular datasets ? I agree the simplest would be to have the exact same methods for both Dataset and IterableDataset. However this is probably not a good idea because it would prevent users from using the best benefits of them. 
That's why we can keep some aspects of regular datasets as they are: - keep the eager Dataset.map with caching - keep the with_transform method for lazy processing - keep Dataset.select (it could also be added to IterableDataset even though it's not recommended) We could have a completely aligned `map` method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that. For information, TFDS does lazy map by default, and has an additional `.cache()` method. ## Opinions ? I'd love to gather some opinions about this here. If the two APIs are more aligned it would be awesome for the examples in `transformers`, and it would create a satisfactory experience for users that want to switch from one mode to the other. cc @mariosasko @albertvillanova @thomwolf @patrickvonplaten @sgugger
open
https://github.com/huggingface/datasets/issues/3444
2021-12-16T11:26:11
2025-01-31T11:07:07
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
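The goal described in #3444 above is that the same processing code works in both modes. A small sketch of what that looks like once `map` is aligned, assuming a version where `IterableDataset.map` supports `batched` and keeps existing columns:

```python
from datasets import load_dataset


def add_length(batch):
    return {"text_len": [len(t) for t in batch["text"]]}


regular = load_dataset("imdb", split="train")
streamed = load_dataset("imdb", split="train", streaming=True)

# Identical call in both modes; the regular dataset is eager and cached,
# the iterable dataset applies the function lazily while streaming.
regular = regular.map(add_length, batched=True)
streamed = streamed.map(add_length, batched=True)

print(regular[0]["text_len"])
print(next(iter(streamed))["text_len"])
```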
1,082,052,833
3,443
Extend iter_archive to support file object input
This PR adds support to passing a file object to `[Streaming]DownloadManager.iter_archive`. With this feature, we can iterate over a tar file inside another tar file.
closed
https://github.com/huggingface/datasets/pull/3443
2021-12-16T10:59:14
2021-12-17T17:53:03
2021-12-17T17:53:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,081,862,747
3,442
Extend text to support yielding lines, paragraphs or documents
Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents. Feel free to comment on the name of the config parameter `row`: - Currently, the docs state datasets are made of rows and columns - Other names I considered: `example`, `item`
closed
https://github.com/huggingface/datasets/pull/3442
2021-12-16T07:33:17
2021-12-20T16:59:10
2021-12-20T16:39:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
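Usage sketch for the option discussed in #3442 above. The parameter name was still under discussion in the PR (`row` was the working name); recent releases expose it as `sample_by`, so treat the exact keyword as an assumption, and the file path is a placeholder:

```python
from datasets import load_dataset

# Yield one example per paragraph instead of one per line.
ds = load_dataset("text", data_files={"train": "my_corpus.txt"}, sample_by="paragraph")
```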
1,081,571,784
3,441
Add QuALITY dataset
## Adding a Dataset - **Name:** QuALITY - **Description:** A challenging question answering with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20)) - **Paper:** No ArXiv link yet, but draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf) - **Data:** GitHub repo [here](https://github.com/nyu-mll/quality) - **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this given it's impressive performance on the Long Range Arena Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/3441
2021-12-15T22:26:19
2021-12-28T15:17:05
null
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,081,528,426
3,440
datasets keeps reading from cached files, although I disabled it
## Describe the bug Hi, I am trying to avoid dataset library using cached files, I get the following bug when this tried to read the cached files. I tried to do the followings: ``` from datasets import set_caching_enabled set_caching_enabled(False) ``` also force redownlaod: ``` download_mode='force_redownload' ``` but none worked so far, this is on a cluster and on some of the machines this reads from the cached files, I really appreciate any idea on how to fully remove caching @lhoestq many thanks ``` File "run_clm.py", line 496, in <module> main() File "run_clm.py", line 419, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 943, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 172, in evaluate output = self.eval_loop( File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 241, in eval_loop metrics = self.compute_pet_metrics(eval_datasets, model, self.extra_info[metric_key_prefix], task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 268, in compute_pet_metrics centroids = self._compute_per_token_train_centroids(model, task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 353, in _compute_per_token_train_centroids data = get_label_samples(self.get_per_task_train_dataset(task), label) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 350, in get_label_samples return dataset.filter(lambda example: int(example['labels']) == label) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2519, in filter indices = self.map( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2036, in map return self._map_single( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 503, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2248, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 654, in from_file return cls( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 593, in __init__ self.info.features = 
self.info.features.reorder_fields_as(inferred_features) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1092, in reorder_fields_as return Features(recursive_reorder(self, other)) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1081, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'candidates_ids': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'labels': Value(dtype='int64', id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'extra_fields': {}, 'task': Value(dtype='string', id=None)} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux - Python version: 3.8.12 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3440
2021-12-15T21:26:22
2022-02-24T09:12:22
2022-02-24T09:12:22
{ "login": "dorost1234", "id": 79165106, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
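For #3440 above, besides `set_caching_enabled(False)`, `Dataset.map` and `Dataset.filter` accept `load_from_cache_file=False` to force recomputation on a per-call basis; whether that resolves the reported key-mismatch error is a separate question. A minimal, self-contained sketch:

```python
from datasets import Dataset, set_caching_enabled

set_caching_enabled(False)  # disable the transform cache globally

dataset = Dataset.from_dict({"labels": [0, 1, 0, 1]})
filtered = dataset.filter(
    lambda example: int(example["labels"]) == 0,
    load_from_cache_file=False,  # per-call: never reuse an existing cache file
)
print(filtered["labels"])  # [0, 0]
```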
1,081,389,723
3,439
Add `cast_column` to `IterableDataset`
Closes #3369. cc: @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/3439
2021-12-15T19:00:45
2021-12-16T15:55:20
2021-12-16T15:55:19
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
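Usage sketch for the feature added in #3439 above, based on the request in #3369 (re-decoding audio in streaming mode); the dataset name and column are illustrative, not prescribed by the PR:

```python
from datasets import Audio, load_dataset

ds = load_dataset("common_voice", "en", split="train", streaming=True)
# cast_column on an IterableDataset: decode the audio column at 16 kHz on the fly.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = next(iter(ds))["audio"]
print(sample["sampling_rate"])  # 16000
```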
1,081,302,203
3,438
Update supported versions of Python in setup.py
Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated.
closed
https://github.com/huggingface/datasets/pull/3438
2021-12-15T17:30:12
2021-12-20T14:22:13
2021-12-20T14:22:12
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,081,247,889
3,437
Update BLEURT hyperlink
The description of BLEURT on the hf.co website has a strange use of URL hyperlinking. This PR attempts to fix this, although I am not 100% sure Markdown syntax is allowed on the frontend or not. ![Screen Shot 2021-12-15 at 17 31 27](https://user-images.githubusercontent.com/26859204/146226432-c83cbdaf-f57d-4999-b53c-85da718ff7fb.png)
closed
https://github.com/huggingface/datasets/pull/3437
2021-12-15T16:34:47
2021-12-17T13:28:26
2021-12-17T13:28:25
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,081,068,139
3,436
Add the OneStopQa dataset
Adding OneStopQA, a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme.
closed
https://github.com/huggingface/datasets/pull/3436
2021-12-15T13:53:31
2021-12-17T14:32:00
2021-12-17T13:25:29
{ "login": "OmerShubi", "id": 28459495, "type": "User" }
[]
true
[]
1,081,043,756
3,435
Improve Wikipedia Loading Script
* More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for new link text cleaning step * Remove magic words (parser directives like __TOC__ that occasionally occur in text) Fix #3400 With support from @albertvillanova CC @yjernite
closed
https://github.com/huggingface/datasets/pull/3435
2021-12-15T13:30:06
2022-03-04T08:16:00
2022-03-04T08:16:00
{ "login": "geohci", "id": 45494522, "type": "User" }
[]
true
[]
1,080,917,446
3,434
Add The People's Speech
## Adding a Dataset - **Name:** The People's Speech - **Description:** a massive English-language dataset of audio transcriptions of full sentences. - **Paper:** https://openreview.net/pdf?id=R8CwidgJ0yT - **Data:** https://mlcommons.org/en/peoples-speech/ - **Motivation:** With over 30,000 hours of speech, this dataset is the largest and most diverse freely available English speech recognition corpus today. [The article](https://thegradient.pub/new-datasets-to-democratize-speech-recognition-technology-2/) which may be useful when working on the dataset. cc: @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/3434
2021-12-15T11:21:21
2023-02-28T16:22:29
2023-02-28T16:22:28
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
1,080,910,724
3,433
Add Multilingual Spoken Words dataset
## Adding a Dataset - **Name:** Multilingual Spoken Words - **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). Read more: https://mlcommons.org/en/news/spoken-words-blog/ - **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf - **Data:** https://mlcommons.org/en/multilingual-spoken-words/ - **Motivation:** Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/3433
2021-12-15T11:14:44
2022-02-22T10:03:53
2022-02-22T10:03:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
1,079,910,769
3,432
Correctly indent builder config in dataset script docs
null
closed
https://github.com/huggingface/datasets/pull/3432
2021-12-14T15:39:47
2021-12-14T17:35:17
2021-12-14T17:35:17
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,079,866,083
3,431
Unable to resolve any data file after loading once
When I rerun my program, this error occurs: "Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']". How can I deal with this problem? Thanks. My code is below. ![image](https://user-images.githubusercontent.com/84694183/146023446-d75fdec8-65c1-484f-80d8-6c20ff5e994b.png)
closed
https://github.com/huggingface/datasets/issues/3431
2021-12-14T15:02:15
2022-12-11T10:53:04
2022-02-24T09:13:52
{ "login": "LzyFischer", "id": 84694183, "type": "User" }
[]
false
[]
1,079,811,124
3,430
Make decoding of Audio and Image feature optional
Add the `decode` argument (`True` by default) to the `Audio` and the `Image` feature to make it possible to toggle on/off decoding of these features. Even though we've discussed that on Slack, I'm not removing the `_storage_dtype` argument of the Audio feature in this PR to avoid breaking the Audio feature tests.
closed
https://github.com/huggingface/datasets/pull/3430
2021-12-14T14:15:08
2022-01-25T18:57:52
2022-01-25T18:57:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,078,902,390
3,429
Make cast cacheable (again) on Windows
`cast` currently emits the following warning when called on Windows: ``` Parameter 'function'=<function Dataset.cast.<locals>.<lambda> at 0x000001C930571EA0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. ``` It seems like the issue stems from the `config.PYARROW_VERSION` object not being serializable on Windows (tested with `dumps(lambda: config.PYARROW_VERSION)`), so I'm fixing this by capturing `config.PYARROW_VERSION.major` before the lambda definition.
closed
https://github.com/huggingface/datasets/pull/3429
2021-12-13T19:32:02
2021-12-14T14:39:51
2021-12-14T14:39:50
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,078,863,468
3,428
Clean squad dummy data
Some unused files were remaining; this PR removes them. We just need to keep the dummy_data.zip file.
closed
https://github.com/huggingface/datasets/pull/3428
2021-12-13T18:46:29
2021-12-13T18:57:50
2021-12-13T18:57:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,078,782,159
3,427
Add The Pile Enron Emails subset
Add: - Enron Emails subset of The Pile: "enron_emails" config Close bigscience-workshop/data_tooling#310. CC: @StellaAthena
closed
https://github.com/huggingface/datasets/pull/3427
2021-12-13T17:14:16
2021-12-14T17:30:59
2021-12-14T17:30:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,078,670,031
3,426
Update disaster_response_messages download urls (+ add validation split)
Fixes #3240, fixes #3416
closed
https://github.com/huggingface/datasets/pull/3426
2021-12-13T15:30:12
2021-12-14T14:38:30
2021-12-14T14:38:29
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,078,598,140
3,425
Getting configs names takes too long
## Steps to reproduce the bug ```python from datasets import get_dataset_config_names get_dataset_config_names("allenai/c4") ``` ## Expected results I would expect to get the answer quickly, at least in less than 10s ## Actual results It takes about 45s on my environment ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
open
https://github.com/huggingface/datasets/issues/3425
2021-12-13T14:27:57
2021-12-13T14:53:33
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,078,543,625
3,424
Add RedCaps dataset
Add the RedCaps dataset. I'm not adding the generated `dataset_infos.json` file for now due to its size (11 MB). TODOs: - [x] dummy data - [x] dataset card Close #3316
closed
https://github.com/huggingface/datasets/pull/3424
2021-12-13T13:38:13
2022-01-12T14:13:16
2022-01-12T14:13:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,078,049,638
3,423
data duplicate when setting num_works > 1 with streaming data
## Describe the bug The data is repeated num_works times when we load_dataset with streaming and set num_works > 1 when construct dataloader ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import pandas as pd import numpy as np import os from datasets import load_dataset from torch.utils.data import DataLoader from tqdm import tqdm import shutil NUM_OF_USER = 1000000 NUM_OF_ACTION = 50000 NUM_OF_SEQUENCE = 10000 NUM_OF_FILES = 32 NUM_OF_WORKERS = 16 if __name__ == "__main__": shutil.rmtree("./dataset") for i in range(NUM_OF_FILES): sequence_data = pd.DataFrame( { "imei": np.random.randint(1, NUM_OF_USER, size=NUM_OF_SEQUENCE), "sequence": np.random.randint(1, NUM_OF_ACTION, size=NUM_OF_SEQUENCE) } ) if not os.path.exists("./dataset"): os.makedirs("./dataset") sequence_data.to_csv(f"./dataset/sequence_data_{i}.csv", index=False) dataset = load_dataset("csv", data_files=[os.path.join("./dataset",file) for file in os.listdir("./dataset") if file.endswith(".csv")], split="train", streaming=True).with_format("torch") data_loader = DataLoader(dataset, batch_size=1024, num_workers=NUM_OF_WORKERS) result = pd.DataFrame() for i, batch in tqdm(enumerate(data_loader)): result = pd.concat([result, pd.DataFrame(batch)], axis=0) result.to_csv(f"num_work_{NUM_OF_WORKERS}.csv", index=False) ``` ## Expected results data do not duplicate ## Actual results data duplicate NUM_OF_WORKERS = 16 ![image](https://user-images.githubusercontent.com/16486492/145748707-9d2df25b-2f4f-4d7b-a83e-242be4fc8934.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:datasets==1.14.0 - Platform:transformers==4.11.3 - Python version:3.8 - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/3423
2021-12-13T03:43:17
2022-12-14T16:04:22
2022-12-14T16:04:22
{ "login": "cloudyuyuyu", "id": 16486492, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
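The duplication in #3423 above comes from every DataLoader worker replaying the same stream. One mitigation sketch (an assumption on my part, not the library's eventual fix, which later learned to split shards across workers automatically) is to give each worker its own slice of the CSV files:

```python
import os

import torch
from datasets import load_dataset
from torch.utils.data import DataLoader, IterableDataset


class ShardedCsvStream(IterableDataset):
    def __init__(self, files):
        self.files = files

    def __iter__(self):
        info = torch.utils.data.get_worker_info()
        # Each worker streams only its own subset of files, so rows are not repeated.
        files = self.files if info is None else self.files[info.id :: info.num_workers]
        if not files:
            return
        ds = load_dataset("csv", data_files=files, split="train", streaming=True)
        yield from ds


files = sorted(
    os.path.join("./dataset", f) for f in os.listdir("./dataset") if f.endswith(".csv")
)
loader = DataLoader(ShardedCsvStream(files), batch_size=1024, num_workers=16)
```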
1,078,022,619
3,422
Error about load_metric
## Describe the bug File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric metric = metric_cls( TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python metric = load_metric("glue", "sst2") ``` ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3422
2021-12-13T02:49:51
2022-01-07T14:06:47
2022-01-07T14:06:47
{ "login": "jiacheng-ye", "id": 30772464, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,077,966,571
3,421
Adding mMARCO dataset
Adding mMARCO (v1.1) to HF datasets.
closed
https://github.com/huggingface/datasets/pull/3421
2021-12-13T00:56:43
2022-10-03T09:37:15
2022-10-03T09:37:15
{ "login": "lhbonifacio", "id": 17603035, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,077,913,468
3,420
Add eli5_category dataset
This pull request adds a categorized Long-form question answering dataset `ELI5_Category`. It's a new variant of the [ELI5](https://huggingface.co/datasets/eli5) dataset that uses the Reddit tags to alleviate the training/validation overlapping in the origin ELI5 dataset. A [report](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/)(Section 2) on this dataset.
closed
https://github.com/huggingface/datasets/pull/3420
2021-12-12T21:30:45
2021-12-14T17:53:03
2021-12-14T17:53:02
{ "login": "jingshenSN2", "id": 40377373, "type": "User" }
[]
true
[]
1,077,350,974
3,419
`.to_json` is extremely slow after `.select`
## Describe the bug Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset original = load_dataset("squad", split="train") original.to_json("from_original.json") # Takes 0 seconds selected_subset1 = original.select([i for i in range(len(original))]) selected_subset1.to_json("from_select1.json") # Takes 212 seconds selected_subset2 = original.select([i for i in range(int(len(original) / 2))]) selected_subset2.to_json("from_select2.json") # Takes 90 seconds ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: master (https://github.com/huggingface/datasets/commit/6090f3cfb5c819f441dd4a4bb635e037c875b044) - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 6.0.0
open
https://github.com/huggingface/datasets/issues/3419
2021-12-11T01:36:31
2021-12-21T15:49:07
null
{ "login": "eladsegal", "id": 13485709, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
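A possible workaround for the slowdown reported in this record, assuming the cost comes from the indices mapping that `.select()` keeps around: call `flatten_indices()` to materialize the selection before exporting. This is a sketch of a workaround, not a fix for the underlying issue.

```python
from datasets import load_dataset

original = load_dataset("squad", split="train")
subset = original.select(range(len(original) // 2))

# Materialize the selection so `to_json` writes from a contiguous table
# instead of resolving the indices mapping row by row.
subset = subset.flatten_indices()
subset.to_json("from_select_flattened.json")
```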
1,077,053,296
3,418
Add Wikisource dataset
Add loading script for Wikisource dataset. Fix #3399. CC: @geohci, @yjernite
closed
https://github.com/huggingface/datasets/pull/3418
2021-12-10T17:04:44
2022-10-04T09:35:56
2022-10-03T09:37:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,076,943,343
3,417
Fix type of bridge field in QED
Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, set to `None`. The following paragraph in the QED repo explains the purpose of this field: >Each annotation in referential_equalities is a pair of spans, the question_reference and the sentence_reference, corresponding to an entity mention in the question and the selected_sentence respectively. As described in the paper, sentence_references can be "bridged in", in which case they do not correspond with any actual span in the selected_sentence. Hence, sentence_reference spans contain an additional field, bridge, which is a prepositional phrase when a reference is bridged, and is False otherwise. Prepositional phrases serve to link bridged references to an anchoring phrase in the selected_sentence. In the case a sentence_reference is bridged, the start and end, as well as the span string, map to such an anchoring phrase in the selected_sentence. Fix #3346 cc @VictorSanh
closed
https://github.com/huggingface/datasets/pull/3417
2021-12-10T15:07:21
2021-12-14T14:39:06
2021-12-14T14:39:05
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,076,868,771
3,416
disaster_response_messages unavailable
## Dataset viewer issue for 'disaster_response_messages'

**Link:** https://huggingface.co/datasets/disaster_response_messages

The dataset is unavailable. The source link is dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv

Am I the one who added this dataset? No
closed
https://github.com/huggingface/datasets/issues/3416
2021-12-10T13:49:17
2021-12-14T14:38:29
2021-12-14T14:38:29
{ "login": "sacdallago", "id": 6240943, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,076,472,534
3,415
Non-deterministic tests: CI tests randomly fail
## Describe the bug
Some CI tests fail randomly.

1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux:
```
=========================== short test summary info ============================
FAILED tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol[https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh-zip]
FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive - Fi...
FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped
= 3 failed, 3553 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 192.79s (0:03:12) =
```

2. After re-running the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/57bfe1f342cd3c59d2510b992d5f06a0761eb147, there was only 1 failing test (one on Linux and a different one on Windows):
- On Linux:
```
=========================== short test summary info ============================
FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped
= 1 failed, 3555 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 199.76s (0:03:19) =
```
- On Windows:
```
=========================== short test summary info ===========================
FAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_without_script
= 1 failed, 3551 passed, 2954 skipped, 2 xfailed, 1 xpassed, 121 warnings in 478.58s (0:07:58) =
```

The test `tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped` passes locally.

3. After re-running the CI again (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/39f32f2119cf91b86867216bb5c356c586503c6a, ALL the tests passed.
closed
https://github.com/huggingface/datasets/issues/3415
2021-12-10T06:08:59
2022-03-31T16:38:51
2022-03-31T16:38:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,076,028,998
3,414
Skip None encoding (line deleted by accident in #3195)
Restore the line deleted by accident in #3195 while [resolving merge conflicts](https://github.com/huggingface/datasets/pull/3195/commits/8b0ed15be08559056b817836a07d47acda0c4510). Fix #3181 (finally :))
closed
https://github.com/huggingface/datasets/pull/3414
2021-12-09T21:17:33
2021-12-10T11:00:03
2021-12-10T11:00:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,075,854,325
3,413
Add WIDER FACE dataset
Adds the WIDER FACE face detection benchmark. TODOs: * [x] dataset card * [x] dummy data
closed
https://github.com/huggingface/datasets/pull/3413
2021-12-09T18:03:38
2022-01-12T14:13:47
2022-01-12T14:13:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,075,846,368
3,412
Fix flaky test again for s3 serialization
Following https://github.com/huggingface/datasets/pull/3388, which wasn't enough (see the CI error [here](https://app.circleci.com/pipelines/github/huggingface/datasets/9080/workflows/b971fb27-ff20-4220-9416-c19acdfdf6f4/jobs/55985))
closed
https://github.com/huggingface/datasets/pull/3412
2021-12-09T17:54:41
2021-12-09T18:00:52
2021-12-09T18:00:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,075,846,272
3,411
[chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script
## Describe the bug
Model I am using (Bert, XLNet ...): bert-base-chinese

The problem arises when using:
* the official example script [`run_mlm_wwm.py`](https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py)

The task I am working on is: pretraining with whole word masking on my own dataset and ref.json file.

I followed the `run_mlm_wwm.py` procedure to pretrain with whole word masking. My corpus is a .txt file where one line represents one sample, with 9,264,784 Chinese lines in total. The ref.json file also contains 9,264,784 lines of whole-word-masking reference data for my Chinese corpus. However, when I adapt the `run_mlm_wwm.py` script, somehow after `datasets["train"] = load_dataset(...`, `len(datasets["train"])` returns 9,265,365; then, after `tokenized_datasets = datasets.map(...`, `len(tokenized_datasets["train"])` returns 9,265,279. I am really confused and have tried to trace the code myself for a week, but I cannot figure out what happened. I want to know what happens in the `load_dataset()` function and in `datasets.map` here, and why I end up with more lines of data than I put in. A small diagnostic sketch follows this record.

## To reproduce
Sorry that I can't provide my data here since it does not belong to me, but I'm sure I removed the blank lines.

## Expected behavior
I expect the script to run as it should, but the AssertionError on line 167 keeps being raised because the number of lines in the reference JSON and in datasets['train'] differ.

Thanks for your patient reading!

## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 3.0.0
open
https://github.com/huggingface/datasets/issues/3411
2021-12-09T17:54:35
2021-12-22T11:21:33
null
{ "login": "hyusterr", "id": 52968111, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
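A minimal diagnostic sketch for the mismatch described in this record: compare the raw line count of the corpus with what the `text` loader returns. The file name is a placeholder; the usual culprits are carriage returns, trailing newlines, or blank lines that the loader counts differently than a manual check.

```python
from datasets import load_dataset

corpus_path = "corpus.txt"  # placeholder path to the pretraining corpus

# Count the lines the same way the ref.json file was built
with open(corpus_path, encoding="utf-8") as f:
    n_raw = sum(1 for _ in f)

# Count the rows the "text" loader produces from the same file
ds = load_dataset("text", data_files={"train": corpus_path})["train"]

print("raw lines:", n_raw, "dataset rows:", len(ds))
```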
1,075,815,415
3,410
Fix dependencies conflicts in Windows CI after conda update to 4.11
For some reason the CI wasn't using python 3.6 but python 3.7 after the update to conda 4.11
closed
https://github.com/huggingface/datasets/pull/3410
2021-12-09T17:19:11
2021-12-09T17:36:20
2021-12-09T17:36:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,075,684,593
3,409
Pass new_fingerprint in multiprocessing
Following https://github.com/huggingface/datasets/pull/3045

Currently one can pass `new_fingerprint` to `.map()` to use a custom fingerprint instead of the one computed by hashing the map transform. However, it is ignored if `num_proc>1`. In this PR I fixed that by passing `new_fingerprint` to `._map_single()` when `num_proc>1`. More specifically, `new_fingerprint` with a suffix based on the process `rank` is passed, so that each process has a different `new_fingerprint`. A usage sketch follows this record.

cc @TevenLeScao @vlievin
closed
https://github.com/huggingface/datasets/pull/3409
2021-12-09T15:12:00
2022-08-19T10:41:04
2021-12-09T17:38:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
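A small usage sketch of the behavior this PR enables. The fingerprint string and the map function are made up for illustration; the point is only that the custom fingerprint is now honored when `num_proc > 1` (with a per-process suffix added internally).

```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")

def add_length(example):
    example["context_length"] = len(example["context"])
    return example

# Before this PR the custom fingerprint was silently ignored when num_proc > 1;
# with the fix, each worker derives its own fingerprint from this value.
ds = ds.map(add_length, num_proc=4, new_fingerprint="context-length-v1")
```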
1,075,642,915
3,408
Typo in Dataset viewer error message
## Dataset viewer issue

When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource".

![Screen Shot 2021-12-09 at 15 31 31](https://user-images.githubusercontent.com/26859204/145415725-9cd728f0-c2c8-4b4e-a8e1-4f4d7841c94a.png)

Am I the one who added this dataset? N/A
closed
https://github.com/huggingface/datasets/issues/3408
2021-12-09T14:34:02
2021-12-22T11:02:53
2021-12-22T11:02:53
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,074,502,225
3,407
Use max number of data files to infer module
When inferring the module for datasets without a script, set a maximum number of iterations over data files. This PR fixes the issue of inference taking too long when hundreds of data files are present.

Please feel free to weigh in on both numbers:
```
# Datasets without script
DATA_FILES_MAX_NUMBER = 10
ARCHIVED_DATA_FILES_MAX_NUMBER = 5
```

Fix #3404.
closed
https://github.com/huggingface/datasets/pull/3407
2021-12-08T14:58:43
2021-12-14T17:08:42
2021-12-14T17:08:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,074,366,050
3,406
Fix module inference for archive with a directory
Fix module inference for an archive file that contains files within a directory. Fix #3405.
closed
https://github.com/huggingface/datasets/pull/3406
2021-12-08T12:39:12
2021-12-08T13:03:30
2021-12-08T13:03:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,074,360,362
3,405
ZIP format inference does not work when files located in a dir inside the archive
## Describe the bug
When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work. It only works for files located in the root directory of the ZIP file.

## Steps to reproduce the bug
```python
infer_module_for_data_files_in_archives(["path/to/zip/file.zip"], False)
```
closed
https://github.com/huggingface/datasets/issues/3405
2021-12-08T12:32:15
2021-12-08T13:03:29
2021-12-08T13:03:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,073,657,561
3,404
Optimize ZIP format inference
**Is your feature request related to a problem? Please describe.** When hundreds of ZIP files are present in a dataset, format inference takes too long. See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497 **Describe the solution you'd like** Iterate over a maximum number of files. CC: @lhoestq
closed
https://github.com/huggingface/datasets/issues/3404
2021-12-07T18:44:49
2021-12-14T17:08:41
2021-12-14T17:08:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,073,622,120
3,403
Cannot import name 'maybe_sync'
## Describe the bug
I cannot import `datasets` when running the run_summarizer.py script on a VM set up on OVHcloud.

## Steps to reproduce the bug
```python
from datasets import load_dataset
```

## Expected results
No error

## Actual results
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.7/site-packages/datasets/__init__.py", line 34, in <module>
    from .arrow_dataset import Dataset, concatenate_datasets
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 48, in <module>
    from .arrow_writer import ArrowWriter, OptimizedTypedSequence
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py", line 27, in <module>
    from .features import (
  File "/opt/conda/lib/python3.7/site-packages/datasets/features/__init__.py", line 2, in <module>
    from .audio import Audio
  File "/opt/conda/lib/python3.7/site-packages/datasets/features/audio.py", line 8, in <module>
    from ..utils.streaming_download_manager import xopen
  File "/opt/conda/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 16, in <module>
    from ..filesystems import COMPRESSION_FILESYSTEMS
  File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/__init__.py", line 13, in <module>
    from .s3filesystem import S3FileSystem  # noqa: F401
  File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/s3filesystem.py", line 1, in <module>
    import s3fs
  File "/opt/conda/lib/python3.7/site-packages/s3fs/__init__.py", line 1, in <module>
    from .core import S3FileSystem, S3File
  File "/opt/conda/lib/python3.7/site-packages/s3fs/core.py", line 11, in <module>
    from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync
ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/opt/conda/lib/python3.7/site-packages/fsspec/asyn.py)
```

A version-check sketch follows this record.

## Environment info
- `datasets` version: 1.16.0
- Platform: OVH Cloud Tesla V100 Machine
- Python version: 3.7.9
- PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3403
2021-12-07T17:57:59
2021-12-17T07:00:35
2021-12-17T07:00:35
{ "login": "KMFODA", "id": 35491698, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
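The traceback in this record points at an `s3fs`/`fsspec` version mismatch: `maybe_sync` was removed from `fsspec.asyn` in newer releases, while older `s3fs` versions still import it. A quick hedged check, assuming the mismatch is the cause, is to print both versions and then upgrade `s3fs` so it matches the installed `fsspec`:

```python
import fsspec
import s3fs

# An old s3fs paired with a recent fsspec raises exactly this ImportError,
# because recent fsspec releases no longer export `maybe_sync`.
print("fsspec:", fsspec.__version__)
print("s3fs:", s3fs.__version__)
```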
1,073,614,815
3,402
More robust first elem check in encode/cast example
Fix #3306
closed
https://github.com/huggingface/datasets/pull/3402
2021-12-07T17:48:16
2021-12-08T13:02:16
2021-12-08T13:02:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,073,603,508
3,401
Add Wikimedia pre-processed datasets
## Adding a Dataset
- **Name:** Add pre-processed data to:
  - *wikimedia/wikipedia*: https://huggingface.co/datasets/wikimedia/wikipedia
  - *wikimedia/wikisource*: https://huggingface.co/datasets/wikimedia/wikisource
- **Description:** Add pre-processed data to the Hub for all languages
- **Motivation:** This will be very useful for the NLP community, as the pre-processing has a high cost for many researchers (both in computation and in knowledge)

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).

CC: @geohci, @yjernite
closed
https://github.com/huggingface/datasets/issues/3401
2021-12-07T17:33:19
2024-10-09T16:10:47
2024-10-09T16:10:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,600,382
3,400
Improve Wikipedia loading script
As reported by @geohci, the "wikipedia" processing/loading script could be improved with some small additions to the processing functions:

- `_extract_content(filepath)`:
  - Replace `.startswith("#redirect")` with a more structured approach: `if elem.find(f"./{namespace}redirect") is None: continue`
- `_parse_and_clean_wikicode(raw_content, parser)`:
  - Remove `rm_template` from cleaning -- this is redundant with `.strip_code()` from mwparserfromhell
  - Build a language-specific list of namespace prefixes to filter out, per `get_namespace_prefixes` below
  - Optional: strip prefixes like categories -- e.g., "Category:Towns in Tianjin" becomes "Towns in Tianjin"
  - Optional: strip magic words
closed
https://github.com/huggingface/datasets/issues/3400
2021-12-07T17:29:25
2022-03-22T16:52:28
2022-03-22T16:52:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,593,861
3,399
Add Wikisource dataset
## Adding a Dataset - **Name:** *wikisource* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** Additional high quality textual data, besides Wikipedia. Add loading script as "canonical" dataset (as it is the case for ""wikipedia"). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). CC: @geohci, @yjernite
closed
https://github.com/huggingface/datasets/issues/3399
2021-12-07T17:21:31
2024-10-09T16:11:27
2024-10-09T16:11:26
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,590,384
3,398
Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data on the Hub, we should add the full URL to data instances (new field "url"), so that we conform to the attribution requirement of the license. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2

This should be done for all pre-processed datasets under the "wikimedia" org on the Hub: https://huggingface.co/wikimedia. A sketch of the change follows this record.
closed
https://github.com/huggingface/datasets/issues/3398
2021-12-07T17:17:27
2022-03-22T16:53:27
2022-03-22T16:53:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
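A minimal sketch of what adding the field could look like inside a loading script's `_generate_examples`. The field names other than `url`, the URL pattern, and the function signature are assumptions for illustration, not the actual Wikimedia script.

```python
from urllib.parse import quote


def _generate_examples(pages, language="en"):
    # Hypothetical generator: attach the canonical page URL to every example
    for id_, (title, text) in enumerate(pages):
        url = f"https://{language}.wikipedia.org/wiki/{quote(title)}"
        yield id_, {"id": str(id_), "url": url, "title": title, "text": text}
```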
1,073,502,444
3,397
add BNL newspapers
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience; see https://github.com/bigscience-workshop/data_tooling/issues/192. The dataset card is sparser than I would like, but I plan to make a separate pull request to make it more complete at a later date. I had to manually add the `dummy_data`, but I believe I've done this correctly (the tests pass locally).
closed
https://github.com/huggingface/datasets/pull/3397
2021-12-07T15:43:21
2022-01-17T18:35:34
2022-01-17T18:35:34
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[]
true
[]
1,073,467,183
3,396
Install Audio dependencies to support audio decoding
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'

**Link:** https://huggingface.co/datasets/openslr
**Link:** https://huggingface.co/datasets/projecte-aina/parlament_parla

Error:
```
Status code: 400
Exception: ImportError
Message: To support decoding audio files, please install 'librosa'.
```

Am I the one who added this dataset?
- openslr: No
- projecte-aina/parlament_parla: Yes
closed
https://github.com/huggingface/datasets/issues/3396
2021-12-07T15:11:36
2022-04-25T16:12:22
2022-04-25T16:12:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" }, { "name": "audio_column", "color": "F83ACF" } ]
false
[]
1,073,432,650
3,395
Fix formatting in IterableDataset.map docs
Fix formatting in the recently added `Map` section of the streaming docs.
closed
https://github.com/huggingface/datasets/pull/3395
2021-12-07T14:41:01
2021-12-08T10:11:33
2021-12-08T10:11:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,073,396,308
3,394
Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading it with `load_dataset` will return the feature as type `Value`. A reproduction sketch follows this record. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parquet files in `push_to_hub`, but also the dataset `info` (stored in a JSON file).
closed
https://github.com/huggingface/datasets/issues/3394
2021-12-07T14:08:30
2021-12-21T17:00:09
2021-12-21T17:00:09
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
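A minimal reproduction sketch of the type loss described in this record. The repo id is a placeholder and pushing requires write access; the exact type returned on reload may differ, but the point is that the `ClassLabel` names are lost when only the parquet files are pushed.

```python
from datasets import ClassLabel, Dataset, Features, Value, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = Dataset.from_dict({"text": ["bad", "good"], "label": [0, 1]}, features=features)
print(ds.features["label"])  # ClassLabel with names ['neg', 'pos']

ds.push_to_hub("username/classlabel-demo")  # placeholder repo id
reloaded = load_dataset("username/classlabel-demo", split="train")
print(reloaded.features["label"])  # comes back as a plain integer Value, names are lost
```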
1,073,189,777
3,393
Common Voice Belarusian Dataset
## Adding a Dataset - **Name:** *Common Voice Belarusian Dataset* - **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)* - **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)* - **Motivation:** *It has more than 7GB of data, so it will be great to have it in this package so anyone can try to train something for Belarusian language.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/3393
2021-12-07T10:37:02
2021-12-09T15:56:03
null
{ "login": "wiedymi", "id": 42713027, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
1,073,073,408
3,392
Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts` **Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts *short description of the issue* Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603 Am I the one who added this dataset ? No -> @dansbecker
closed
https://github.com/huggingface/datasets/issues/3392
2021-12-07T08:41:01
2021-12-07T14:04:28
2021-12-07T14:04:28
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,072,849,055
3,391
method to select columns
**Is your feature request related to a problem? Please describe.**
There is currently no way to select only some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets` this results in an error.

**Describe the solution you'd like**
A new method that can be used to create a new dataset containing only a list of specified columns. A small helper sketch built on the existing API follows this record.

**Describe alternatives you've considered**
`.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)`
or
`.select(self, indices: Iterable = None, columns: List[str] = None)`
closed
https://github.com/huggingface/datasets/issues/3391
2021-12-07T02:44:19
2021-12-07T02:45:27
2021-12-07T02:45:27
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
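A small helper sketch that gives the behavior requested in this record with the API available at the time, by inverting `remove_columns`. This is a user-side workaround, not a `datasets` method.

```python
from datasets import Dataset, load_dataset


def select_columns(dataset: Dataset, columns) -> Dataset:
    """Return a new dataset keeping only `columns`, dropping everything else."""
    keep = set(columns)
    to_drop = [name for name in dataset.column_names if name not in keep]
    return dataset.remove_columns(to_drop)


ds = load_dataset("glue", "sst2", split="train")
ds = select_columns(ds, ["sentence", "label"])
print(ds.column_names)  # ['sentence', 'label']
```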
1,072,462,456
3,390
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
## Describe the bug
I have prepared a dataset with `datasets` and now I am trying to load it back as Finnish-NLP/voxpopuli_fi, but I get "KeyError: 'Field "builder_name" does not exist in table schema'". My dataset folder and files should look like what @patrickvonplaten has here: https://huggingface.co/datasets/flax-community/german-common-voice-processed

How my voxpopuli dataset looks:
![image](https://user-images.githubusercontent.com/25264037/144895598-b7d9ae91-b04a-4046-9f06-b71ff0824d13.png)

Part of the processing (the `path` column is the absolute path to the audio files):
```python
def add_audio_column(example):
    example['audio'] = example['path']
    return example

voxpopuli = voxpopuli.map(add_audio_column)
voxpopuli.cast_column("audio", Audio())
voxpopuli["audio"]  # to my knowledge this loads the local files and prepares the arrays
voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16_000))  # resampling to 16kHz
```

I then saved it to disk with `voxpopuli.save_to_disk('/asr_disk/datasets_processed_new/voxpopuli')` and made the folder structure the same as @patrickvonplaten's. I also get the same error while trying to load_dataset from his repo:
![image](https://user-images.githubusercontent.com/25264037/144895872-e9b8f326-cf2b-46cf-9417-606a0ce14077.png)

## Steps to reproduce the bug
```python
dataset = load_dataset("Finnish-NLP/voxpopuli_fi")
```

## Expected results
The dataset loads correctly and looks like in the first picture.

## Actual results
Loading throws a KeyError: 'Field "builder_name" does not exist in table schema'

Resources I have been trying to follow:
https://huggingface.co/docs/datasets/audio_process.html
https://huggingface.co/docs/datasets/share_dataset.html

## Environment info
- `datasets` version: 1.16.2.dev0
- Platform: Ubuntu 20.04.2 LTS
- Python version: 3.8.12
- PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3390
2021-12-06T18:22:49
2021-12-06T20:22:05
2021-12-06T20:22:05
{ "login": "R4ZZ3", "id": 25264037, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,072,191,865
3,389
Add EDGAR
## Adding a Dataset - **Name:** EDGAR Database - **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. Containing millions of company and individual filings, EDGAR benefits investors, corporations, and the U.S. economy overall by increasing the efficiency, transparency, and fairness of the securities markets. The system processes about 3,000 filings per day, serves up 3,000 terabytes of data to the public annually, and accommodates 40,000 new filers per year on average. EDGAR® and EDGARLink® are registered trademarks of the SEC. - **Data:** https://www.sec.gov/os/accessing-edgar-data - **Motivation:** Enabling and improving FSI (Financial Services Industry) datasets to increase ease of use Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/3389
2021-12-06T14:06:11
2022-10-05T10:40:22
null
{ "login": "philschmid", "id": 32632186, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,072,022,021
3,388
Fix flaky test of the temporary directory used by load_from_disk
The test is flaky, here is an example of random CI failure: https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989 I fixed that by not checking the content of the random part of the temporary directory name
closed
https://github.com/huggingface/datasets/pull/3388
2021-12-06T11:09:31
2021-12-06T11:25:03
2021-12-06T11:24:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,071,836,456
3,387
Create Language Modeling task
Create a Language Modeling task to be able to specify the input "text" column in a dataset. This can be useful for datasets that are not exclusively used for language modeling and have more than one column:
- for text classification datasets (with columns "review" and "rating", for example), the Language Modeling task can be used to specify the "text" column ("review" in this case). A usage sketch follows this record.

TODO:
- [ ] Add the LanguageModeling task to all dataset scripts which can be used for language modeling
closed
https://github.com/huggingface/datasets/pull/3387
2021-12-06T07:56:07
2021-12-17T17:18:28
2021-12-17T17:18:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
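A hedged usage sketch of the task described in this record, based on the existing task-template pattern (`prepare_for_task`). The import path, the `text_column` argument name, and the example dataset's column names are assumptions taken from the PR description, not verified against the merged code.

```python
from datasets import load_dataset
from datasets.tasks import LanguageModeling  # task template added by this PR (assumed path)

# Hypothetical usage: a review dataset where the text lives in a "review" column
ds = load_dataset("app_reviews", split="train")
ds = ds.prepare_for_task(LanguageModeling(text_column="review"))
print(ds.column_names)  # the selected column is exposed as "text"
```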
1,071,813,141
3,386
Fix typos in dataset cards
This PR:
- Fix typos in dataset cards
- Fix Papers With Code ID for:
  - Bilingual Corpus of Arabic-English Parallel Tweets
  - Tweets Hate Speech Detection
- Add pretty name tags
closed
https://github.com/huggingface/datasets/pull/3386
2021-12-06T07:20:40
2021-12-06T09:30:55
2021-12-06T09:30:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,071,742,310
3,385
None batched `with_transform`, `set_transform`
**Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example, but 🤗 `Dataset.with_transform` doesn't seem to allow a non-batched transform.

**Describe the solution you'd like**
Add a `batched=True` argument to `Dataset.with_transform`.

**Describe alternatives you've considered**
* Convert a non-batched transform function to a batched one myself (see the sketch after this record).
* Wrap a 🤗 Dataset with a torch Dataset and add a `__getitem__`. 🙄
* Add `lazy=False` to `Dataset.map`, and return a `LazyDataset` if `lazy=True`. This way the same `map` interface can be used, and existing code can be updated with one argument change.
open
https://github.com/huggingface/datasets/issues/3385
2021-12-06T05:20:54
2022-01-17T15:25:01
null
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
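A minimal sketch of the first alternative listed in this record: an adapter that turns a per-example transform into the columnar, batched form `with_transform` expects. The transform itself is a placeholder; the adapter only reshapes batches into single examples and back.

```python
from datasets import Dataset


def batchify(transform):
    """Adapt a per-example transform to the columnar batches with_transform receives."""
    def batched_transform(batch):
        columns = list(batch.keys())
        # Split the columnar batch into per-example dicts, apply the transform, regroup.
        examples = [dict(zip(columns, values)) for values in zip(*(batch[c] for c in columns))]
        outputs = [transform(example) for example in examples]
        return {key: [out[key] for out in outputs] for key in outputs[0]}
    return batched_transform


def add_length(example):  # placeholder per-example transform
    example["length"] = len(example["text"])
    return example


ds = Dataset.from_dict({"text": ["short", "a longer sentence"]})
ds = ds.with_transform(batchify(add_length))
print(ds[0:2]["length"])  # [5, 17]
```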
1,071,594,165
3,384
Adding mMARCO dataset
We are adding the mMARCO dataset to the HuggingFace datasets repo. This way, all the languages covered in the translation are available in an easy way.
closed
https://github.com/huggingface/datasets/pull/3384
2021-12-05T23:59:11
2021-12-12T15:27:36
2021-12-12T15:27:36
{ "login": "lhbonifacio", "id": 17603035, "type": "User" }
[]
true
[]
1,071,551,884
3,383
add Georgian data in cc100.
Update the cc100 dataset to support loading Georgian (ka) data, which is available in the original CC100 source. All tests pass. Dummy data generated. Metadata generated.
closed
https://github.com/huggingface/datasets/pull/3383
2021-12-05T20:38:09
2021-12-14T14:37:23
2021-12-14T14:37:22
{ "login": "AnzorGozalishvili", "id": 55232459, "type": "User" }
[]
true
[]
1,071,293,299
3,382
#3337 Add typing overloads to Dataset.__getitem__ for mypy
Add typing overloads to `Dataset.__getitem__` for mypy.

Fixes #3337

**Iterable**
`Iterable` from `collections` cannot be parameterized, so you can't do `Iterable[int]`, for example. `typing` has a generic version that builds upon the one from `collections`.

**Flake8**
I had to add `# noqa: F811`; this is a bug in Flake8. `datasets` uses flake8==3.7.9, which was released in October 2019. If I update flake8 (to 4.0.1), I no longer get these errors, but I did not want to make the update without your approval. (It also triggers other errors, like f-strings without placeholders.)
closed
https://github.com/huggingface/datasets/pull/3382
2021-12-04T20:54:49
2021-12-14T10:28:55
2021-12-14T10:28:55
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
true
[]
1,071,283,879
3,381
Unable to load audio_features from common_voice dataset
## Describe the bug
I am not able to load audio features from the common_voice dataset. A possible fix sketch follows this record.

## Steps to reproduce the bug
```python
from datasets import load_dataset
import torchaudio

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```

## Expected results
This piece of code should return test_dataset after loading audio features.

## Actual results
```
Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
  "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
  0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory
  0%| | 0/3 [00:00<?, ?ex/s]
Traceback (most recent call last):
  File "demo_file.py", line 23, in <module>
    test_dataset = test_dataset.map(speech_file_to_array_fn)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map
    desc=desc,
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper
    out = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single
    example = apply_function_on_filtered_inputs(example, i, offset=offset)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated
    result = f(decorated_item, *args, **kwargs)
  File "demo_file.py", line 19, in speech_file_to_array_fn
    speech_array, sampling_rate = torchaudio.load(batch["path"])
  File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load
    filepath, frame_offset, num_frames, normalize, channels_first, format)
RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3
```

## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3381
2021-12-04T19:59:11
2021-12-06T17:52:42
2021-12-06T17:52:42
{ "login": "ashu5644", "id": 8268102, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
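A possible fix sketch for the error in this record: in `datasets` 1.16+, Common Voice exposes a decoded `audio` column, so the MP3 does not need to be opened from `path` (which points inside the downloaded archive and is not a plain local file). Treat this as a hedged suggestion rather than the official recipe.

```python
from datasets import Audio, load_dataset

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
# Let `datasets` decode and resample the audio instead of calling torchaudio.load on "path"
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

def speech_file_to_array_fn(batch):
    batch["speech"] = batch["audio"]["array"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```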
1,071,166,270
3,380
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week! If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://hf.co/oss-survey) (please reply in the above feedback form rather than to this thread) Thank you all on behalf of the HuggingFace team! 🤗
closed
https://github.com/huggingface/datasets/issues/3380
2021-12-04T09:18:33
2022-01-11T12:29:53
2022-01-11T12:29:53
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
false
[]
1,071,079,146
3,379
iter_archive on zipfiles with better compression type check
Hello @lhoestq, thank you for your detailed answer on the previous PR! I made this new PR because I misused git on the previous one, #3347. Related issue: #3272.

# Comments
* For the extension check I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_extraction_protocol_local`:

**I removed this part:**
```python
elif path.endswith(".tar.gz") or path.endswith(".tgz"):
    raise NotImplementedError(
        f"Extraction protocol for TAR archives like '{urlpath}' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead."
    )
```

**And also changed:**
```diff
- extension = path.split(".")[-1]
+ extension = "tar" if path.endswith(".tar.gz") else path.split(".")[-1]
```

The reason for this is that a compression like **.tar.gz** would otherwise be considered a **.gz**, which is handled with **zipfile**, although a **tar.gz** can only be opened using **tarfile**.

Please tell me if there's anything to change.

# Tasks
- [x] download_manager.py
- [x] streaming_download_manager.py
closed
https://github.com/huggingface/datasets/pull/3379
2021-12-04T01:04:48
2023-01-24T13:00:19
2023-01-24T12:53:08
{ "login": "Mehdi2402", "id": 56029953, "type": "User" }
[]
true
[]
1,070,580,126
3,378
Add The Pile subsets
Add The Pile subsets:
- pubmed
- ubuntu_irc
- europarl
- hacker_news
- nih_exporter

Close bigscience-workshop/data_tooling#301.

CC: @StellaAthena
closed
https://github.com/huggingface/datasets/pull/3378
2021-12-03T13:14:54
2021-12-09T18:11:25
2021-12-09T18:11:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,070,562,907
3,377
COCO 🥥 on the 🤗 Hub?
This is a draft PR since I ran into a few small problems. I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py cc: @mariosasko
closed
https://github.com/huggingface/datasets/pull/3377
2021-12-03T12:55:27
2021-12-20T14:14:01
2021-12-20T14:14:00
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[]
true
[]
1,070,522,979
3,376
Update clue benchmark
Fix #3374
closed
https://github.com/huggingface/datasets/pull/3376
2021-12-03T12:06:01
2021-12-08T14:14:42
2021-12-08T14:14:41
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]