| Column | Type | Min / earliest / shortest | Max / latest / longest |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
932,143,634
2,560
fix Dataset.map when num_procs > num rows
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map(lambda x: x, num_proc=10) ```
closed
https://github.com/huggingface/datasets/pull/2560
2021-06-29T02:24:11
2021-06-29T15:00:18
2021-06-29T14:53:31
{ "login": "connor-mccarthy", "id": 55268212, "type": "User" }
[]
true
[]
931,849,724
2,559
Memory usage consistently increases when processing a dataset with `.map`
## Describe the bug I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that memory usage keeps increasing consistently over time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease the arrow writer's batch size, but that doesn't seem to help. ## Steps to reproduce the bug Providing the code as-is would be hard. I can provide an MVP if that helps. ## Expected results Memory usage should stabilize some time after processing starts. ## Actual results Memory usage keeps on increasing. ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.7 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2559
2021-06-28T18:31:58
2023-07-20T13:34:10
2023-07-20T13:34:10
{ "login": "apsdehal", "id": 3616806, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
931,736,647
2,558
Update: WebNLG - update checksums
The master branch changed so I computed the new checksums. I also pinned a specific revision so that it doesn't happen again in the future. Fix https://github.com/huggingface/datasets/issues/2553
closed
https://github.com/huggingface/datasets/pull/2558
2021-06-28T16:16:37
2021-06-28T17:23:17
2021-06-28T17:23:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
931,633,823
2,557
Fix `fever` keys
The keys had duplicates since they were reset to 0 after each file. I fixed it by taking the file index into account as well.
closed
https://github.com/huggingface/datasets/pull/2557
2021-06-28T14:27:02
2021-06-28T16:11:30
2021-06-28T16:11:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
931,595,872
2,556
Better DuplicateKeysError error to help the user debug the issue
As mentioned in https://github.com/huggingface/datasets/issues/2552, it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys. The current one is ```python datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys should be unique and deterministic in nature ``` and we could have something that guides the user in debugging the issue: ```python DuplicateKeysError: both the 42nd and 1337th examples have the same key `48`. Please fix the dataset script at <path/to/the/dataset/script> ```
closed
https://github.com/huggingface/datasets/issues/2556
2021-06-28T13:50:57
2022-06-28T09:26:04
2022-06-28T09:26:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
931,585,485
2,555
Fix code_search_net keys
There were duplicate keys in the `code_search_net` dataset, as reported in https://github.com/huggingface/datasets/issues/2552 I fixed the keys (it was an addition of the file and row indices, which was causing collisions) Fix #2552.
closed
https://github.com/huggingface/datasets/pull/2555
2021-06-28T13:40:23
2021-09-02T08:24:43
2021-06-28T14:10:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
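Regarding the fix in #2555 above: the description says the old keys were an addition of the file and row indices, which collides across files. A minimal, hypothetical sketch (not the actual patch) of why addition collides and why combining the two indices into a single key does not:

```python
# (file_index, row_index) pairs as they might appear while generating examples
rows = [(0, 1), (1, 0), (0, 2), (2, 0)]

# Adding the indices is not unique: (0, 1) and (1, 0) both give 1
bad_keys = [file_idx + row_idx for file_idx, row_idx in rows]
assert len(set(bad_keys)) < len(bad_keys)  # duplicates -> DuplicatedKeysError

# Combining them into a single string keeps one key per (file, row) pair
good_keys = [f"{file_idx}_{row_idx}" for file_idx, row_idx in rows]
assert len(set(good_keys)) == len(good_keys)
```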
931,453,855
2,554
Multilabel metrics not supported
When I try to use a metric like F1 macro I get the following error: ``` TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' ``` There is an explicit casting here: https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274 And looks like this is because here https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88 the features can only be integers, so we cannot use that F1 for multilabel. Instead, if I create the following F1 (ints replaced with sequence of ints), it will work: ```python class F1(datasets.Metric): def _info(self): return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Sequence(datasets.Value("int32")), "references": datasets.Sequence(datasets.Value("int32")), } ), reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"], ) def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None): return { "f1": f1_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, ), } ```
closed
https://github.com/huggingface/datasets/issues/2554
2021-06-28T11:09:46
2021-10-13T12:29:13
2021-07-08T08:40:15
{ "login": "GuillemGSubies", "id": 37592763, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
931,365,926
2,553
load_dataset("web_nlg") NonMatchingChecksumError
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev") ``` Gives ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip'] ``` ## Environment info - `datasets` version: 1.8.0 - Platform: macOS-11.3.1-x86_64-i386-64bit - Python version: 3.9.4 - PyArrow version: 3.0.0 Also tested on Linux, with python 3.6.8
closed
https://github.com/huggingface/datasets/issues/2553
2021-06-28T09:26:46
2021-06-28T17:23:39
2021-06-28T17:23:16
{ "login": "alxthm", "id": 33730312, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
931,354,687
2,552
Keys should be unique error on code_search_net
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] Downloading: 19.1kB [00:00, 10.1MB/s] No config specified, defaulting to: code_search_net/all Downloading and preparing dataset code_search_net/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to /Users/thomwolf/.cache/huggingface/datasets/code_search_net/all/1.0.0/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a... Traceback (most recent call last): File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/builder.py", line 1067, in _prepare_split writer.write(example, key) File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 343, in write self.check_duplicate_keys() File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 354, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys should be unique and deterministic in nature ``` ## Environment info - `datasets` version: 1.8.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 2.0.0
closed
https://github.com/huggingface/datasets/issues/2552
2021-06-28T09:15:20
2021-09-06T14:08:30
2021-09-02T08:25:29
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
930,967,978
2,551
Fix FileSystems documentation
### What this fixes: This PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)). ### What were the issues? When I originally tried implementing the code examples I faced several bugs attributed to: - out of date [botocore](https://github.com/boto/botocore) call signatures - capitalization errors in the `S3FileSystem` class name (written as `S3Filesystem` in one place) - call signature errors for the `S3FileSystem` class constructor (uses parameter `sessions` instead of `session` in some places) (see [`s3fs`](https://s3fs.readthedocs.io/en/latest/api.html#s3fs.core.S3FileSystem) for where this constructor signature is defined) ### Testing/reviewing notes Instructions for generating the documentation locally: [here](https://github.com/huggingface/datasets/tree/master/docs#generating-the-documentation).
closed
https://github.com/huggingface/datasets/pull/2551
2021-06-27T16:18:42
2021-06-28T13:09:55
2021-06-28T13:09:54
{ "login": "connor-mccarthy", "id": 55268212, "type": "User" }
[]
true
[]
930,951,287
2,550
Allow for incremental cumulative metric updates in a distributed setup
Currently, using a metric allows for one of the following: - Per example/batch metrics - Cumulative metrics over the whole data What I'd like is to have an efficient way to get cumulative metrics over the examples/batches added so far, in order to display it as part of the progress bar during training/evaluation. Since most metrics are just an average of per-example metrics (which ones aren't?), an efficient calculation can be done as follows: `((score_cumulative * n_cumulative) + (score_new * n_new)) / (n_cumulative + n_new)` where `n` and `score` refer to the number of examples and the metric score, `cumulative` refers to the cumulative metric and `new` refers to the addition of new examples. If you don't want to add this capability in the library, a simple solution exists so users can do it themselves: It is easy to implement for a single-process setup, but in a distributed one there is no way to get the correct `n_new`. The solution for this is to return the number of examples that was used to compute the metrics in `.compute()` by adding the following line here: https://github.com/huggingface/datasets/blob/5a3221785311d0ce86c2785b765e86bd6997d516/src/datasets/metric.py#L402-L403 ``` output["number_of_examples"] = len(predictions) ``` and also remove the log message here so it won't spam: https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/src/datasets/metric.py#L411 If this change is ok with you, I'll open a pull request.
closed
https://github.com/huggingface/datasets/issues/2550
2021-06-27T15:00:58
2021-09-26T13:42:39
2021-09-26T13:42:39
{ "login": "eladsegal", "id": 13485709, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
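A minimal sketch of the running-average update proposed in #2550 above, assuming the metric is a plain average of per-example scores; the function name and numbers are illustrative only:

```python
def update_cumulative(score_cumulative, n_cumulative, score_new, n_new):
    """Weighted running average: combine the cumulative score with a new batch."""
    n_total = n_cumulative + n_new
    score_total = (score_cumulative * n_cumulative + score_new * n_new) / n_total
    return score_total, n_total

# 0.80 over 100 examples combined with 0.90 over 20 new examples
score, n = update_cumulative(0.80, 100, 0.90, 20)  # -> (0.8166..., 120)
```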
929,819,093
2,549
Handling unlabeled datasets
Hi! Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable). For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error: ``` File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split example = self.info.features.encode_example(record) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example return encode_nested_example(self, example) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example return schema.encode_example(obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example if not -1 <= example_data < self.num_classes: TypeError: '<=' not supported between instances of 'int' and 'NoneType' ``` What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers?
closed
https://github.com/huggingface/datasets/issues/2549
2021-06-25T04:32:23
2021-06-25T21:07:57
2021-06-25T21:07:56
{ "login": "nelson-liu", "id": 7272031, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
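A possible workaround for #2549 above, based on the bounds check visible in the traceback (`-1 <= example_data < self.num_classes`): encode missing gold labels as -1 instead of None. The label mapping and helper below are hypothetical, not part of the MNLI script:

```python
# Hypothetical preprocessing for an MNLI-style file without gold labels.
# ClassLabel's encoder accepts -1 (see the `-1 <= example_data` check above),
# so a missing label can be represented as -1 rather than None.
LABELS = {"entailment": 0, "neutral": 1, "contradiction": 2}

def encode_label(record):
    return LABELS.get(record.get("gold_label"), -1)  # -1 marks an unlabeled example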
929,232,831
2,548
Field order issue in loading json
## Describe the bug The `load_dataset` function expects columns in alphabetical order when loading json files. Similar bug was previously reported for csv in #623 and fixed in #684. ## Steps to reproduce the bug For a json file `j.json`, ``` {"c":321, "a": 1, "b": 2} ``` Running the following, ``` f= datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')}) json_data = datasets.load_dataset('json', data_files='j.json', features=f) ``` ## Expected results A successful load. ## Actual results ``` File "pyarrow/table.pxi", line 1409, in pyarrow.lib.Table.cast ValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c'] ``` ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2548
2021-06-24T13:29:53
2021-06-24T14:36:43
2021-06-24T14:34:05
{ "login": "luyug", "id": 55288513, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
929,192,329
2,547
Dataset load_from_disk is too slow
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk when there are no preprocessing steps; it's only loading it with load_from_disk. I have 96 CPUs, but only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of language model training, so I'm wasting $100 each time I have to load the dataset from disk again (for example, because the spot instance was stopped by AWS and I need to relaunch it). ## Steps to reproduce the bug Just get OSCAR in Spanish (around 150GB), save it to disk first, and then load the processed dataset. It doesn't depend on the task you're doing, only on the size of the text dataset. ## Expected results I expect the dataset to be loaded in a reasonable time, using the whole machine: if you store the dataset in multiple (.arrow) files and then load it from those files, you can use multiprocessing and therefore not waste so much time. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Ubuntu 18 - Python version: 3.8 I've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time, which isn't the problem for me; it cannot save the pure loading-from-disk time, so it's not a solution for my use case or for anyone who wants to use the library for training a language model.
open
https://github.com/huggingface/datasets/issues/2547
2021-06-24T12:45:44
2021-06-25T14:56:38
null
{ "login": "avacaondata", "id": 35173563, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
929,091,689
2,546
Add license to the Cambridge English Write & Improve + LOCNESS dataset card
As noticed in https://github.com/huggingface/datasets/pull/2539, the licensing information was missing for this dataset. I added it and I also filled a few other empty sections.
closed
https://github.com/huggingface/datasets/pull/2546
2021-06-24T10:39:29
2021-06-24T10:52:01
2021-06-24T10:52:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
929,016,580
2,545
Fix DuplicatedKeysError in drop dataset
Close #2542. cc: @VictorSanh.
closed
https://github.com/huggingface/datasets/pull/2545
2021-06-24T09:10:39
2021-06-24T14:57:08
2021-06-24T14:57:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
928,900,827
2,544
Fix logging levels
Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info. Close #2543. cc: @stas00
closed
https://github.com/huggingface/datasets/pull/2544
2021-06-24T06:41:36
2021-06-25T13:40:19
2021-06-25T13:40:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
928,571,915
2,543
switching some low-level log.info's to log.debug?
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting gives consistent logging across all involved components. The trouble is that now we get a ton of these: ``` 06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock 06/23/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow. 06/23/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns. 06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock ``` May I suggest that these become `log.debug`, as they're not informative to the user. More examples: these are not informative - too much information: ``` 06/23/2021 12:14:26 - INFO - datasets.load - Checking /home/stas/.cache/huggingface/datasets/downloads/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports. 06/23/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a ``` While these are: ``` 06/23/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from /home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt16/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a 06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a) ``` I also realize that the `transformers` examples don't have to use `info` for `datasets` and could let the default `warning` level keep the logging less noisy. But I think the log levels are currently slightly misused and skewed by one level. Many `warning`s would be better as `info`s and most `info`s as `debug`. e.g.: ``` 06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a) ``` Why is this a warning? It is informing me that the cache is used; there is nothing to be worried about. I'd have it as `info`. Warnings are typically something bordering on an error, or the first thing to check when things don't work as expected. Infrequent info is there to inform of the different stages or important events. Everything else is debug. At least that's the way I understand things.
closed
https://github.com/huggingface/datasets/issues/2543
2021-06-23T19:26:55
2021-06-25T13:40:19
2021-06-25T13:40:19
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
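For context on #2543 above, a sketch of putting both libraries on the same verbosity, assuming the public `set_verbosity` helpers; the exact calls used in the transformers example scripts may differ:

```python
import logging

import datasets.utils.logging
import transformers.utils.logging

# One setting controls the verbosity of both libraries, as described in the issue.
log_level = logging.INFO
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
```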
928,540,382
2,542
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("drop") load_dataset("adversarial_qa", "adversarialQA") ``` ## Expected results The examples keys should be unique. ## Actual results ```bash >>> load_dataset("drop") Using custom data configuration default Downloading and preparing dataset drop/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to /home/hf/.cache/huggingface/datasets/drop/default/0.1.0/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset use_auth_token=use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2542
2021-06-23T18:41:16
2021-06-25T21:50:05
2021-06-24T14:57:08
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
928,529,078
2,541
update discofuse link cc @ekQ
Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee
closed
https://github.com/huggingface/datasets/pull/2541
2021-06-23T18:24:58
2021-06-28T14:34:51
2021-06-28T14:34:50
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
928,433,892
2,540
Remove task templates if required features are removed during `Dataset.map`
This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError` because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`: ```python from datasets import load_dataset # `yelp_polarity` comes with a `TextClassification` template ds = load_dataset("yelp_polarity", split="test") ds # Dataset({ # features: ['text', 'label'], # num_rows: 38000 # }) # Triggers KeyError: 'label' - oh noes! ds.map(lambda x: {"inputs": 0}, remove_columns=ds.column_names) ``` I wrote a unit test to make sure I could reproduce the error and then patched a fix.
closed
https://github.com/huggingface/datasets/pull/2540
2021-06-23T16:20:25
2021-06-24T14:41:15
2021-06-24T13:34:03
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
927,952,429
2,539
remove wi_locness dataset due to licensing issues
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
closed
https://github.com/huggingface/datasets/pull/2539
2021-06-23T07:35:32
2021-06-25T14:52:42
2021-06-25T14:52:42
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
927,940,691
2,538
Loading partial dataset when debugging
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time, even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues. Is there a way to only load part of the dataset in load_dataset? This would really speed up my workflow. Something like a debug mode would really help. Thanks!
open
https://github.com/huggingface/datasets/issues/2538
2021-06-23T07:19:52
2023-04-19T11:05:38
null
{ "login": "reachtarunhere", "id": 9061913, "type": "User" }
[]
false
[]
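Two partial answers to #2538 above, both hedged: split slicing returns only part of a prepared split, and streaming (available in later `datasets` releases) avoids preparing the full dataset at all. The dataset name is just the one from the issue:

```python
from datasets import load_dataset

# Slice a split: only these rows end up in the returned Dataset
# (the full dataset is still downloaded and prepared once).
small = load_dataset("imdb", split="train[:100]")

# Streaming (later releases): nothing is prepared up front,
# examples are yielded on the fly.
streamed = load_dataset("imdb", split="train", streaming=True)
first_100 = list(streamed.take(100))
```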
927,472,659
2,537
Add Parquet loader + from_parquet and to_parquet
Continuation of #2247 I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`. As usual, the data are converted to arrow in a batched way to avoid loading everything in memory.
closed
https://github.com/huggingface/datasets/pull/2537
2021-06-22T17:28:23
2021-06-30T16:31:03
2021-06-30T16:30:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
927,338,639
2,536
Use `Audio` features for `AutomaticSpeechRecognition` task template
In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis, this is brittle as it doesn't port easily across different OSes. The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are.
closed
https://github.com/huggingface/datasets/issues/2536
2021-06-22T15:07:21
2022-06-01T17:18:16
2022-06-01T17:18:16
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
927,334,349
2,535
Improve Features docs
- Fix rendering and cross-references in Features docs - Add docstrings to Features methods
closed
https://github.com/huggingface/datasets/pull/2535
2021-06-22T15:03:27
2021-06-23T13:40:43
2021-06-23T13:40:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
927,201,435
2,534
Sync with transformers disabling NOTSET
Close #2528.
closed
https://github.com/huggingface/datasets/pull/2534
2021-06-22T12:54:21
2021-06-24T14:42:47
2021-06-24T14:42:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
927,193,264
2,533
Add task template for automatic speech recognition
This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription. Usage: ```python from datasets import load_dataset from datasets.tasks import AutomaticSpeechRecognition ds = load_dataset("timit_asr", split="train[:10]") # Dataset({ # features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], # num_rows: 10 # }) task = AutomaticSpeechRecognition(audio_file_column="file", transcription_column="text") ds.prepare_for_task(task) # Dataset({ # features: ['audio_file', 'transcription'], # num_rows: 10 # }) ```
closed
https://github.com/huggingface/datasets/pull/2533
2021-06-22T12:45:02
2021-06-23T16:14:46
2021-06-23T15:56:57
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
927,063,196
2,532
Tokenizer's normalization preprocessor causes misalignment in return_offsets_mapping for token classification task
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this Hugging Face example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instances in different languages, but unfortunately, [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`: ![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png) Without the try/catch block, it raises `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`; an example is shown here [(another colab notebook)](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing). It is clear that the normalizer is the process that breaks the alignment, as it can be observed that `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` returns 'コト'. One workaround is to include `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) with the name `udposTestDatasetWorkaround`. I guess similar logic should be included inside the tokenizer and the offsets_mapping generation process so that users don't need to include it in their code. But I don't understand the tokenizer code well enough to do this myself. p.s. **I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)** `get_dataset` is just a simple wrapper around `load_dataset` and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`
closed
https://github.com/huggingface/datasets/issues/2532
2021-06-22T10:08:18
2021-06-23T05:17:25
2021-06-23T05:17:25
{ "login": "cosmeowpawlitan", "id": 50871412, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
927,017,924
2,531
Fix dev version
The dev version that ends in `.dev0` should be greater than the current version. However, it happens that `1.8.0 > 1.8.0.dev0`, for example. Therefore we need to use `1.8.1.dev0` in this case. I updated the dev version to `1.8.1.dev0`, and I also added a comment about this in the release steps in setup.py.
closed
https://github.com/huggingface/datasets/pull/2531
2021-06-22T09:17:10
2021-06-22T09:47:10
2021-06-22T09:47:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
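A quick check of the version ordering that #2531 above describes, using `packaging` (the library that implements PEP 440 ordering for pip/setuptools):

```python
from packaging.version import Version

# A .dev0 pre-release sorts *below* the release it points to,
# which is why the dev version must bump the patch number.
assert Version("1.8.0.dev0") < Version("1.8.0")
assert Version("1.8.0") < Version("1.8.1.dev0")
```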
927,013,773
2,530
Fixed label parsing in the ProductReviews dataset
Fixed issue with parsing dataset labels.
closed
https://github.com/huggingface/datasets/pull/2530
2021-06-22T09:12:45
2021-06-22T12:55:20
2021-06-22T12:52:40
{ "login": "yavuzKomecoglu", "id": 5150963, "type": "User" }
[]
true
[]
926,378,812
2,529
Add summarization template
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template. Usage: ```python from datasets import load_dataset from datasets.tasks import Summarization ds = load_dataset("xsum", split="train") # Dataset({ # features: ['document', 'summary', 'id'], # num_rows: 204045 # }) summarization = Summarization(text_column="document", summary_column="summary") ds.prepare_for_task(summarization) # Dataset({ # features: ['text', 'summary'], # num_rows: 204045 # }) ```
closed
https://github.com/huggingface/datasets/pull/2529
2021-06-21T16:08:31
2021-06-23T14:22:11
2021-06-23T13:30:10
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
926,314,656
2,528
Logging cannot be set to NOTSET similar to transformers
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449) `disable=bool(logging.get_verbosity() == logging.NOTSET)` and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493) `not_verbose = bool(logger.getEffectiveLevel() > WARNING)` ## Steps to reproduce the bug ```python import datasets import logging datasets.logging.get_verbosity = lambda : logging.NOTSET datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ``` ## Expected results The code should download and load the dataset as normal without displaying progress bars ## Actual results ```ImportError Traceback (most recent call last) <ipython-input-4-aec65c0509c6> in <module> ----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 713 dataset=True, 714 return_resolved_file_path=True, --> 715 use_auth_token=use_auth_token, 716 ) 717 # Set the base path for downloads as the parent of the script location ~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) 350 file_path = hf_bucket_url(path, filename=name, dataset=False) 351 try: --> 352 local_path = cached_path(file_path, download_config=download_config) 353 except FileNotFoundError: 354 raise FileNotFoundError( ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 289 use_etag=download_config.use_etag, 290 max_retries=download_config.max_retries, --> 291 use_auth_token=download_config.use_auth_token, 292 ) 293 elif os.path.exists(url_or_filename): ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 668 headers=headers, 669 cookies=cookies, --> 670 max_retries=max_retries, 671 ) 672 ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries) 493 initial=resume_size, 494 desc="Downloading", --> 495 disable=not_verbose, 496 ) 497 for chunk in response.iter_content(chunk_size=1024): ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs) 217 total = self.total * unit_scale if self.total else self.total 218 self.container = self.status_printer( --> 219 self.fp, total, self.desc, self.ncols) 220 self.sp = self.display 221 ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols) 95 if IProgress is None: # #187 #451 #558 #872 96 raise ImportError( ---> 97 "IProgress not found. Please update jupyter and ipywidgets." 
98 " See https://ipywidgets.readthedocs.io/en/stable" 99 "/user_install.html") ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8 - Python version: 3.7.10 - PyArrow version: 3.0.0 I am running this code on Deepnote and which important to this issue **does not** support IPywidgets
closed
https://github.com/huggingface/datasets/issues/2528
2021-06-21T15:04:54
2021-06-24T14:42:47
2021-06-24T14:42:47
{ "login": "joshzwiebel", "id": 34662010, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
926,031,525
2,527
Replace bad `n>1M` size tag
Some datasets were still using the old `n>1M` tag, which has been replaced with tags such as `1M<n<10M`, etc. This led to unexpected results when searching for datasets bigger than 1M on the Hub, since only the ones with the tag `n>1M` were shown.
closed
https://github.com/huggingface/datasets/pull/2527
2021-06-21T09:42:35
2021-06-21T15:06:50
2021-06-21T15:06:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
925,929,228
2,526
Add COCO datasets
## Adding a Dataset - **Name:** COCO - **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset. - **Paper + website:** https://cocodataset.org/#home - **Data:** https://cocodataset.org/#download - **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2526
2021-06-21T07:48:32
2023-06-22T14:12:18
null
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
925,896,358
2,525
Use scikit-learn package rather than sklearn in setup.py
The sklearn package is a historical thing and should probably not be used by anyone; see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats. Note: this affects only TESTS_REQUIRE, so I guess it only concerns developers, not end users.
closed
https://github.com/huggingface/datasets/pull/2525
2021-06-21T07:04:25
2021-06-21T10:01:13
2021-06-21T08:57:33
{ "login": "lesteve", "id": 1680079, "type": "User" }
[]
true
[]
925,610,934
2,524
Raise FileNotFoundError in WindowsFileLock
Closes #2443
closed
https://github.com/huggingface/datasets/pull/2524
2021-06-20T14:25:11
2021-06-28T09:56:22
2021-06-28T08:47:39
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
925,421,008
2,523
Fr
__Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__
closed
https://github.com/huggingface/datasets/issues/2523
2021-06-19T15:56:32
2021-06-19T18:48:23
2021-06-19T18:48:23
{ "login": "aDrIaNo34500", "id": 71971234, "type": "User" }
[]
false
[]
925,334,379
2,522
Documentation Mistakes in Dataset: emotion
As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. For more detailed information, please refer to the paper. But when we view the data, there are only 6 emotions: anger, fear, joy, sadness, surprise, and trust.
closed
https://github.com/huggingface/datasets/issues/2522
2021-06-19T07:08:57
2023-01-02T12:04:58
2023-01-02T12:04:58
{ "login": "GDGauravDutta", "id": 62606251, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
925,030,685
2,521
Insert text classification template for Emotion dataset
This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset.
closed
https://github.com/huggingface/datasets/pull/2521
2021-06-18T15:56:19
2021-06-21T09:22:31
2021-06-21T09:22:31
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
925,015,004
2,520
Datasets with tricky task templates
I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for. ## Text classification * [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized. * [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary) which in principle requires two `TextClassification` templates which is not currently supported
closed
https://github.com/huggingface/datasets/issues/2520
2021-06-18T15:33:57
2023-07-20T13:20:32
2023-07-20T13:20:32
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "Dataset discussion", "color": "72f99f" } ]
false
[]
924,903,240
2,519
Improve performance of pandas arrow extractor
While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.
closed
https://github.com/huggingface/datasets/pull/2519
2021-06-18T13:24:41
2021-06-21T09:06:06
2021-06-21T09:06:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
924,654,100
2,518
Add task templates for tydiqa and xquad
This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. Notes: * I could not test the tydiqa implementation since I don't have enough disk space 😢. But I am confident the template works :) * there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested columns described in #2434
closed
https://github.com/huggingface/datasets/pull/2518
2021-06-18T08:06:34
2021-06-18T15:01:17
2021-06-18T14:50:33
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
924,643,345
2,517
Fix typo in MatthewsCorrelation class name
Close #2513.
closed
https://github.com/huggingface/datasets/pull/2517
2021-06-18T07:53:06
2021-06-18T08:43:55
2021-06-18T08:43:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
924,597,470
2,516
datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a custom Python class. Because of this I have to detach the custom step before saving and reattach it after restoring. I did this using the standard pickle `__getstate__` / `__setstate__` mechanism. I think it's correct but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts. The following reproduces the issue - most likely I'm missing something. A simulated tokeniser which can be pickled: ``` class CustomTokenizer: def __init__(self): self.state = "init" def __getstate__(self): print("__getstate__ called") out = self.__dict__.copy() self.state = "pickled" return out def __setstate__(self, d): print("__setstate__ called") self.__dict__ = d self.state = "restored" tokenizer = CustomTokenizer() ``` Test that it actually works - prints "__getstate__ called" and "__setstate__ called": ``` import pickle serialized = pickle.dumps(tokenizer) restored = pickle.loads(serialized) assert restored.state == "restored" ``` Simulate a function that tokenises examples; this is the function dataset.map will call: ``` def tokenize_function(examples): assert tokenizer.state == "restored" # this shouldn't fail but it does output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer return output ``` Use map to simulate tokenization: ``` import glob from datasets import load_dataset assert tokenizer.state == "restored" train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) tokenized_datasets = datasets.map( tokenize_function, batched=True, ) ``` What's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well?
--------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-22-a2aef4f74aaa> in <module> 8 tokenized_datasets = datasets.map( 9 tokenize_function, ---> 10 batched=True, 11 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 487 desc=desc, 488 ) --> 489 for k, dataset in self.items() 490 } 491 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 487 desc=desc, 488 ) --> 489 for k, dataset in self.items() 490 } 491 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1633 fn_kwargs=fn_kwargs, 1634 new_fingerprint=new_fingerprint, -> 1635 desc=desc, 1636 ) 1637 else: ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 184 } 185 # apply actual function --> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 188 # re-apply format to the output ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc) 1961 indices, 1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0, -> 1963 offset=offset, 1964 ) 1965 except NumExamplesMismatch: ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset 1854 processed_inputs = ( -> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1856 ) 1857 if update_data is None: <ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples) 1 def tokenize_function(examples): ----> 2 assert tokenizer.state == "restored" 3 tokenizer(examples) 4 return examples
open
https://github.com/huggingface/datasets/issues/2516
2021-06-18T06:47:26
2021-06-23T13:47:49
null
{ "login": "david-waterworth", "id": 5028974, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
924,435,447
2,515
CRD3 dataset card
This PR adds additional information to the CRD3 dataset card.
closed
https://github.com/huggingface/datasets/pull/2515
2021-06-18T00:24:07
2021-06-21T10:18:44
2021-06-21T10:18:44
{ "login": "wilsonyhlee", "id": 1937386, "type": "User" }
[]
true
[]
924,417,172
2,514
Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.** I find myself relying more and more on datasets just to do all the preprocessing. One thing, however: for removing duplicated rows, I couldn't find out how, and am always converting datasets to pandas to do that. **Describe the solution you'd like** Have a "remove duplicated rows" functionality. **Describe alternatives you've considered** Convert the dataset to pandas, remove duplicates, and convert back... **Additional context** No
open
https://github.com/huggingface/datasets/issues/2514
2021-06-17T23:35:38
2024-07-19T13:23:01
null
{ "login": "liuxinglan", "id": 16516583, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
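A sketch of the pandas round-trip that #2514 above describes as its current workaround; the dataset and column names are only examples:

```python
from datasets import Dataset, load_dataset

ds = load_dataset("imdb", split="train")

# Round-trip through pandas to drop duplicated rows (the workaround from the issue).
df = ds.to_pandas().drop_duplicates(subset=["text"])
deduplicated = Dataset.from_pandas(df, preserve_index=False)
```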
924,174,413
2,513
Corelation should be Correlation
https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66
closed
https://github.com/huggingface/datasets/issues/2513
2021-06-17T17:28:48
2021-06-18T08:43:55
2021-06-18T08:43:55
{ "login": "colbym-MM", "id": 71514164, "type": "User" }
[]
false
[]
924,069,353
2,512
seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict'
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric seqeval = load_metric("seqeval") seqeval.compute(predictions=[['A']], references=[['A']]) ``` ## Expected results The function computes a dict with metrics ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-69a57f5cf06f> in <module> 1 from datasets import load_dataset, load_metric 2 seqeval = load_metric("seqeval") ----> 3 seqeval.compute(predictions=[['A']], references=[['A']]) ~/p3/lib/python3.7/site-packages/datasets/metric.py in compute(self, *args, **kwargs) 396 references = self.data["references"] 397 with temp_seed(self.seed): --> 398 output = self._compute(predictions=predictions, references=references, **kwargs) 399 400 if self.buf_writer is not None: ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py in _compute(self, predictions, references, suffix) 95 96 def _compute(self, predictions, references, suffix=False): ---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True) 98 report.pop("macro avg") 99 report.pop("weighted avg") TypeError: classification_report() got an unexpected keyword argument 'output_dict' ``` ## Environment info sklearn=0.24 datasets=1.1.3
closed
https://github.com/huggingface/datasets/issues/2512
2021-06-17T15:36:02
2021-06-17T15:46:07
2021-06-17T15:46:07
{ "login": "avidale", "id": 8642136, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
923,762,133
2,511
Add C4
## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Should fix https://github.com/huggingface/datasets/issues/1710
closed
https://github.com/huggingface/datasets/issues/2511
2021-06-17T10:31:04
2021-07-05T12:36:58
2021-07-05T12:36:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
923,735,485
2,510
Add align_labels_with_mapping to DatasetDict
https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method. In this PR I also added `DatasetDict.align_labels_with_mapping`
closed
https://github.com/huggingface/datasets/pull/2510
2021-06-17T10:03:35
2021-06-17T10:45:25
2021-06-17T10:45:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
922,846,035
2,509
Fix fingerprint when moving cache dir
The fingerprint of a dataset changes if the cache directory is moved. I fixed that by setting the fingerprint to be the hash of: - the relative cache dir (dataset_name/version/config_id) - the requested split Close #2496 I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests to run on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255. We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could en up being very long.
closed
https://github.com/huggingface/datasets/pull/2509
2021-06-16T16:45:09
2021-06-21T15:05:04
2021-06-21T15:05:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
921,863,173
2,508
Load Image Classification Dataset from Local
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like "cifar10". **Describe alternatives you've considered** Implement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path) Write custom data loader logic **Additional context** We're training ViT on custom dataset
closed
https://github.com/huggingface/datasets/issues/2508
2021-06-15T22:43:33
2022-03-01T16:29:44
2022-03-01T16:29:44
{ "login": "Jacobsolawetz", "id": 8428198, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
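For #2508 above: more recent `datasets` releases ship an `imagefolder` builder that loads a folder-per-class layout directly; a sketch assuming a local `data/train/<class_name>/*.jpg` layout:

```python
from datasets import load_dataset

# Folder layout: data/train/<class_name>/xxx.jpg
# Class names are inferred from the folder names and stored as a ClassLabel column.
ds = load_dataset("imagefolder", data_dir="data/train", split="train")
print(ds.features)  # expected: an Image column plus a ClassLabel column
```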
921,441,962
2,507
Rearrange JSON field names to match passed features schema field names
This PR depends on PR #2453 (which must be merged first). Close #2366.
closed
https://github.com/huggingface/datasets/pull/2507
2021-06-15T14:10:02
2021-06-16T10:47:49
2021-06-16T10:47:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
921,435,598
2,506
Add course banner
This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too.
closed
https://github.com/huggingface/datasets/pull/2506
2021-06-15T14:03:54
2021-06-15T16:25:36
2021-06-15T16:25:35
{ "login": "sgugger", "id": 35901082, "type": "User" }
[]
true
[]
921,234,797
2,505
Make numpy arrow extractor faster
I changed the NumpyArrowExtractor to call to_numpy directly, to see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498. This could make the numpy/torch/tf/jax formatting faster.
closed
https://github.com/huggingface/datasets/pull/2505
2021-06-15T10:11:32
2021-06-28T09:53:39
2021-06-28T09:53:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
920,636,186
2,503
SubjQA wrong boolean values in entries
## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered subjective). However, `is_ques_subjective` seems to have wrong values in the entire dataset. For instance, in the example in the dataset card, we have: - "question_subj_level": 2 - "is_ques_subjective": false However, according to the description, the question should be subjective since the `question_subj_level` is below 4
open
https://github.com/huggingface/datasets/issues/2503
2021-06-14T17:42:46
2021-08-25T03:52:06
null
{ "login": "arnaudstiegler", "id": 26485052, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
920,623,572
2,502
JAX integration
Hi! I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow). It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects. ```python from datasets import Dataset d = Dataset.from_dict({"foo": [[0., 1., 2.]]}) d = d.with_format("jax") d[0] # {'foo': DeviceArray([0., 1., 2.], dtype=float32)} ``` A few details: - The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default - AFAIK it's not possible to do a full conversion from arrow data to jax data. We are doing arrow -> numpy -> jax but the numpy -> jax part doesn't do zero copy unfortunately (see [here](https://github.com/google/jax/issues/4486)) - the env var for disabling JAX is `USE_JAX`. However I noticed that in `transformers` it is `USE_FLAX`. This is not an issue though IMO I also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset. Since the `convert_to_python_objects` method became slow because it's the moment when pytorch, tf (and now jax) get imported, I fixed it by checking `sys.modules` to avoid unnecessary imports of pytorch, tf or jax. Close #2495
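A rough sketch of the `sys.modules` trick mentioned above (an illustration only; the helper name and structure are assumptions, not the actual code):

```python
import sys

def _maybe_jax_array(obj):
    # Only attempt the jax check if jax was already imported by the user,
    # so this helper never triggers a heavy import by itself.
    if "jax" not in sys.modules:
        return False
    import jax.numpy as jnp  # cheap at this point: jax is already loaded
    return isinstance(obj, jnp.ndarray)
```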
closed
https://github.com/huggingface/datasets/pull/2502
2021-06-14T17:24:23
2021-06-21T16:15:50
2021-06-21T16:15:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
920,579,634
2,501
Add Zenodo metadata file with license
This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`. Close #2472.
closed
https://github.com/huggingface/datasets/pull/2501
2021-06-14T16:28:12
2021-06-14T16:49:42
2021-06-14T16:49:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
920,471,411
2,500
Add load_dataset_builder
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself. TODOs: - [x] Add docstring and entry in the docs - [x] Add tests Closes #2484
closed
https://github.com/huggingface/datasets/pull/2500
2021-06-14T14:27:45
2025-06-20T18:07:24
2021-07-05T10:45:58
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
920,413,021
2,499
Python Programming Puzzles
## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scrolling through the data](https://github.com/microsoft/PythonProgrammingPuzzles/blob/main/problems/README.md)) - **Motivation:** Spans a large range of difficulty, problems, and domains. A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs. Note: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/2499
2021-06-14T13:27:18
2021-06-15T18:14:14
null
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
920,411,285
2,498
Improve torch formatting performance
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use HF trainer torch.distributed training approach on a single machine with 8 GPUs. The current performance is about 30% slower than NVidia optimized BERT [examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than off the shelf HF ecosystem. Perhaps, in the future, with @stas00 work on deepspeed integration, it could even be exceeded. **Describe the solution you'd like** Using profiling tools we've observed that appx. 25% of cumulative run time is spent on data loader next call. ![dataloader_next](https://user-images.githubusercontent.com/458335/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png) As you can observe most of the data loader next call is spent in HF datasets torch_formatter.py format_batch call. Digging a bit deeper into format_batch we can see the following profiler data: ![torch_formatter](https://user-images.githubusercontent.com/458335/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png) Once again, a lot of time is spent in pyarrow table conversion to pandas which seems like an intermediary step. Offline @lhoestq told me that this approach was, for some unknown reason, faster than direct to numpy conversion. **Describe alternatives you've considered** I am not familiar with pyarrow and have not yet considered the alternatives to the current approach. Most of the online advice around data loader performance improvements revolve around increasing number of workers, using pin memory for copying tensors from host device to gpus but we've already tried these avenues without much performance improvement. Weights & Biases dashboard for the pre-training task reports CPU utilization of ~ 10%, GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%.
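For readers who want to reproduce this kind of measurement, a minimal profiling sketch is below (an illustration only, not the exact setup behind the screenshots; `dataloader` stands in for the actual training DataLoader):

```python
import torch
from torch.profiler import ProfilerActivity, profile

# Profile a handful of data-loading steps to see where CPU time goes.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    for step, batch in enumerate(dataloader):  # `dataloader` is assumed to exist
        if step >= 10:
            break

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))
```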
open
https://github.com/huggingface/datasets/issues/2498
2021-06-14T13:25:24
2022-07-15T17:12:04
null
{ "login": "vblagoje", "id": 458335, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
920,250,382
2,497
Use default cast for sliced list arrays if pyarrow >= 4
From pyarrow version 4, it is supported to cast sliced lists. This PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4. In relation with PR #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
closed
https://github.com/huggingface/datasets/pull/2497
2021-06-14T10:02:47
2021-06-15T18:06:18
2021-06-14T14:24:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
920,216,314
2,496
Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map`
`Dataset.map` uses the dataset fingerprint (a hash) for caching. However, the fingerprint seems to change when someone moves the cache directory of the dataset. This is because it uses the default fingerprint generation: 1. the dataset path is used to get the fingerprint 2. the modification times of the arrow file are also used to get the fingerprint To fix that we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory.
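A minimal sketch of the proposed fingerprint computation (names and hashing details are assumptions for illustration, not the actual implementation):

```python
import hashlib
import os

def relative_fingerprint(cache_file: str, cache_dir: str, split: str) -> str:
    # e.g. "dataset_name/version/config_id/dataset.arrow" + the requested split,
    # so moving the whole cache directory does not change the hash.
    relative_path = os.path.relpath(cache_file, cache_dir)
    return hashlib.md5(f"{relative_path}-{split}".encode("utf-8")).hexdigest()
```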
closed
https://github.com/huggingface/datasets/issues/2496
2021-06-14T09:20:26
2021-06-21T15:05:03
2021-06-21T15:05:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
920,170,030
2,495
JAX formatting
We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well
closed
https://github.com/huggingface/datasets/issues/2495
2021-06-14T08:32:07
2021-06-21T16:15:49
2021-06-21T16:15:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
920,149,183
2,494
Improve docs on Enhancing performance
In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases: - How to make datasets the fastest - How to make datasets take the less RAM - How to make datasets take the less hard drive mem cc: @thomwolf
open
https://github.com/huggingface/datasets/issues/2494
2021-06-14T08:11:48
2025-06-28T18:55:38
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
919,833,281
2,493
add tensorflow-macos support
ref - https://github.com/huggingface/datasets/issues/2068
closed
https://github.com/huggingface/datasets/pull/2493
2021-06-13T16:20:08
2021-06-15T08:53:06
2021-06-15T08:53:06
{ "login": "slayerjain", "id": 12831254, "type": "User" }
[]
true
[]
919,718,102
2,492
Eduge
Hi, awesome folks behind Hugging Face! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
closed
https://github.com/huggingface/datasets/pull/2492
2021-06-13T05:10:59
2021-06-22T09:49:04
2021-06-16T10:41:46
{ "login": "enod", "id": 6023883, "type": "User" }
[]
true
[]
919,714,506
2,491
add eduge classification dataset
closed
https://github.com/huggingface/datasets/pull/2491
2021-06-13T04:37:01
2021-06-13T05:06:48
2021-06-13T05:06:38
{ "login": "enod", "id": 6023883, "type": "User" }
[]
true
[]
919,571,385
2,490
Allow latest pyarrow version
Allow latest pyarrow version, once that version 4.0.1 fixes the segfault bug introduced in version 4.0.0. Close #2489.
closed
https://github.com/huggingface/datasets/pull/2490
2021-06-12T14:17:34
2021-07-06T16:54:52
2021-06-14T07:53:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
919,569,749
2,489
Allow latest pyarrow version once segfault bug is fixed
As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568): - it was fixed on 3 May 2021 - version 4.0.1 was released on 19 May 2021 with the bug fix
closed
https://github.com/huggingface/datasets/issues/2489
2021-06-12T14:09:52
2021-06-14T07:53:23
2021-06-14T07:53:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
919,500,756
2,488
Set configurable downloaded datasets path
Part of #2480.
closed
https://github.com/huggingface/datasets/pull/2488
2021-06-12T09:09:03
2021-06-14T09:13:27
2021-06-14T08:29:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
919,452,407
2,487
Set configurable extracted datasets path
Part of #2480.
closed
https://github.com/huggingface/datasets/pull/2487
2021-06-12T05:47:29
2021-06-14T09:30:17
2021-06-14T09:02:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
919,174,898
2,486
Add Rico Dataset
Hi there! I want to add the Rico datasets for software engineering type data to y'all's awesome library. However, as I started coding, I've run into a few hiccups, so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib. 1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset? You can see the datasets available for Rico here: http://interactionmining.org/rico 2) As of right now, I have a semi-working version of the first dataset which has pairs of screenshots and hierarchies from Android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them, so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have the `datasets` lib not put everything into memory while it is processing the dataset? 2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image? 3) The hierarchies are JSON objects and, looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string; is this okay or should I be doing it differently? 4) One of the Rico datasets is a bunch of animations (GIFs); is there a `datasets` feature that I can put this type of data into or should I just return the path as a string? I appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: !
closed
https://github.com/huggingface/datasets/pull/2486
2021-06-11T20:17:41
2022-10-03T09:38:18
2022-10-03T09:38:18
{ "login": "ncoop57", "id": 7613470, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
919,099,218
2,485
Implement layered building
As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190): > My suggestion for this would be to have this enabled by default. > > Plus I don't know if there should be a dedicated issue for that, as it is another functionality. But I propose layered building rather than all at once. That is: > > 1. uncompress a handful of files via a generator, enough to generate one arrow file > 2. process arrow file 1 > 3. delete all the files that went in and aren't needed anymore. > > rinse and repeat. > > 1. This way much less disk space will be required - e.g. on JZ we won't be running into inode limitations, also it'd help with the collaborative hub training project > 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing > 3. It would already include deleting the temp files this issue is talking about > > I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders.
open
https://github.com/huggingface/datasets/issues/2485
2021-06-11T18:54:25
2021-06-11T18:54:25
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
919,092,635
2,484
Implement loading a dataset builder
As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```
closed
https://github.com/huggingface/datasets/issues/2484
2021-06-11T18:47:22
2021-07-05T10:45:57
2021-07-05T10:45:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
918,871,712
2,483
Use gc.collect only when needed to avoid slow downs
In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on Windows (see https://github.com/huggingface/datasets/pull/2482). However, calling gc.collect too often causes significant slowdowns (the CI run time doubled). So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset.
closed
https://github.com/huggingface/datasets/pull/2483
2021-06-11T15:09:30
2021-06-18T19:25:06
2021-06-11T15:31:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
918,846,027
2,482
Allow to use tqdm>=4.50.0
We used to have permission errors on Windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232)). They were due to open arrow files not being properly closed by pyarrow. Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6, gc.collect is called each time we no longer need an arrow file, to make sure that the files are closed. close https://github.com/huggingface/datasets/issues/2471 cc @lewtun
closed
https://github.com/huggingface/datasets/pull/2482
2021-06-11T14:49:21
2021-06-11T15:11:51
2021-06-11T15:11:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
918,680,168
2,481
Delete extracted files to save disk space
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space for the typical user.
closed
https://github.com/huggingface/datasets/issues/2481
2021-06-11T12:21:52
2021-07-19T09:08:18
2021-07-19T09:08:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
918,678,578
2,480
Set download/extracted paths configurable
As discussed with @stas00 and @lhoestq, making these paths configurable may allow users to overcome disk space limitations on different partitions/drives. TODO: - [x] Set configurable extracted datasets path: #2487 - [x] Set configurable downloaded datasets path: #2488 - [ ] Set configurable "incomplete" datasets path?
open
https://github.com/huggingface/datasets/issues/2480
2021-06-11T12:20:24
2021-06-15T14:23:49
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
918,672,431
2,479
❌ load_datasets ❌
closed
https://github.com/huggingface/datasets/pull/2479
2021-06-11T12:14:36
2021-06-11T14:46:25
2021-06-11T14:46:25
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
true
[]
918,507,510
2,478
Create release script
Create a script so that releases can be done automatically (as done in `transformers`).
open
https://github.com/huggingface/datasets/issues/2478
2021-06-11T09:38:02
2023-07-20T13:22:23
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
918,334,431
2,477
Fix docs custom stable version
Currently the docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
closed
https://github.com/huggingface/datasets/pull/2477
2021-06-11T07:26:03
2021-06-14T09:14:20
2021-06-14T08:20:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
917,686,662
2,476
Add TimeDial
Dataset: https://github.com/google-research-datasets/TimeDial To-Do: Update README.md and add YAML tags
closed
https://github.com/huggingface/datasets/pull/2476
2021-06-10T18:33:07
2021-07-30T12:57:54
2021-07-30T12:57:54
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
917,650,882
2,475
Issue in timit_asr database
## Describe the bug I am trying to load the timit_asr dataset; however, only the first record is shown (duplicated over all the rows). I am using the following line of code: dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10)) The above code results in the same sentence duplicated ten times. It also happens when I use the dataset viewer on Streamlit. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10)) data = dataset.to_pandas() ``` ## Expected results A table with different row information. ## Actual results The same sentence is duplicated over all the selected rows. ## Environment info - `datasets` version: 1.4.1 (also occurs in the latest version) - Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 1.15.3 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
closed
https://github.com/huggingface/datasets/issues/2475
2021-06-10T18:05:29
2021-06-13T08:13:50
2021-06-13T08:13:13
{ "login": "hrahamim", "id": 85702107, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
917,622,055
2,474
cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.** When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset with the _load_from_disk_ function, the data gets cached to the VM's disk: ```python from datasets import load_from_disk myPreprocessedData = load_from_disk("/content/gdrive/MyDrive/ASR_data/myPreprocessedData") ``` I know that caching on Google Drive could slow down learning. But at least it would run. **Describe the solution you'd like** Add a cache_dir parameter to the load_from_disk function. **Describe alternatives you've considered** It looks like you could write a custom loading script for the load_dataset function. But this seems to be much too complex for my use case. Is there perhaps a template here that uses the load_from_disk function?
closed
https://github.com/huggingface/datasets/issues/2474
2021-06-10T17:39:36
2022-02-16T14:55:01
2022-02-16T14:55:00
{ "login": "chbensch", "id": 7063207, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
917,538,629
2,473
Add Disfl-QA
Dataset: https://github.com/google-research-datasets/disfl-qa To-Do: Update README.md and add YAML tags
closed
https://github.com/huggingface/datasets/pull/2473
2021-06-10T16:18:00
2021-07-29T11:56:19
2021-07-29T11:56:18
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
917,463,821
2,472
Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published". I have contacted Zenodo support to fix this issue. TODO: - [x] Check with Zenodo to fix the issue - [x] Check BibTeX entry is right
closed
https://github.com/huggingface/datasets/issues/2472
2021-06-10T15:15:46
2021-06-14T16:49:42
2021-06-14T16:49:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
917,067,165
2,471
Fix PermissionError on Windows when using tqdm >=4.50.0
See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111 ``` PermissionError: [WinError 32] The process cannot access the file because it is being used by another process ```
closed
https://github.com/huggingface/datasets/issues/2471
2021-06-10T08:31:49
2021-06-11T15:11:50
2021-06-11T15:11:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
916,724,260
2,470
Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`. I believe I've had cases where `num_proc` > 1 worked before, but now it seems either inconsistent or dependent on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated; I'm happy to provide more info if it would help us diagnose. ## Steps to reproduce the bug ```python # this function will be applied with map() def tokenize_function(examples): return tokenizer( examples["text"], padding=PaddingStrategy.DO_NOT_PAD, truncation=True, ) # data_files is a Dict[str, str] mapping name -> path datasets = load_dataset("text", data_files={...}) # this is where the error happens if num_proc = 16, # but is fine if num_proc = 1 tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=num_workers, ) ``` ## Expected results The `map()` function succeeds with `num_proc` > 1. ## Actual results ![image](https://user-images.githubusercontent.com/1170062/121404271-a6cc5200-c910-11eb-8e27-5c893bd04042.png) ![image](https://user-images.githubusercontent.com/1170062/121404362-be0b3f80-c910-11eb-9117-658943029aef.png) ## Environment info - `datasets` version: 1.6.2 - Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, but I think N/A for this issue - Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
closed
https://github.com/huggingface/datasets/issues/2470
2021-06-09T22:40:22
2021-07-01T09:34:54
2021-07-01T09:11:13
{ "login": "mbforbes", "id": 1170062, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
916,440,418
2,469
Bump tqdm version
closed
https://github.com/huggingface/datasets/pull/2469
2021-06-09T17:24:40
2021-06-11T15:03:42
2021-06-11T15:03:36
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
916,427,320
2,468
Implement ClassLabel encoding in JSON loader
Close #2365.
closed
https://github.com/huggingface/datasets/pull/2468
2021-06-09T17:08:54
2021-06-28T15:39:54
2021-06-28T15:05:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
915,914,098
2,466
change udpos features structure
The structure is changed such that each example is a sentence. The change is done for issues #2061 and #2444. Close #2061, close #2444.
closed
https://github.com/huggingface/datasets/pull/2466
2021-06-09T08:03:31
2021-06-18T11:55:09
2021-06-16T10:41:37
{ "login": "cosmeowpawlitan", "id": 50871412, "type": "User" }
[]
true
[]
915,525,071
2,465
adding masahaner dataset
Adding the Masakhane NER dataset: https://github.com/masakhane-io/masakhane-ner @lhoestq, can you please review?
closed
https://github.com/huggingface/datasets/pull/2465
2021-06-08T21:20:25
2021-06-14T14:59:05
2021-06-14T14:59:05
{ "login": "dadelani", "id": 23586676, "type": "User" }
[]
true
[]
915,485,601
2,464
fix: adjusting indexing for the labels.
The label indices were mismatching the actual ones used in the dataset. Specifically, `0` is used for `SUPPORTS` and `1` is used for `REFUTES`. After this change, the `README.md` now reflects the content of `dataset_infos.json`. Signed-off-by: Matteo Manica <drugilsberg@gmail.com>
closed
https://github.com/huggingface/datasets/pull/2464
2021-06-08T20:47:25
2021-06-09T10:15:46
2021-06-09T09:10:28
{ "login": "drugilsberg", "id": 5406908, "type": "User" }
[]
true
[]
915,454,788
2,463
Fix proto_qa download link
Fixes #2459 Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.
closed
https://github.com/huggingface/datasets/pull/2463
2021-06-08T20:23:16
2021-06-10T12:49:56
2021-06-10T08:31:10
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
915,384,613
2,462
Merge DatasetDict and Dataset
As discussed in #2424 and #2437 (please see there for detailed conversation): - It would be desirable to improve UX with respect to the confusion between DatasetDict and Dataset. - The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users. - A user expects a "Dataset" (whether it contains multiple splits or a single one) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user. Here is a proposal for discussion and refinement (and potential abandonment if it's not good enough): - let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other - let's disallow the use of integers in split names (probably not a very big breaking change) - when you index with integers you access the examples progressively, one split after the other (in a deterministic order) - when you index with strings/split name you have the same behavior as now (full backward compat) - let's then also have all the methods of a Dataset on the DatasetDict The end goal would be to merge both the Dataset and DatasetDict objects into a single object that would be (pretty much totally) backward compatible with both. There are a few things that we could discuss if we want to merge Dataset and DatasetDict: 1. what happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature ``` from datasets import load_dataset dataset = load_dataset(...) dataset["train"] dataset["input_ids"] ``` 2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss. Moreover regarding your points: - integers are already not allowed as split names - it's definitely doable to have all the methods. Maybe some of them like train_test_split that is currently only available for Dataset can be tweaked to work for a split dataset cc: @thomwolf @lhoestq
open
https://github.com/huggingface/datasets/issues/2462
2021-06-08T19:22:04
2023-08-16T09:34:34
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
915,286,150
2,461
Support sliced list arrays in cast
There is this issue in pyarrow: ```python import pyarrow as pa arr = pa.array([[i * 10] for i in range(4)]) arr.cast(pa.list_(pa.int32())) # works arr = arr.slice(1) arr.cast(pa.list_(pa.int32())) # fails # ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented") ``` However, in `Dataset.cast` we slice tables to cast their types (since casting is memory intensive), so we have the same issue. Because of this it is currently not possible to cast a Dataset with a Sequence feature type (unless the table is small enough not to be sliced). In this PR I fixed this by resetting the offset of `pyarrow.ListArray` arrays to zero in the table before casting. I used the `pyarrow.compute.subtract` function to update the offsets of the ListArray. cc @abhi1thakur @SBrandeis
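A minimal sketch of the offset-reset idea (an illustration, not the code in this PR; whether `.offsets`/`.values` fully account for slicing in a given pyarrow version is an assumption):

```python
import pyarrow as pa
import pyarrow.compute as pc

def reset_list_array_offset(arr: pa.ListArray) -> pa.ListArray:
    # Rebuild the ListArray with offsets shifted back to zero so a later cast
    # no longer hits the "non-zero offset" limitation.
    start = arr.offsets[0].as_py()
    new_offsets = pc.subtract(arr.offsets, start).cast(pa.int32())
    new_values = arr.values.slice(start)
    return pa.ListArray.from_arrays(new_offsets, new_values)

arr = pa.array([[i * 10] for i in range(4)]).slice(1)
reset_list_array_offset(arr).cast(pa.list_(pa.int32()))  # no longer raises
```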
closed
https://github.com/huggingface/datasets/pull/2461
2021-06-08T17:38:47
2021-06-08T17:56:24
2021-06-08T17:56:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
915,268,536
2,460
Revert default in-memory for small datasets
Close #2458
closed
https://github.com/huggingface/datasets/pull/2460
2021-06-08T17:14:23
2021-06-08T18:04:14
2021-06-08T17:55:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
915,222,015
2,459
`Proto_qa` hosting seems to be broken
## Describe the bug The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now. @zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("proto_qa") ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset use_auth_token=use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators train_fpath = dl_manager.download(_URLs[self.config.name]["train"]) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download num_proc=download_config.num_proc, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl ```
closed
https://github.com/huggingface/datasets/issues/2459
2021-06-08T16:16:32
2021-06-10T08:31:09
2021-06-10T08:31:09
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]