| column | dtype | range |
|---|---|---|
| id | int64 | 599M – 3.29B |
| url | string | length 58 – 61 |
| html_url | string | length 46 – 51 |
| number | int64 | 1 – 7.72k |
| title | string | length 1 – 290 |
| state | string | 2 values |
| comments | int64 | 0 – 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| user_login | string | length 3 – 26 |
| labels | list | length 0 – 4 |
| body | string | length 0 – 228k |
| is_pull_request | bool | 2 classes |
949,447,104
https://api.github.com/repos/huggingface/datasets/issues/2689
https://github.com/huggingface/datasets/issues/2689
2,689
cannot save the dataset to disk after rename_column
closed
5
2021-07-21T08:13:40
2025-02-11T23:23:17
2021-07-21T13:11:04
PaulLerner
[ "bug" ]
## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from_disk In [5]: dataset=Dataset.from_dict({'foo': [0]}) In [7]: dataset.save_to_disk('foo') In [8]: dataset=load_from_disk('foo') In [10]: dataset=dataset.rename_column('foo', 'bar') In [11]: dataset.save_to_disk('foo') --------------------------------------------------------------------------- PermissionError Traceback (most recent call last) <ipython-input-11-a3bc0d4fc339> in <module> ----> 1 dataset.save_to_disk('foo') /mnt/beegfs/projects/meerqat/anaconda3/envs/meerqat/lib/python3.7/site-packages/datasets/arrow_dataset.py in save_to_disk(self, dataset_path , fs) 597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths: 598 raise PermissionError( --> 599 f"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself." 600 ) 601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths: PermissionError: Tried to overwrite foo/dataset.arrow but a dataset can't overwrite itself. ``` N. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. starting from `In [8]`) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.10 - PyArrow version: 3.0.0
false
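The `PermissionError` in #2689 above comes from a guard that refuses to save a dataset on top of the Arrow file it is currently memory-mapping. A minimal sketch of that check (a simplified, hypothetical reconstruction, not the library's exact code):

```python
from pathlib import Path

def check_not_self_overwrite(dataset_path: str, cache_files: list) -> None:
    """Refuse to save a dataset on top of the Arrow file it is memory-mapping."""
    target = Path(dataset_path, "dataset.arrow").absolute()
    cached = {Path(f).absolute() for f in cache_files}
    if target in cached:
        raise PermissionError(
            f"Tried to overwrite {target} but a dataset can't overwrite itself."
        )

# Saving to a fresh directory avoids the collision:
check_not_self_overwrite("bar", ["foo/dataset.arrow"])  # no error
```

The practical workaround for the bug report is therefore to pass a different target directory to `save_to_disk` than the one the dataset was loaded from.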
949,182,074
https://api.github.com/repos/huggingface/datasets/issues/2688
https://github.com/huggingface/datasets/issues/2688
2,688
hebrew language codes he and iw should be treated as aliases
closed
2
2021-07-20T23:13:52
2021-07-21T16:34:53
2021-07-21T16:34:53
eyaler
[ "bug" ]
https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (`he`) because it uses the older language code `iw`, preventing discoverability.
false
948,890,481
https://api.github.com/repos/huggingface/datasets/issues/2687
https://github.com/huggingface/datasets/pull/2687
2,687
Minor documentation fix
closed
0
2021-07-20T17:43:23
2021-07-21T13:04:55
2021-07-21T13:04:55
slowwavesleep
[]
Currently, the [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error: the link to the `matinf` dataset in the [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad` instead. This PR fixes that.
true
948,811,669
https://api.github.com/repos/huggingface/datasets/issues/2686
https://github.com/huggingface/datasets/pull/2686
2,686
Fix bad config ids that name cache directories
closed
0
2021-07-20T16:00:45
2021-07-20T16:27:15
2021-07-20T16:27:15
lhoestq
[]
`data_dir=None` was considered a dataset config parameter, hence creating a special config_id for every dataset being loaded. Since the config_id is used to name the cache directories, this led to datasets being regenerated for users. I fixed this by ignoring the value of `data_dir` when it's `None` when computing the config_id. I also added a test to make sure the cache directories are not unexpectedly renamed in the future. Fix https://github.com/huggingface/datasets/issues/2683
true
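The fix in #2686 above can be illustrated with a small sketch of how a config id might be derived from config kwargs (a hypothetical helper, not the actual `datasets` internals): drop `data_dir` when it is `None` before hashing, so default loads keep the plain config id and hence the same cache directory.

```python
import hashlib
import json

def make_config_id(config_name: str, config_kwargs: dict) -> str:
    # Ignore data_dir when it is None so that default loads keep the
    # plain config id (and thus the same cache directory as before).
    kwargs = {k: v for k, v in config_kwargs.items()
              if not (k == "data_dir" and v is None)}
    if not kwargs:
        return config_name
    digest = hashlib.sha256(
        json.dumps(kwargs, sort_keys=True).encode()
    ).hexdigest()[:16]
    return f"{config_name}-{digest}"
```

With this rule, `make_config_id("en", {"data_dir": None})` returns plain `"en"`, while any real kwarg still yields a suffixed id.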
948,791,572
https://api.github.com/repos/huggingface/datasets/issues/2685
https://github.com/huggingface/datasets/pull/2685
2,685
Fix Blog Authorship Corpus dataset
closed
3
2021-07-20T15:44:50
2021-07-21T13:11:58
2021-07-21T13:11:58
albertvillanova
[]
This PR: - Updates the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError` - Fixes the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising a `UnicodeDecodeError` for some files. Close #2679.
true
948,771,753
https://api.github.com/repos/huggingface/datasets/issues/2684
https://github.com/huggingface/datasets/pull/2684
2,684
Print absolute local paths in load_dataset error messages
closed
0
2021-07-20T15:28:28
2021-07-22T20:48:19
2021-07-22T14:01:10
mariosasko
[]
Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223
true
948,721,379
https://api.github.com/repos/huggingface/datasets/issues/2683
https://github.com/huggingface/datasets/issues/2683
2,683
Cache directories changed due to recent changes in how config kwargs are handled
closed
0
2021-07-20T14:37:57
2021-07-20T16:27:15
2021-07-20T16:27:15
lhoestq
[]
Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example: ```python from datasets import load_dataset_builder c4_builder = load_dataset_builder("c4", "en") print(c4_builder.cache_dir) # /Users/quentinlhoest/.cache/huggingface/datasets/c4/en-174d3b7155eb68db/0.0.0/... # instead of # /Users/quentinlhoest/.cache/huggingface/datasets/c4/en/0.0.0/... ``` This issue could be annoying since it silently ignores old cache directories for users and regenerates datasets. cc @stas00 this is what you experienced a few days ago
false
948,713,137
https://api.github.com/repos/huggingface/datasets/issues/2682
https://github.com/huggingface/datasets/pull/2682
2,682
Fix c4 expected files
closed
0
2021-07-20T14:29:31
2021-07-20T14:38:11
2021-07-20T14:38:10
lhoestq
[]
Some files were not registered in the list of expected files to download. Fix https://github.com/huggingface/datasets/issues/2677
true
948,708,645
https://api.github.com/repos/huggingface/datasets/issues/2681
https://github.com/huggingface/datasets/issues/2681
2,681
5 duplicate datasets
closed
2
2021-07-20T14:25:00
2021-07-20T15:44:17
2021-07-20T15:44:17
severo
[ "bug" ]
## Describe the bug In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are: - https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch <img width="838" alt="Capture d’écran 2021-07-20 à 16 33 58" src="https://user-images.githubusercontent.com/1676121/126342757-4625522a-f788-41a3-bd1f-2a8b9817bbf5.png"> - https://paperswithcode.com/dataset/squad -> https://huggingface.co/datasets/squad and https://huggingface.co/datasets/squad_v2 - https://paperswithcode.com/dataset/narrativeqa -> https://huggingface.co/datasets/narrativeqa and https://huggingface.co/datasets/narrativeqa_manual - https://paperswithcode.com/dataset/hate-speech-and-offensive-language -> https://huggingface.co/datasets/hate_offensive and https://huggingface.co/datasets/hate_speech_offensive - https://paperswithcode.com/dataset/newsph-nli -> https://huggingface.co/datasets/newsph and https://huggingface.co/datasets/newsph_nli Possible solutions: - don't fix (it works) - for each pair of duplicate datasets, remove one, and create an alias to the other. ## Steps to reproduce the bug Visit the Paperswithcode links, and look at the "Dataset Loaders" section ## Expected results There should only be one reference to a Hugging Face dataset loader ## Actual results Two Hugging Face dataset loaders
false
948,649,716
https://api.github.com/repos/huggingface/datasets/issues/2680
https://github.com/huggingface/datasets/pull/2680
2,680
feat: 🎸 add paperswithcode id for qasper dataset
closed
0
2021-07-20T13:22:29
2021-07-20T14:04:10
2021-07-20T14:04:10
severo
[]
The reverse reference exists on paperswithcode: https://paperswithcode.com/dataset/qasper
true
948,506,638
https://api.github.com/repos/huggingface/datasets/issues/2679
https://github.com/huggingface/datasets/issues/2679
2,679
Cannot load the blog_authorship_corpus due to codec errors
closed
3
2021-07-20T10:13:20
2021-07-21T17:02:21
2021-07-21T13:11:58
izaskr
[ "bug" ]
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error similar to the one below was raised for (what seems like) every XML file. /home/izaskr/.cache/huggingface/datasets/downloads/extracted/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a/blogs/788358.male.24.Arts.Libra.xml cannot be loaded. Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/load.py", line 856, in load_dataset builder_instance.download_and_prepare( File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 583, in download_and_prepare self._download_and_prepare( File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 671, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] ## Environment info <!-- You can run the command 
`datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 4.0.1
false
948,471,222
https://api.github.com/repos/huggingface/datasets/issues/2678
https://github.com/huggingface/datasets/issues/2678
2,678
Import Error in Kaggle notebook
closed
4
2021-07-20T09:28:38
2021-07-21T13:59:26
2021-07-21T13:03:02
prikmm
[ "bug" ]
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-input-9-652e886d387f> in <module> ----> 1 import datasets /opt/conda/lib/python3.7/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in <module> 36 import pandas as pd 37 import pyarrow as pa ---> 38 import pyarrow.compute as pc 39 from multiprocess import Pool, RLock 40 from tqdm.auto import tqdm /opt/conda/lib/python3.7/site-packages/pyarrow/compute.py in <module> 16 # under the License. 17 ---> 18 from pyarrow._compute import ( # noqa 19 Function, 20 FunctionOptions, ImportError: /opt/conda/lib/python3.7/site-packages/pyarrow/_compute.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5arrow7compute15KernelSignature8ToStringEv ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Kaggle - Python version: 3.7.10 - PyArrow version: 4.0.1
false
948,429,788
https://api.github.com/repos/huggingface/datasets/issues/2677
https://github.com/huggingface/datasets/issues/2677
2,677
Error when downloading C4
closed
3
2021-07-20T08:37:30
2021-07-20T14:41:31
2021-07-20T14:38:10
Aktsvigun
[ "bug" ]
Hi, I am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug or do I have some configurations missing on my server? Thanks! <img width="1014" alt="Снимок экрана 2021-07-20 в 11 37 17" src="https://user-images.githubusercontent.com/36672861/126289448-6e0db402-5f3f-485a-bf74-eb6e0271fc25.png">
false
947,734,909
https://api.github.com/repos/huggingface/datasets/issues/2676
https://github.com/huggingface/datasets/pull/2676
2,676
Increase json reader block_size automatically
closed
0
2021-07-19T14:51:14
2021-07-19T17:51:39
2021-07-19T17:51:38
lhoestq
[]
Currently some files can't be read with the default parameters of the JSON lines reader. For example this one: https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz raises a pyarrow error: ```python ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` The block size that is used is the default one by pyarrow (related to this [jira issue](https://issues.apache.org/jira/browse/ARROW-9612)). To fix this issue I changed the block_size to increase automatically if there is a straddling issue when parsing a batch of json lines. By default the value is `chunksize // 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. The block_size is then reset for each file. cc @thomwolf @albertvillanova
true
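The retry strategy described in #2676 above (start with a small block size to keep multithreading effective, then double it whenever a JSON object straddles two block boundaries) can be sketched network-free. `StraddlingError` and `try_read` are stand-ins for pyarrow's `ArrowInvalid` error and its JSON batch reader, not the actual API:

```python
class StraddlingError(Exception):
    """Stand-in for pyarrow's 'straddling object' ArrowInvalid error."""

def read_with_growing_block_size(try_read, chunksize: int):
    # Start at chunksize // 32 to leverage multithreading, with a floor,
    # and double the block size each time a straddling issue occurs.
    block_size = max(chunksize // 32, 16 << 10)
    while True:
        try:
            return try_read(block_size)
        except StraddlingError:
            block_size *= 2

def fake_reader(block_size):
    # Pretend one JSON object only fits once blocks reach 1 MiB.
    if block_size < 1 << 20:
        raise StraddlingError
    return block_size
```

Per the PR description, the block size would be reset to the initial value for each new file.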
947,657,732
https://api.github.com/repos/huggingface/datasets/issues/2675
https://github.com/huggingface/datasets/pull/2675
2,675
Parallelize ETag requests
closed
0
2021-07-19T13:30:42
2021-07-19T19:33:25
2021-07-19T19:33:25
lhoestq
[]
Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed. In this PR I made the ETag requests parallel using multithreading. There is also a tqdm progress bar that shows up if there are more than 16 data files.
true
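The parallelization in #2675 above boils down to fanning the per-file HEAD requests out over a thread pool. A minimal sketch with the network call injected so it stays offline (`fetch_etag` is a hypothetical callable, not the library's function name):

```python
from concurrent.futures import ThreadPoolExecutor

def get_etags(urls, fetch_etag, max_workers=16):
    # fetch_etag would issue one HTTP HEAD request per URL in the real
    # code; pool.map preserves the input order of the results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_etag, urls))
```

Because ETag requests are I/O-bound, threads (rather than processes) are the natural fit here.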
947,338,202
https://api.github.com/repos/huggingface/datasets/issues/2674
https://github.com/huggingface/datasets/pull/2674
2,674
Fix sacrebleu parameter name
closed
0
2021-07-19T07:07:26
2021-07-19T08:07:03
2021-07-19T08:07:03
albertvillanova
[]
DONE: - Fix parameter name: `smooth` to `smooth_method`. - Improve kwargs description. - Align docs on using a metric. - Add example of passing additional arguments in using metrics. Related to #2669.
true
947,300,008
https://api.github.com/repos/huggingface/datasets/issues/2673
https://github.com/huggingface/datasets/pull/2673
2,673
Fix potential DuplicatedKeysError in SQuAD
closed
0
2021-07-19T06:08:00
2021-07-19T07:08:03
2021-07-19T07:08:03
albertvillanova
[]
DONE: - Fix potential DuplicatedKeysError by ensuring keys are unique. - Align the examples in the docs with the SQuAD code. We should promote it as a good practice that keys are programmatically generated as unique, instead of read from data (which might not be unique).
true
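The practice recommended in #2673 above (derive keys programmatically instead of reading them from the data) can be sketched with a generator in the shape of a loading script's `_generate_examples` (a hypothetical simplification, not the SQuAD script itself):

```python
def generate_examples(rows):
    # Derive the key from the running index rather than from the data,
    # so duplicate ids in the source can never raise DuplicatedKeysError.
    for key, row in enumerate(rows):
        yield key, row
```

Even if two rows carry the same `id` field, the yielded keys stay unique.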
947,294,605
https://api.github.com/repos/huggingface/datasets/issues/2672
https://github.com/huggingface/datasets/pull/2672
2,672
Fix potential DuplicatedKeysError in LibriSpeech
closed
0
2021-07-19T06:00:49
2021-07-19T06:28:57
2021-07-19T06:28:56
albertvillanova
[]
DONE: - Fix unnecessary path join. - Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote it as a good practice that keys are programmatically generated as unique, instead of read from data (which might not be unique).
true
947,273,875
https://api.github.com/repos/huggingface/datasets/issues/2671
https://github.com/huggingface/datasets/pull/2671
2,671
Mesinesp development and training data sets have been added.
closed
1
2021-07-19T05:14:38
2021-07-19T07:32:28
2021-07-19T06:45:50
aslihanuysall
[]
https://zenodo.org/search?page=1&size=20&q=mesinesp. Mesinesp contains medical semantically-indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms. The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records. The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) training set has a total of 369,368 records.
true
947,120,709
https://api.github.com/repos/huggingface/datasets/issues/2670
https://github.com/huggingface/datasets/issues/2670
2,670
Using sharding to parallelize indexing
open
0
2021-07-18T21:26:26
2021-10-07T13:33:25
null
ggdupont
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Creating an Elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creations collide). **Describe the solution you'd like** When working on dataset shards, if an index already exists, its mapping should be checked and, if compatible, the indexing process should continue with the shard data. Additionally, at the end of the process, the `_indexes` dict should be sent back to the original dataset object (from which the shards were created) to allow using the index for later filtering on the whole dataset. **Describe alternatives you've considered** Each dataset shard could create independent partial indices. Then, at the whole-dataset level, all indices should be referenced in the `_indexes` dict and used in querying through `get_nearest_examples()`. The drawback is that the scores would be computed independently on the partial indices, leading to inconsistent values for most scoring based on corpus-level statistics (tf-idf, BM25). **Additional context** The objective is to parallelize index creation to speed up the process (i.e. putting more load on the ES server, which is fine since it can handle heavy loads), while later enabling search on the whole dataset.
false
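The merge step proposed in #2670 above, folding each shard's partial `_indexes`-style dict back into the parent dataset, can be sketched as a plain dict merge (hypothetical helper; the real `_indexes` values would be index objects, represented here opaquely):

```python
def merge_shard_indexes(parent_indexes: dict, shard_indexes: list) -> dict:
    # Fold each shard's partial `_indexes`-style dict back into the
    # parent, keeping the first registration for any duplicated name.
    merged = dict(parent_indexes)
    for indexes in shard_indexes:
        for name, index in indexes.items():
            merged.setdefault(name, index)
    return merged
```

As the issue notes, this only solves bookkeeping; score consistency across partial indices (BM25, tf-idf) is a separate problem.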
946,982,998
https://api.github.com/repos/huggingface/datasets/issues/2669
https://github.com/huggingface/datasets/issues/2669
2,669
Metric kwargs are not passed to underlying external metric f1_score
closed
2
2021-07-18T08:32:31
2021-07-18T18:36:05
2021-07-18T11:19:04
BramVanroy
[ "bug" ]
## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so. ## Steps to reproduce the bug ```python import datasets f1 = datasets.load_metric("f1", keep_in_memory=True, average="min") f1.add_batch(predictions=[0,2,3], references=[1, 2, 3]) f1.compute() ``` ## Expected results No error, because `average="min"` should be passed correctly to f1_score in sklearn. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute "f1": f1_score( File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score return fbeta_score(y_true, y_pred, beta=1, labels=labels, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score _, _, f, _ = precision_recall_fscore_support(y_true, y_pred, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File 
"C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support labels = _check_set_wise_labels(y_true, y_pred, average, labels, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels raise ValueError("Target is %s but average='binary'. Please " ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted']. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2 - PyArrow version: 4.0.1
false
946,867,622
https://api.github.com/repos/huggingface/datasets/issues/2668
https://github.com/huggingface/datasets/pull/2668
2,668
Add Russian SuperGLUE
closed
2
2021-07-17T17:41:28
2021-07-29T11:50:31
2021-07-29T11:50:31
slowwavesleep
[]
Hi, This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.
true
946,861,908
https://api.github.com/repos/huggingface/datasets/issues/2667
https://github.com/huggingface/datasets/pull/2667
2,667
Use tqdm from tqdm_utils
closed
2
2021-07-17T17:06:35
2021-07-19T17:39:10
2021-07-19T17:32:00
mariosasko
[]
This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, there is no easy way to disable progress bars in a multiprocess setting on Windows (patching logging with `datasets.utils.logging.get_verbosity = lambda: datasets.utils.logging.NOTSET` doesn't seem to work either), so adding support for this is a future goal. Additionally, this PR adds a unit ("ba" for batches) to the bar printed by `Dataset.to_json` (this change is motivated by https://github.com/huggingface/datasets/issues/2657).
true
946,825,140
https://api.github.com/repos/huggingface/datasets/issues/2666
https://github.com/huggingface/datasets/pull/2666
2,666
Adds CodeClippy dataset [WIP]
closed
2
2021-07-17T13:32:04
2023-07-26T23:06:01
2022-10-03T09:37:35
arampacha
[ "dataset contribution" ]
CodeClippy is an open-source code dataset scraped from GitHub during the flax-jax-community-week https://the-eye.eu/public/AI/training_data/code_clippy_data/
true
946,822,036
https://api.github.com/repos/huggingface/datasets/issues/2665
https://github.com/huggingface/datasets/pull/2665
2,665
Adds APPS dataset to the hub [WIP]
closed
1
2021-07-17T13:13:17
2022-10-03T09:38:10
2022-10-03T09:38:10
arampacha
[ "dataset contribution" ]
A loading script for [APPS dataset](https://github.com/hendrycks/apps)
true
946,552,273
https://api.github.com/repos/huggingface/datasets/issues/2663
https://github.com/huggingface/datasets/issues/2663
2,663
[`to_json`] add multi-proc sharding support
closed
2
2021-07-16T19:41:50
2021-09-13T13:56:37
2021-09-13T13:56:37
stas00
[ "enhancement" ]
As discussed on slack, it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much, much faster, but the tqdm bars all overwrite each other, so it's hard to make sense of the progress. If possible, this multi-proc support could ideally be implemented internally in `to_json` via a `num_proc` argument. I guess `num_proc` would be the number of shards? I think the user will need to use this feature wisely, since too many processes writing to, say, a conventional hard drive is likely to be slower than one process. I'm not sure whether the user or `datasets` should be responsible for concatenating the shards at the end; either way works for my needs. The code I was using: ``` from multiprocessing import cpu_count, Process, Queue [...] filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count()) DATASET_NAME = "oscar" SHARDS = 10 def process_shard(idx): print(f"Sharding {idx}") ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True) # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling print(f"Saving {DATASET_NAME}-{idx}.jsonl") ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False) queue = Queue() processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)] for p in processes: p.start() for p in processes: p.join() ``` Thank you! @lhoestq
false
946,470,815
https://api.github.com/repos/huggingface/datasets/issues/2662
https://github.com/huggingface/datasets/pull/2662
2,662
Load Dataset from the Hub (NO DATASET SCRIPT)
closed
5
2021-07-16T17:21:58
2021-08-25T14:53:01
2021-08-25T14:18:08
lhoestq
[]
## Load the data from any Dataset repository on the Hub This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script. As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. Here is an example with the `allenai/c4` repository that contains a lot of compressed json lines files: ```python from datasets import load_dataset data_files = {"train": "en/c4-train.*.json.gz"} c4 = load_dataset("allenai/c4", data_files=data_files, split="train", streaming=True) print(c4.n_shards) # 1024 print(next(iter(c4))) # {'text': 'Beginners BBQ Class Takin...'} ``` By default it loads all the files, but as shown in the example you can choose the ones you want with unix style patterns. Of course it's still possible to use dataset scripts since they offer the most flexibility. ## Implementation details It uses `huggingface_hub` to list the files in a dataset repository. If you provide a path to a local directory instead of a repository name, it works the same way but it uses `glob`. Depending on the data files available, or passed in the `data_files` parameter, one of the available builders will be used among the csv, json, text and parquet builders. Because of this, it's not possible to load both csv and json files at once. In this case you have to load them separately and then concatenate the two datasets for example. ## TODO - [x] tests - [x] docs - [x] when huggingface_hub gets a new release, update the CI and the setup.py Close https://github.com/huggingface/datasets/issues/2629
true
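The file selection described in #2662 above, picking repository data files with unix-style patterns, can be sketched with the standard library (a hypothetical helper; the real implementation lists files via `huggingface_hub` or `glob`):

```python
import fnmatch

def resolve_data_files(repo_files, patterns):
    # Map each split to the repo files matching its unix-style pattern,
    # mirroring data_files={"train": "en/c4-train.*.json.gz"} above.
    return {split: [f for f in repo_files if fnmatch.fnmatch(f, pattern)]
            for split, pattern in patterns.items()}
```

By default the loader would take all files; patterns narrow the selection per split.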
946,446,967
https://api.github.com/repos/huggingface/datasets/issues/2661
https://github.com/huggingface/datasets/pull/2661
2,661
Add SD task for SUPERB
closed
11
2021-07-16T16:43:21
2021-08-04T17:03:53
2021-08-04T17:03:53
albertvillanova
[]
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). TODO: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [x] README: tags + description sections - ~~Add DER metric~~ (we leave the DER metric for a follow-up PR) Related to #2619. Close #2653. cc: @lewtun
true
946,316,180
https://api.github.com/repos/huggingface/datasets/issues/2660
https://github.com/huggingface/datasets/pull/2660
2,660
Move checks from _map_single to map
closed
3
2021-07-16T13:53:33
2021-09-06T14:12:23
2021-09-06T14:12:23
mariosasko
[]
The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is then wrapped into a list.
true
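The `remove_columns` consistency change in #2660 above, accepting a single string and wrapping it into a list, is a one-step normalization. A minimal sketch (hypothetical helper name, not the library's internal function):

```python
def normalize_remove_columns(remove_columns):
    # Accept a single column name or a sequence of names, aligning the
    # behavior with what `input_columns` already supports.
    if remove_columns is None:
        return None
    if isinstance(remove_columns, str):
        return [remove_columns]
    return list(remove_columns)
```

This lets callers write `remove_columns="label"` instead of `remove_columns=["label"]`.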
946,155,407
https://api.github.com/repos/huggingface/datasets/issues/2659
https://github.com/huggingface/datasets/pull/2659
2,659
Allow dataset config kwargs to be None
closed
0
2021-07-16T10:25:38
2021-07-16T12:46:07
2021-07-16T12:46:07
lhoestq
[]
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None were simply ignored. This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder that allows inferring the separator. cc @SBrandeis
true
946,139,532
https://api.github.com/repos/huggingface/datasets/issues/2658
https://github.com/huggingface/datasets/issues/2658
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
closed
0
2021-07-16T10:05:44
2021-07-16T12:46:06
2021-07-16T12:46:06
lhoestq
[]
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","`, which makes it impossible for the csv loader to infer the separator. Related to https://github.com/huggingface/datasets/pull/2656 cc @SBrandeis
false
945,822,829
https://api.github.com/repos/huggingface/datasets/issues/2657
https://github.com/huggingface/datasets/issues/2657
2,657
`to_json` reporting enhancements
open
0
2021-07-15T23:32:18
2021-07-15T23:33:53
null
stas00
[ "enhancement" ]
While using `to_json`, two things came to mind that would have made the experience easier on the user: 1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json`, so that it'd be clear to the user what's happening? Surely, one can just print the description before calling `to_json`, but I thought perhaps it'd help to have it self-identify like you did for other progress bars recently. 2. It took me a while to make sense of the reported numbers: ``` 22%|██▏ | 1536/7076 [12:30:57<44:09:42, 28.70s/it] ``` So an iteration here happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so the progress bar is perfect, but the numbers it reports are meaningless until one discovers that 1it=10K samples. And one still has to do the conversion in one's head, so it's not quick. Not exactly sure what's the best way to approach this; perhaps it could be part of `desc`? Or report M or K, so it'd be built-in if it were to print, e.g.: ``` 22%|██▏ | 15360K/70760K [12:30:57<44:09:42, 28.70s/it] ``` or ``` 22%|██▏ | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it] ``` (while of course remaining friendly to small datasets) I forget if tqdm lets you add a magnitude identifier to the running count. Thank you!
false
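The magnitude-suffixed counts suggested in #2657 above (15.36M/70.76M, while staying friendly to small datasets) can be sketched as a small formatter (a hypothetical helper; tqdm itself offers `unit_scale` for a similar effect):

```python
def human_count(n: int) -> str:
    # Render a sample count with a magnitude suffix, e.g. 15.36M,
    # falling back to the plain number for small datasets.
    for factor, suffix in ((10**9, "B"), (10**6, "M"), (10**3, "K")):
        if n >= factor:
            return f"{n / factor:.2f}{suffix}"
    return str(n)
```

Such a formatter would make the running count readable without per-iteration mental arithmetic.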
945,421,790
https://api.github.com/repos/huggingface/datasets/issues/2656
https://github.com/huggingface/datasets/pull/2656
2,656
Change `from_csv` default arguments
closed
1
2021-07-15T14:09:06
2023-09-24T09:56:44
2021-07-16T10:23:26
SBrandeis
[]
Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator. This PR allows users to leverage this pandas feature by passing `sep=None` to `Dataset.from_csv`: ```python Dataset.from_csv( ..., sep=None ) ```
true
945,382,723
https://api.github.com/repos/huggingface/datasets/issues/2655
https://github.com/huggingface/datasets/issues/2655
2,655
Allow the selection of multiple columns at once
closed
5
2021-07-15T13:30:45
2024-01-09T15:11:27
2024-01-09T07:46:28
Dref360
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] idx, label = my_dataset[['idx', 'label']] ``` **Describe alternatives you've considered** we can do `[dataset[col] for col in ('idx', 'label')]` **Additional context** This is of course very minor.
false
945,167,231
https://api.github.com/repos/huggingface/datasets/issues/2654
https://github.com/huggingface/datasets/issues/2654
2,654
Give a user feedback if the dataset he loads is streamable or not
open
2
2021-07-15T09:07:27
2021-08-02T11:03:21
null
philschmid
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I would love to know whether a `dataset` is streamable or not with the current implementation. **Describe the solution you'd like** We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` when it's not streamable, e.g. if it is an archive. **Describe alternatives you've considered** Add a new metadata tag for "streaming"
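A minimal sketch of the warning idea (the registry is a placeholder; in practice this information could come from a metadata tag on the dataset card):

```python
import warnings

# hypothetical registry of datasets known not to stream (e.g. archive-based ones)
NOT_STREAMABLE = {"some_archive_dataset"}

def warn_if_not_streamable(builder_name, streaming):
    """Give the user immediate feedback when streaming is requested but unsupported."""
    if streaming and builder_name in NOT_STREAMABLE:
        warnings.warn(
            f"Dataset '{builder_name}' is not streamable with the current implementation."
        )
```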
false
945,102,321
https://api.github.com/repos/huggingface/datasets/issues/2653
https://github.com/huggingface/datasets/issues/2653
2,653
Add SD task for SUPERB
closed
2
2021-07-15T07:51:40
2021-08-04T17:03:52
2021-08-04T17:03:52
albertvillanova
[ "dataset request" ]
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [ ] README: tags + description sections Related to #2619. cc: @lewtun
false
944,865,924
https://api.github.com/repos/huggingface/datasets/issues/2652
https://github.com/huggingface/datasets/pull/2652
2,652
Fix logging docstring
closed
0
2021-07-14T23:19:58
2021-07-18T11:41:06
2021-07-15T09:57:31
mariosasko
[]
Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534.
true
944,796,961
https://api.github.com/repos/huggingface/datasets/issues/2651
https://github.com/huggingface/datasets/issues/2651
2,651
Setting log level higher than warning does not suppress progress bar
closed
7
2021-07-14T21:06:51
2022-07-08T14:51:57
2021-07-15T03:41:35
Isa-rentacs
[ "bug" ]
## Describe the bug I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting the log level higher than `warning`; however, doing so doesn't suppress it with version 1.9.0. I also tried to set the `DATASETS_VERBOSITY` environment variable to `error` or `critical` but it also didn't work. ## Steps to reproduce the bug ```python import datasets from datasets.utils.logging import set_verbosity_error set_verbosity_error() def dummy_map(batch): return batch common_voice_train = datasets.load_dataset("common_voice", "de", split="train") common_voice_test = datasets.load_dataset("common_voice", "de", split="test") common_voice_train.map(dummy_map) ``` ## Expected results - The progress bar for the `.map` call won't be shown ## Actual results - The progress bar for `.map` is still shown ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyArrow version: 4.0.1
false
944,672,565
https://api.github.com/repos/huggingface/datasets/issues/2650
https://github.com/huggingface/datasets/issues/2650
2,650
[load_dataset] shard and parallelize the process
closed
4
2021-07-14T18:04:58
2023-11-28T19:11:41
2023-11-28T19:11:40
stas00
[ "enhancement" ]
- Some huge datasets (e.g. oscar/en) take forever to build the first time, as the build runs on a single CPU core. - If the build crashes, everything done up to that point gets lost Request: Shard the build over multiple arrow files, which would enable: - much faster build by parallelizing the build process - if the process crashed, the completed arrow files don't need to be re-built again Thank you! @lhoestq
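A rough sketch of the sharded-build idea (hypothetical names; plain text files stand in for arrow files): each shard is written independently, so completed shards survive a crash and are skipped on restart.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def build_shard(shard_id, records, out_dir):
    """Write one shard; if its file already exists, assume it finished and skip it."""
    path = os.path.join(out_dir, f"shard-{shard_id:05d}.txt")
    if os.path.exists(path):  # crash recovery: don't re-build completed shards
        return path
    with open(path, "w") as f:
        for record in records:
            f.write(f"{record}\n")
    return path

out_dir = tempfile.mkdtemp()
shards = [list(range(i * 3, i * 3 + 3)) for i in range(4)]  # 4 shards of 3 records
# threads keep the sketch portable; a real build would use processes
with ThreadPoolExecutor(max_workers=4) as pool:
    paths = list(pool.map(lambda a: build_shard(*a),
                          [(i, s, out_dir) for i, s in enumerate(shards)]))
```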
false
944,651,229
https://api.github.com/repos/huggingface/datasets/issues/2649
https://github.com/huggingface/datasets/issues/2649
2,649
adding progress bar / ETA for `load_dataset`
open
2
2021-07-14T17:34:39
2023-03-27T10:32:49
null
stas00
[ "enhancement" ]
Please consider: ``` Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2... HF google storage unreachable. Downloading and preparing it from source ``` and no indication whatsoever of whether things work well or when it'll be done. It's important to have an estimated completion time when doing slurm jobs since some instances have a cap on run-time. I think for this particular job it sat for 30min in total silence and then after 30min it started generating: ``` 897850 examples [07:24, 10286.71 examples/s] ``` which is already great! Request: 1. ETA - knowing how many hours to allocate for a slurm job 2. progress bar - helps to know things are working and aren't stuck and where we are at. Thank you! @lhoestq
false
944,484,522
https://api.github.com/repos/huggingface/datasets/issues/2648
https://github.com/huggingface/datasets/issues/2648
2,648
Add web_split dataset for Paraphrase and Rephrase benchmark
open
1
2021-07-14T14:24:36
2021-07-14T14:26:12
null
bhadreshpsavani
[ "enhancement" ]
## Describe: For getting simple sentences from a complex sentence there are datasets and tasks like wiki_split that are available in Hugging Face datasets. This web_split is a very similar dataset. Some research papers state that if we combine these two datasets and train the model on them, it will yield better results on both test sets. This dataset is made from WebNLG data. All the dataset-related details are provided in the repository below. Github link: https://github.com/shashiongithub/Split-and-Rephrase
false
944,424,941
https://api.github.com/repos/huggingface/datasets/issues/2647
https://github.com/huggingface/datasets/pull/2647
2,647
Fix anchor in README
closed
0
2021-07-14T13:22:44
2021-07-18T11:41:18
2021-07-15T06:50:47
mariosasko
[]
I forgot to push this fix in #2611, so I'm sending it now.
true
944,379,954
https://api.github.com/repos/huggingface/datasets/issues/2646
https://github.com/huggingface/datasets/issues/2646
2,646
downloading of yahoo_answers_topics dataset failed
closed
2
2021-07-14T12:31:05
2022-08-04T08:28:24
2022-08-04T08:28:24
vikrant7k
[ "bug" ]
## Describe the bug I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset ## Steps to reproduce the bug ```python self.dataset = load_dataset( 'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]') ``` ## Expected results The dataset downloads and loads successfully. ## Actual results datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
false
944,374,284
https://api.github.com/repos/huggingface/datasets/issues/2645
https://github.com/huggingface/datasets/issues/2645
2,645
load_dataset processing failed with OS error after downloading a dataset
closed
2
2021-07-14T12:23:53
2021-07-15T09:34:02
2021-07-15T09:34:02
fake-warrior8
[ "bug" ]
## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets import load_dataset this_dataset = load_dataset('opus100', 'af-en') ``` ## Expected results There is no error when running load_dataset. ## Actual results Traceback (most recent call last): File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 989, in _prepare_split example = self.info.features.encode_example(record) File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 952, in encode_example example = cast_to_python_objects(example) File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 219, in cast_to_python_objects return _cast_to_python_objects(obj)[0] File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 165, in _cast_to_python_objects import torch File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module> _load_global_deps() File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL) File "/home/anaconda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__ self._handle = _dlopen(self._name, mode) OSError: dlopen: cannot load any more object with static TLS During handling of the above exception, another exception occurred: Traceback (most recent call last): File "download_hub_opus100.py", line 9, in <module> this_dataset = load_dataset('opus100', language_pair) File "/home/anaconda3/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File 
"/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 658, in _download_and_prepare + str(e) OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid - Python version: 3.6.6 - PyArrow version: 3.0.0
false
944,254,748
https://api.github.com/repos/huggingface/datasets/issues/2644
https://github.com/huggingface/datasets/issues/2644
2,644
Batched `map` not allowed to return 0 items
closed
6
2021-07-14T09:58:19
2021-07-26T14:55:15
2021-07-26T14:55:15
pcuenca
[ "bug" ]
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset), `a batch mapped function can take as input a batch of size N and return a batch of size M where M can be greater or less than N and can even be zero`. However, when the returned batch has a size of zero (neither item in the batch fulfilled the condition), we get an `index out of bounds` error. I think that `arrow_writer.py` is [trying to infer the returned types using the first element returned](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L100), but no elements were returned in this case. For this error to happen, I'm returning a dictionary that contains empty lists for the keys I want to keep, see below. If I return an empty dictionary instead (no keys), then a different error eventually occurs. ## Steps to reproduce the bug ```python def select_rows(examples): # `key` is a column name that exists in the original dataset # The following line simulates no matches found, so we return an empty batch result = {'key': []} return result filtered_dataset = dataset.map( select_rows, remove_columns = dataset.column_names, batched = True, num_proc = 1, desc = "Selecting rows with images that exist" ) ``` The code above immediately triggers the exception. If we use the following instead: ```python def select_rows(examples): # `key` is a column name that exists in the original dataset result = {'key': []} # or defaultdict or whatever # code to check for condition and append elements to result # some_items_found will be set to True if there were any matching elements in the batch return result if some_items_found else {} ``` Then it _seems_ to work, but it eventually fails with some sort of schema error. 
I believe it may happen when an empty batch is followed by a non-empty one, but haven't set up a test to verify it. In my opinion, returning a dictionary with empty lists and valid column names should be accepted as a valid result with zero items. ## Expected results The dataset would be filtered and only the matching fields would be returned. ## Actual results An exception is encountered, as described. Using a workaround makes it fail further along the line. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.1.dev0 - Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
false
944,220,273
https://api.github.com/repos/huggingface/datasets/issues/2643
https://github.com/huggingface/datasets/issues/2643
2,643
Enum used in map functions will raise a RecursionError with dill.
open
4
2021-07-14T09:16:08
2021-11-02T09:51:11
null
jorgeecardona
[ "bug" ]
## Describe the bug Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` dataclass as base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the content of the module, including the definition of the enum, which runs into the dill bug described above. ## Steps to reproduce the bug ```python from datasets import load_dataset from enum import Enum class A(Enum): a = 'a' def main(): a = A.a def f(x): return {} if a == a.a else x ds = load_dataset('cnn_dailymail', '3.0.0')['test'] ds = ds.map(f, num_proc=15) if __name__ == "__main__": main() ``` ## Expected results The known problem with dill could be prevented as explained in the link above (workaround). Since `HfArgumentParser` nicely uses the enum class for choices, it makes sense to also deal with this bug under the hood.
## Actual results ```python File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type pickler.save_reduce(_create_type, (type(obj), obj.__name__, File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save self.framer.commit_frame() File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame if f.tell() >= self._FRAME_SIZE_TARGET or force: RecursionError: maximum recursion depth exceeded while calling a Python object ``` ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 3.0.0
false
944,175,697
https://api.github.com/repos/huggingface/datasets/issues/2642
https://github.com/huggingface/datasets/issues/2642
2,642
Support multi-worker with streaming dataset (IterableDataset).
open
3
2021-07-14T08:22:58
2024-05-03T10:11:04
null
changjonathanc
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** The current `.map` does not support multi-processing; the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking). **Describe the solution you'd like** Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`. **Describe alternatives you've considered** A simpler solution is to shard the dataset and process it in parallel with a pytorch dataloader. The shards do not need to be of equal size. * https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset **Additional context**
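A minimal pure-Python sketch of the DataLoader-sharding alternative (in practice each worker would key its shard on `torch.utils.data.get_worker_info()`; shown here without torch):

```python
def shard_iterable(records, worker_id, num_workers):
    """Yield only the records assigned to this worker (round-robin by index).

    Shards need not be of equal size, matching the alternative described above.
    """
    for idx, record in enumerate(records):
        if idx % num_workers == worker_id:
            yield record

# e.g. two DataLoader workers splitting one stream of 10 records
worker0 = list(shard_iterable(range(10), 0, 2))  # [0, 2, 4, 6, 8]
worker1 = list(shard_iterable(range(10), 1, 2))  # [1, 3, 5, 7, 9]
```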
false
943,838,085
https://api.github.com/repos/huggingface/datasets/issues/2641
https://github.com/huggingface/datasets/issues/2641
2,641
load_dataset("financial_phrasebank") NonMatchingChecksumError
closed
4
2021-07-13T21:21:49
2022-08-04T08:30:08
2022-08-04T08:30:08
courtmckay
[ "bug" ]
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financial_phrasebank dataset downloaded successfully ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip'] ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-4.14.232-177.418.amzn2.x86_64-x86_64-with-debian-10.6 - Python version: 3.7.10 - PyArrow version: 4.0.1
false
943,591,055
https://api.github.com/repos/huggingface/datasets/issues/2640
https://github.com/huggingface/datasets/pull/2640
2,640
Fix docstrings
closed
0
2021-07-13T16:09:14
2021-07-15T06:51:01
2021-07-15T06:06:12
albertvillanova
[]
Fix rendering of some docstrings.
true
943,527,463
https://api.github.com/repos/huggingface/datasets/issues/2639
https://github.com/huggingface/datasets/pull/2639
2,639
Refactor patching to specific submodule
closed
0
2021-07-13T15:08:45
2021-07-13T16:52:49
2021-07-13T16:52:49
albertvillanova
[]
Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created. In relation with the initial approach followed in #2631.
true
943,484,913
https://api.github.com/repos/huggingface/datasets/issues/2638
https://github.com/huggingface/datasets/pull/2638
2,638
Streaming for the Json loader
closed
2
2021-07-13T14:37:06
2021-07-16T15:59:32
2021-07-16T15:59:31
lhoestq
[]
It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows. Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related to #2573). So I switched to using `open` which is extended to support reading from remote file progressively, and I removed the pyarrow json reader which was not practical. Instead, I'm using the classical `json.loads` from the standard library.
true
943,044,514
https://api.github.com/repos/huggingface/datasets/issues/2636
https://github.com/huggingface/datasets/pull/2636
2,636
Streaming for the Pandas loader
closed
0
2021-07-13T09:18:21
2021-07-13T14:37:24
2021-07-13T14:37:23
lhoestq
[]
It was not using `open` in the builder. Therefore `pd.read_pickle` could fail when streaming from a private repo for example. Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub.
true
943,030,999
https://api.github.com/repos/huggingface/datasets/issues/2635
https://github.com/huggingface/datasets/pull/2635
2,635
Streaming for the CSV loader
closed
0
2021-07-13T09:08:58
2021-07-13T15:19:38
2021-07-13T15:19:37
lhoestq
[]
It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows. Indeed, when streaming, `open` is extended to support reading from remote file progressively.
true
942,805,621
https://api.github.com/repos/huggingface/datasets/issues/2634
https://github.com/huggingface/datasets/pull/2634
2,634
Inject ASR template for lj_speech dataset
closed
0
2021-07-13T06:04:54
2021-07-13T09:05:09
2021-07-13T09:05:09
albertvillanova
[]
Related to: #2565, #2633. cc: @lewtun
true
942,396,414
https://api.github.com/repos/huggingface/datasets/issues/2633
https://github.com/huggingface/datasets/pull/2633
2,633
Update ASR tags
closed
0
2021-07-12T19:58:31
2021-07-13T05:45:26
2021-07-13T05:45:13
lewtun
[]
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620
true
942,293,727
https://api.github.com/repos/huggingface/datasets/issues/2632
https://github.com/huggingface/datasets/pull/2632
2,632
add image-classification task template
closed
2
2021-07-12T17:41:03
2021-07-13T15:44:28
2021-07-13T15:28:16
nateraw
[]
Snippet below is the tl;dr, but you can try it out directly here: [![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb) ```python from datasets import load_dataset ds = load_dataset('nateraw/image-folder', data_files='PetImages/') # DatasetDict({ # train: Dataset({ # features: ['file', 'labels'], # num_rows: 23410 # }) # }) ds = ds.prepare_for_task('image-classification') # DatasetDict({ # train: Dataset({ # features: ['image_file_path', 'labels'], # num_rows: 23410 # }) # }) ```
true
942,242,271
https://api.github.com/repos/huggingface/datasets/issues/2631
https://github.com/huggingface/datasets/pull/2631
2,631
Delete extracted files when loading dataset
closed
13
2021-07-12T16:39:33
2021-07-19T09:08:19
2021-07-19T09:08:19
albertvillanova
[]
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell
true
942,102,956
https://api.github.com/repos/huggingface/datasets/issues/2630
https://github.com/huggingface/datasets/issues/2630
2,630
Progress bars are not properly rendered in Jupyter notebook
closed
2
2021-07-12T14:07:13
2022-02-03T15:55:33
2022-02-03T15:55:33
albertvillanova
[ "bug" ]
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plain progress bars. cc: Reported by @thomwolf
false
941,819,205
https://api.github.com/repos/huggingface/datasets/issues/2629
https://github.com/huggingface/datasets/issues/2629
2,629
Load datasets from the Hub without requiring a dataset script
closed
1
2021-07-12T08:45:17
2021-08-25T14:18:08
2021-08-25T14:18:08
lhoestq
[]
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `data_files` argument. This feature should be compatible with private repositories and dataset streaming. This can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.)
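A rough sketch of the extension-based dispatch described above (the mapping and `infer_builder` are hypothetical illustration, not the actual implementation):

```python
import os

# hypothetical mapping from file extension to packaged builder name
EXTENSION_TO_BUILDER = {
    ".csv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".txt": "text",
    ".parquet": "parquet",
}

def infer_builder(data_files):
    """Pick the packaged builder matching the files' extensions; all files must agree."""
    builders = {EXTENSION_TO_BUILDER.get(os.path.splitext(f)[1]) for f in data_files}
    if len(builders) != 1 or None in builders:
        raise ValueError(f"Mixed or unsupported file types in {data_files}")
    return builders.pop()

print(infer_builder(["train.csv", "test.csv"]))  # csv
```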
false
941,676,404
https://api.github.com/repos/huggingface/datasets/issues/2628
https://github.com/huggingface/datasets/pull/2628
2,628
Use ETag of remote data files
closed
0
2021-07-12T05:10:10
2021-07-12T14:08:34
2021-07-12T08:40:07
albertvillanova
[]
Use ETag of remote data files to create config ID. Related to #2616.
true
941,503,349
https://api.github.com/repos/huggingface/datasets/issues/2627
https://github.com/huggingface/datasets/pull/2627
2,627
Minor fix tests with Windows paths
closed
0
2021-07-11T17:55:48
2021-07-12T14:08:47
2021-07-12T08:34:50
albertvillanova
[]
Minor fix tests with Windows paths.
true
941,497,830
https://api.github.com/repos/huggingface/datasets/issues/2626
https://github.com/huggingface/datasets/pull/2626
2,626
Use correct logger in metrics.py
closed
0
2021-07-11T17:22:30
2021-07-12T14:08:54
2021-07-12T05:54:29
mariosasko
[]
Fixes #2624
true
941,439,922
https://api.github.com/repos/huggingface/datasets/issues/2625
https://github.com/huggingface/datasets/issues/2625
2,625
⚛️😇⚙️🔑
closed
0
2021-07-11T12:14:34
2021-07-12T05:55:59
2021-07-12T05:55:59
hustlen0mics
[]
false
941,318,247
https://api.github.com/repos/huggingface/datasets/issues/2624
https://github.com/huggingface/datasets/issues/2624
2,624
can't set verbosity for `metric.py`
closed
1
2021-07-10T20:23:45
2021-07-12T05:54:29
2021-07-12T05:54:29
thomas-happify
[ "bug" ]
## Describe the bug ``` [2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock [2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow. [2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns. [2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow ``` As you can see, `datasets` logging come from different places. `filelock`, `arrow_writer` & `arrow_dataset` comes from `datasets.*` which are expected However, `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/` So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message which is annoying during evaluation. I had to do ``` logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR) ``` to fully mute these messages ## Expected results it shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: tried both 1.8.0 & 1.9.0 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.8.10 - PyArrow version: 3.0.0
false
941,265,342
https://api.github.com/repos/huggingface/datasets/issues/2623
https://github.com/huggingface/datasets/pull/2623
2,623
[Metrics] added wiki_split metrics
closed
1
2021-07-10T14:51:50
2021-07-14T14:28:13
2021-07-12T22:34:31
bhadreshpsavani
[]
Fixes: #2606 This pull request adds combine metrics for the wikisplit or English sentence split task Reviewer: @patrickvonplaten
true
941,127,785
https://api.github.com/repos/huggingface/datasets/issues/2622
https://github.com/huggingface/datasets/issues/2622
2,622
Integration with AugLy
closed
2
2021-07-10T00:03:09
2023-07-20T13:18:48
2023-07-20T13:18:47
Darktex
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings or to punctuation, or emojis etc. Plus, with Transformers supporting more CV use cases, having augmentations support becomes crucial. **Describe the solution you'd like** The biggest difference between augmentations and preprocessing is that preprocessing happens only once, but you are running augmentations once per epoch. AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set format to pt tensors and be ready for the Dataloader. **Describe alternatives you've considered** One possible way of implementing these is to make a custom Dataset class where getitem(i) runs the augmentation and the tokenizer every time, though this would slow training down considerably given we wouldn't even run the tokenizer in batches.
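A sketch of the custom-Dataset alternative mentioned above (hypothetical names; `drop_random_char` stands in for an AugLy transform, and the per-item tokenization is exactly the slowdown being described):

```python
import random

class AugmentedDataset:
    """Wrap a base dataset so the augmentation re-runs on every access,
    giving each epoch a fresh view of the data."""

    def __init__(self, base, augment_fn, tokenize_fn):
        self.base = base
        self.augment_fn = augment_fn
        self.tokenize_fn = tokenize_fn

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        text = self.augment_fn(self.base[i])  # runs once per epoch, not once total
        return self.tokenize_fn(text)         # un-batched tokenization: slow

def drop_random_char(text):
    """Toy augmentation: delete one random character."""
    i = random.randrange(len(text))
    return text[:i] + text[i + 1:]

ds = AugmentedDataset(["hello world"], drop_random_char, str.split)
tokens = ds[0]  # a slightly different token list on each access
```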
false
940,916,446
https://api.github.com/repos/huggingface/datasets/issues/2621
https://github.com/huggingface/datasets/pull/2621
2,621
Use prefix to allow exceed Windows MAX_PATH
closed
6
2021-07-09T16:39:53
2021-07-16T15:28:12
2021-07-16T15:28:11
albertvillanova
[]
By using this prefix, you can exceed the Windows MAX_PATH limit. See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces Related to #2524, #2220.
true
940,893,389
https://api.github.com/repos/huggingface/datasets/issues/2620
https://github.com/huggingface/datasets/pull/2620
2,620
Add speech processing tasks
closed
2
2021-07-09T16:07:29
2021-07-12T18:32:59
2021-07-12T17:32:02
lewtun
[]
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
true
940,858,236
https://api.github.com/repos/huggingface/datasets/issues/2619
https://github.com/huggingface/datasets/pull/2619
2,619
Add ASR task for SUPERB
closed
3
2021-07-09T15:19:45
2021-07-15T08:55:58
2021-07-13T12:40:18
lewtun
[]
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition). Usage: ```python from datasets import load_dataset asr = load_dataset("superb", "asr") # DatasetDict({ # train: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 28539 # }) # validation: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 2703 # }) # test: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 2620 # }) # }) ``` I've used the GLUE benchmark as a guide for filling out the README. To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel. Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)
true
940,852,640
https://api.github.com/repos/huggingface/datasets/issues/2618
https://github.com/huggingface/datasets/issues/2618
2,618
`filelock.py` Error
closed
2
2021-07-09T15:12:49
2024-06-21T06:14:07
2023-11-23T19:06:19
liyucheng09
[ "bug" ]
## Describe the bug It seems that `filelock.py` raised an error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) OSError: [Errno 37] No locks available ``` According to the error log, it is an OSError, but there is an `except` in the `_acquire` function. ``` def _acquire(self): open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC try: fd = os.open(self._lock_file, open_mode) except (IOError, OSError): pass else: self._lock_file_fd = fd return None ``` I don't know why it got stuck rather than hitting the `pass` branch directly. I am not quite familiar with filelock operations, so any help is highly appreciated. ## Steps to reproduce the bug ```python ds = load_dataset('xsum') ``` ## Expected results The dataset loads successfully. ## Actual results ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) OSError: [Errno 37] No locks available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 818, in load_dataset use_auth_token=use_auth_token, File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 470, in prepare_module with FileLock(lock_path): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File 
"/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) KeyboardInterrupt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 4.0.1
false
940,846,847
https://api.github.com/repos/huggingface/datasets/issues/2617
https://github.com/huggingface/datasets/pull/2617
2,617
Fix missing EOL issue in to_json for old versions of pandas
closed
0
2021-07-09T15:05:45
2021-07-12T14:09:00
2021-07-09T15:28:33
lhoestq
[]
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore users could end up having two samples on the same line. Close https://github.com/huggingface/datasets/issues/2615
true
940,799,038
https://api.github.com/repos/huggingface/datasets/issues/2616
https://github.com/huggingface/datasets/pull/2616
2,616
Support remote data files
closed
2
2021-07-09T14:07:38
2021-07-09T16:13:41
2021-07-09T16:13:41
albertvillanova
[ "enhancement" ]
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf
true
940,794,339
https://api.github.com/repos/huggingface/datasets/issues/2615
https://github.com/huggingface/datasets/issues/2615
2,615
Jsonlines export error
closed
10
2021-07-09T14:02:05
2021-07-09T15:29:07
2021-07-09T15:28:33
TevenLeScao
[ "bug" ]
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default. ## Steps to reproduce the bug This is what I'm running: in python: ``` from datasets import load_dataset ptb = load_dataset("ptb_text_only") ptb["train"].to_json("ptb.jsonl") ``` then out of python: ``` head -10000 ptb.jsonl ``` ## Expected results Properly separated lines ## Actual results The last line is a concatenation of two lines ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.1.dev0 - Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyArrow version: 4.0.1
false
940,762,427
https://api.github.com/repos/huggingface/datasets/issues/2614
https://github.com/huggingface/datasets/pull/2614
2,614
Convert numpy scalar to python float in Pearsonr output
closed
0
2021-07-09T13:22:55
2021-07-12T14:13:02
2021-07-09T14:04:38
lhoestq
[]
Following of https://github.com/huggingface/datasets/pull/2612
true
940,759,852
https://api.github.com/repos/huggingface/datasets/issues/2613
https://github.com/huggingface/datasets/pull/2613
2,613
Use ndarray.item instead of ndarray.tolist
closed
0
2021-07-09T13:19:35
2021-07-12T14:12:57
2021-07-09T13:50:05
lewtun
[]
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#numpy-ndarray-item PS. Sorry for the duplicate work here. I should have read the numpy docs more carefully in #2612
true
940,604,512
https://api.github.com/repos/huggingface/datasets/issues/2612
https://github.com/huggingface/datasets/pull/2612
2,612
Return Python float instead of numpy.float64 in sklearn metrics
closed
3
2021-07-09T09:48:09
2021-07-12T14:12:53
2021-07-09T13:03:54
lewtun
[]
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`. The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like: ```python import yaml from datasets import load_metric metric = load_metric("accuracy") score = metric.compute(predictions=[0,1], references=[0,1]) print(yaml.dump(score["accuracy"])) # output below # !!python/object/apply:numpy.core.multiarray.scalar # - !!python/object/apply:numpy.dtype # args: # - f8 # - false # - true # state: !!python/tuple # - 3 # - < # - null # - null # - null # - -1 # - -1 # - 0 # - !!binary | # AAAAAAAA8D8= ```
true
940,307,053
https://api.github.com/repos/huggingface/datasets/issues/2611
https://github.com/huggingface/datasets/pull/2611
2,611
More consistent naming
closed
0
2021-07-09T00:09:17
2021-07-13T17:13:19
2021-07-13T16:08:30
mariosasko
[]
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.
true
939,899,829
https://api.github.com/repos/huggingface/datasets/issues/2610
https://github.com/huggingface/datasets/pull/2610
2,610
Add missing WikiANN language tags
closed
0
2021-07-08T14:08:01
2021-07-12T14:12:16
2021-07-08T15:44:04
albertvillanova
[]
Add missing language tags for WikiANN datasets.
true
939,616,682
https://api.github.com/repos/huggingface/datasets/issues/2609
https://github.com/huggingface/datasets/pull/2609
2,609
Fix potential DuplicatedKeysError
closed
1
2021-07-08T08:38:04
2021-07-12T14:13:16
2021-07-09T16:42:08
albertvillanova
[]
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote as a good practice that keys be programmatically generated as unique, instead of read from the data (which might not be unique).
true
938,897,626
https://api.github.com/repos/huggingface/datasets/issues/2608
https://github.com/huggingface/datasets/pull/2608
2,608
Support streaming JSON files
closed
0
2021-07-07T13:30:22
2021-07-12T14:12:31
2021-07-08T16:08:41
albertvillanova
[]
Use open in JSON dataset builder, so that it can be patched with xopen for streaming. Close #2607.
true
938,796,902
https://api.github.com/repos/huggingface/datasets/issues/2607
https://github.com/huggingface/datasets/issues/2607
2,607
Streaming local gzip compressed JSON line files is not working
closed
6
2021-07-07T11:36:33
2021-07-20T09:50:19
2021-07-08T16:08:41
thomwolf
[ "bug" ]
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raises a file-not-found error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset)) ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-6-27a664e29784> in <module> ----> 1 next(iter(streamed_dataset)) ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 336 337 def __iter__(self): --> 338 for key, example in self._iter(): 339 if self.features: 340 # we encode the example for ClassLabel feature types for example ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in _iter(self) 333 else: 334 ex_iterable = self._ex_iterable --> 335 yield from ex_iterable 336 337 def __iter__(self): ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in wrapper(**kwargs) 282 def wrapper(**kwargs): 283 python_formatter = PythonFormatter() --> 284 for key, table in generate_tables_fn(**kwargs): 285 batch = python_formatter.format_batch(table) 286 for i, example in enumerate(_batch_to_examples(batch)): ~/Documents/GitHub/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files, original_files) 85 file, 86 read_options=self.config.pa_read_options, ---> 87 parse_options=self.config.pa_parse_options, 88 ) 89 except pa.ArrowInvalid as err: ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json._get_reader() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_input_stream() 
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_native_file() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile.__cinit__() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile._open_readable() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'gzip://file-000000000000.json::/Users/thomwolf/github-dataset/file-000000000000.json.gz'. Detail: [errno 2] No such file or directory ``` ## Environment info - `datasets` version: 1.9.1.dev0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyArrow version: 1.0.0
false
938,763,684
https://api.github.com/repos/huggingface/datasets/issues/2606
https://github.com/huggingface/datasets/issues/2606
2,606
[Metrics] addition of wiki_split metrics
closed
1
2021-07-07T10:56:04
2021-07-12T22:34:31
2021-07-12T22:34:31
bhadreshpsavani
[ "enhancement", "metric request" ]
**Is your feature request related to a problem? Please describe.** While training a model on the sentence-splitting task in English, we need to evaluate the trained model on the `Exact Match`, `SARI` and `BLEU` scores like this ![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01-4b48db7a6694.png) While training we need a metric which can give all these outputs. Currently, we don't have an exact-match metric for text-normalized data. **Describe the solution you'd like** A custom metric for wiki_split that can calculate these three values and provide them in the form of a single dictionary. For exact match, we can refer to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py) **Describe alternatives you've considered** Two metrics are already present; one more can be added for exact match, then we can run all three metrics in the training script #self-assign
false
938,648,164
https://api.github.com/repos/huggingface/datasets/issues/2605
https://github.com/huggingface/datasets/pull/2605
2,605
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
closed
0
2021-07-07T08:47:23
2021-07-12T14:10:27
2021-07-07T08:59:13
lhoestq
[]
During the FLAX sprint some users have this error when streaming datasets: ```python aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer ``` This error must trigger a retry instead of directly crashing. Therefore I extended the error type that triggers the retry to be the base aiohttp error type: `ClientError`. In particular, both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`.
true
938,602,237
https://api.github.com/repos/huggingface/datasets/issues/2604
https://github.com/huggingface/datasets/issues/2604
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
closed
14
2021-07-07T07:56:16
2021-07-19T09:08:18
2021-07-19T09:08:18
thomwolf
[ "enhancement" ]
I'm loading a dataset consisting of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of arrow cache tables. Having a simple way to delete the extracted files after usage (or even better, to stream extraction/deletion) would be nice to avoid disk clutter. I can maybe tackle this one in the JSON script unless you want a more general solution.
false
938,588,149
https://api.github.com/repos/huggingface/datasets/issues/2603
https://github.com/huggingface/datasets/pull/2603
2,603
Fix DuplicatedKeysError in omp
closed
0
2021-07-07T07:38:32
2021-07-12T14:10:41
2021-07-07T12:56:35
albertvillanova
[]
Close #2598.
true
938,555,712
https://api.github.com/repos/huggingface/datasets/issues/2602
https://github.com/huggingface/datasets/pull/2602
2,602
Remove import of transformers
closed
0
2021-07-07T06:58:18
2021-07-12T14:10:22
2021-07-07T08:28:51
albertvillanova
[]
When pickling a tokenizer within multiprocessing, check that it is an instance of transformers PreTrainedTokenizerBase without importing transformers. Related to huggingface/transformers#12549 and #502.
true
938,096,396
https://api.github.com/repos/huggingface/datasets/issues/2601
https://github.com/huggingface/datasets/pull/2601
2,601
Fix `filter` with multiprocessing in case all samples are discarded
closed
0
2021-07-06T17:06:28
2021-07-12T14:10:35
2021-07-07T12:50:31
mxschmdt
[]
Fixes #2600 Also I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process.
true
938,086,745
https://api.github.com/repos/huggingface/datasets/issues/2600
https://github.com/huggingface/datasets/issues/2600
2,600
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
closed
0
2021-07-06T16:53:25
2021-07-07T12:50:31
2021-07-07T12:50:31
mxschmdt
[ "bug" ]
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) data.filter(lambda x: False, num_proc=2) ``` ## Expected results An empty table should be returned without crashing. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter return self.map( File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map result = concatenate_datasets(transformed_shards) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets table = concat_tables(tables_to_concat, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables blocks = to_blocks(tables[0]) IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.10 - PyArrow version: 3.0.0
false
937,980,229
https://api.github.com/repos/huggingface/datasets/issues/2599
https://github.com/huggingface/datasets/pull/2599
2,599
Update processing.rst with other export formats
closed
0
2021-07-06T14:50:38
2021-07-12T14:10:16
2021-07-07T08:05:48
TevenLeScao
[]
Add other supported export formats than CSV in the docs.
true
937,930,632
https://api.github.com/repos/huggingface/datasets/issues/2598
https://github.com/huggingface/datasets/issues/2598
2,598
Unable to download omp dataset
closed
1
2021-07-06T14:00:52
2021-07-07T12:56:35
2021-07-07T12:56:35
erikadistefano
[ "bug" ]
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual results Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b... 0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split writer.write(example, key) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! 
Found duplicate Key: 3326 Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hf_datasets.py", line 32, in <module> omp = load_dataset('omp', 'posts_labeled') File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature ## Environment info - `datasets` version: 1.8.0 - Platform: Ubuntu 18.04.4 LTS - Python version: 3.6.9 - PyArrow version: 3.0.0
false
937,917,770
https://api.github.com/repos/huggingface/datasets/issues/2597
https://github.com/huggingface/datasets/pull/2597
2,597
Remove redundant prepare_module
closed
0
2021-07-06T13:47:45
2021-07-12T14:10:52
2021-07-07T13:01:46
albertvillanova
[ "refactoring" ]
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
true
937,598,914
https://api.github.com/repos/huggingface/datasets/issues/2596
https://github.com/huggingface/datasets/issues/2596
2,596
Transformer Class on dataset
closed
9
2021-07-06T07:27:15
2022-11-02T14:26:09
2022-11-02T14:26:09
arita37
[ "enhancement" ]
Just wondering if you have any intention to create a Transformer class: dataset --> dataset, that makes deterministic transformations (i.e. not fit).
false
937,483,120
https://api.github.com/repos/huggingface/datasets/issues/2595
https://github.com/huggingface/datasets/issues/2595
2,595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
closed
2
2021-07-06T03:20:55
2021-07-06T05:59:49
2021-07-06T05:59:49
profsatwinder
[ "bug" ]
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation") 4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test") 9 frames /root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>() 19 20 import datasets ---> 21 from datasets.tasks import AutomaticSpeechRecognition 22 23 ModuleNotFoundError: No module named 'datasets.tasks'
false
937,294,772
https://api.github.com/repos/huggingface/datasets/issues/2594
https://github.com/huggingface/datasets/pull/2594
2,594
Fix BibTeX entry
closed
0
2021-07-05T18:24:10
2021-07-06T04:59:38
2021-07-06T04:59:38
albertvillanova
[]
Fix BibTeX entry.
true
937,242,137
https://api.github.com/repos/huggingface/datasets/issues/2593
https://github.com/huggingface/datasets/pull/2593
2,593
Support pandas 1.3.0 read_csv
closed
0
2021-07-05T16:40:04
2021-07-05T17:14:14
2021-07-05T17:14:14
lhoestq
[]
Workaround for this issue in pandas 1.3.0 : https://github.com/pandas-dev/pandas/issues/42387 The csv reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults) 1304 1305 if names is not lib.no_default and prefix is not lib.no_default: -> 1306 raise ValueError("Specified named and prefix; you can only specify one.") 1307 1308 kwds["names"] = None if names is lib.no_default else names ValueError: Specified named and prefix; you can only specify one. ```
true
937,060,559
https://api.github.com/repos/huggingface/datasets/issues/2592
https://github.com/huggingface/datasets/pull/2592
2,592
Add c4.noclean infos
closed
0
2021-07-05T12:51:40
2021-07-05T13:15:53
2021-07-05T13:15:52
lhoestq
[]
Adding the data files checksums and the dataset size of the c4.noclean configuration of the C4 dataset
true
936,957,975
https://api.github.com/repos/huggingface/datasets/issues/2591
https://github.com/huggingface/datasets/issues/2591
2,591
Cached dataset overflowing disk space
closed
4
2021-07-05T10:43:19
2021-07-19T09:08:19
2021-07-19T09:08:19
BirgerMoell
[]
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues because the Hugging Face dataset cache folder is completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB (and now my disk space is full). Is there a way to toggle caching, or to set the cache to be stored on a different device? (I have another drive with 4 TB that could hold the cache files.) This might not technically be a bug, but I was unsure and I felt that the bug was the closest one. Traceback (most recent call last): File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single writer.finalize() File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize self.pa_writer.close() File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device """ The above exception was the direct cause of the following exception:
false
936,954,348
https://api.github.com/repos/huggingface/datasets/issues/2590
https://github.com/huggingface/datasets/pull/2590
2,590
Add language tags
closed
0
2021-07-05T10:39:57
2021-07-05T10:58:48
2021-07-05T10:58:48
lewtun
[]
This PR adds some missing language tags needed for ASR datasets in #2565
true
936,825,060
https://api.github.com/repos/huggingface/datasets/issues/2589
https://github.com/huggingface/datasets/pull/2589
2,589
Support multilabel metrics
closed
5
2021-07-05T08:19:25
2022-07-29T10:56:25
2021-07-08T08:40:15
albertvillanova
[]
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
true
936,795,541
https://api.github.com/repos/huggingface/datasets/issues/2588
https://github.com/huggingface/datasets/pull/2588
2,588
Fix test_is_small_dataset
closed
0
2021-07-05T07:46:26
2021-07-12T14:10:11
2021-07-06T17:09:30
albertvillanova
[]
Remove environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the env variable is read in datasets.config when datasets is first loaded, and it is never re-read during tests.
true