| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
949,447,104 | https://api.github.com/repos/huggingface/datasets/issues/2689 | https://github.com/huggingface/datasets/issues/2689 | 2,689 | cannot save the dataset to disk after rename_column | closed | 5 | 2021-07-21T08:13:40 | 2025-02-11T23:23:17 | 2021-07-21T13:11:04 | PaulLerner | [
"bug"
] | ## Describe the bug
If you use `rename_column` and make no other modification, you will be unable to save the dataset using `save_to_disk`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from_disk
In [5]: dataset=Dataset.from_dict({'foo': [0]})... | false |
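The reproduction in the body above is truncated; a minimal sketch of the reported steps, with a hypothetical output path, could look like this:
```python
from datasets import Dataset

# Build a tiny dataset, rename a column, then try to save it to disk.
dataset = Dataset.from_dict({"foo": [0]})
dataset = dataset.rename_column("foo", "bar")
dataset.save_to_disk("/tmp/renamed_dataset")  # hypothetical path; this call is what the report says fails after rename_column
```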
949,182,074 | https://api.github.com/repos/huggingface/datasets/issues/2688 | https://github.com/huggingface/datasets/issues/2688 | 2,688 | hebrew language codes he and iw should be treated as aliases | closed | 2 | 2021-07-20T23:13:52 | 2021-07-21T16:34:53 | 2021-07-21T16:34:53 | eyaler | [
"bug"
] | https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (he), as it uses the older language code iw, preventing discoverability. | false |
948,890,481 | https://api.github.com/repos/huggingface/datasets/issues/2687 | https://github.com/huggingface/datasets/pull/2687 | 2,687 | Minor documentation fix | closed | 0 | 2021-07-20T17:43:23 | 2021-07-21T13:04:55 | 2021-07-21T13:04:55 | slowwavesleep | [] | Currently, the [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error. A link to the `matinf` dataset in the [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad` instead. Thi... | true |
948,811,669 | https://api.github.com/repos/huggingface/datasets/issues/2686 | https://github.com/huggingface/datasets/pull/2686 | 2,686 | Fix bad config ids that name cache directories | closed | 0 | 2021-07-20T16:00:45 | 2021-07-20T16:27:15 | 2021-07-20T16:27:15 | lhoestq | [] | `data_dir=None` was considered a dataset config parameter, hence creating a special config_id for every dataset being loaded.
Since the config_id is used to name the cache directories, this led to datasets being regenerated for users.
I fixed this by ignoring the value of `data_dir` when it's `None` when computing... | true |
948,791,572 | https://api.github.com/repos/huggingface/datasets/issues/2685 | https://github.com/huggingface/datasets/pull/2685 | 2,685 | Fix Blog Authorship Corpus dataset | closed | 3 | 2021-07-20T15:44:50 | 2021-07-21T13:11:58 | 2021-07-21T13:11:58 | albertvillanova | [] | This PR:
- Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError`
- Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising a `UnicodeDecodeError` for some files
Close #2679. | true |
948,771,753 | https://api.github.com/repos/huggingface/datasets/issues/2684 | https://github.com/huggingface/datasets/pull/2684 | 2,684 | Print absolute local paths in load_dataset error messages | closed | 0 | 2021-07-20T15:28:28 | 2021-07-22T20:48:19 | 2021-07-22T14:01:10 | mariosasko | [] | Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223 | true |
948,721,379 | https://api.github.com/repos/huggingface/datasets/issues/2683 | https://github.com/huggingface/datasets/issues/2683 | 2,683 | Cache directories changed due to recent changes in how config kwargs are handled | closed | 0 | 2021-07-20T14:37:57 | 2021-07-20T16:27:15 | 2021-07-20T16:27:15 | lhoestq | [] | Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example:
```python
from datasets import load_dataset_builder
c4_builder = load_dataset_builder("c4", "en")
print(c4_builder.cache_dir)
# /Users/quentinlhoest/.cache/huggingfac... | false |
948,713,137 | https://api.github.com/repos/huggingface/datasets/issues/2682 | https://github.com/huggingface/datasets/pull/2682 | 2,682 | Fix c4 expected files | closed | 0 | 2021-07-20T14:29:31 | 2021-07-20T14:38:11 | 2021-07-20T14:38:10 | lhoestq | [] | Some files were not registered in the list of expected files to download
Fix https://github.com/huggingface/datasets/issues/2677 | true |
948,708,645 | https://api.github.com/repos/huggingface/datasets/issues/2681 | https://github.com/huggingface/datasets/issues/2681 | 2,681 | 5 duplicate datasets | closed | 2 | 2021-07-20T14:25:00 | 2021-07-20T15:44:17 | 2021-07-20T15:44:17 | severo | [
"bug"
] | ## Describe the bug
In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are:
- https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch
<img width="838... | false |
948,649,716 | https://api.github.com/repos/huggingface/datasets/issues/2680 | https://github.com/huggingface/datasets/pull/2680 | 2,680 | feat: 🎸 add paperswithcode id for qasper dataset | closed | 0 | 2021-07-20T13:22:29 | 2021-07-20T14:04:10 | 2021-07-20T14:04:10 | severo | [] | The reverse reference exists on paperswithcode:
https://paperswithcode.com/dataset/qasper | true |
948,506,638 | https://api.github.com/repos/huggingface/datasets/issues/2679 | https://github.com/huggingface/datasets/issues/2679 | 2,679 | Cannot load the blog_authorship_corpus due to codec errors | closed | 3 | 2021-07-20T10:13:20 | 2021-07-21T17:02:21 | 2021-07-21T13:11:58 | izaskr | [
"bug"
] | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | false |
948,471,222 | https://api.github.com/repos/huggingface/datasets/issues/2678 | https://github.com/huggingface/datasets/issues/2678 | 2,678 | Import Error in Kaggle notebook | closed | 4 | 2021-07-20T09:28:38 | 2021-07-21T13:59:26 | 2021-07-21T13:03:02 | prikmm | [
"bug"
] | ## Describe the bug
Not able to import the datasets library in Kaggle notebooks.
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-inp... | false |
948,429,788 | https://api.github.com/repos/huggingface/datasets/issues/2677 | https://github.com/huggingface/datasets/issues/2677 | 2,677 | Error when downloading C4 | closed | 3 | 2021-07-20T08:37:30 | 2021-07-20T14:41:31 | 2021-07-20T14:38:10 | Aktsvigun | [
"bug"
] | Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width="1014" alt="Снимок экрана 2... | false |
947,734,909 | https://api.github.com/repos/huggingface/datasets/issues/2676 | https://github.com/huggingface/datasets/pull/2676 | 2,676 | Increase json reader block_size automatically | closed | 0 | 2021-07-19T14:51:14 | 2021-07-19T17:51:39 | 2021-07-19T17:51:38 | lhoestq | [] | Currently some files can't be read with the default parameters of the JSON lines reader.
For example this one:
https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz
raises a pyarrow error:
```python
ArrowInvalid: straddling object straddles two block boundaries (try to increa... | true |
947,657,732 | https://api.github.com/repos/huggingface/datasets/issues/2675 | https://github.com/huggingface/datasets/pull/2675 | 2,675 | Parallelize ETag requests | closed | 0 | 2021-07-19T13:30:42 | 2021-07-19T19:33:25 | 2021-07-19T19:33:25 | lhoestq | [] | Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed.
In this PR I made the ETag requests parallel using multi... | true |
947,338,202 | https://api.github.com/repos/huggingface/datasets/issues/2674 | https://github.com/huggingface/datasets/pull/2674 | 2,674 | Fix sacrebleu parameter name | closed | 0 | 2021-07-19T07:07:26 | 2021-07-19T08:07:03 | 2021-07-19T08:07:03 | albertvillanova | [] | DONE:
- Fix parameter name: `smooth` to `smooth_method`.
- Improve kwargs description.
- Align docs on using a metric.
- Add an example of passing additional arguments when using metrics.
Related to #2669. | true |
947,300,008 | https://api.github.com/repos/huggingface/datasets/issues/2673 | https://github.com/huggingface/datasets/pull/2673 | 2,673 | Fix potential DuplicatedKeysError in SQuAD | closed | 0 | 2021-07-19T06:08:00 | 2021-07-19T07:08:03 | 2021-07-19T07:08:03 | albertvillanova | [] | DONE:
- Fix potential DuplicatedKeysError by ensuring keys are unique.
- Align examples in the docs with SQuAD code.
We should promote as a good practice that keys should be programmatically generated as unique, instead of read from the data (which might not be unique). | true |
947,294,605 | https://api.github.com/repos/huggingface/datasets/issues/2672 | https://github.com/huggingface/datasets/pull/2672 | 2,672 | Fix potential DuplicatedKeysError in LibriSpeech | closed | 0 | 2021-07-19T06:00:49 | 2021-07-19T06:28:57 | 2021-07-19T06:28:56 | albertvillanova | [] | DONE:
- Fix unnecessary path join.
- Fix potential DuplicatedKeysError by ensuring keys are unique.
We should promote as a good practice that keys should be programmatically generated as unique, instead of read from the data (which might not be unique). | true |
947,273,875 | https://api.github.com/repos/huggingface/datasets/issues/2671 | https://github.com/huggingface/datasets/pull/2671 | 2,671 | Mesinesp development and training data sets have been added. | closed | 1 | 2021-07-19T05:14:38 | 2021-07-19T07:32:28 | 2021-07-19T06:45:50 | aslihanuysall | [] | https://zenodo.org/search?page=1&size=20&q=mesinesp, Mesinesp has Medical Semantic Indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms.
The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records.
The Mesinesp ... | true |
947,120,709 | https://api.github.com/repos/huggingface/datasets/issues/2670 | https://github.com/huggingface/datasets/issues/2670 | 2,670 | Using sharding to parallelize indexing | open | 0 | 2021-07-18T21:26:26 | 2021-10-07T13:33:25 | null | ggdupont | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
Creating an Elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creation collides).
**Describe the solution you'd like**
When working on dataset shards, if an index already exists, its mapping ... | false |
946,982,998 | https://api.github.com/repos/huggingface/datasets/issues/2669 | https://github.com/huggingface/datasets/issues/2669 | 2,669 | Metric kwargs are not passed to underlying external metric f1_score | closed | 2 | 2021-07-18T08:32:31 | 2021-07-18T18:36:05 | 2021-07-18T11:19:04 | BramVanroy | [
"bug"
] | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to... | false |
946,867,622 | https://api.github.com/repos/huggingface/datasets/issues/2668 | https://github.com/huggingface/datasets/pull/2668 | 2,668 | Add Russian SuperGLUE | closed | 2 | 2021-07-17T17:41:28 | 2021-07-29T11:50:31 | 2021-07-29T11:50:31 | slowwavesleep | [] | Hi,
This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for. | true |
946,861,908 | https://api.github.com/repos/huggingface/datasets/issues/2667 | https://github.com/huggingface/datasets/pull/2667 | 2,667 | Use tqdm from tqdm_utils | closed | 2 | 2021-07-17T17:06:35 | 2021-07-19T17:39:10 | 2021-07-19T17:32:00 | mariosasko | [] | This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, the... | true |
946,825,140 | https://api.github.com/repos/huggingface/datasets/issues/2666 | https://github.com/huggingface/datasets/pull/2666 | 2,666 | Adds CodeClippy dataset [WIP] | closed | 2 | 2021-07-17T13:32:04 | 2023-07-26T23:06:01 | 2022-10-03T09:37:35 | arampacha | [
"dataset contribution"
] | CodeClippy is an open-source code dataset scraped from GitHub during the Flax/JAX community week.
https://the-eye.eu/public/AI/training_data/code_clippy_data/ | true |
946,822,036 | https://api.github.com/repos/huggingface/datasets/issues/2665 | https://github.com/huggingface/datasets/pull/2665 | 2,665 | Adds APPS dataset to the hub [WIP] | closed | 1 | 2021-07-17T13:13:17 | 2022-10-03T09:38:10 | 2022-10-03T09:38:10 | arampacha | [
"dataset contribution"
] | A loading script for [APPS dataset](https://github.com/hendrycks/apps) | true |
946,552,273 | https://api.github.com/repos/huggingface/datasets/issues/2663 | https://github.com/huggingface/datasets/issues/2663 | 2,663 | [`to_json`] add multi-proc sharding support | closed | 2 | 2021-07-16T19:41:50 | 2021-09-13T13:56:37 | 2021-09-13T13:56:37 | stas00 | [
"enhancement"
] | As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR.
I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally i... | false |
946,470,815 | https://api.github.com/repos/huggingface/datasets/issues/2662 | https://github.com/huggingface/datasets/pull/2662 | 2,662 | Load Dataset from the Hub (NO DATASET SCRIPT) | closed | 5 | 2021-07-16T17:21:58 | 2021-08-25T14:53:01 | 2021-08-25T14:18:08 | lhoestq | [] | ## Load the data from any Dataset repository on the Hub
This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script.
As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. H... | true |
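For illustration, a sketch of the workflow this PR enables; the repository id and file names below are hypothetical:
```python
from datasets import load_dataset

# Load data straight from a dataset repository on the Hub, no loading script required.
dataset = load_dataset("username/my_dataset")

# Optionally map specific files in the repository to splits.
dataset = load_dataset(
    "username/my_dataset",
    data_files={"train": "train.csv", "test": "test.csv"},
)
```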
946,446,967 | https://api.github.com/repos/huggingface/datasets/issues/2661 | https://github.com/huggingface/datasets/pull/2661 | 2,661 | Add SD task for SUPERB | closed | 11 | 2021-07-16T16:43:21 | 2021-08-04T17:03:53 | 2021-08-04T17:03:53 | albertvillanova | [] | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
TODO:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upl... | true |
946,316,180 | https://api.github.com/repos/huggingface/datasets/issues/2660 | https://github.com/huggingface/datasets/pull/2660 | 2,660 | Move checks from _map_single to map | closed | 3 | 2021-07-16T13:53:33 | 2021-09-06T14:12:23 | 2021-09-06T14:12:23 | mariosasko | [] | The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is the... | true |
946,155,407 | https://api.github.com/repos/huggingface/datasets/issues/2659 | https://github.com/huggingface/datasets/pull/2659 | 2,659 | Allow dataset config kwargs to be None | closed | 0 | 2021-07-16T10:25:38 | 2021-07-16T12:46:07 | 2021-07-16T12:46:07 | lhoestq | [] | Close https://github.com/huggingface/datasets/issues/2658
The dataset config kwargs that were set to None were simply ignored.
This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder, which allows the separator to be inferred.
cc @SBrandeis | true |
946,139,532 | https://api.github.com/repos/huggingface/datasets/issues/2658 | https://github.com/huggingface/datasets/issues/2658 | 2,658 | Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv | closed | 0 | 2021-07-16T10:05:44 | 2021-07-16T12:46:06 | 2021-07-16T12:46:06 | lhoestq | [] | When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator.
Related to https://github.com/huggingface/datasets/pull/2656
cc @SBrandeis | false |
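With the fix from #2659/#2656, the intended usage would be something like the sketch below (the file name is hypothetical):
```python
from datasets import load_dataset

# sep=None is forwarded to pandas.read_csv, which then infers the separator.
dataset = load_dataset("csv", data_files="my_table.csv", sep=None, split="train")
```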
945,822,829 | https://api.github.com/repos/huggingface/datasets/issues/2657 | https://github.com/huggingface/datasets/issues/2657 | 2,657 | `to_json` reporting enhancements | open | 0 | 2021-07-15T23:32:18 | 2021-07-15T23:33:53 | null | stas00 | [
"enhancement"
] | While using `to_json` 2 things came to mind that would have made the experience easier on the user:
1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps... | false |
945,421,790 | https://api.github.com/repos/huggingface/datasets/issues/2656 | https://github.com/huggingface/datasets/pull/2656 | 2,656 | Change `from_csv` default arguments | closed | 1 | 2021-07-15T14:09:06 | 2023-09-24T09:56:44 | 2021-07-16T10:23:26 | SBrandeis | [] | Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator
This PR allows users to use this pandas feature by passing `sep=None` to `Dataset.from_csv`:
```python
Dataset.from_csv(
...,
sep=None
)
``` | true |
945,382,723 | https://api.github.com/repos/huggingface/datasets/issues/2655 | https://github.com/huggingface/datasets/issues/2655 | 2,655 | Allow the selection of multiple columns at once | closed | 5 | 2021-07-15T13:30:45 | 2024-01-09T15:11:27 | 2024-01-09T07:46:28 | Dref360 | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```
**... | false |
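Until such an API exists, a workaround sketch using the existing `remove_columns` helper (column names taken from the example above):
```python
from datasets import Dataset

my_dataset = Dataset.from_dict({"idx": [0, 1], "sentence": ["a", "b"], "label": [0, 1]})

# Keep only the desired columns by dropping every other one.
wanted = ["idx", "label"]
subset = my_dataset.remove_columns([col for col in my_dataset.column_names if col not in wanted])
```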
945,167,231 | https://api.github.com/repos/huggingface/datasets/issues/2654 | https://github.com/huggingface/datasets/issues/2654 | 2,654 | Give a user feedback if the dataset he loads is streamable or not | open | 2 | 2021-07-15T09:07:27 | 2021-08-02T11:03:21 | null | philschmid | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.
**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` but is not streamable, e.g....
945,102,321 | https://api.github.com/repos/huggingface/datasets/issues/2653 | https://github.com/huggingface/datasets/issues/2653 | 2,653 | Add SD task for SUPERB | closed | 2 | 2021-07-15T07:51:40 | 2021-08-04T17:03:52 | 2021-08-04T17:03:52 | albertvillanova | [
"dataset request"
] | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Up... | false |
944,865,924 | https://api.github.com/repos/huggingface/datasets/issues/2652 | https://github.com/huggingface/datasets/pull/2652 | 2,652 | Fix logging docstring | closed | 0 | 2021-07-14T23:19:58 | 2021-07-18T11:41:06 | 2021-07-15T09:57:31 | mariosasko | [] | Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534. | true |
944,796,961 | https://api.github.com/repos/huggingface/datasets/issues/2651 | https://github.com/huggingface/datasets/issues/2651 | 2,651 | Setting log level higher than warning does not suppress progress bar | closed | 7 | 2021-07-14T21:06:51 | 2022-07-08T14:51:57 | 2021-07-15T03:41:35 | Isa-rentacs | [
"bug"
] | ## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well).
According to #1627 one can suppress them by setting the log level higher than `warning`; however, doing so doesn't suppress them with version 1.9.0.
I also tried to set `DATASETS_VERBOS... | false |
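For reference, a sketch of the two knobs involved; the exact location of the progress-bar helper has moved between releases, so treat these calls as illustrative rather than version-specific:
```python
import datasets

# Raise the log level; per this report, this alone no longer hides the bars in 1.9.0.
datasets.logging.set_verbosity_error()

# Later releases expose a dedicated switch for progress bars (top-level in recent versions).
datasets.disable_progress_bar()
```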
944,672,565 | https://api.github.com/repos/huggingface/datasets/issues/2650 | https://github.com/huggingface/datasets/issues/2650 | 2,650 | [load_dataset] shard and parallelize the process | closed | 4 | 2021-07-14T18:04:58 | 2023-11-28T19:11:41 | 2023-11-28T19:11:40 | stas00 | [
"enhancement"
] | - Some huge datasets (e.g. oscar/en) take forever to build the first time, as the build runs on a single CPU core.
- If the build crashes, everything done up to that point gets lost
Request: Shard the build over multiple arrow files, which would enable:
- much faster build by parallelizing the build process
- if the p... | false |
944,651,229 | https://api.github.com/repos/huggingface/datasets/issues/2649 | https://github.com/huggingface/datasets/issues/2649 | 2,649 | adding progress bar / ETA for `load_dataset` | open | 2 | 2021-07-14T17:34:39 | 2023-03-27T10:32:49 | null | stas00 | [
"enhancement"
] | Please consider:
```
Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...
HF google storage unre... | false |
944,484,522 | https://api.github.com/repos/huggingface/datasets/issues/2648 | https://github.com/huggingface/datasets/issues/2648 | 2,648 | Add web_split dataset for Paraphase and Rephrase benchmark | open | 1 | 2021-07-14T14:24:36 | 2021-07-14T14:26:12 | null | bhadreshpsavani | [
"enhancement"
] | ## Describe:
For getting simple sentences from a complex sentence, there is a dataset and task like wiki_split available in Hugging Face datasets. This web_split is a very similar dataset. There are research papers which state that if we combine these two datasets and train the model, it will yield better resu... | false |
944,424,941 | https://api.github.com/repos/huggingface/datasets/issues/2647 | https://github.com/huggingface/datasets/pull/2647 | 2,647 | Fix anchor in README | closed | 0 | 2021-07-14T13:22:44 | 2021-07-18T11:41:18 | 2021-07-15T06:50:47 | mariosasko | [] | I forgot to push this fix in #2611, so I'm sending it now. | true |
944,379,954 | https://api.github.com/repos/huggingface/datasets/issues/2646 | https://github.com/huggingface/datasets/issues/2646 | 2,646 | downloading of yahoo_answers_topics dataset failed | closed | 2 | 2021-07-14T12:31:05 | 2022-08-04T08:28:24 | 2022-08-04T08:28:24 | vikrant7k | [
"bug"
] | ## Describe the bug
I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset
## Steps to reproduce the bug
self.dataset = load_dataset(
'yahoo_answers_topics', cache_dir=self.config... | false |
944,374,284 | https://api.github.com/repos/huggingface/datasets/issues/2645 | https://github.com/huggingface/datasets/issues/2645 | 2,645 | load_dataset processing failed with OS error after downloading a dataset | closed | 2 | 2021-07-14T12:23:53 | 2021-07-15T09:34:02 | 2021-07-15T09:34:02 | fake-warrior8 | [
"bug"
] | ## Describe the bug
After downloading a dataset like opus100, there is a bug that
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```
... | false |
944,254,748 | https://api.github.com/repos/huggingface/datasets/issues/2644 | https://github.com/huggingface/datasets/issues/2644 | 2,644 | Batched `map` not allowed to return 0 items | closed | 6 | 2021-07-14T09:58:19 | 2021-07-26T14:55:15 | 2021-07-26T14:55:15 | pcuenca | [
"bug"
] | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | false |
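A sketch of the pattern described above, assuming the crash occurs when a batch keeps zero rows (file names and columns are made up):
```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"file": ["a.wav", "b.wav"], "label": [0, 1]})  # toy stand-in for the real data

def keep_existing(batch):
    # A batched map function may return fewer rows than it received, which lets it act as an expensive filter.
    keep = [os.path.exists(path) for path in batch["file"]]
    return {key: [value for value, flag in zip(values, keep) if flag] for key, values in batch.items()}

ds = ds.map(keep_existing, batched=True)  # reported to fail when a batch ends up with 0 rows
```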
944,220,273 | https://api.github.com/repos/huggingface/datasets/issues/2643 | https://github.com/huggingface/datasets/issues/2643 | 2,643 | Enum used in map functions will raise a RecursionError with dill. | open | 4 | 2021-07-14T09:16:08 | 2021-11-02T09:51:11 | null | jorgeecardona | [
"bug"
] | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception, as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` ... | false |
944,175,697 | https://api.github.com/repos/huggingface/datasets/issues/2642 | https://github.com/huggingface/datasets/issues/2642 | 2,642 | Support multi-worker with streaming dataset (IterableDataset). | open | 3 | 2021-07-14T08:22:58 | 2024-05-03T10:11:04 | null | changjonathanc | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
The current `.map` does not support multiprocessing; the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).
**Describe the solution you'd like**
Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`.
**D... | false |
943,838,085 | https://api.github.com/repos/huggingface/datasets/issues/2641 | https://github.com/huggingface/datasets/issues/2641 | 2,641 | load_dataset("financial_phrasebank") NonMatchingChecksumError | closed | 4 | 2021-07-13T21:21:49 | 2022-08-04T08:30:08 | 2022-08-04T08:30:08 | courtmckay | [
"bug"
] | ## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```
## Expected results
I expect to see the financi... | false |
943,591,055 | https://api.github.com/repos/huggingface/datasets/issues/2640 | https://github.com/huggingface/datasets/pull/2640 | 2,640 | Fix docstrings | closed | 0 | 2021-07-13T16:09:14 | 2021-07-15T06:51:01 | 2021-07-15T06:06:12 | albertvillanova | [] | Fix rendering of some docstrings. | true |
943,527,463 | https://api.github.com/repos/huggingface/datasets/issues/2639 | https://github.com/huggingface/datasets/pull/2639 | 2,639 | Refactor patching to specific submodule | closed | 0 | 2021-07-13T15:08:45 | 2021-07-13T16:52:49 | 2021-07-13T16:52:49 | albertvillanova | [] | Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created.
In relation with the initial approach followed in #2631. | true |
943,484,913 | https://api.github.com/repos/huggingface/datasets/issues/2638 | https://github.com/huggingface/datasets/pull/2638 | 2,638 | Streaming for the Json loader | closed | 2 | 2021-07-13T14:37:06 | 2021-07-16T15:59:32 | 2021-07-16T15:59:31 | lhoestq | [] | It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows.
Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related... | true |
943,044,514 | https://api.github.com/repos/huggingface/datasets/issues/2636 | https://github.com/huggingface/datasets/pull/2636 | 2,636 | Streaming for the Pandas loader | closed | 0 | 2021-07-13T09:18:21 | 2021-07-13T14:37:24 | 2021-07-13T14:37:23 | lhoestq | [] | It was not using `open` in the builder. Therefore `pd.read_pickle` could fail when streaming from a private repo for example.
Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub | true |
943,030,999 | https://api.github.com/repos/huggingface/datasets/issues/2635 | https://github.com/huggingface/datasets/pull/2635 | 2,635 | Streaming for the CSV loader | closed | 0 | 2021-07-13T09:08:58 | 2021-07-13T15:19:38 | 2021-07-13T15:19:37 | lhoestq | [] | It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows.
Indeed, when streaming, `open` is extended to support reading from remote files progressively. | true |
942,805,621 | https://api.github.com/repos/huggingface/datasets/issues/2634 | https://github.com/huggingface/datasets/pull/2634 | 2,634 | Inject ASR template for lj_speech dataset | closed | 0 | 2021-07-13T06:04:54 | 2021-07-13T09:05:09 | 2021-07-13T09:05:09 | albertvillanova | [] | Related to: #2565, #2633.
cc: @lewtun | true |
942,396,414 | https://api.github.com/repos/huggingface/datasets/issues/2633 | https://github.com/huggingface/datasets/pull/2633 | 2,633 | Update ASR tags | closed | 0 | 2021-07-12T19:58:31 | 2021-07-13T05:45:26 | 2021-07-13T05:45:13 | lewtun | [] | This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620 | true |
942,293,727 | https://api.github.com/repos/huggingface/datasets/issues/2632 | https://github.com/huggingface/datasets/pull/2632 | 2,632 | add image-classification task template | closed | 2 | 2021-07-12T17:41:03 | 2021-07-13T15:44:28 | 2021-07-13T15:28:16 | nateraw | [] | Snippet below is the tl;dr, but you can try it out directly here:
[](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb)
```python
from datasets import load_datase... | true |
942,242,271 | https://api.github.com/repos/huggingface/datasets/issues/2631 | https://github.com/huggingface/datasets/pull/2631 | 2,631 | Delete extracted files when loading dataset | closed | 13 | 2021-07-12T16:39:33 | 2021-07-19T09:08:19 | 2021-07-19T09:08:19 | albertvillanova | [] | Close #2481, close #2604, close #2591.
cc: @stas00, @thomwolf, @BirgerMoell | true |
942,102,956 | https://api.github.com/repos/huggingface/datasets/issues/2630 | https://github.com/huggingface/datasets/issues/2630 | 2,630 | Progress bars are not properly rendered in Jupyter notebook | closed | 2 | 2021-07-12T14:07:13 | 2022-02-03T15:55:33 | 2022-02-03T15:55:33 | albertvillanova | [
"bug"
] | ## Describe the bug
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress bars.
## Actual results
Simple plain progress bars.
cc... | false |
941,819,205 | https://api.github.com/repos/huggingface/datasets/issues/2629 | https://github.com/huggingface/datasets/issues/2629 | 2,629 | Load datasets from the Hub without requiring a dataset script | closed | 1 | 2021-07-12T08:45:17 | 2021-08-25T14:18:08 | 2021-08-25T14:18:08 | lhoestq | [] | As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script.
Moreover I would like to be able to specify which file goes into which split using the `da... | false |
941,676,404 | https://api.github.com/repos/huggingface/datasets/issues/2628 | https://github.com/huggingface/datasets/pull/2628 | 2,628 | Use ETag of remote data files | closed | 0 | 2021-07-12T05:10:10 | 2021-07-12T14:08:34 | 2021-07-12T08:40:07 | albertvillanova | [] | Use ETag of remote data files to create config ID.
Related to #2616. | true |
941,503,349 | https://api.github.com/repos/huggingface/datasets/issues/2627 | https://github.com/huggingface/datasets/pull/2627 | 2,627 | Minor fix tests with Windows paths | closed | 0 | 2021-07-11T17:55:48 | 2021-07-12T14:08:47 | 2021-07-12T08:34:50 | albertvillanova | [] | Minor fix tests with Windows paths. | true |
941,497,830 | https://api.github.com/repos/huggingface/datasets/issues/2626 | https://github.com/huggingface/datasets/pull/2626 | 2,626 | Use correct logger in metrics.py | closed | 0 | 2021-07-11T17:22:30 | 2021-07-12T14:08:54 | 2021-07-12T05:54:29 | mariosasko | [] | Fixes #2624 | true |
941,439,922 | https://api.github.com/repos/huggingface/datasets/issues/2625 | https://github.com/huggingface/datasets/issues/2625 | 2,625 | ⚛️😇⚙️🔑 | closed | 0 | 2021-07-11T12:14:34 | 2021-07-12T05:55:59 | 2021-07-12T05:55:59 | hustlen0mics | [] | | false |
941,318,247 | https://api.github.com/repos/huggingface/datasets/issues/2624 | https://github.com/huggingface/datasets/issues/2624 | 2,624 | can't set verbosity for `metric.py` | closed | 1 | 2021-07-10T20:23:45 | 2021-07-12T05:54:29 | 2021-07-12T05:54:29 | thomas-happify | [
"bug"
] | ## Describe the bug
```
[2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock
[2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingfa... | false |
941,265,342 | https://api.github.com/repos/huggingface/datasets/issues/2623 | https://github.com/huggingface/datasets/pull/2623 | 2,623 | [Metrics] added wiki_split metrics | closed | 1 | 2021-07-10T14:51:50 | 2021-07-14T14:28:13 | 2021-07-12T22:34:31 | bhadreshpsavani | [] | Fixes: #2606
This pull request adds combined metrics for the wiki_split (English sentence split) task.
Reviewer: @patrickvonplaten | true |
941,127,785 | https://api.github.com/repos/huggingface/datasets/issues/2622 | https://github.com/huggingface/datasets/issues/2622 | 2,622 | Integration with AugLy | closed | 2 | 2021-07-10T00:03:09 | 2023-07-20T13:18:48 | 2023-07-20T13:18:47 | Darktex | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text.
It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP m... | false |
940,916,446 | https://api.github.com/repos/huggingface/datasets/issues/2621 | https://github.com/huggingface/datasets/pull/2621 | 2,621 | Use prefix to allow exceed Windows MAX_PATH | closed | 6 | 2021-07-09T16:39:53 | 2021-07-16T15:28:12 | 2021-07-16T15:28:11 | albertvillanova | [] | By using this prefix, you can exceed the Windows MAX_PATH limit.
See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces
Related to #2524, #2220. | true |
940,893,389 | https://api.github.com/repos/huggingface/datasets/issues/2620 | https://github.com/huggingface/datasets/pull/2620 | 2,620 | Add speech processing tasks | closed | 2 | 2021-07-09T16:07:29 | 2021-07-12T18:32:59 | 2021-07-12T17:32:02 | lewtun | [] | This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category.
The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set. | true |
940,858,236 | https://api.github.com/repos/huggingface/datasets/issues/2619 | https://github.com/huggingface/datasets/pull/2619 | 2,619 | Add ASR task for SUPERB | closed | 3 | 2021-07-09T15:19:45 | 2021-07-15T08:55:58 | 2021-07-13T12:40:18 | lewtun | [] | This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
... | true |
940,852,640 | https://api.github.com/repos/huggingface/datasets/issues/2618 | https://github.com/huggingface/datasets/issues/2618 | 2,618 | `filelock.py` Error | closed | 2 | 2021-07-09T15:12:49 | 2024-06-21T06:14:07 | 2023-11-23T19:06:19 | liyucheng09 | [
"bug"
] | ## Describe the bug
It seems that the `filelock.py` went error.
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
... | false |
940,846,847 | https://api.github.com/repos/huggingface/datasets/issues/2617 | https://github.com/huggingface/datasets/pull/2617 | 2,617 | Fix missing EOL issue in to_json for old versions of pandas | closed | 0 | 2021-07-09T15:05:45 | 2021-07-12T14:09:00 | 2021-07-09T15:28:33 | lhoestq | [] | Some versions of pandas don't add an EOL at the end of the output of `to_json`.
Therefore users could end up having two samples in the same line
Close https://github.com/huggingface/datasets/issues/2615 | true |
940,799,038 | https://api.github.com/repos/huggingface/datasets/issues/2616 | https://github.com/huggingface/datasets/pull/2616 | 2,616 | Support remote data files | closed | 2 | 2021-07-09T14:07:38 | 2021-07-09T16:13:41 | 2021-07-09T16:13:41 | albertvillanova | [
"enhancement"
] | Add support for (streaming) remote data files:
```python
data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
```
cc: @thomwolf | true |
940,794,339 | https://api.github.com/repos/huggingface/datasets/issues/2615 | https://github.com/huggingface/datasets/issues/2615 | 2,615 | Jsonlines export error | closed | 10 | 2021-07-09T14:02:05 | 2021-07-09T15:29:07 | 2021-07-09T15:28:33 | TevenLeScao | [
"bug"
] | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case), the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default.
## Steps to reproduce the bug
This wha... | false |
940,762,427 | https://api.github.com/repos/huggingface/datasets/issues/2614 | https://github.com/huggingface/datasets/pull/2614 | 2,614 | Convert numpy scalar to python float in Pearsonr output | closed | 0 | 2021-07-09T13:22:55 | 2021-07-12T14:13:02 | 2021-07-09T14:04:38 | lhoestq | [] | Following of https://github.com/huggingface/datasets/pull/2612 | true |
940,759,852 | https://api.github.com/repos/huggingface/datasets/issues/2613 | https://github.com/huggingface/datasets/pull/2613 | 2,613 | Use ndarray.item instead of ndarray.tolist | closed | 0 | 2021-07-09T13:19:35 | 2021-07-12T14:12:57 | 2021-07-09T13:50:05 | lewtun | [] | This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works).
Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#nump... | true |
940,604,512 | https://api.github.com/repos/huggingface/datasets/issues/2612 | https://github.com/huggingface/datasets/pull/2612 | 2,612 | Return Python float instead of numpy.float64 in sklearn metrics | closed | 3 | 2021-07-09T09:48:09 | 2021-07-12T14:12:53 | 2021-07-09T13:03:54 | lewtun | [] | This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-... | true |
940,307,053 | https://api.github.com/repos/huggingface/datasets/issues/2611 | https://github.com/huggingface/datasets/pull/2611 | 2,611 | More consistent naming | closed | 0 | 2021-07-09T00:09:17 | 2021-07-13T17:13:19 | 2021-07-13T16:08:30 | mariosasko | [] | As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc. | true |
939,899,829 | https://api.github.com/repos/huggingface/datasets/issues/2610 | https://github.com/huggingface/datasets/pull/2610 | 2,610 | Add missing WikiANN language tags | closed | 0 | 2021-07-08T14:08:01 | 2021-07-12T14:12:16 | 2021-07-08T15:44:04 | albertvillanova | [] | Add missing language tags for WikiANN datasets. | true |
939,616,682 | https://api.github.com/repos/huggingface/datasets/issues/2609 | https://github.com/huggingface/datasets/pull/2609 | 2,609 | Fix potential DuplicatedKeysError | closed | 1 | 2021-07-08T08:38:04 | 2021-07-12T14:13:16 | 2021-07-09T16:42:08 | albertvillanova | [] | Fix potential DuplicatedKeysError by ensuring keys are unique.
We should promote as a good practice that keys should be programmatically generated as unique, instead of read from the data (which might not be unique). | true |
938,897,626 | https://api.github.com/repos/huggingface/datasets/issues/2608 | https://github.com/huggingface/datasets/pull/2608 | 2,608 | Support streaming JSON files | closed | 0 | 2021-07-07T13:30:22 | 2021-07-12T14:12:31 | 2021-07-08T16:08:41 | albertvillanova | [] | Use `open` in the JSON dataset builder, so that it can be patched with `xopen` for streaming.
Close #2607. | true |
938,796,902 | https://api.github.com/repos/huggingface/datasets/issues/2607 | https://github.com/huggingface/datasets/issues/2607 | 2,607 | Streaming local gzip compressed JSON line files is not working | closed | 6 | 2021-07-07T11:36:33 | 2021-07-20T09:50:19 | 2021-07-08T16:08:41 | thomwolf | [
"bug"
] | ## Describe the bug
Using streaming to iterate over local gzip-compressed JSON files raises a file-not-found error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
next(iter(streamed_dataset))... | false |
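The snippet in the report leaves `data_files` undefined; a completed sketch with hypothetical local paths:
```python
from datasets import load_dataset

# Hypothetical local gzip-compressed JSON Lines files.
data_files = ["my_data/file-000.json.gz", "my_data/file-001.json.gz"]

streamed_dataset = load_dataset("json", split="train", data_files=data_files, streaming=True)
print(next(iter(streamed_dataset)))
```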
938,763,684 | https://api.github.com/repos/huggingface/datasets/issues/2606 | https://github.com/huggingface/datasets/issues/2606 | 2,606 | [Metrics] addition of wiki_split metrics | closed | 1 | 2021-07-07T10:56:04 | 2021-07-12T22:34:31 | 2021-07-12T22:34:31 | bhadreshpsavani | [
"enhancement",
"metric request"
] | **Is your feature request related to a problem? Please describe.**
While training the model on the sentence split task in English, we need to evaluate the trained model on `Exact Match`, `SARI` and `BLEU` scores,
like this
 | closed | 0 | 2021-07-07T08:47:23 | 2021-07-12T14:10:27 | 2021-07-07T08:59:13 | lhoestq | [] | During the FLAX sprint some users have this error when streaming datasets:
```python
aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer
```
This error must trigger a retry instead of directly crashing
Therefore I extended the error type that triggers the retry to be the base aiohttp er... | true |
938,602,237 | https://api.github.com/repos/huggingface/datasets/issues/2604 | https://github.com/huggingface/datasets/issues/2604 | 2,604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | closed | 14 | 2021-07-07T07:56:16 | 2021-07-19T09:08:18 | 2021-07-19T09:08:18 | thomwolf | [
"enhancement"
] | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of arrow cache tables.
Having a simple way to delete the extracted files after usage (or even better, to strea... | false |
938,588,149 | https://api.github.com/repos/huggingface/datasets/issues/2603 | https://github.com/huggingface/datasets/pull/2603 | 2,603 | Fix DuplicatedKeysError in omp | closed | 0 | 2021-07-07T07:38:32 | 2021-07-12T14:10:41 | 2021-07-07T12:56:35 | albertvillanova | [] | Close #2598. | true |
938,555,712 | https://api.github.com/repos/huggingface/datasets/issues/2602 | https://github.com/huggingface/datasets/pull/2602 | 2,602 | Remove import of transformers | closed | 0 | 2021-07-07T06:58:18 | 2021-07-12T14:10:22 | 2021-07-07T08:28:51 | albertvillanova | [] | When pickling a tokenizer within multiprocessing, check that it is an instance of transformers' PreTrainedTokenizerBase without importing transformers.
Related to huggingface/transformers#12549 and #502. | true |
938,096,396 | https://api.github.com/repos/huggingface/datasets/issues/2601 | https://github.com/huggingface/datasets/pull/2601 | 2,601 | Fix `filter` with multiprocessing in case all samples are discarded | closed | 0 | 2021-07-06T17:06:28 | 2021-07-12T14:10:35 | 2021-07-07T12:50:31 | mxschmdt | [] | Fixes #2600
Also I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process. | true |
938,086,745 | https://api.github.com/repos/huggingface/datasets/issues/2600 | https://github.com/huggingface/datasets/issues/2600 | 2,600 | Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded | closed | 0 | 2021-07-06T16:53:25 | 2021-07-07T12:50:31 | 2021-07-07T12:50:31 | mxschmdt | [
"bug"
] | ## Describe the bug
If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes.
## Steps to reproduce the bug
```python
from datasets import Dataset
data = Dataset.from_dict({'id': [0,1]})
dat... | false |
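The reproduction in the body above is cut off; a minimal sketch of the described situation, assuming a predicate that discards every sample:
```python
from datasets import Dataset

data = Dataset.from_dict({"id": [0, 1]})
# With num_proc > 1 and a predicate that keeps nothing, every shard ends up empty,
# which is the case reported to crash.
data = data.filter(lambda example: False, num_proc=2)
```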
937,980,229 | https://api.github.com/repos/huggingface/datasets/issues/2599 | https://github.com/huggingface/datasets/pull/2599 | 2,599 | Update processing.rst with other export formats | closed | 0 | 2021-07-06T14:50:38 | 2021-07-12T14:10:16 | 2021-07-07T08:05:48 | TevenLeScao | [] | Add other supported export formats than CSV in the docs. | true |
937,930,632 | https://api.github.com/repos/huggingface/datasets/issues/2598 | https://github.com/huggingface/datasets/issues/2598 | 2,598 | Unable to download omp dataset | closed | 1 | 2021-07-06T14:00:52 | 2021-07-07T12:56:35 | 2021-07-07T12:56:35 | erikadistefano | [
"bug"
] | ## Describe the bug
The omp dataset cannot be downloaded because of a DuplicatedKeysError
## Steps to reproduce the bug
from datasets import load_dataset
omp = load_dataset('omp', 'posts_labeled')
print(omp)
## Expected results
This code should download the omp dataset and print the dictionary
## Actual r... | false |
937,917,770 | https://api.github.com/repos/huggingface/datasets/issues/2597 | https://github.com/huggingface/datasets/pull/2597 | 2,597 | Remove redundant prepare_module | closed | 0 | 2021-07-06T13:47:45 | 2021-07-12T14:10:52 | 2021-07-07T13:01:46 | albertvillanova | [
"refactoring"
] | I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`. | true |
937,598,914 | https://api.github.com/repos/huggingface/datasets/issues/2596 | https://github.com/huggingface/datasets/issues/2596 | 2,596 | Transformer Class on dataset | closed | 9 | 2021-07-06T07:27:15 | 2022-11-02T14:26:09 | 2022-11-02T14:26:09 | arita37 | [
"enhancement"
] | Just wondering if you have the intention to create a
TransformerClass:
dataset --> dataset
and make deterministic transformations (i.e., not fit).
| false |
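For context, a deterministic dataset-to-dataset transformation (with no fitting step) can already be expressed with the existing `map` API; a small illustrative sketch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["Hello World", "HF Datasets"]})

# A purely deterministic transform: dataset in, dataset out, nothing is fitted.
ds = ds.map(lambda example: {"text": example["text"].lower()})
```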
937,483,120 | https://api.github.com/repos/huggingface/datasets/issues/2595 | https://github.com/huggingface/datasets/issues/2595 | 2,595 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets | closed | 2 | 2021-07-06T03:20:55 | 2021-07-06T05:59:49 | 2021-07-06T05:59:49 | profsatwinder | [
"bug"
] | Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_da... | false |
937,294,772 | https://api.github.com/repos/huggingface/datasets/issues/2594 | https://github.com/huggingface/datasets/pull/2594 | 2,594 | Fix BibTeX entry | closed | 0 | 2021-07-05T18:24:10 | 2021-07-06T04:59:38 | 2021-07-06T04:59:38 | albertvillanova | [] | Fix BibTeX entry. | true |
937,242,137 | https://api.github.com/repos/huggingface/datasets/issues/2593 | https://github.com/huggingface/datasets/pull/2593 | 2,593 | Support pandas 1.3.0 read_csv | closed | 0 | 2021-07-05T16:40:04 | 2021-07-05T17:14:14 | 2021-07-05T17:14:14 | lhoestq | [] | Workaround for this issue in pandas 1.3.0 : https://github.com/pandas-dev/pandas/issues/42387
The csv reader raises an error:
```python
/usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on... | true |
937,060,559 | https://api.github.com/repos/huggingface/datasets/issues/2592 | https://github.com/huggingface/datasets/pull/2592 | 2,592 | Add c4.noclean infos | closed | 0 | 2021-07-05T12:51:40 | 2021-07-05T13:15:53 | 2021-07-05T13:15:52 | lhoestq | [] | Adding the data files checksums and the dataset size of the c4.noclean configuration of the C4 dataset | true |
936,957,975 | https://api.github.com/repos/huggingface/datasets/issues/2591 | https://github.com/huggingface/datasets/issues/2591 | 2,591 | Cached dataset overflowing disk space | closed | 4 | 2021-07-05T10:43:19 | 2021-07-19T09:08:19 | 2021-07-19T09:08:19 | BirgerMoell | [] | I'm training a Swedish Wav2vec2 model on a Linux GPU and having the issue that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB).
The cache folder is 500 GB (and now my disk space is full).
Is there a way to toggle caching or set the caching to b... | false |
936,954,348 | https://api.github.com/repos/huggingface/datasets/issues/2590 | https://github.com/huggingface/datasets/pull/2590 | 2,590 | Add language tags | closed | 0 | 2021-07-05T10:39:57 | 2021-07-05T10:58:48 | 2021-07-05T10:58:48 | lewtun | [] | This PR adds some missing language tags needed for ASR datasets in #2565 | true |
936,825,060 | https://api.github.com/repos/huggingface/datasets/issues/2589 | https://github.com/huggingface/datasets/pull/2589 | 2,589 | Support multilabel metrics | closed | 5 | 2021-07-05T08:19:25 | 2022-07-29T10:56:25 | 2021-07-08T08:40:15 | albertvillanova | [] | Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
Close #2554. | true |
936,795,541 | https://api.github.com/repos/huggingface/datasets/issues/2588 | https://github.com/huggingface/datasets/pull/2588 | 2,588 | Fix test_is_small_dataset | closed | 0 | 2021-07-05T07:46:26 | 2021-07-12T14:10:11 | 2021-07-06T17:09:30 | albertvillanova | [] | Remove environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because env variable is read in datasets.config when first loading datasets, and it is never reread during tests. | true |