Column schema of the issues dump (dtype plus observed minimum/maximum; min/max are value ranges for numeric and timestamp columns, and length ranges for string and list columns):

| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | string (lengths) | 58 | 61 |
| html_url | string (lengths) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (lengths) | 1 | 290 |
| state | string (2 values) | | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (lengths) | 3 | 26 |
| labels | list (lengths) | 0 | 4 |
| body | string (lengths) | 0 | 228k |
| is_pull_request | bool (2 classes) | | |
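
For orientation, a minimal sketch of loading and querying a dump with this schema through the `datasets` library; the repository id below is a placeholder, not the actual source of this preview.

```python
# Hypothetical repository id; substitute the dataset that actually hosts this dump.
from datasets import load_dataset

issues = load_dataset("user/datasets-github-issues", split="train")
print(issues.features)  # column names and dtypes, matching the schema table above

# Example query: open issues that are not pull requests.
open_issues = issues.filter(lambda x: x["state"] == "open" and not x["is_pull_request"])
print(len(open_issues), "open issues")
```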
#2794 · issue · open · Warnings and documentation about pickling incorrect
id 969,728,545 · mbforbes · labels ["bug"] · comments 0 · created 2021-08-12T23:09:13 · updated 2021-08-12T23:09:31 · closed null
https://github.com/huggingface/datasets/issues/2794 · API https://api.github.com/repos/huggingface/datasets/issues/2794
body: ## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: ...

#2793 · pull request · closed · Fix type hint for data_files
id 968,967,773 · albertvillanova · labels [] · comments 0 · created 2021-08-12T14:42:37 · updated 2021-08-12T15:35:29 · closed 2021-08-12T15:35:29
https://github.com/huggingface/datasets/pull/2793 · API https://api.github.com/repos/huggingface/datasets/issues/2793
body: Fix type hint for `data_files` in signatures and docstrings.

#2792 · pull request · closed · Update: GooAQ - add train/val/test splits
id 968,650,274 · bhavitvyamalik · labels [] · comments 2 · created 2021-08-12T11:40:18 · updated 2021-08-27T15:58:45 · closed 2021-08-27T15:58:14
https://github.com/huggingface/datasets/pull/2792 · API https://api.github.com/repos/huggingface/datasets/issues/2792
body: [GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well.

#2791 · pull request · closed · Fix typo in cnn_dailymail
id 968,360,314 · omaralsayed · labels [] · comments 0 · created 2021-08-12T08:38:42 · updated 2021-08-12T11:17:59 · closed 2021-08-12T11:17:59
https://github.com/huggingface/datasets/pull/2791 · API https://api.github.com/repos/huggingface/datasets/issues/2791
body: null

#2790 · pull request · closed · Fix typo in test_dataset_common
id 967,772,181 · nateraw · labels [] · comments 0 · created 2021-08-12T01:10:29 · updated 2021-08-12T11:31:29 · closed 2021-08-12T11:31:29
https://github.com/huggingface/datasets/pull/2790 · API https://api.github.com/repos/huggingface/datasets/issues/2790
body: null

#2789 · pull request · closed · Updated dataset description of DaNE
id 967,361,934 · KennethEnevoldsen · labels [] · comments 1 · created 2021-08-11T19:58:48 · updated 2021-08-12T16:10:59 · closed 2021-08-12T16:06:01
https://github.com/huggingface/datasets/pull/2789 · API https://api.github.com/repos/huggingface/datasets/issues/2789
body: null

#2788 · issue · closed · How to sample every file in a list of files making up a split in a dataset when loading?
id 967,149,389 · brijow · labels [] · comments 1 · created 2021-08-11T17:43:21 · updated 2023-07-25T17:40:50 · closed 2023-07-25T17:40:50
https://github.com/huggingface/datasets/issues/2788 · API https://api.github.com/repos/huggingface/datasets/issues/2788
body: I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=[...
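
The loading pattern from #2788, as a runnable sketch; the CSV file names are placeholders. Per-file sampling is not built into `load_dataset`, so subsampling happens after loading.

```python
# Placeholder CSV paths; each split is the concatenation of its files.
from datasets import load_dataset

data_files = {
    "train": ["train_file1.csv", "train_file2.csv"],
    "test": ["test_file1.csv", "test_file2.csv"],
    "val": ["val_file1.csv", "val_file2.csv"],
}
dataset = load_dataset("csv", data_files=data_files)

# Subsample a split after loading, e.g. 100 shuffled training rows.
sample = dataset["train"].shuffle(seed=42).select(range(100))
```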
#2787 · issue · closed · ConnectionError: Couldn't reach https://raw.githubusercontent.com
id 967,018,406 · jinec · labels ["bug"] · comments 9 · created 2021-08-11T16:19:01 · updated 2023-10-03T12:39:25 · closed 2021-08-18T15:09:18
https://github.com/huggingface/datasets/issues/2787 · API https://api.github.com/repos/huggingface/datasets/issues/2787
body: Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/...

#2786 · pull request · closed · Support streaming compressed files
id 966,282,934 · albertvillanova · labels [] · comments 0 · created 2021-08-11T09:02:06 · updated 2021-08-17T05:28:39 · closed 2021-08-16T06:36:19
https://github.com/huggingface/datasets/pull/2786 · API https://api.github.com/repos/huggingface/datasets/issues/2786
body: Add support to stream compressed files (current options in fsspec): - bz2 - lz4 - xz - zstd cc: @lewtun
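
A sketch of what #2786 enables, assuming a remote xz-compressed JSON-lines file (the URL is hypothetical):

```python
# Hypothetical URL; streaming decompresses on the fly instead of downloading first.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="https://example.com/corpus/train.jsonl.xz",
    split="train",
    streaming=True,
)
print(next(iter(ds)))  # first example, fetched lazily
```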
#2783 · pull request · closed · Add KS task to SUPERB
id 965,461,382 · anton-l · labels [] · comments 5 · created 2021-08-10T22:14:07 · updated 2021-08-12T16:45:01 · closed 2021-08-11T20:19:17
https://github.com/huggingface/datasets/pull/2783 · API https://api.github.com/repos/huggingface/datasets/issues/2783
body: Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051). - [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting) - [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_comma...

#2782 · pull request · closed · Fix renaming of corpus_bleu args
id 964,858,439 · albertvillanova · labels [] · comments 0 · created 2021-08-10T11:02:34 · updated 2021-08-10T11:16:07 · closed 2021-08-10T11:16:07
https://github.com/huggingface/datasets/pull/2782 · API https://api.github.com/repos/huggingface/datasets/issues/2782
body: Last `sacrebleu` release (v2.0.0) has renamed `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes the args without parameter names, s...
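
The compatibility trick from #2782, sketched: passing the first two arguments positionally works on both sides of the v2.0.0 rename.

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is on the mat"]]  # one reference stream

# Positional args work before and after the rename
# ((sys_stream, ref_streams) -> (hypotheses, references)).
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```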
#2781 · issue · closed · Latest v2.0.0 release of sacrebleu has broken some metrics
id 964,805,351 · albertvillanova · labels ["bug"] · comments 0 · created 2021-08-10T09:59:41 · updated 2021-08-10T11:16:07 · closed 2021-08-10T11:16:07
https://github.com/huggingface/datasets/issues/2781 · API https://api.github.com/repos/huggingface/datasets/issues/2781
body: ## Describe the bug After `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken: - Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #273...

#2780 · pull request · closed · VIVOS dataset for Vietnamese ASR
id 964,794,764 · binh234 · labels [] · comments 0 · created 2021-08-10T09:47:36 · updated 2021-08-12T11:09:30 · closed 2021-08-12T11:09:30
https://github.com/huggingface/datasets/pull/2780 · API https://api.github.com/repos/huggingface/datasets/issues/2780
body: null

#2779 · pull request · closed · Fix sacrebleu tokenizers
id 964,775,085 · albertvillanova · labels [] · comments 0 · created 2021-08-10T09:24:27 · updated 2021-08-10T11:03:08 · closed 2021-08-10T10:57:54
https://github.com/huggingface/datasets/pull/2779 · API https://api.github.com/repos/huggingface/datasets/issues/2779
body: Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR makes a hot fix of the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()...

#2778 · pull request · closed · Do not pass tokenize to sacrebleu
id 964,737,422 · albertvillanova · labels [] · comments 0 · created 2021-08-10T08:40:37 · updated 2021-08-10T10:03:37 · closed 2021-08-10T10:03:37
https://github.com/huggingface/datasets/pull/2778 · API https://api.github.com/repos/huggingface/datasets/issues/2778
body: Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR does not pass `tokenize` to `sacrebleu` (note that the user cannot pass it anyway) and `sacrebleu` will ...

#2777 · pull request · closed · Use packaging to handle versions
id 964,696,380 · albertvillanova · labels [] · comments 0 · created 2021-08-10T07:51:39 · updated 2021-08-18T13:56:27 · closed 2021-08-18T13:56:27
https://github.com/huggingface/datasets/pull/2777 · API https://api.github.com/repos/huggingface/datasets/issues/2777
body: Use packaging module to handle/validate/check versions of Python packages. Related to #2769.
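
A sketch of the approach in #2777; `packaging` also parses the dev versions that motivated #2769, such as pyarrow's `2.1.0.dev612`.

```python
# `packaging` understands PEP 440 versions, including dev suffixes that break
# naive string comparisons (e.g. pyarrow built from source).
from packaging import version

installed = version.parse("2.1.0.dev612")
required = version.parse("1.0.0")
assert installed >= required, f"pyarrow >= {required} required, found {installed}"
print(installed.base_version)  # "2.1.0"
```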
#2776 · issue · open · document `config.HF_DATASETS_OFFLINE` and precedence
id 964,400,596 · stas00 · labels ["enhancement"] · comments 0 · created 2021-08-09T21:23:17 · updated 2021-08-09T21:23:17 · closed null
https://github.com/huggingface/datasets/issues/2776 · API https://api.github.com/repos/huggingface/datasets/issues/2776
body: https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-th...
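
How the variable from #2776 is typically used, as a sketch; `datasets` reads the environment when it is imported, which is part of the precedence question raised in the issue.

```python
# Set before importing datasets, since config reads the environment at import time.
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets
print(datasets.config.HF_DATASETS_OFFLINE)  # True: only cached data will be used
```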
#2775 · issue · closed · `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
id 964,303,626 · mbforbes · labels ["bug"] · comments 3 · created 2021-08-09T19:28:51 · updated 2024-01-26T15:05:36 · closed 2024-01-26T15:05:35
https://github.com/huggingface/datasets/issues/2775 · API https://api.github.com/repos/huggingface/datasets/issues/2775
body: ## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se...
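
An illustration of the failure mode in #2775 using the plain `random` module rather than the actual `datasets` internals: a fingerprint drawn from a globally seeded RNG repeats after every reseed.

```python
# Not datasets' real implementation - just the RNG behavior the issue describes.
import random

def random_fingerprint(nbits: int = 64) -> str:
    return f"{random.getrandbits(nbits):016x}"

random.seed(42)              # what transformers.set_seed(42) does to the global RNGs
first = random_fingerprint()
random.seed(42)
second = random_fingerprint()
print(first == second)       # True: the "random" fingerprint is no longer unique per run
```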
#2774 · pull request · closed · Prevent .map from using multiprocessing when loading from cache
id 963,932,199 · thomasw21 · labels [] · comments 6 · created 2021-08-09T12:11:38 · updated 2021-09-09T10:20:28 · closed 2021-09-09T10:20:28
https://github.com/huggingface/datasets/pull/2774 · API https://api.github.com/repos/huggingface/datasets/issues/2774
body: ## Context On our setup, we use different setups to train vs preprocess datasets. Usually we are able to obtain a high number of cpus to preprocess, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently if we use `num_proc={whatever the preprocessing value was}` we load fr...

#2773 · issue · closed · Remove dataset_infos.json
id 963,730,497 · albertvillanova · labels ["enhancement", "generic discussion"] · comments 1 · created 2021-08-09T07:43:19 · updated 2024-05-04T14:52:10 · closed 2024-05-04T14:52:10
https://github.com/huggingface/datasets/issues/2773 · API https://api.github.com/repos/huggingface/datasets/issues/2773
body: **Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_byt...

#2772 · issue · open · Remove returned feature constrain
id 963,348,834 · PosoSAgapo · labels ["enhancement"] · comments 0 · created 2021-08-08T04:01:30 · updated 2021-08-08T08:48:01 · closed null
https://github.com/huggingface/datasets/issues/2772 · API https://api.github.com/repos/huggingface/datasets/issues/2772
body: In the current version, the returned value of the map function has to be list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse, like verb words and noun chunks; if we want to assign different values to different words, this will result in a large sparse matrix if we only score...

#2771 · pull request · closed · [WIP][Common Voice 7] Add common voice 7.0
id 963,257,036 · patrickvonplaten · labels [] · comments 2 · created 2021-08-07T16:01:10 · updated 2021-12-06T23:24:02 · closed 2021-12-06T23:24:02
https://github.com/huggingface/datasets/pull/2771 · API https://api.github.com/repos/huggingface/datasets/issues/2771
body: This PR allows loading the new common voice dataset manually as explained when doing: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab") ``` => ``` Please follow the manual download instructions: You need t...

#2770 · pull request · closed · Add support for fast tokenizer in BertScore
id 963,246,512 · mariosasko · labels [] · comments 0 · created 2021-08-07T15:00:03 · updated 2021-08-09T12:34:43 · closed 2021-08-09T11:16:25
https://github.com/huggingface/datasets/pull/2770 · API https://api.github.com/repos/huggingface/datasets/issues/2770
body: This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib. Fixes #2765

#2769 · pull request · closed · Allow PyArrow from source
id 963,240,802 · patrickvonplaten · labels [] · comments 0 · created 2021-08-07T14:26:44 · updated 2021-08-09T15:38:39 · closed 2021-08-09T15:38:39
https://github.com/huggingface/datasets/pull/2769 · API https://api.github.com/repos/huggingface/datasets/issues/2769
body: When installing pyarrow from source the version is: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` -> however this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.

#2768 · issue · closed · `ArrowInvalid: Added column's length must match table's length.` after using `select`
id 963,229,173 · lvwerra · labels ["bug"] · comments 2 · created 2021-08-07T13:17:29 · updated 2021-08-09T11:26:43 · closed 2021-08-09T11:26:43
https://github.com/huggingface/datasets/issues/2768 · API https://api.github.com/repos/huggingface/datasets/issues/2768
body: ## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the lengths don't match, with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets im...
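
The length constraint behind #2768, sketched on toy data: a column added after `.select()` has to match the downsampled length.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"example {i}" for i in range(10)]})
small = ds.select(range(5))

small = small.add_column("label", [0, 1, 0, 1, 0])  # len 5 == len(small): OK
# small.add_column("label", list(range(10)))        # len 10 -> ArrowInvalid, as reported
print(small)
```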
#2767 · issue · closed · equal operation to perform unbatch for huggingface datasets
id 963,002,120 · dorooddorood606 · labels ["bug"] · comments 5 · created 2021-08-06T19:45:52 · updated 2022-03-07T13:58:00 · closed 2022-03-07T13:58:00
https://github.com/huggingface/datasets/issues/2767 · API https://api.github.com/repos/huggingface/datasets/issues/2767
body: Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue and I need to replicate each entry of the dataset for each answer, to ma...
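
A possible `datasets`-only answer to #2767, sketched: a batched `map` may return more rows than it receives, which gives unbatch-like expansion (one row per (entry, answer) pair).

```python
from datasets import Dataset

ds = Dataset.from_dict({"query": ["q1", "q2"], "answers": [["a", "b"], ["c"]]})

def expand(batch):
    # Emit one row per (query, answer) pair.
    out = {"query": [], "answer": []}
    for query, answers in zip(batch["query"], batch["answers"]):
        for answer in answers:
            out["query"].append(query)
            out["answer"].append(answer)
    return out

flat = ds.map(expand, batched=True, remove_columns=ds.column_names)
print(flat["query"], flat["answer"])  # ['q1', 'q1', 'q2'] ['a', 'b', 'c']
```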
#2766 · pull request · closed · fix typo (ShuffingConfig -> ShufflingConfig)
id 962,994,198 · daleevans · labels [] · comments 0 · created 2021-08-06T19:31:40 · updated 2021-08-10T14:17:03 · closed 2021-08-10T14:17:02
https://github.com/huggingface/datasets/pull/2766 · API https://api.github.com/repos/huggingface/datasets/issues/2766
body: pretty straightforward, it should be Shuffling instead of Shuffing

#2765 · issue · closed · BERTScore Error
id 962,861,395 · gagan3012 · labels ["bug"] · comments 1 · created 2021-08-06T15:58:57 · updated 2021-08-09T11:16:25 · closed 2021-08-09T11:16:25
https://github.com/huggingface/datasets/issues/2765 · API https://api.github.com/repos/huggingface/datasets/issues/2765
body: ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references,lang='en') ...

#2764 · pull request · closed · Add DER metric for SUPERB speaker diarization task
id 962,554,799 · albertvillanova · labels ["transfer-to-evaluate"] · comments 1 · created 2021-08-06T09:12:36 · updated 2023-07-11T09:35:23 · closed 2023-07-11T09:35:23
https://github.com/huggingface/datasets/pull/2764 · API https://api.github.com/repos/huggingface/datasets/issues/2764
body: null

#2763 · issue · closed · English wikipedia datasets is not clean
id 961,895,523 · lucadiliello · labels ["bug"] · comments 1 · created 2021-08-05T14:37:24 · updated 2023-07-25T17:43:04 · closed 2023-07-25T17:43:04
https://github.com/huggingface/datasets/issues/2763 · API https://api.github.com/repos/huggingface/datasets/issues/2763
body: ## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.e...

#2762 · issue · closed · Add RVL-CDIP dataset
id 961,652,046 · NielsRogge · labels ["dataset request", "vision"] · comments 3 · created 2021-08-05T09:57:05 · updated 2022-04-21T17:15:41 · closed 2022-04-21T17:15:41
https://github.com/huggingface/datasets/issues/2762 · API https://api.github.com/repos/huggingface/datasets/issues/2762
body: ## Adding a Dataset - **Name:** RVL-CDIP - **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The image...

#2761 · issue · closed · Error loading C4 realnewslike dataset
id 961,568,287 · danshirron · labels ["bug"] · comments 4 · created 2021-08-05T08:16:58 · updated 2021-08-08T19:44:34 · closed 2021-08-08T19:44:34
https://github.com/huggingface/datasets/issues/2761 · API https://api.github.com/repos/huggingface/datasets/issues/2761
body: ## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|███████████████████████...

#2760 · issue · open · Add Nuswide dataset
id 961,372,667 · shivangibithel · labels ["dataset request", "vision"] · comments 0 · created 2021-08-05T03:00:41 · updated 2021-12-08T12:06:23 · closed null
https://github.com/huggingface/datasets/issues/2760 · API https://api.github.com/repos/huggingface/datasets/issues/2760
body: ## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-c...

#2758 · pull request · closed · Raise ManualDownloadError when loading a dataset that requires previous manual download
id 960,206,575 · albertvillanova · labels [] · comments 0 · created 2021-08-04T10:19:55 · updated 2021-08-04T11:36:30 · closed 2021-08-04T11:36:30
https://github.com/huggingface/datasets/pull/2758 · API https://api.github.com/repos/huggingface/datasets/issues/2758
body: This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing. The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode. Close #2749. cc: @severo

#2757 · issue · closed · Unexpected type after `concatenate_datasets`
id 959,984,081 · JulesBelveze · labels ["bug"] · comments 2 · created 2021-08-04T07:10:39 · updated 2021-08-04T16:01:24 · closed 2021-08-04T16:01:23
https://github.com/huggingface/datasets/issues/2757 · API https://api.github.com/repos/huggingface/datasets/issues/2757
body: ## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everythi...
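
Context for #2757: the concatenated dataset yields plain Python objects unless an output format is set, so re-applying the torch format restores tensors. A sketch, assuming torch is installed:

```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})
b = Dataset.from_dict({"x": [[5.0, 6.0]]})

combined = concatenate_datasets([a, b])
combined.set_format("torch", columns=["x"])   # formatting is what produces tensors
print(type(combined[0]["x"]))                 # <class 'torch.Tensor'>
```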
#2756 · pull request · closed · Fix metadata JSON for ubuntu_dialogs_corpus dataset
id 959,255,646 · albertvillanova · labels [] · comments 0 · created 2021-08-03T15:48:59 · updated 2021-08-04T09:43:25 · closed 2021-08-04T09:43:25
https://github.com/huggingface/datasets/pull/2756 · API https://api.github.com/repos/huggingface/datasets/issues/2756
body: Related to #2743.

#2755 · pull request · closed · Fix metadata JSON for turkish_movie_sentiment dataset
id 959,115,888 · albertvillanova · labels [] · comments 0 · created 2021-08-03T13:25:44 · updated 2021-08-04T09:06:54 · closed 2021-08-04T09:06:53
https://github.com/huggingface/datasets/pull/2755 · API https://api.github.com/repos/huggingface/datasets/issues/2755
body: Related to #2743.

#2754 · pull request · closed · Generate metadata JSON for telugu_books dataset
id 959,105,577 · albertvillanova · labels [] · comments 0 · created 2021-08-03T13:14:52 · updated 2021-08-04T08:49:02 · closed 2021-08-04T08:49:02
https://github.com/huggingface/datasets/pull/2754 · API https://api.github.com/repos/huggingface/datasets/issues/2754
body: Related to #2743.

#2753 · pull request · closed · Generate metadata JSON for reclor dataset
id 959,036,995 · albertvillanova · labels [] · comments 0 · created 2021-08-03T11:52:29 · updated 2021-08-04T08:07:15 · closed 2021-08-04T08:07:15
https://github.com/huggingface/datasets/pull/2753 · API https://api.github.com/repos/huggingface/datasets/issues/2753
body: Related to #2743.

#2752 · pull request · closed · Generate metadata JSON for lm1b dataset
id 959,023,608 · albertvillanova · labels [] · comments 0 · created 2021-08-03T11:34:56 · updated 2021-08-04T06:40:40 · closed 2021-08-04T06:40:39
https://github.com/huggingface/datasets/pull/2752 · API https://api.github.com/repos/huggingface/datasets/issues/2752
body: Related to #2743.

#2751 · pull request · closed · Update metadata for wikihow dataset
id 959,021,262 · albertvillanova · labels [] · comments 0 · created 2021-08-03T11:31:57 · updated 2021-08-03T15:52:09 · closed 2021-08-03T15:52:09
https://github.com/huggingface/datasets/pull/2751 · API https://api.github.com/repos/huggingface/datasets/issues/2751
body: Update metadata for wikihow dataset: - Remove leading new line character in description and citation - Update metadata JSON - Remove no longer necessary `urls_checksums/checksums.txt` file Related to #2748.

#2750 · issue · closed · Second concatenation of datasets produces errors
id 958,984,730 · Aktsvigun · labels ["bug"] · comments 5 · created 2021-08-03T10:47:04 · updated 2022-01-19T14:23:43 · closed 2022-01-19T14:19:05
https://github.com/huggingface/datasets/issues/2750 · API https://api.github.com/repos/huggingface/datasets/issues/2750
body: Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of the tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets d...

#2749 · issue · closed · Raise a proper exception when trying to stream a dataset that requires to manually download files
id 958,968,748 · severo · labels ["bug"] · comments 2 · created 2021-08-03T10:26:27 · updated 2021-08-09T08:53:35 · closed 2021-08-04T11:36:30
https://github.com/huggingface/datasets/issues/2749 · API https://api.github.com/repos/huggingface/datasets/issues/2749
body: ## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = ...

#2748 · pull request · closed · Generate metadata JSON for wikihow dataset
id 958,889,041 · albertvillanova · labels [] · comments 0 · created 2021-08-03T08:55:40 · updated 2021-08-03T10:17:51 · closed 2021-08-03T10:17:51
https://github.com/huggingface/datasets/pull/2748 · API https://api.github.com/repos/huggingface/datasets/issues/2748
body: Related to #2743.

#2747 · pull request · closed · add multi-proc in `to_json`
id 958,867,627 · bhavitvyamalik · labels [] · comments 17 · created 2021-08-03T08:30:13 · updated 2021-10-19T18:24:21 · closed 2021-09-13T13:56:37
https://github.com/huggingface/datasets/pull/2747 · API https://api.github.com/repos/huggingface/datasets/issues/2747
body: Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of the current version (say v1) and the multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air) 1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a si...
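
The export path that #2747 parallelizes, sketched with the `num_proc` argument the PR introduces (as released later; details at merge time may have differed):

```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(100_000)), "text": ["row"] * 100_000})
# Shards the dataset and writes the JSON-lines output in parallel.
ds.to_json("dump.jsonl", num_proc=4)
```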
#2746 · issue · closed · Cannot load `few-nerd` dataset
id 958,551,619 · Mehrad0711 · labels ["bug"] · comments 6 · created 2021-08-02T22:18:57 · updated 2021-11-16T08:51:34 · closed 2021-08-03T19:45:43
https://github.com/huggingface/datasets/issues/2746 · API https://api.github.com/repos/huggingface/datasets/issues/2746
body: ## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users...

#2745 · pull request · closed · added semeval18_emotion_classification dataset
id 958,269,579 · maxpel · labels [] · comments 7 · created 2021-08-02T15:39:55 · updated 2021-10-29T09:22:05 · closed 2021-09-21T09:48:35
https://github.com/huggingface/datasets/pull/2745 · API https://api.github.com/repos/huggingface/datasets/issues/2745
body: I added the data set of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages. ``` datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification ...

#2744 · pull request · closed · Fix key by recreating metadata JSON for journalists_questions dataset
id 958,146,637 · albertvillanova · labels [] · comments 0 · created 2021-08-02T13:27:53 · updated 2021-08-03T09:25:34 · closed 2021-08-03T09:25:33
https://github.com/huggingface/datasets/pull/2744 · API https://api.github.com/repos/huggingface/datasets/issues/2744
body: Close #2743.

#2743 · issue · closed · Dataset JSON is incorrect
id 958,119,251 · severo · labels ["bug"] · comments 2 · created 2021-08-02T13:01:26 · updated 2021-08-03T10:06:57 · closed 2021-08-03T09:25:33
https://github.com/huggingface/datasets/issues/2743 · API https://api.github.com/repos/huggingface/datasets/issues/2743
body: ## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset...

#2742 · issue · closed · Improve detection of streamable file types
id 958,114,064 · severo · labels ["enhancement", "dataset-viewer"] · comments 1 · created 2021-08-02T12:55:09 · updated 2021-11-12T17:18:10 · closed 2021-11-12T17:18:10
https://github.com/huggingface/datasets/issues/2742 · API https://api.github.com/repos/huggingface/datasets/issues/2742
body: **Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownl...

#2741 · issue · open · Add Hypersim dataset
id 957,979,559 · osanseviero · labels ["dataset request", "vision"] · comments 0 · created 2021-08-02T10:06:50 · updated 2021-12-08T12:06:51 · closed null
https://github.com/huggingface/datasets/issues/2741 · API https://api.github.com/repos/huggingface/datasets/issues/2741
body: ## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/hugg...

#2740 · pull request · closed · Update release instructions
id 957,911,035 · albertvillanova · labels [] · comments 0 · created 2021-08-02T08:46:00 · updated 2021-08-02T14:39:56 · closed 2021-08-02T14:39:56
https://github.com/huggingface/datasets/pull/2740 · API https://api.github.com/repos/huggingface/datasets/issues/2740
body: Update release instructions.

#2739 · pull request · closed · Pass tokenize to sacrebleu only if explicitly passed by user
id 957,751,260 · albertvillanova · labels [] · comments 0 · created 2021-08-02T05:09:05 · updated 2021-08-03T04:23:37 · closed 2021-08-03T04:23:37
https://github.com/huggingface/datasets/pull/2739 · API https://api.github.com/repos/huggingface/datasets/issues/2739
body: Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (a...

#2738 · pull request · closed · Sunbird AI Ugandan low resource language dataset
id 957,517,746 · ak3ra · labels ["dataset contribution"] · comments 4 · created 2021-08-01T15:18:00 · updated 2022-10-03T09:37:30 · closed 2022-10-03T09:37:30
https://github.com/huggingface/datasets/pull/2738 · API https://api.github.com/repos/huggingface/datasets/issues/2738
body: Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation.

#2737 · issue · closed · SacreBLEU update
id 957,124,881 · devrimcavusoglu · labels ["bug"] · comments 5 · created 2021-07-30T23:53:08 · updated 2021-09-22T10:47:41 · closed 2021-08-03T04:23:37
https://github.com/huggingface/datasets/issues/2737 · API https://api.github.com/repos/huggingface/datasets/issues/2737
body: With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises an error: AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'. This happens since in the new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries...

#2736 · issue · open · Add Microsoft Building Footprints dataset
id 956,895,199 · albertvillanova · labels ["dataset request", "vision"] · comments 1 · created 2021-07-30T16:17:08 · updated 2021-12-08T12:09:03 · closed null
https://github.com/huggingface/datasets/issues/2736 · API https://api.github.com/repos/huggingface/datasets/issues/2736
body: ## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - *...

#2735 · issue · open · Add Open Buildings dataset
id 956,889,365 · albertvillanova · labels ["dataset request"] · comments 0 · created 2021-07-30T16:08:39 · updated 2021-07-31T05:01:25 · closed null
https://github.com/huggingface/datasets/issues/2735 · API https://api.github.com/repos/huggingface/datasets/issues/2735
body: ## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science....

#2734 · pull request · closed · Update BibTeX entry
id 956,844,874 · albertvillanova · labels [] · comments 0 · created 2021-07-30T15:22:51 · updated 2021-07-30T15:47:58 · closed 2021-07-30T15:47:58
https://github.com/huggingface/datasets/pull/2734 · API https://api.github.com/repos/huggingface/datasets/issues/2734
body: Update BibTeX entry.

#2733 · pull request · closed · Add missing parquet known extension
id 956,725,476 · lhoestq · labels [] · comments 0 · created 2021-07-30T13:01:20 · updated 2021-07-30T13:24:31 · closed 2021-07-30T13:24:30
https://github.com/huggingface/datasets/pull/2733 · API https://api.github.com/repos/huggingface/datasets/issues/2733
body: This code was failing because the parquet extension wasn't recognized: ```python from datasets import load_dataset base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" data_files = {"train": base_url + "wikipedia-train.parquet"} wiki = load_dataset("parquet", da...

#2732 · pull request · closed · Updated TTC4900 Dataset
id 956,676,360 · yavuzKomecoglu · labels [] · comments 2 · created 2021-07-30T11:52:14 · updated 2021-07-30T16:00:51 · closed 2021-07-30T15:58:14
https://github.com/huggingface/datasets/pull/2732 · API https://api.github.com/repos/huggingface/datasets/issues/2732
body: - The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download. - Updated readme.

#2731 · pull request · closed · Adding to_tf_dataset method
id 956,087,452 · Rocketknight1 · labels [] · comments 7 · created 2021-07-29T18:10:25 · updated 2021-09-16T13:50:54 · closed 2021-09-16T13:50:54
https://github.com/huggingface/datasets/pull/2731 · API https://api.github.com/repos/huggingface/datasets/issues/2731
body: Oh my **god** do not merge this yet, it's just a draft. I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the wh...
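
A sketch of the drafted method in use, modeled on the API as it was later released (the draft's final signature may differ; `transformers` and TensorFlow are assumed installed):

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("glue", "sst2", split="train")
ds = ds.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=32,
    shuffle=True,
    collate_fn=DataCollatorWithPadding(tokenizer, return_tensors="tf"),
)  # a tf.data.Dataset that streams batches from the Arrow data on disk
```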
#2730 · issue · open · Update CommonVoice with new release
id 955,987,834 · yjernite · labels ["dataset request"] · comments 3 · created 2021-07-29T15:59:59 · updated 2021-08-07T16:19:19 · closed null
https://github.com/huggingface/datasets/issues/2730 · API https://api.github.com/repos/huggingface/datasets/issues/2730
body: ## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8...

#2729 · pull request · closed · Fix IndexError while loading Arabic Billion Words dataset
id 955,920,489 · albertvillanova · labels ["bug"] · comments 0 · created 2021-07-29T14:47:02 · updated 2021-07-30T13:03:55 · closed 2021-07-30T13:03:55
https://github.com/huggingface/datasets/pull/2729 · API https://api.github.com/repos/huggingface/datasets/issues/2729
body: Catch `IndexError` and ignore that record. Close #2727.

#2728 · issue · open · Concurrent use of same dataset (already downloaded)
id 955,892,970 · PierreColombo · labels ["bug"] · comments 4 · created 2021-07-29T14:18:38 · updated 2021-08-02T07:25:57 · closed null
https://github.com/huggingface/datasets/issues/2728 · API https://api.github.com/repos/huggingface/datasets/issues/2728
body: ## Describe the bug When launching several jobs at the same time, loading the same dataset triggers some errors (see last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ...

#2727 · issue · closed · Error in loading the Arabic Billion Words Corpus
id 955,812,149 · M-Salti · labels ["bug"] · comments 2 · created 2021-07-29T12:53:09 · updated 2021-07-30T13:03:55 · closed 2021-07-30T13:03:55
https://github.com/huggingface/datasets/issues/2727 · API https://api.github.com/repos/huggingface/datasets/issues/2727
body: ## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results Th...

#2726 · pull request · closed · Typo fix `tokenize_exemple`
id 955,674,388 · shabie · labels [] · comments 0 · created 2021-07-29T10:03:37 · updated 2021-07-29T12:00:25 · closed 2021-07-29T12:00:25
https://github.com/huggingface/datasets/pull/2726 · API https://api.github.com/repos/huggingface/datasets/issues/2726
body: There is a small typo in the main README.md

#2725 · pull request · closed · Pass use_auth_token to request_etags
id 955,020,776 · albertvillanova · labels [] · comments 0 · created 2021-07-28T16:13:29 · updated 2021-07-28T16:38:02 · closed 2021-07-28T16:38:02
https://github.com/huggingface/datasets/pull/2725 · API https://api.github.com/repos/huggingface/datasets/issues/2725
body: Fix #2724.

#2724 · issue · closed · 404 Error when loading remote data files from private repo
id 954,919,607 · albertvillanova · labels ["bug"] · comments 3 · created 2021-07-28T14:24:23 · updated 2021-07-29T04:58:49 · closed 2021-07-28T16:38:01
https://github.com/huggingface/datasets/issues/2724 · API https://api.github.com/repos/huggingface/datasets/issues/2724
body: ## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not...

#2723 · pull request · closed · Fix en subset by modifying dataset_info with correct validation infos
id 954,864,104 · thomasw21 · labels [] · comments 0 · created 2021-07-28T13:36:19 · updated 2021-07-28T15:22:23 · closed 2021-07-28T15:22:23
https://github.com/huggingface/datasets/pull/2723 · API https://api.github.com/repos/huggingface/datasets/issues/2723
body: - Related to: #2682 We correct the values of the `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`). Instead of having: `{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}` We replace with correct values: `{"name": "vali...

#2722 · issue · closed · Missing cache file
id 954,446,053 · PosoSAgapo · labels ["bug"] · comments 2 · created 2021-07-28T03:52:07 · updated 2022-03-21T08:27:51 · closed 2022-03-21T08:27:51
https://github.com/huggingface/datasets/issues/2722 · API https://api.github.com/repos/huggingface/datasets/issues/2722
body: Strangely, the cache file is missing after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json...

#2721 · pull request · closed · Deal with the bad check in test_load.py
id 954,238,230 · mariosasko · labels [] · comments 1 · created 2021-07-27T20:23:23 · updated 2021-07-28T09:58:34 · closed 2021-07-28T08:53:18
https://github.com/huggingface/datasets/pull/2721 · API https://api.github.com/repos/huggingface/datasets/issues/2721
body: This PR removes a check that's been added in #2684. My intention with this check was to capture an URL in the error message, but instead, it captures a substring of the previous regex match in the test function. Another option would be to replace this check with: ```python m_paths = re.findall(r"\S*_dummy/_dummy.py\b...

#2720 · pull request · closed · fix: 🐛 fix two typos
id 954,024,426 · severo · labels [] · comments 0 · created 2021-07-27T15:50:17 · updated 2021-07-27T18:38:17 · closed 2021-07-27T18:38:16
https://github.com/huggingface/datasets/pull/2720 · API https://api.github.com/repos/huggingface/datasets/issues/2720

#2719 · issue · open · Use ETag in streaming mode to detect resource updates
id 953,932,416 · severo · labels ["enhancement", "dataset-viewer"] · comments 0 · created 2021-07-27T14:17:09 · updated 2021-10-22T09:36:08 · closed null
https://github.com/huggingface/datasets/issues/2719 · API https://api.github.com/repos/huggingface/datasets/issues/2719
body: **Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache. **Describe the solution you'd lik...

#2718 · pull request · closed · New documentation structure
id 953,360,663 · stevhliu · labels [] · comments 5 · created 2021-07-26T23:15:13 · updated 2021-09-13T17:20:53 · closed 2021-09-13T17:20:52
https://github.com/huggingface/datasets/pull/2718 · API https://api.github.com/repos/huggingface/datasets/issues/2718
body: Organize Datasets documentation into four documentation types to improve clarity and discoverability of content. **Content to add in the very short term (feel free to add anything I'm missing):** - A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also b...

#2717 · pull request · closed · Fix shuffle on IterableDataset that disables batching in case any functions were mapped
id 952,979,976 · amankhandelia · labels [] · comments 0 · created 2021-07-26T14:42:22 · updated 2021-07-26T18:04:14 · closed 2021-07-26T16:30:06
https://github.com/huggingface/datasets/pull/2717 · API https://api.github.com/repos/huggingface/datasets/issues/2717
body: Made a very minor change to fix issue #2716. Added the missing argument in the constructor call. As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of the `batched` attribute in `MappedExamplesIterable`. Fix #2716.

#2716 · issue · closed · Calling shuffle on IterableDataset will disable batching in case any functions were mapped
id 952,902,778 · amankhandelia · labels ["bug"] · comments 3 · created 2021-07-26T13:24:59 · updated 2021-07-26T18:04:43 · closed 2021-07-26T18:04:43
https://github.com/huggingface/datasets/issues/2716 · API https://api.github.com/repos/huggingface/datasets/issues/2716
body: When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`. I did RCA on the dataset codebase, the problem is emerging from [this line of code](https://github.com/h...

#2715 · pull request · closed · Update PAN-X data URL in XTREME dataset
id 952,845,229 · albertvillanova · labels [] · comments 1 · created 2021-07-26T12:21:17 · updated 2021-07-26T13:27:59 · closed 2021-07-26T13:27:59
https://github.com/huggingface/datasets/pull/2715 · API https://api.github.com/repos/huggingface/datasets/issues/2715
body: Related to #2710, #2691.

#2714 · issue · open · add more precise information for size
id 952,580,820 · pennyl67 · labels ["enhancement"] · comments 1 · created 2021-07-26T07:11:03 · updated 2021-07-26T09:16:25 · closed null
https://github.com/huggingface/datasets/issues/2714 · API https://api.github.com/repos/huggingface/datasets/issues/2714
body: For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.

#2713 · pull request · closed · Enumerate all ner_tags values in WNUT 17 dataset
id 952,515,256 · albertvillanova · labels [] · comments 0 · created 2021-07-26T05:22:16 · updated 2021-07-26T09:30:55 · closed 2021-07-26T09:30:55
https://github.com/huggingface/datasets/pull/2713 · API https://api.github.com/repos/huggingface/datasets/issues/2713
body: This PR does: - Enumerate all ner_tags in dataset card Data Fields section - Add all metadata tags to dataset card Close #2709.

#2710 · pull request · closed · Update WikiANN data URL
id 951,723,326 · albertvillanova · labels [] · comments 1 · created 2021-07-23T16:29:21 · updated 2021-07-26T09:34:23 · closed 2021-07-26T09:34:23
https://github.com/huggingface/datasets/pull/2710 · API https://api.github.com/repos/huggingface/datasets/issues/2710
body: WikiANN data source URL is no longer accessible: 404 error from Dropbox. We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card. Close #2691.

#2709 · issue · closed · Missing documentation for wnut_17 (ner_tags)
id 951,534,757 · maxpel · labels ["bug"] · comments 1 · created 2021-07-23T12:25:32 · updated 2021-07-26T09:30:55 · closed 2021-07-26T09:30:55
https://github.com/huggingface/datasets/issues/2709 · API https://api.github.com/repos/huggingface/datasets/issues/2709
body: On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` ...

#2708 · issue · closed · QASC: incomplete training set
id 951,092,660 · danyaljj · labels ["bug"] · comments 2 · created 2021-07-22T21:59:44 · updated 2021-07-23T13:30:07 · closed 2021-07-23T13:30:07
https://github.com/huggingface/datasets/issues/2708 · API https://api.github.com/repos/huggingface/datasets/issues/2708
body: ## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instanc...

#2707 · issue · closed · 404 Not Found Error when loading LAMA dataset
id 950,812,945 · dwil2444 · labels [] · comments 3 · created 2021-07-22T15:52:33 · updated 2021-07-26T14:29:07 · closed 2021-07-26T14:29:07
https://github.com/huggingface/datasets/issues/2707 · API https://api.github.com/repos/huggingface/datasets/issues/2707
body: The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ...

#2706 · pull request · closed · Update BibTeX entry
id 950,606,561 · albertvillanova · labels [] · comments 0 · created 2021-07-22T12:29:29 · updated 2021-07-22T12:43:00 · closed 2021-07-22T12:43:00
https://github.com/huggingface/datasets/pull/2706 · API https://api.github.com/repos/huggingface/datasets/issues/2706
body: Update BibTeX entry.

#2705 · issue · closed · 404 not found error on loading WIKIANN dataset
id 950,488,583 · ronbutan · labels ["bug"] · comments 1 · created 2021-07-22T09:55:50 · updated 2021-07-23T08:07:32 · closed 2021-07-23T08:07:32
https://github.com/huggingface/datasets/issues/2705 · API https://api.github.com/repos/huggingface/datasets/issues/2705
body: ## Describe the bug Unable to retrieve wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook should display successful download status ## Act...

#2704 · pull request · closed · Fix pick default config name message
id 950,483,980 · lhoestq · labels [] · comments 0 · created 2021-07-22T09:49:43 · updated 2021-07-22T10:02:41 · closed 2021-07-22T10:02:40
https://github.com/huggingface/datasets/pull/2704 · API https://api.github.com/repos/huggingface/datasets/issues/2704
body: The error message to tell which config name to load is not displayed. This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659 I fixed that by ma...

#2703 · issue · closed · Bad message when config name is missing
id 950,482,284 · lhoestq · labels [] · comments 0 · created 2021-07-22T09:47:23 · updated 2021-07-22T10:02:40 · closed 2021-07-22T10:02:40
https://github.com/huggingface/datasets/issues/2703 · API https://api.github.com/repos/huggingface/datasets/issues/2703
body: When loading a dataset that have several configurations, we expect to see an error message if the user doesn't specify a config name. However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message: ```python import datasets datasets.load_dataset("glue") ``` raises ```python AttributeError: 'Bui...

#2702 · pull request · closed · Update BibTeX entry
id 950,448,159 · albertvillanova · labels [] · comments 0 · created 2021-07-22T09:04:39 · updated 2021-07-22T09:17:39 · closed 2021-07-22T09:17:38
https://github.com/huggingface/datasets/pull/2702 · API https://api.github.com/repos/huggingface/datasets/issues/2702
body: Update BibTeX entry.

#2701 · pull request · closed · Fix download_mode docstrings
id 950,422,403 · albertvillanova · labels ["documentation"] · comments 0 · created 2021-07-22T08:30:25 · updated 2021-07-22T09:33:31 · closed 2021-07-22T09:33:31
https://github.com/huggingface/datasets/pull/2701 · API https://api.github.com/repos/huggingface/datasets/issues/2701
body: Fix `download_mode` docstrings.

#2700 · issue · closed · from datasets import Dataset is failing
id 950,276,325 · kswamy15 · labels ["bug"] · comments 1 · created 2021-07-22T03:51:23 · updated 2021-07-22T07:23:45 · closed 2021-07-22T07:09:07
https://github.com/huggingface/datasets/issues/2700 · API https://api.github.com/repos/huggingface/datasets/issues/2700
body: ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or...

#2699 · issue · open · cannot combine splits merging and streaming?
id 950,221,226 · eyaler · labels ["bug"] · comments 5 · created 2021-07-22T01:13:25 · updated 2024-04-08T13:26:46 · closed null
https://github.com/huggingface/datasets/issues/2699 · API https://api.github.com/repos/huggingface/datasets/issues/2699
body: this does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` with error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` these work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_d...
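
A workaround sketch for #2699: stream each split separately and chain them in Python, since the `train+validation` syntax is rejected in streaming mode.

```python
from itertools import chain
from datasets import load_dataset

train = load_dataset("mc4", "iw", split="train", streaming=True)
validation = load_dataset("mc4", "iw", split="validation", streaming=True)

# Iterate over both streamed splits back to back.
for example in chain(train, validation):
    print(example["url"])
    break
```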
#2698 · pull request · closed · Ignore empty batch when writing
id 950,159,867 · pcuenca · labels [] · comments 0 · created 2021-07-21T22:35:30 · updated 2021-07-26T14:56:03 · closed 2021-07-26T13:25:26
https://github.com/huggingface/datasets/pull/2698 · API https://api.github.com/repos/huggingface/datasets/issues/2698
body: This prevents a schema update with unknown column types, as reported in #2644. This is my first attempt at fixing the issue. I tested the following: - First batch returned by a batched map operation is empty. - An intermediate batch is empty. - `python -m unittest tests.test_arrow_writer` passes. However, `ar...

#2697 · pull request · closed · Fix import on Colab
id 950,021,623 · nateraw · labels [] · comments 1 · created 2021-07-21T19:03:38 · updated 2021-07-22T07:09:08 · closed 2021-07-22T07:09:07
https://github.com/huggingface/datasets/pull/2697 · API https://api.github.com/repos/huggingface/datasets/issues/2697
body: Fix #2695, fix #2700.

#2696 · pull request · closed · Add support for disable_progress_bar on Windows
id 949,901,726 · mariosasko · labels [] · comments 1 · created 2021-07-21T16:34:53 · updated 2021-07-26T13:31:14 · closed 2021-07-26T09:38:37
https://github.com/huggingface/datasets/pull/2696 · API https://api.github.com/repos/huggingface/datasets/issues/2696
body: This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains it nicely why the current approach (with calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would ...

#2695 · issue · closed · Cannot import load_dataset on Colab
id 949,864,823 · bayartsogt-ya · labels ["bug"] · comments 5 · created 2021-07-21T15:52:51 · updated 2021-07-22T07:26:25 · closed 2021-07-22T07:09:07
https://github.com/huggingface/datasets/issues/2695 · API https://api.github.com/repos/huggingface/datasets/issues/2695
body: ## Describe the bug Got a "tqdm concurrent module not found" error when importing load_dataset from datasets. ## Steps to reproduce the bug Here is a [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...

#2694 · pull request · closed · fix: 🐛 change string format to allow copy/paste to work in bash
id 949,844,722 · severo · labels [] · comments 0 · created 2021-07-21T15:30:40 · updated 2021-07-22T10:41:47 · closed 2021-07-22T10:41:47
https://github.com/huggingface/datasets/pull/2694 · API https://api.github.com/repos/huggingface/datasets/issues/2694
body: Before: copy/paste resulted in an error because the square bracket characters `[]` are special characters in bash

#2693 · pull request · closed · Fix OSCAR Esperanto
id 949,797,014 · lhoestq · labels [] · comments 0 · created 2021-07-21T14:43:50 · updated 2021-07-21T14:53:52 · closed 2021-07-21T14:53:51
https://github.com/huggingface/datasets/pull/2693 · API https://api.github.com/repos/huggingface/datasets/issues/2693
body: The Esperanto part (original) of OSCAR has the wrong number of examples: ```python from datasets import load_dataset raw_datasets = load_dataset("oscar", "unshuffled_original_eo") ``` raises ```python NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, da...

#2692 · pull request · closed · Update BibTeX entry
id 949,765,484 · albertvillanova · labels [] · comments 0 · created 2021-07-21T14:23:35 · updated 2021-07-21T15:31:41 · closed 2021-07-21T15:31:40
https://github.com/huggingface/datasets/pull/2692 · API https://api.github.com/repos/huggingface/datasets/issues/2692
body: Update BibTeX entry

#2691 · issue · closed · xtreme / pan-x cannot be downloaded
id 949,758,379 · severo · labels ["bug"] · comments 5 · created 2021-07-21T14:18:05 · updated 2021-07-26T09:34:22 · closed 2021-07-26T09:34:22
https://github.com/huggingface/datasets/issues/2691 · API https://api.github.com/repos/huggingface/datasets/issues/2691
body: ## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...

#2690 · pull request · closed · Docs details
id 949,574,500 · severo · labels [] · comments 1 · created 2021-07-21T10:43:14 · updated 2021-07-27T18:40:54 · closed 2021-07-27T18:40:54
https://github.com/huggingface/datasets/pull/2690 · API https://api.github.com/repos/huggingface/datasets/issues/2690
body: Some comments here: - the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + ...