Dataset schema (column · type · observed range):

  id               int64          599M to 3.29B
  url              string         length 58 to 61
  html_url         string         length 46 to 51
  number           int64          1 to 7.72k
  title            string         length 1 to 290
  state            string         2 classes
  comments         int64          0 to 70
  created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-08-05 09:28:51
  updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-08-05 11:39:56
  closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-08-01 05:15:45
  user_login       string         length 3 to 26
  labels           list           length 0 to 4
  body             string         length 0 to 228k
  is_pull_request  bool           2 classes
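The column listing above can be modeled as a small record type. This is an illustrative, stdlib-only sketch: the class name `IssueRecord` is introduced here for illustration and is not part of the dataset or the `datasets` library; it is populated with the first row of the dump below.

```python
# Sketch only: `IssueRecord` is a hypothetical helper type, not an official API.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class IssueRecord:
    id: int                        # int64
    url: str                       # GitHub API URL
    html_url: str                  # human-facing URL
    number: int                    # issue/PR number
    title: str
    state: str                     # one of 2 classes: "open" / "closed"
    comments: int
    created_at: datetime           # timestamp[s]
    updated_at: datetime
    closed_at: Optional[datetime]  # null while the issue is still open
    user_login: str
    labels: List[str]              # 0 to 4 labels
    body: str
    is_pull_request: bool

# The first row of the dump, transcribed as an example:
row = IssueRecord(
    id=1_214_510_010,
    url="https://api.github.com/repos/huggingface/datasets/issues/4213",
    html_url="https://github.com/huggingface/datasets/pull/4213",
    number=4213,
    title="ETT time series dataset",
    state="closed",
    comments=2,
    created_at=datetime.fromisoformat("2022-04-25T13:26:18"),
    updated_at=datetime.fromisoformat("2022-05-05T12:19:21"),
    closed_at=datetime.fromisoformat("2022-05-05T12:10:35"),
    user_login="kashif",
    labels=[],
    body="Ready for review.",
    is_pull_request=True,
)
print(row.number, row.state, row.is_pull_request)
```

Note that `is_pull_request` is redundant with `html_url` (which contains `/pull/` for PRs and `/issues/` for plain issues), so either field can be used to distinguish the two.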
#4213 · ETT time series dataset
  pull request · closed · comments: 2 · labels: [] · user: kashif
  created: 2022-04-25T13:26:18 · updated: 2022-05-05T12:19:21 · closed: 2022-05-05T12:10:35
  url: https://github.com/huggingface/datasets/pull/4213 · api: https://api.github.com/repos/huggingface/datasets/issues/4213 · id: 1,214,510,010
  body: Ready for review.

#4212 · [Common Voice] Make sure bytes are correctly deleted if `path` exists
  pull request · closed · comments: 2 · labels: [] · user: patrickvonplaten
  created: 2022-04-25T13:18:26 · updated: 2022-04-26T22:54:28 · closed: 2022-04-26T22:48:27
  url: https://github.com/huggingface/datasets/pull/4212 · api: https://api.github.com/repos/huggingface/datasets/issues/4212 · id: 1,214,498,582
  body: `path` should be set to the local path inside the audio feature if it exists, so that bytes can be correctly deleted.

#4211 · DatasetDict containing Datasets with different features when pushed to hub gets remapped features
  issue · closed · comments: 10 · labels: ["bug"] · user: pietrolesci
  created: 2022-04-25T11:22:54 · updated: 2023-04-06T19:25:50 · closed: 2022-05-20T15:15:30
  url: https://github.com/huggingface/datasets/issues/4211 · api: https://api.github.com/repos/huggingface/datasets/issues/4211 · id: 1,214,361,837
  body: Hi there, I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features, but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code...

#4210 · TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
  issue · closed · comments: 5 · labels: ["bug"] · user: loretoparisi
  created: 2022-04-25T07:28:42 · updated: 2022-05-31T12:16:31 · closed: 2022-05-31T12:16:31
  url: https://github.com/huggingface/datasets/issues/4210 · api: https://api.github.com/repos/huggingface/datasets/issues/4210 · id: 1,214,089,130
  body: ### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed ...
#4208 · Add CMU MoCap Dataset
  pull request · closed · comments: 11 · labels: ["dataset contribution"] · user: dnaveenr
  created: 2022-04-24T17:31:08 · updated: 2022-10-03T09:38:24 · closed: 2022-10-03T09:36:30
  url: https://github.com/huggingface/datasets/pull/4208 · api: https://api.github.com/repos/huggingface/datasets/issues/4208 · id: 1,213,716,426
  body: Resolves #3457 Dataset Request: Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457) This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow-ups, so I ended up crawling the website to get categories, subcategories and descrip...

#4207 · [Minor edit] Fix typo in class name
  pull request · closed · comments: 0 · labels: [] · user: cakiki
  created: 2022-04-24T09:49:37 · updated: 2022-05-05T13:17:47 · closed: 2022-05-05T13:17:47
  url: https://github.com/huggingface/datasets/pull/4207 · api: https://api.github.com/repos/huggingface/datasets/issues/4207 · id: 1,213,604,615
  body: Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`

#4206 · Add Nerval Metric
  pull request · closed · comments: 1 · labels: ["transfer-to-evaluate"] · user: maridda
  created: 2022-04-22T19:45:00 · updated: 2023-07-11T09:34:56 · closed: 2023-07-11T09:34:55
  url: https://github.com/huggingface/datasets/pull/4206 · api: https://api.github.com/repos/huggingface/datasets/issues/4206 · id: 1,212,715,581
  body: This PR adds readme.md and ner_val.py to metrics. Nerval is a Python package that helps evaluate NER models. It creates a classification report and confusion matrix at the entity level.

#4205 · Fix `convert_file_size_to_int` for kilobits and megabits
  pull request · closed · comments: 1 · labels: [] · user: mariosasko
  created: 2022-04-22T14:56:21 · updated: 2022-05-03T15:28:42 · closed: 2022-05-03T15:21:48
  url: https://github.com/huggingface/datasets/pull/4205 · api: https://api.github.com/repos/huggingface/datasets/issues/4205 · id: 1,212,466,138
  body: Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891)
#4204 · Add Recall Metric Card
  pull request · closed · comments: 2 · labels: [] · user: emibaylor
  created: 2022-04-22T14:24:26 · updated: 2022-05-03T13:23:23 · closed: 2022-05-03T13:16:24
  url: https://github.com/huggingface/datasets/pull/4204 · api: https://api.github.com/repos/huggingface/datasets/issues/4204 · id: 1,212,431,764
  body: What this PR mainly does: - add metric card for recall metric - update docs in recall python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted met...

#4203 · Add Precision Metric Card
  pull request · closed · comments: 1 · labels: [] · user: emibaylor
  created: 2022-04-22T14:23:48 · updated: 2022-05-03T14:23:40 · closed: 2022-05-03T14:16:46
  url: https://github.com/huggingface/datasets/pull/4203 · api: https://api.github.com/repos/huggingface/datasets/issues/4203 · id: 1,212,431,067
  body: What this PR mainly does: - add metric card for precision metric - update docs in precision python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatt...

#4202 · Fix some type annotation in doc
  pull request · closed · comments: 1 · labels: [] · user: thomasw21
  created: 2022-04-22T12:53:31 · updated: 2022-04-22T15:03:00 · closed: 2022-04-22T14:56:43
  url: https://github.com/huggingface/datasets/pull/4202 · api: https://api.github.com/repos/huggingface/datasets/issues/4202 · id: 1,212,326,288
  body: null

#4201 · Update GH template for dataset viewer issues
  pull request · closed · comments: 2 · labels: [] · user: albertvillanova
  created: 2022-04-22T09:34:44 · updated: 2022-05-06T08:38:43 · closed: 2022-04-26T08:45:55
  url: https://github.com/huggingface/datasets/pull/4201 · api: https://api.github.com/repos/huggingface/datasets/issues/4201 · id: 1,212,086,420
  body: Update template to use new issue forms instead. With this PR we can check if this new feature is useful for us. Once validated, we can update the other templates. CC: @severo
#4200 · Add to docs how to load from local script
  pull request · closed · comments: 1 · labels: [] · user: albertvillanova
  created: 2022-04-22T08:08:25 · updated: 2022-05-06T08:39:25 · closed: 2022-04-23T05:47:25
  url: https://github.com/huggingface/datasets/pull/4200 · api: https://api.github.com/repos/huggingface/datasets/issues/4200 · id: 1,211,980,110
  body: This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu

#4199 · Cache miss during reload for datasets using image fetch utilities through map
  issue · closed · comments: 5 · labels: ["bug"] · user: apsdehal
  created: 2022-04-22T07:47:08 · updated: 2022-04-26T17:00:32 · closed: 2022-04-26T13:38:26
  url: https://github.com/huggingface/datasets/issues/4199 · api: https://api.github.com/repos/huggingface/datasets/issues/4199 · id: 1,211,953,308
  body: ## Describe the bug It looks like the results of a `.map` operation on a dataset miss the cache when you reload the script, and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But when you exit the interpreter and reload it, the downloading starts from scratch. ...

#4198 · There is no dataset
  issue · closed · comments: 0 · labels: [] · user: wilfoderek
  created: 2022-04-21T19:19:26 · updated: 2022-05-03T11:29:05 · closed: 2022-04-22T06:12:25
  url: https://github.com/huggingface/datasets/issues/4198 · api: https://api.github.com/repos/huggingface/datasets/issues/4198 · id: 1,211,456,559
  body: ## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No

#4197 · Add remove_columns=True
  pull request · closed · comments: 4 · labels: [] · user: thomasw21
  created: 2022-04-21T17:28:13 · updated: 2023-09-24T10:02:32 · closed: 2022-04-22T14:45:30
  url: https://github.com/huggingface/datasets/pull/4197 · api: https://api.github.com/repos/huggingface/datasets/issues/4197 · id: 1,211,342,558
  body: This should fix all the issues we have with in-place operations in mapping functions. This is crucial in places where we do some weird things like: ``` def apply(batch): batch_size = len(batch["id"]) batch["text"] = ["potato" for _ in range(batch_size)] return {} # Columns are: {"id": int} dset.map(apply, bat...
#4196 · Embed image and audio files in `save_to_disk`
  issue · closed · comments: 0 · labels: [] · user: lhoestq
  created: 2022-04-21T16:25:18 · updated: 2022-12-14T18:22:59 · closed: 2022-12-14T18:22:59
  url: https://github.com/huggingface/datasets/issues/4196 · api: https://api.github.com/repos/huggingface/datasets/issues/4196 · id: 1,211,271,261
  body: Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` and setting it to True by default in save_to_disk would be kind of a b...

#4194 · Support lists of multi-dimensional numpy arrays
  pull request · closed · comments: 1 · labels: [] · user: albertvillanova
  created: 2022-04-21T12:22:26 · updated: 2022-05-12T15:16:34 · closed: 2022-05-12T15:08:40
  url: https://github.com/huggingface/datasets/pull/4194 · api: https://api.github.com/repos/huggingface/datasets/issues/4194 · id: 1,210,958,602
  body: Fix #4191. CC: @SaulLu

#4193 · Document save_to_disk and push_to_hub on images and audio files
  pull request · closed · comments: 2 · labels: [] · user: lhoestq
  created: 2022-04-21T09:04:36 · updated: 2022-04-22T09:55:55 · closed: 2022-04-22T09:49:31
  url: https://github.com/huggingface/datasets/pull/4193 · api: https://api.github.com/repos/huggingface/datasets/issues/4193 · id: 1,210,734,701
  body: Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.

#4192 · load_dataset can't load local dataset,Unable to find ...
  issue · closed · comments: 4 · labels: ["bug"] · user: ahf876828330
  created: 2022-04-21T08:28:58 · updated: 2022-04-25T16:51:57 · closed: 2022-04-22T07:39:53
  url: https://github.com/huggingface/datasets/issues/4192 · api: https://api.github.com/repos/huggingface/datasets/issues/4192 · id: 1,210,692,554
  body: Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwa...
#4191 · feat: create an `Array3D` column from a list of arrays of dimension 2
  issue · closed · comments: 2 · labels: ["enhancement"] · user: SaulLu
  created: 2022-04-20T18:04:32 · updated: 2022-05-12T15:08:40 · closed: 2022-05-12T15:08:40
  url: https://github.com/huggingface/datasets/issues/4191 · api: https://api.github.com/repos/huggingface/datasets/issues/4191 · id: 1,210,028,090
  body: **Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the...

#4190 · Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
  pull request · closed · comments: 1 · labels: [] · user: mariosasko
  created: 2022-04-20T16:08:01 · updated: 2022-04-22T13:58:25 · closed: 2022-04-22T13:52:00
  url: https://github.com/huggingface/datasets/pull/4190 · api: https://api.github.com/repos/huggingface/datasets/issues/4190 · id: 1,209,901,677
  body: This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param to have a more descriptive name (a shard has at most the `shard_size` bytes in `push_to_hub`) for the param and to align the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b1779173...

#4189 · Document how to use FAISS index for special operations
  pull request · closed · comments: 1 · labels: [] · user: albertvillanova
  created: 2022-04-20T15:51:56 · updated: 2022-05-06T08:43:10 · closed: 2022-05-06T08:35:52
  url: https://github.com/huggingface/datasets/pull/4189 · api: https://api.github.com/repos/huggingface/datasets/issues/4189 · id: 1,209,881,351
  body: Document how to use FAISS index for special operations, by accessing the index itself. Close #4029.

#4188 · Support streaming cnn_dailymail dataset
  pull request · closed · comments: 2 · labels: [] · user: albertvillanova
  created: 2022-04-20T14:04:36 · updated: 2022-05-11T13:39:06 · closed: 2022-04-20T15:52:49
  url: https://github.com/huggingface/datasets/pull/4188 · api: https://api.github.com/repos/huggingface/datasets/issues/4188 · id: 1,209,740,957
  body: Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo
#4187 · Don't duplicate data when encoding audio or image
  pull request · closed · comments: 5 · labels: [] · user: lhoestq
  created: 2022-04-20T13:50:37 · updated: 2022-04-21T09:17:00 · closed: 2022-04-21T09:10:47
  url: https://github.com/huggingface/datasets/pull/4187 · api: https://api.github.com/repos/huggingface/datasets/issues/4187 · id: 1,209,721,532
  body: Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`. This PR discards the `bytes` when the audio or image file exists locally. In particular it's common for audio datasets builder...

#4186 · Fix outdated docstring about default dataset config
  pull request · closed · comments: 1 · labels: [] · user: lhoestq
  created: 2022-04-20T10:04:51 · updated: 2022-04-22T12:54:44 · closed: 2022-04-22T12:48:31
  url: https://github.com/huggingface/datasets/pull/4186 · api: https://api.github.com/repos/huggingface/datasets/issues/4186 · id: 1,209,463,599
  body: null

#4185 · Librispeech documentation, clarification on format
  issue · open · comments: 8 · labels: [] · user: albertz
  created: 2022-04-20T09:35:55 · updated: 2022-04-21T11:00:53 · closed: null
  url: https://github.com/huggingface/datasets/issues/4185 · api: https://api.github.com/repos/huggingface/datasets/issues/4185 · id: 1,209,429,743
  body: https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53 > Note that in order to limit the required storage for preparing this dataset, the audio > is stored in the .flac format and is not converted to a float32 array. To convert, the audi...

#4184 · [Librispeech] Add 'all' config
  pull request · closed · comments: 29 · labels: [] · user: patrickvonplaten
  created: 2022-04-19T16:27:56 · updated: 2024-08-02T05:03:04 · closed: 2022-04-22T09:45:17
  url: https://github.com/huggingface/datasets/pull/4184 · api: https://api.github.com/repos/huggingface/datasets/issues/4184 · id: 1,208,592,669
  body: Add `"all"` config to Librispeech Closed #4179
#4183 · Document librispeech configs
  pull request · closed · comments: 5 · labels: [] · user: lhoestq
  created: 2022-04-19T14:26:59 · updated: 2023-09-24T10:02:24 · closed: 2022-04-19T15:15:20
  url: https://github.com/huggingface/datasets/pull/4183 · api: https://api.github.com/repos/huggingface/datasets/issues/4183 · id: 1,208,449,335
  body: Added an example of how to load one config or the other

#4182 · Zenodo.org download is not responding
  issue · closed · comments: 5 · labels: ["bug"] · user: dkajtoch
  created: 2022-04-19T12:26:57 · updated: 2022-04-20T07:11:05 · closed: 2022-04-20T07:11:05
  url: https://github.com/huggingface/datasets/issues/4182 · api: https://api.github.com/repos/huggingface/datasets/issues/4182 · id: 1,208,285,235
  body: ## Describe the bug Source download_url from zenodo.org does not respond. `_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"` Other datasets also use zenodo.org to store data and they cannot be downloaded as well. It would be better to actually use a more reliable way to store original ...

#4181 · Support streaming FLEURS dataset
  issue · closed · comments: 9 · labels: ["dataset bug"] · user: patrickvonplaten
  created: 2022-04-19T11:09:56 · updated: 2022-07-25T11:44:02 · closed: 2022-07-25T11:44:02
  url: https://github.com/huggingface/datasets/issues/4181 · api: https://api.github.com/repos/huggingface/datasets/issues/4181 · id: 1,208,194,805
  body: ## Dataset viewer issue for '*name of the dataset*' https://huggingface.co/datasets/google/fleurs ``` Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str...

#4180 · Add some iteration method on a dataset column (specific for inference)
  issue · closed · comments: 5 · labels: ["enhancement"] · user: Narsil
  created: 2022-04-19T09:15:45 · updated: 2025-06-17T13:08:50 · closed: 2025-06-17T13:08:50
  url: https://github.com/huggingface/datasets/issues/4180 · api: https://api.github.com/repos/huggingface/datasets/issues/4180 · id: 1,208,042,320
  body: **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset. Having an iterator (or sequence) type of object would make inference ...
#4179 · Dataset librispeech_asr fails to load
  issue · closed · comments: 21 · labels: ["bug"] · user: albertz
  created: 2022-04-19T08:45:48 · updated: 2022-07-27T16:10:00 · closed: 2022-07-27T16:10:00
  url: https://github.com/huggingface/datasets/issues/4179 · api: https://api.github.com/repos/huggingface/datasets/issues/4179 · id: 1,208,001,118
  body: ## Describe the bug The dataset librispeech_asr (standard Librispeech) fails to load. ## Steps to reproduce the bug ```python datasets.load_dataset("librispeech_asr") ``` ## Expected results It should download and prepare the whole dataset (all subsets). In [the doc](https://huggingface.co/datasets/libris...

#4178 · [feat] Add ImageNet dataset
  pull request · closed · comments: 3 · labels: [] · user: apsdehal
  created: 2022-04-19T06:01:35 · updated: 2022-04-29T21:43:59 · closed: 2022-04-29T21:37:08
  url: https://github.com/huggingface/datasets/pull/4178 · api: https://api.github.com/repos/huggingface/datasets/issues/4178 · id: 1,207,787,073
  body: To use the dataset, download the tar file [imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using: ```py from datasets impo...

#4177 · Adding missing subsets to the `SemEval-2018 Task 1` dataset
  pull request · open · comments: 1 · labels: ["dataset contribution"] · user: micahcarroll
  created: 2022-04-18T22:59:30 · updated: 2022-10-05T10:38:16 · closed: null
  url: https://github.com/huggingface/datasets/pull/4177 · api: https://api.github.com/repos/huggingface/datasets/issues/4177 · id: 1,207,535,920
  body: This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtask 1 and 2), which are each comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, b...

#4176 · Very slow between two operations
  issue · closed · comments: 0 · labels: ["bug"] · user: yanan1116
  created: 2022-04-17T23:52:29 · updated: 2022-04-18T00:03:00 · closed: 2022-04-18T00:03:00
  url: https://github.com/huggingface/datasets/issues/4176 · api: https://api.github.com/repos/huggingface/datasets/issues/4176 · id: 1,206,515,563
  body: Hello, in the processing stage, I use two operations. The first one, map + filter, is very fast and uses the full cores, while the second step is very slow and does not use the full cores. Also, there is a significant lag between them. Am I missing something? ``` raw_datasets = raw_datasets.map(split_func...
#4175 · Add WIT Dataset
  pull request · closed · comments: 6 · labels: [] · user: thomasw21
  created: 2022-04-15T13:42:32 · updated: 2023-09-24T10:02:38 · closed: 2022-05-02T14:26:41
  url: https://github.com/huggingface/datasets/pull/4175 · api: https://api.github.com/repos/huggingface/datasets/issues/4175 · id: 1,205,589,842
  body: closes #2981 #2810 @nateraw @hassiahk I've listed you guys as co-authors as you've contributed previously to this dataset

#4174 · Fix when map function modifies input in-place
  pull request · closed · comments: 1 · labels: [] · user: thomasw21
  created: 2022-04-15T13:23:15 · updated: 2022-04-15T14:52:07 · closed: 2022-04-15T14:45:58
  url: https://github.com/huggingface/datasets/pull/4174 · api: https://api.github.com/repos/huggingface/datasets/issues/4174 · id: 1,205,575,941
  body: When `function` modifies the input in-place, the guarantee that columns in `remove_columns` are contained in `input` doesn't hold true anymore. Therefore we need to relax the way we pop elements, by checking if that column exists.

#4173 · Stream private zipped images
  pull request · closed · comments: 3 · labels: [] · user: lhoestq
  created: 2022-04-14T15:15:07 · updated: 2022-05-05T14:05:54 · closed: 2022-05-05T13:58:35
  url: https://github.com/huggingface/datasets/pull/4173 · api: https://api.github.com/repos/huggingface/datasets/issues/4173 · id: 1,204,657,114
  body: As mentioned in https://github.com/huggingface/datasets/issues/4139 it's currently not possible to stream private/gated zipped images from the Hub. This is because `Image.decode_example` does not handle authentication. Indeed decoding requires to access and download the file from the private repository. In this P...

#4172 · Update assin2 dataset_infos.json
  pull request · closed · comments: 1 · labels: [] · user: lhoestq
  created: 2022-04-14T11:53:06 · updated: 2022-04-15T14:47:42 · closed: 2022-04-15T14:41:22
  url: https://github.com/huggingface/datasets/pull/4172 · api: https://api.github.com/repos/huggingface/datasets/issues/4172 · id: 1,204,433,160
  body: Following comments in https://github.com/huggingface/datasets/issues/4003, we found that it was outdated and causing an error when loading the dataset
#4170 · to_tf_dataset rewrite
  pull request · closed · comments: 15 · labels: [] · user: Rocketknight1
  created: 2022-04-14T11:30:58 · updated: 2022-06-06T14:31:12 · closed: 2022-06-06T14:22:09
  url: https://github.com/huggingface/datasets/pull/4170 · api: https://api.github.com/repos/huggingface/datasets/issues/4170 · id: 1,204,413,620
  body: This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are: - Much better stability and no more dropping unexpected column names (Sorry @NielsRogge) - Doesn't clobber custom transforms on the data (Sorry @NielsRogge again) - Much better han...

#4169 · Timit_asr dataset cannot be previewed recently
  issue · closed · comments: 5 · labels: [] · user: YingLi001
  created: 2022-04-14T03:28:31 · updated: 2023-02-03T04:54:57 · closed: 2022-05-06T16:06:51
  url: https://github.com/huggingface/datasets/issues/4169 · api: https://api.github.com/repos/huggingface/datasets/issues/4169 · id: 1,203,995,869
  body: ## Dataset viewer issue for '*timit_asr*' **Link:** *https://huggingface.co/datasets/timit_asr* Issue: The timit-asr dataset cannot be previewed recently. Am I the one who added this dataset ? Yes-No No

#4168 · Add code examples to API docs
  pull request · closed · comments: 4 · labels: ["documentation"] · user: stevhliu
  created: 2022-04-13T23:03:38 · updated: 2022-04-27T18:53:37 · closed: 2022-04-27T18:48:34
  url: https://github.com/huggingface/datasets/pull/4168 · api: https://api.github.com/repos/huggingface/datasets/issues/4168 · id: 1,203,867,540
  body: This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on: - Do you think it is clearer to make every code example fully reproducible so when users co...

#4167 · Avoid rate limit in update hub repositories
  pull request · closed · comments: 2 · labels: [] · user: lhoestq
  created: 2022-04-13T20:32:17 · updated: 2022-04-13T20:56:41 · closed: 2022-04-13T20:50:32
  url: https://github.com/huggingface/datasets/pull/4167 · api: https://api.github.com/repos/huggingface/datasets/issues/4167 · id: 1,203,761,614
  body: use http.extraHeader to avoid rate limit
#4166 · Fix exact match
  pull request · closed · comments: 1 · labels: [] · user: emibaylor
  created: 2022-04-13T20:28:06 · updated: 2022-05-03T12:23:31 · closed: 2022-05-03T12:16:27
  url: https://github.com/huggingface/datasets/pull/4166 · api: https://api.github.com/repos/huggingface/datasets/issues/4166 · id: 1,203,758,004
  body: Clarify docs and add clarifying example to the exact_match metric

#4165 · Fix google bleu typos, examples
  pull request · closed · comments: 1 · labels: [] · user: emibaylor
  created: 2022-04-13T19:59:54 · updated: 2022-05-03T12:23:52 · closed: 2022-05-03T12:16:44
  url: https://github.com/huggingface/datasets/pull/4165 · api: https://api.github.com/repos/huggingface/datasets/issues/4165 · id: 1,203,730,187
  body: null

#4164 · Fix duplicate key in multi_news
  pull request · closed · comments: 1 · labels: [] · user: lhoestq
  created: 2022-04-13T18:48:24 · updated: 2022-04-13T21:04:16 · closed: 2022-04-13T20:58:02
  url: https://github.com/huggingface/datasets/pull/4164 · api: https://api.github.com/repos/huggingface/datasets/issues/4164 · id: 1,203,661,346
  body: To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928

#4163 · Optional Content Warning for Datasets
  issue · open · comments: 5 · labels: ["enhancement"] · user: TristanThrush
  created: 2022-04-13T16:38:01 · updated: 2022-06-09T20:39:02 · closed: null
  url: https://github.com/huggingface/datasets/issues/4163 · api: https://api.github.com/repos/huggingface/datasets/issues/4163 · id: 1,203,539,268
  body: **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild I'm wondering if there is an option to select a content warning messa...
#4162 · Add Conceptual 12M
  pull request · closed · comments: 2 · labels: [] · user: thomasw21
  created: 2022-04-13T14:57:23 · updated: 2022-04-15T08:13:01 · closed: 2022-04-15T08:06:25
  url: https://github.com/huggingface/datasets/pull/4162 · api: https://api.github.com/repos/huggingface/datasets/issues/4162 · id: 1,203,421,909
  body: null

#4161 · Add Visual Genome
  pull request · closed · comments: 4 · labels: [] · user: thomasw21
  created: 2022-04-13T12:25:24 · updated: 2022-04-21T15:42:49 · closed: 2022-04-21T13:08:52
  url: https://github.com/huggingface/datasets/pull/4161 · api: https://api.github.com/repos/huggingface/datasets/issues/4161 · id: 1,203,230,485
  body: null

#4160 · RGBA images not showing
  issue · closed · comments: 2 · labels: ["dataset-viewer", "dataset-viewer-rgba-images"] · user: cceyda
  created: 2022-04-13T06:59:23 · updated: 2022-06-21T16:43:11 · closed: 2022-06-21T16:43:11
  url: https://github.com/huggingface/datasets/issues/4160 · api: https://api.github.com/repos/huggingface/datasets/issues/4160 · id: 1,202,845,874
  body: ## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent [**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent) ![image](https://user-images.githubusercontent.com/15624271/163117683-e91edb28-41bf-43d9-b371-5c62e14f40c9.png) Am I the...

#4159 · Add `TruthfulQA` dataset
  pull request · closed · comments: 2 · labels: [] · user: jon-tow
  created: 2022-04-12T23:19:04 · updated: 2022-06-08T15:51:33 · closed: 2022-06-08T14:43:34
  url: https://github.com/huggingface/datasets/pull/4159 · api: https://api.github.com/repos/huggingface/datasets/issues/4159 · id: 1,202,522,153
  body: null
#4158 · Add AUC ROC Metric
  pull request · closed · comments: 1 · labels: [] · user: emibaylor
  created: 2022-04-12T20:53:28 · updated: 2022-04-26T19:41:50 · closed: 2022-04-26T19:35:22
  url: https://github.com/huggingface/datasets/pull/4158 · api: https://api.github.com/repos/huggingface/datasets/issues/4158 · id: 1,202,376,843
  body: null

#4157 · Fix formatting in BLEU metric card
  pull request · closed · comments: 1 · labels: [] · user: mariosasko
  created: 2022-04-12T18:29:51 · updated: 2022-04-13T14:30:25 · closed: 2022-04-13T14:16:34
  url: https://github.com/huggingface/datasets/pull/4157 · api: https://api.github.com/repos/huggingface/datasets/issues/4157 · id: 1,202,239,622
  body: Fix #4148

#4156 · Adding STSb-TR dataset
  pull request · closed · comments: 1 · labels: ["dataset contribution"] · user: figenfikri
  created: 2022-04-12T18:10:05 · updated: 2022-10-03T09:36:25 · closed: 2022-10-03T09:36:25
  url: https://github.com/huggingface/datasets/pull/4156 · api: https://api.github.com/repos/huggingface/datasets/issues/4156 · id: 1,202,220,531
  body: Added the Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf).

#4155 · Make HANS dataset streamable
  pull request · closed · comments: 1 · labels: [] · user: mariosasko
  created: 2022-04-12T17:34:13 · updated: 2022-04-13T12:03:46 · closed: 2022-04-13T11:57:35
  url: https://github.com/huggingface/datasets/pull/4155 · api: https://api.github.com/repos/huggingface/datasets/issues/4155 · id: 1,202,183,608
  body: Fix #4133
#4154 · Generate tasks.json taxonomy from `huggingface_hub`
  pull request · closed · comments: 7 · labels: [] · user: julien-c
  created: 2022-04-12T17:12:46 · updated: 2022-04-14T10:32:32 · closed: 2022-04-14T10:26:13
  url: https://github.com/huggingface/datasets/pull/4154 · api: https://api.github.com/repos/huggingface/datasets/issues/4154 · id: 1,202,145,721
  body: null

#4153 · Adding Text-based NP Enrichment (TNE) dataset
  pull request · closed · comments: 3 · labels: [] · user: yanaiela
  created: 2022-04-12T15:47:03 · updated: 2022-05-03T14:05:48 · closed: 2022-05-03T14:05:48
  url: https://github.com/huggingface/datasets/pull/4153 · api: https://api.github.com/repos/huggingface/datasets/issues/4153 · id: 1,202,040,506
  body: Added the [TNE](https://github.com/yanaiela/TNE) dataset to the library

#4152 · ArrayND error in pyarrow 5
  issue · closed · comments: 2 · labels: [] · user: lhoestq
  created: 2022-04-12T15:41:40 · updated: 2022-05-04T09:29:46 · closed: 2022-05-04T09:29:46
  url: https://github.com/huggingface/datasets/issues/4152 · api: https://api.github.com/repos/huggingface/datasets/issues/4152 · id: 1,202,034,115
  body: As found in https://github.com/huggingface/datasets/pull/3903, the ArrayND features fail on pyarrow 5: ```python import pyarrow as pa from datasets import Array2D from datasets.table import cast_array_to_feature arr = pa.array([[[0]]]) feature_type = Array2D(shape=(1, 1), dtype="int64") cast_array_to_feature(a...

#4151 · Add missing label for emotion description
  pull request · closed · comments: 0 · labels: [] · user: lijiazheng99
  created: 2022-04-12T13:17:37 · updated: 2022-04-12T13:58:50 · closed: 2022-04-12T13:58:50
  url: https://github.com/huggingface/datasets/pull/4151 · api: https://api.github.com/repos/huggingface/datasets/issues/4151 · id: 1,201,837,999
  body: null
#4150 · Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split)
  issue · closed · comments: 0 · labels: ["bug"] · user: polinaeterna
  created: 2022-04-12T11:15:55 · updated: 2022-04-28T21:02:44 · closed: 2022-04-28T21:02:44
  url: https://github.com/huggingface/datasets/issues/4150 · api: https://api.github.com/repos/huggingface/datasets/issues/4150 · id: 1,201,689,730
  body: ## Describe the bug Splits for dataset loaders without scripts are prepared inconsistently. I think it might be confusing for users. ## Steps to reproduce the bug * If you load a packaged dataset from the Hub, it infers splits from directory structure / filenames (check out the data [here](https://huggingface.co/data...

#4149 · load_dataset for winoground returning decoding error
  issue · closed · comments: 10 · labels: ["bug"] · user: odellus
  created: 2022-04-12T08:16:16 · updated: 2022-05-04T23:40:38 · closed: 2022-05-04T23:40:38
  url: https://github.com/huggingface/datasets/issues/4149 · api: https://api.github.com/repos/huggingface/datasets/issues/4149 · id: 1,201,389,221
  body: ## Describe the bug I am trying to use datasets to load winoground and I'm getting a JSON decoding error. ## Steps to reproduce the bug ```python from datasets import load_dataset token = 'hf_XXXXX' # my HF access token datasets = load_dataset('facebook/winoground', use_auth_token=token) ``` ## Expected res...

#4148 · fix confusing bleu metric example
  issue · closed · comments: 0 · labels: ["enhancement"] · user: aizawa-naoki
  created: 2022-04-12T06:18:26 · updated: 2022-04-13T14:16:34 · closed: 2022-04-13T14:16:34
  url: https://github.com/huggingface/datasets/issues/4148 · api: https://api.github.com/repos/huggingface/datasets/issues/4148 · id: 1,201,169,242
  body: **Is your feature request related to a problem? Please describe.** I would like to see the example in "Metric Card for BLEU" changed. The 0th element in the predictions list is not closed in square brackets, and the 1st list is missing a comma. The BLEU score is calculated correctly, but it is difficult to understa...

#4147 · Adjust path to datasets tutorial in How-To
  pull request · closed · comments: 1 · labels: [] · user: NimaBoscarino
  created: 2022-04-12T01:20:34 · updated: 2022-04-12T08:32:24 · closed: 2022-04-12T08:26:02
  url: https://github.com/huggingface/datasets/pull/4147 · api: https://api.github.com/repos/huggingface/datasets/issues/4147 · id: 1,200,756,008
  body: The link in the How-To overview page to the Datasets tutorials is currently broken. This is just a small adjustment to make it match the format used in https://github.com/huggingface/datasets/blob/master/docs/source/tutorial.md. (Edit to add: The link in the PR deployment (https://moon-ci-docs.huggingface.co/docs/da...
#4146 · SAMSum dataset viewer not working
  issue · closed · comments: 3 · labels: ["bug"] · user: aakashnegi10
  created: 2022-04-11T16:22:57 · updated: 2022-04-29T16:26:09 · closed: 2022-04-29T16:26:09
  url: https://github.com/huggingface/datasets/issues/4146 · api: https://api.github.com/repos/huggingface/datasets/issues/4146 · id: 1,200,215,789
  body: ## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No

#4145 · Redirect TIMIT download from LDC
  pull request · closed · comments: 4 · labels: [] · user: lhoestq
  created: 2022-04-11T16:17:55 · updated: 2022-04-13T15:39:31 · closed: 2022-04-13T15:33:04
  url: https://github.com/huggingface/datasets/pull/4145 · api: https://api.github.com/repos/huggingface/datasets/issues/4145 · id: 1,200,209,781
  body: LDC data is protected under US copyright laws and under various legal agreements between the Linguistic Data Consortium/the University of Pennsylvania and data providers which prohibit redistribution of that data by anyone other than LDC. Similarly, LDC's membership agreements, non-member user agreement and various cor...

#4144 · Fix splits in local packaged modules, local datasets without script and hub datasets without script
  pull request · closed · comments: 7 · labels: [] · user: polinaeterna
  created: 2022-04-11T13:57:33 · updated: 2022-04-29T09:12:14 · closed: 2022-04-28T21:02:45
  url: https://github.com/huggingface/datasets/pull/4144 · api: https://api.github.com/repos/huggingface/datasets/issues/4144 · id: 1,200,016,983
  body: fixes #4150 I suggest inferring the splits structure from files when `data_dir` is passed with `get_patterns_locally`, analogous to what's done in `LocalDatasetModuleFactoryWithoutScript` with `self.path`, instead of generating files with `data_dir/**` patterns and putting them all into a single default (train) split. ...

#4143 · Unable to download `Wikepedia` 20220301.en version
  issue · closed · comments: 3 · labels: ["bug"] · user: beyondguo
  created: 2022-04-11T13:00:14 · updated: 2022-08-17T00:37:55 · closed: 2022-04-21T17:04:14
  url: https://github.com/huggingface/datasets/issues/4143 · api: https://api.github.com/repos/huggingface/datasets/issues/4143 · id: 1,199,937,961
  body: ## Describe the bug Unable to download the Wikipedia dataset, 20220301.en version ## Steps to reproduce the bug ```python !pip install apache_beam mwparserfromhell dataset_wikipedia = load_dataset("wikipedia", "20220301.en") ``` ## Actual results ``` ValueError: BuilderConfig 20220301.en not found. Avail...
1,199,794,750
https://api.github.com/repos/huggingface/datasets/issues/4142
https://github.com/huggingface/datasets/issues/4142
4,142
Add ObjectFolder 2.0 dataset
open
1
2022-04-11T10:57:51
2022-10-05T10:30:49
null
osanseviero
[ "dataset request" ]
## Adding a Dataset - **Name:** ObjectFolder 2.0 - **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance. - **Paper:** [*link to the dataset paper if available*](...
false
1,199,610,885
https://api.github.com/repos/huggingface/datasets/issues/4141
https://github.com/huggingface/datasets/issues/4141
4,141
Why is the dataset not visible under the dataset preview section?
closed
0
2022-04-11T08:36:42
2022-04-11T18:55:32
2022-04-11T17:09:49
Nid989
[ "dataset-viewer" ]
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
false
1,199,492,356
https://api.github.com/repos/huggingface/datasets/issues/4140
https://github.com/huggingface/datasets/issues/4140
4,140
Error loading arxiv data set
closed
3
2022-04-11T07:06:34
2022-04-12T16:24:08
2022-04-12T16:24:08
yjqiu
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`. ``` Traceback (most recent call last): File "scripts/summarization.py", line 354, in <module> main(args) File "scripts/summari...
false
1,199,443,822
https://api.github.com/repos/huggingface/datasets/issues/4139
https://github.com/huggingface/datasets/issues/4139
4,139
Dataset viewer issue for Winoground
closed
11
2022-04-11T06:11:41
2022-06-21T16:43:58
2022-06-21T16:43:58
alcinos
[ "dataset-viewer", "dataset-viewer-gated" ]
## Dataset viewer issue for 'Winoground' **Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train) *short description of the issue* Getting 401, message='Unauthorized' The dataset is subject to authorization, but I can access the files f...
false
1,199,291,730
https://api.github.com/repos/huggingface/datasets/issues/4138
https://github.com/huggingface/datasets/issues/4138
4,138
Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
closed
5
2022-04-11T02:07:13
2022-04-19T03:15:46
2022-04-16T15:46:29
iluvvatar
[]
## Dataset viewer issue for 'MalakhovIlya/RuREBus' **Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus **Description** Using os.walk(topdown=False) in DatasetBuilder causes following error: Status code: 400 Exception: TypeError Message: xwalk() got an unexpected keyword argument 'topdow...
false
1,199,000,453
https://api.github.com/repos/huggingface/datasets/issues/4137
https://github.com/huggingface/datasets/pull/4137
4,137
Add single dataset citations for TweetEval
closed
2
2022-04-10T11:51:54
2022-04-12T07:57:22
2022-04-12T07:51:15
gchhablani
[]
This PR adds single dataset citations as per the request of the original creators of the TweetEval dataset. This is a recent email from the creator: > Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://githu...
true
1,198,307,610
https://api.github.com/repos/huggingface/datasets/issues/4135
https://github.com/huggingface/datasets/pull/4135
4,135
Support streaming xtreme dataset for PAN-X config
closed
1
2022-04-09T06:19:48
2022-05-06T08:39:40
2022-04-11T06:54:14
albertvillanova
[]
Support streaming xtreme dataset for PAN-X config.
true
1,197,937,146
https://api.github.com/repos/huggingface/datasets/issues/4134
https://github.com/huggingface/datasets/issues/4134
4,134
ELI5 supporting documents
open
1
2022-04-08T23:36:27
2022-04-13T13:52:46
null
saurabh-0077
[ "question" ]
If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hrs??
false
1,197,830,623
https://api.github.com/repos/huggingface/datasets/issues/4133
https://github.com/huggingface/datasets/issues/4133
4,133
HANS dataset preview broken
closed
3
2022-04-08T21:06:15
2022-04-13T11:57:34
2022-04-13T11:57:34
pietrolesci
[ "streaming" ]
## Dataset viewer issue for '*hans*' **Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans) HANS dataset preview is broken with error 400 Am I the one who added this dataset ? No
false
1,197,661,720
https://api.github.com/repos/huggingface/datasets/issues/4132
https://github.com/huggingface/datasets/pull/4132
4,132
Support streaming xtreme dataset for PAWS-X config
closed
1
2022-04-08T18:25:32
2022-05-06T08:39:42
2022-04-08T21:02:44
albertvillanova
[]
Support streaming xtreme dataset for PAWS-X config.
true
1,197,472,249
https://api.github.com/repos/huggingface/datasets/issues/4131
https://github.com/huggingface/datasets/pull/4131
4,131
Support streaming xtreme dataset for udpos config
closed
1
2022-04-08T15:30:49
2022-05-06T08:39:46
2022-04-08T16:28:07
albertvillanova
[]
Support streaming xtreme dataset for udpos config.
true
1,197,456,857
https://api.github.com/repos/huggingface/datasets/issues/4130
https://github.com/huggingface/datasets/pull/4130
4,130
Add SBU Captions Photo Dataset
closed
1
2022-04-08T15:17:39
2022-04-12T10:47:31
2022-04-12T10:41:29
thomasw21
[]
null
true
1,197,376,796
https://api.github.com/repos/huggingface/datasets/issues/4129
https://github.com/huggingface/datasets/issues/4129
4,129
dataset metadata for reproducibility
open
1
2022-04-08T14:17:28
2023-09-29T09:23:56
null
nbroad1881
[ "enhancement" ]
When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer` which could then be saved to a model card. This is useful for people who run many experiments on different versions (commits/branches) of ...
false
1,197,326,311
https://api.github.com/repos/huggingface/datasets/issues/4128
https://github.com/huggingface/datasets/pull/4128
4,128
More robust `cast_to_python_objects` in `TypedSequence`
closed
1
2022-04-08T13:33:35
2022-04-13T14:07:41
2022-04-13T14:01:16
mariosasko
[]
Adds a fallback to run an expensive version of `cast_to_python_objects` which exhaustively checks entire lists to avoid the `ArrowInvalid: Could not convert` error in `TypedSequence`. Currently, this error can happen in situations where only some images are decoded in `map`, in which case `cast_to_python_objects` fails...
true
1,197,297,756
https://api.github.com/repos/huggingface/datasets/issues/4127
https://github.com/huggingface/datasets/pull/4127
4,127
Add configs with processed data in medical_dialog dataset
closed
1
2022-04-08T13:08:16
2022-05-06T08:39:50
2022-04-08T16:20:51
albertvillanova
[]
There exist processed data files that do not require parsing the raw data files (which can take a long time). Fix #4122.
true
1,196,665,194
https://api.github.com/repos/huggingface/datasets/issues/4126
https://github.com/huggingface/datasets/issues/4126
4,126
dataset viewer issue for common_voice
closed
2
2022-04-07T23:34:28
2022-04-25T13:42:17
2022-04-25T13:42:16
laphang
[ "dataset-viewer", "audio_column" ]
## Dataset viewer issue for 'common_voice' **Link:** https://huggingface.co/datasets/common_voice Server Error Status code: 400 Exception: TypeError Message: __init__() got an unexpected keyword argument 'audio_column' Am I the one who added this dataset ? No
false
1,196,633,936
https://api.github.com/repos/huggingface/datasets/issues/4125
https://github.com/huggingface/datasets/pull/4125
4,125
BIG-bench
closed
21
2022-04-07T22:33:30
2022-06-08T17:57:48
2022-06-08T17:32:32
andersjohanandreassen
[]
This PR adds all BIG-bench json tasks to huggingface/datasets.
true
1,196,469,842
https://api.github.com/repos/huggingface/datasets/issues/4124
https://github.com/huggingface/datasets/issues/4124
4,124
Image decoding often fails when transforming Image datasets
closed
7
2022-04-07T19:17:25
2022-04-13T14:01:16
2022-04-13T14:01:16
RafayAK
[ "bug" ]
## Describe the bug When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors. Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa...
false
1,196,367,512
https://api.github.com/repos/huggingface/datasets/issues/4123
https://github.com/huggingface/datasets/issues/4123
4,123
Building C4 takes forever
closed
1
2022-04-07T17:41:30
2023-06-26T22:01:29
2023-06-26T22:01:29
StellaAthena
[ "bug" ]
## Describe the bug C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources. ## Steps to reproduce the bug ```python c4 = datasets.load...
false
1,196,095,072
https://api.github.com/repos/huggingface/datasets/issues/4122
https://github.com/huggingface/datasets/issues/4122
4,122
medical_dialog zh has very slow _generate_examples
closed
3
2022-04-07T14:00:51
2022-04-08T16:20:51
2022-04-08T16:20:51
nbroad1881
[ "bug" ]
## Describe the bug After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours. ## Steps to reproduce the bug The easiest way I've found to download files from...
false
1,196,000,018
https://api.github.com/repos/huggingface/datasets/issues/4121
https://github.com/huggingface/datasets/issues/4121
4,121
datasets.load_metric cannot load a local metric
closed
1
2022-04-07T12:48:56
2023-01-18T14:30:46
2022-04-07T13:53:27
SadGare
[ "bug" ]
## Describe the bug No matter how hard I try to tell load_metric that I want to load a local metric file, it still continues to fetch things from the Internet. And unfortunately it says 'ConnectionError: Couldn't reach'. However, I can download this file without a connection error and point load_metric to its local directory. A...
false
1,195,887,430
https://api.github.com/repos/huggingface/datasets/issues/4120
https://github.com/huggingface/datasets/issues/4120
4,120
Representing dictionaries (json) objects as features
open
0
2022-04-07T11:07:41
2022-04-07T11:07:41
null
yanaiela
[ "enhancement" ]
In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and which may differ between samples), originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442). F...
false
1,195,641,298
https://api.github.com/repos/huggingface/datasets/issues/4119
https://github.com/huggingface/datasets/pull/4119
4,119
Hotfix failing CI tests on Windows
closed
1
2022-04-07T07:38:46
2022-04-07T09:47:24
2022-04-07T07:57:13
albertvillanova
[]
This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414 Fix #4118 I guess this issue is related to this PR: - huggingface/huggingface_hub#815
true
1,195,638,944
https://api.github.com/repos/huggingface/datasets/issues/4118
https://github.com/huggingface/datasets/issues/4118
4,118
Failing CI tests on Windows
closed
0
2022-04-07T07:36:25
2022-04-07T07:57:13
2022-04-07T07:57:13
albertvillanova
[ "bug" ]
## Describe the bug Our CI Windows tests are failing from yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
false
1,195,552,406
https://api.github.com/repos/huggingface/datasets/issues/4117
https://github.com/huggingface/datasets/issues/4117
4,117
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
closed
13
2022-04-07T05:52:36
2024-05-07T09:24:35
2022-04-19T15:36:35
arymbe
[ "bug" ]
## Describe the bug Could you help me, please? I got the following error. AttributeError: module 'huggingface_hub' has no attribute 'hf_api' ## Steps to reproduce the bug when I imported the datasets # Sample code to reproduce the bug from datasets import list_datasets, load_dataset, list_metrics, load_metr...
false
1,194,926,459
https://api.github.com/repos/huggingface/datasets/issues/4116
https://github.com/huggingface/datasets/pull/4116
4,116
Pretty print dataset info files
closed
5
2022-04-06T17:40:48
2022-04-08T11:28:01
2022-04-08T11:21:53
mariosasko
[]
Adds indentation to the `dataset_infos.json` file when saving for nicer diffs. (suggested by @julien-c) This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sur...
true
1,194,907,555
https://api.github.com/repos/huggingface/datasets/issues/4115
https://github.com/huggingface/datasets/issues/4115
4,115
ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
closed
5
2022-04-06T17:29:43
2022-06-01T13:04:16
2022-06-01T13:04:16
cceyda
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I sometimes like to peek at the dataset images from jupyterlab. thus '.ipynb_checkpoints' folder appears where my dataset is and (just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss especially if t...
false
1,194,855,345
https://api.github.com/repos/huggingface/datasets/issues/4114
https://github.com/huggingface/datasets/issues/4114
4,114
Allow downloading just some columns of a dataset
open
14
2022-04-06T16:38:46
2025-02-17T15:10:56
null
osanseviero
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always makes sense for this kind of use case **Describe the solution you'd like** Be able to just download...
false
1,194,843,532
https://api.github.com/repos/huggingface/datasets/issues/4113
https://github.com/huggingface/datasets/issues/4113
4,113
Multiprocessing with FileLock fails in python 3.9
closed
1
2022-04-06T16:27:09
2022-11-28T11:49:14
2022-11-28T11:49:14
lhoestq
[ "bug" ]
On python 3.9, this code hangs: ```python from multiprocessing import Pool from filelock import FileLock def run(i): print(f"got the lock in multi process [{i}]") with FileLock("tmp.lock"): with Pool(2) as pool: pool.map(run, range(2)) ``` This is because the subprocesses try to ac...
false
1,194,752,765
https://api.github.com/repos/huggingface/datasets/issues/4112
https://github.com/huggingface/datasets/issues/4112
4,112
ImageFolder with Grayscale images dataset
closed
3
2022-04-06T15:10:00
2022-04-22T10:21:53
2022-04-22T10:21:52
chainyo
[]
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP). I'm getting an error when I try to use the images for training a model with PyTorch DataLoader. Here is the full traceback: ```bash AttributeError: Caught AttributeError in D...
false
1,194,660,699
https://api.github.com/repos/huggingface/datasets/issues/4111
https://github.com/huggingface/datasets/pull/4111
4,111
Update security policy
closed
1
2022-04-06T13:59:51
2022-04-07T09:46:30
2022-04-07T09:40:27
albertvillanova
[]
null
true
1,194,581,375
https://api.github.com/repos/huggingface/datasets/issues/4110
https://github.com/huggingface/datasets/pull/4110
4,110
Matthews Correlation Metric Card
closed
1
2022-04-06T12:59:35
2022-05-03T13:43:17
2022-05-03T13:36:13
emibaylor
[]
null
true