Schema of the dump (one row per issue or pull request in the `huggingface/datasets` repository; the viewer's `⌀` marker indicates nullable columns):

| Column | Type | Range / values |
| --- | --- | --- |
| `id` | int64 | 599M – 3.29B |
| `url` | string (58–61 chars) | GitHub REST API endpoint |
| `html_url` | string (46–51 chars) | web URL of the issue or PR |
| `number` | int64 | 1 – 7.72k |
| `title` | string (1–290 chars) | |
| `state` | string | 2 classes (`open` / `closed`) |
| `comments` | int64 | 0 – 70 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| `closed_at` | timestamp[s], nullable | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| `user_login` | string (3–26 chars) | |
| `labels` | list | 0–4 items |
| `body` | string (0–228k chars), nullable | |
| `is_pull_request` | bool | 2 classes |
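Once loaded (for example via `datasets.load_dataset`), each row is a plain record following the schema above. A minimal sketch in plain Python, not tied to any particular loading API; the three sample rows are transcribed from records below (#4316, #4310, #4305), with only the fields needed for filtering:

```python
# Hypothetical in-memory rows following the schema above; values are copied
# from records #4316, #4310, and #4305 in this dump.
rows = [
    {"number": 4316, "state": "closed", "is_pull_request": True, "labels": []},
    {"number": 4310, "state": "closed", "is_pull_request": False, "labels": ["bug"]},
    {"number": 4305, "state": "open", "is_pull_request": True, "labels": ["transfer-to-evaluate"]},
]

# Pull requests that are still open.
open_prs = [r["number"] for r in rows if r["is_pull_request"] and r["state"] == "open"]

# Issues (not PRs) carrying the "bug" label.
bug_issues = [r["number"] for r in rows if not r["is_pull_request"] and "bug" in r["labels"]]

print(open_prs)    # [4305]
print(bug_issues)  # [4310]
```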
Rows (most recent first):

**[#4316](https://github.com/huggingface/datasets/pull/4316) Support passing config_kwargs to CLI run_beam**
PR · closed · albertvillanova · 1 comment · created 2022-05-11T13:53:37 · updated 2022-05-11T14:36:49 · closed 2022-05-11T14:28:31 · id 1,232,681,207
> This PR supports passing `config_kwargs` to CLI run_beam, so that for example for "wikipedia" dataset, we can pass: ``` --date 20220501 --language ca ```

**[#4315](https://github.com/huggingface/datasets/pull/4315) Fix CLI run_beam namespace**
PR · closed · albertvillanova · 1 comment · created 2022-05-11T12:21:00 · updated 2022-05-11T13:13:00 · closed 2022-05-11T13:05:08 · id 1,232,549,330
> Currently, it raises TypeError: ``` TypeError: __init__() got an unexpected keyword argument 'namespace' ```

**[#4314](https://github.com/huggingface/datasets/pull/4314) Catch pull error when mirroring**
PR · closed · lhoestq · 1 comment · created 2022-05-11T09:38:35 · updated 2022-05-11T12:54:07 · closed 2022-05-11T12:46:42 · id 1,232,326,726
> Catch pull errors when mirroring so that the script continues to update the other datasets. The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed.

**[#4313](https://github.com/huggingface/datasets/pull/4313) Add API code examples for Builder classes**
PR · closed · labels: documentation · stevhliu · 1 comment · created 2022-05-10T22:22:32 · updated 2022-05-12T17:02:43 · closed 2022-05-12T12:36:57 · id 1,231,764,100
> This PR adds API code examples for the Builder classes.

**[#4312](https://github.com/huggingface/datasets/pull/4312) added TR-News dataset**
PR · closed · labels: dataset contribution · batubayk · 1 comment · created 2022-05-10T20:33:00 · updated 2022-10-03T09:36:45 · closed 2022-10-03T09:36:45 · id 1,231,662,775
> (no body)

**[#4311](https://github.com/huggingface/datasets/pull/4311) [Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly**
PR · closed · lhoestq · 2 comments · created 2022-05-10T15:52:15 · updated 2022-05-10T17:19:42 · closed 2022-05-10T17:11:47 · id 1,231,369,438
> I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`. While doing so I also improved a few aspects: - we don't need to infer labels from file names when there are metadata - they can just be in the metadata if necessary - rai...
**[#4310](https://github.com/huggingface/datasets/issues/4310) Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'**
Issue · closed · labels: bug · milmin · 0 comments · created 2022-05-10T15:12:53 · updated 2022-05-11T16:46:31 · closed 2022-05-11T16:46:31 · id 1,231,319,815
> ## Describe the bug Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine. In the following steps we load parquet files but the same happens with pickle files. The problem seems ...

**[#4309](https://github.com/huggingface/datasets/pull/4309) [WIP] Add TEDLIUM dataset**
PR · closed · labels: dataset request, speech · sanchit-gandhi · 11 comments · created 2022-05-10T14:12:47 · updated 2022-06-17T12:54:40 · closed 2022-06-17T11:44:01 · id 1,231,232,935
> Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3 TODO: - [x] Port `tedium.py` from TF datasets using `convert_dataset.sh` script - [x] Make `load_dataset` work - [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~ - [ ] ~~Create dummy data for conti...

**[#4308](https://github.com/huggingface/datasets/pull/4308) Remove unused multiprocessing args from test CLI**
PR · closed · albertvillanova · 1 comment · created 2022-05-10T14:02:15 · updated 2022-05-11T12:58:25 · closed 2022-05-11T12:50:43 · id 1,231,217,783
> Multiprocessing is not used in the test CLI.

**[#4307](https://github.com/huggingface/datasets/pull/4307) Add packaged builder configs to the documentation**
PR · closed · lhoestq · 1 comment · created 2022-05-10T13:34:19 · updated 2022-05-10T14:03:50 · closed 2022-05-10T13:55:54 · id 1,231,175,639
> Adding the packaged builders' configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc.

**[#4306](https://github.com/huggingface/datasets/issues/4306) `load_dataset` does not work with certain filename.**
Issue · closed · labels: bug · whatever60 · 1 comment · created 2022-05-10T13:14:04 · updated 2022-05-10T18:58:36 · closed 2022-05-10T18:58:09 · id 1,231,137,204
> ## Describe the bug This is a weird bug that took me some time to find out. I have a JSON dataset that I want to load with `load_dataset` like this: ``` data_files = dict(train="train.json.zip", val="val.json.zip") dataset = load_dataset("json", data_files=data_files, field="data") ``` ## Expected results ...

**[#4305](https://github.com/huggingface/datasets/pull/4305) Fixes FrugalScore**
PR · open · labels: transfer-to-evaluate · moussaKam · 2 comments · created 2022-05-10T12:44:06 · updated 2022-09-22T16:42:06 · id 1,231,099,934
> There are two minor modifications in this PR: 1) `predictions` and `references` are swapped. Basically FrugalScore is commutative, however some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results as reported in the paper. 2) I switched to d...
**[#4304](https://github.com/huggingface/datasets/issues/4304) Language code search does direct matches**
Issue · open · labels: bug · leondz · 1 comment · created 2022-05-10T11:59:16 · updated 2022-05-10T12:38:42 · id 1,231,047,051
> ## Describe the bug Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-taggin...

**[#4303](https://github.com/huggingface/datasets/pull/4303) Fix: Add missing comma**
PR · closed · mrm8488 · 1 comment · created 2022-05-10T09:21:38 · updated 2022-05-11T08:50:15 · closed 2022-05-11T08:50:14 · id 1,230,867,728
> (no body)

**[#4302](https://github.com/huggingface/datasets/pull/4302) Remove hacking license tags when mirroring datasets on the Hub**
PR · closed · albertvillanova · 9 comments · created 2022-05-10T05:52:46 · updated 2022-05-20T09:48:30 · closed 2022-05-20T09:40:20 · id 1,230,651,117
> Currently, when mirroring datasets on the Hub, the license tags are "hacked": the characters "." and "$" are removed. On the contrary, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub. I guess this hacking is no longer necessary: - it is not applied...

**[#4301](https://github.com/huggingface/datasets/pull/4301) Add ImageNet-Sketch dataset**
PR · closed · nateraw · 2 comments · created 2022-05-09T23:38:45 · updated 2022-05-23T18:14:14 · closed 2022-05-23T18:05:29 · id 1,230,401,256
> This PR adds the ImageNet-Sketch dataset and resolves #3953.

**[#4300](https://github.com/huggingface/datasets/pull/4300) Add API code examples for loading methods**
PR · closed · labels: documentation · stevhliu · 1 comment · created 2022-05-09T21:30:26 · updated 2022-05-25T16:23:15 · closed 2022-05-25T09:20:13 · id 1,230,272,761
> This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :) I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`,...

**[#4299](https://github.com/huggingface/datasets/pull/4299) Remove manual download from imagenet-1k**
PR · closed · mariosasko · 3 comments · created 2022-05-09T20:49:18 · updated 2022-05-25T14:54:59 · closed 2022-05-25T14:46:16 · id 1,230,236,782
> Remove the manual download code from `imagenet-1k` to make it a regular dataset.
**[#4298](https://github.com/huggingface/datasets/issues/4298) Normalise license names**
Issue · closed · labels: enhancement · leondz · 2 comments · created 2022-05-09T13:51:32 · updated 2022-05-20T09:51:50 · closed 2022-05-20T09:51:50 · id 1,229,748,006
> **Is your feature request related to a problem? Please describe.** When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The cause of the ...

**[#4297](https://github.com/huggingface/datasets/issues/4297) Datasets YAML tagging space is down**
Issue · closed · labels: bug · leondz · 3 comments · created 2022-05-09T13:45:05 · updated 2022-05-09T14:44:25 · closed 2022-05-09T14:44:25 · id 1,229,735,498
> ## Describe the bug The neat hf spaces app for generating YAML tags for dataset `README.md`s is down ## Steps to reproduce the bug 1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging ## Expected results There'll be a HF spaces web app for generating dataset metadata YAML ## Actual results T...

**[#4296](https://github.com/huggingface/datasets/pull/4296) Fix URL query parameters in compression hop path when streaming**
PR · open · albertvillanova · 1 comment · created 2022-05-09T11:18:22 · updated 2022-07-06T15:19:53 · id 1,229,554,645
> Fix #3488.

**[#4295](https://github.com/huggingface/datasets/pull/4295) Fix missing lz4 dependency for tests**
PR · closed · albertvillanova · 1 comment · created 2022-05-09T10:53:20 · updated 2022-05-09T11:21:22 · closed 2022-05-09T11:13:44 · id 1,229,527,283
> Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped.

**[#4294](https://github.com/huggingface/datasets/pull/4294) Fix CLI run_beam save_infos**
PR · closed · albertvillanova · 1 comment · created 2022-05-09T09:47:43 · updated 2022-05-10T07:04:04 · closed 2022-05-10T06:56:10 · id 1,229,455,582
> Currently, it raises TypeError: ``` TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos' ```

**[#4293](https://github.com/huggingface/datasets/pull/4293) Fix wrong map parameter name in cache docs**
PR · closed · h4iku · 1 comment · created 2022-05-08T07:27:46 · updated 2022-06-14T16:49:00 · closed 2022-06-14T16:07:00 · id 1,228,815,477
> The `load_from_cache` parameter of `map` should be `load_from_cache_file`.
**[#4292](https://github.com/huggingface/datasets/pull/4292) Add API code examples for remaining main classes**
PR · closed · labels: documentation · stevhliu · 1 comment · created 2022-05-06T18:15:31 · updated 2022-05-25T18:05:13 · closed 2022-05-25T17:56:36 · id 1,228,216,788
> This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :)

**[#4291](https://github.com/huggingface/datasets/issues/4291) Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message**
Issue · closed · labels: dataset-viewer · leondz · 2 comments · created 2022-05-06T12:03:27 · updated 2022-05-09T08:25:58 · closed 2022-05-09T08:25:58 · id 1,227,777,500
> ### Link https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train ### Description The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss? ### Owner Yes

**[#4290](https://github.com/huggingface/datasets/pull/4290) Update paper link in medmcqa dataset card**
PR · closed · labels: dataset contribution · monk1337 · 2 comments · created 2022-05-06T08:52:51 · updated 2022-09-30T11:51:28 · closed 2022-09-30T11:49:07 · id 1,227,592,826
> Updating readme in medmcqa dataset.

**[#4288](https://github.com/huggingface/datasets/pull/4288) Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287**
PR · closed · alvarobartt · 0 comments · created 2022-05-05T15:21:49 · updated 2022-05-10T12:55:06 · closed 2022-05-10T12:09:48 · id 1,226,821,732
> This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗

**[#4287](https://github.com/huggingface/datasets/issues/4287) "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None**
Issue · closed · labels: bug · alvarobartt · 3 comments · created 2022-05-05T15:09:45 · updated 2022-05-10T13:53:19 · closed 2022-05-10T13:53:19 · id 1,226,806,652
> ## Describe the bug When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All that assuming that `datasets` is properly...

**[#4286](https://github.com/huggingface/datasets/pull/4286) Add Lahnda language tag**
PR · closed · mariosasko · 1 comment · created 2022-05-05T14:34:20 · updated 2022-05-10T12:10:04 · closed 2022-05-10T12:02:38 · id 1,226,758,621
> This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset.
**[#4285](https://github.com/huggingface/datasets/pull/4285) Update LexGLUE README.md**
PR · closed · iliaschalkidis · 1 comment · created 2022-05-05T08:36:50 · updated 2022-05-05T13:39:04 · closed 2022-05-05T13:33:35 · id 1,226,374,831
> Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.

**[#4284](https://github.com/huggingface/datasets/issues/4284) Issues in processing very large datasets**
Issue · closed · labels: bug · sajastu · 2 comments · created 2022-05-05T05:01:09 · updated 2023-07-25T15:12:38 · closed 2023-07-25T15:12:38 · id 1,226,200,727
> ## Describe the bug I'm trying to add a feature called "subgraph" to CNN/DM dataset (modifications on run_summarization.py of Huggingface Transformers script) --- I'm not quite sure if I'm doing it the right way, though--- but the main problem appears when the training starts where the error ` [OSError: [Errno 12] Can...

**[#4283](https://github.com/huggingface/datasets/pull/4283) Fix filesystem docstring**
PR · closed · stevhliu · 1 comment · created 2022-05-04T17:42:42 · updated 2022-05-06T16:32:02 · closed 2022-05-06T06:22:17 · id 1,225,686,988
> This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed.

**[#4282](https://github.com/huggingface/datasets/pull/4282) Don't do unnecessary list type casting to avoid replacing None values by empty lists**
PR · closed · lhoestq · 3 comments · created 2022-05-04T16:37:01 · updated 2022-05-06T10:43:58 · closed 2022-05-06T10:37:00 · id 1,225,616,545
> In certain cases, `None` values are replaced by empty lists when casting feature types. It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyA...

**[#4281](https://github.com/huggingface/datasets/pull/4281) Remove a copy-paste sentence in dataset cards**
PR · closed · albertvillanova · 2 comments · created 2022-05-04T15:41:55 · updated 2022-05-06T08:38:03 · closed 2022-05-04T18:33:16 · id 1,225,556,939
> Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```

**[#4280](https://github.com/huggingface/datasets/pull/4280) Add missing features to commonsense_qa dataset**
PR · closed · albertvillanova · 3 comments · created 2022-05-04T14:24:26 · updated 2022-05-06T14:23:57 · closed 2022-05-06T14:16:46 · id 1,225,446,844
> Partially fixes #4275.
**[#4279](https://github.com/huggingface/datasets/pull/4279) Update minimal PyArrow version warning**
PR · closed · mariosasko · 1 comment · created 2022-05-04T12:26:09 · updated 2022-05-05T08:50:58 · closed 2022-05-05T08:43:47 · id 1,225,300,273
> Update the minimal PyArrow version warning (should've been part of #4250).

**[#4278](https://github.com/huggingface/datasets/pull/4278) Add missing features to openbookqa dataset for additional config**
PR · closed · albertvillanova · 2 comments · created 2022-05-04T09:22:50 · updated 2022-05-06T13:13:20 · closed 2022-05-06T13:06:01 · id 1,225,122,123
> Partially fixes #4276.

**[#4277](https://github.com/huggingface/datasets/pull/4277) Enable label alignment for token classification datasets**
PR · closed · lewtun · 3 comments · created 2022-05-04T07:15:16 · updated 2022-05-06T15:42:15 · closed 2022-05-06T15:36:31 · id 1,225,002,286
> This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER). Example of usage: ```python from datasets import load_dataset ner_ds = load_dataset("conll2003", split="train") # returns [3, 0, 7, 0, 0, 0,...

**[#4276](https://github.com/huggingface/datasets/issues/4276) OpenBookQA has missing and inconsistent field names**
Issue · closed · labels: dataset bug · vblagoje · 11 comments · created 2022-05-04T05:51:52 · updated 2022-10-11T17:11:53 · closed 2022-10-05T13:50:03 · id 1,224,949,252
> ## Describe the bug OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format. 2. Add missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanSc...

**[#4275](https://github.com/huggingface/datasets/issues/4275) CommonSenseQA has missing and inconsistent field names**
Issue · open · labels: dataset bug · vblagoje · 1 comment · created 2022-05-04T05:38:59 · updated 2022-05-04T11:41:18 · id 1,224,943,414
> ## Describe the bug In short, CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to: 1. Add the dataset matching "id" field. The current dataset, instead, regenerates monotonically increasing id. 2. The ["question"]["stem"] field is flattened into "question". We sh...

**[#4274](https://github.com/huggingface/datasets/pull/4274) Add API code examples for IterableDataset**
PR · closed · labels: documentation · stevhliu · 1 comment · created 2022-05-03T22:44:17 · updated 2022-05-04T16:29:32 · closed 2022-05-04T16:22:04 · id 1,224,740,303
> This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
**[#4273](https://github.com/huggingface/datasets/pull/4273) Leaderboard info added for TNE**
PR · closed · yanaiela · 1 comment · created 2022-05-03T21:35:41 · updated 2022-05-05T13:25:24 · closed 2022-05-05T13:18:13 · id 1,224,681,036
> (no body)

**[#4272](https://github.com/huggingface/datasets/pull/4272) Fix typo in logging docs**
PR · closed · stevhliu · 4 comments · created 2022-05-03T20:47:57 · updated 2022-05-04T15:42:27 · closed 2022-05-04T06:58:36 · id 1,224,635,660
> This PR fixes #4271.

**[#4271](https://github.com/huggingface/datasets/issues/4271) A typo in docs of datasets.disable_progress_bar**
Issue · closed · labels: bug · jiangwangyi · 1 comment · created 2022-05-03T17:44:56 · updated 2022-05-04T06:58:35 · closed 2022-05-04T06:58:35 · id 1,224,404,403
> ## Describe the bug In the docs of V2.1.0 `datasets.disable_progress_bar`, we should replace "enable" with "disable".

**[#4270](https://github.com/huggingface/datasets/pull/4270) Fix style in openbookqa dataset**
PR · closed · albertvillanova · 1 comment · created 2022-05-03T15:21:34 · updated 2022-05-06T08:38:06 · closed 2022-05-03T16:20:52 · id 1,224,244,460
> CI in PR: - #4259 was green, but after merging it to master, a code quality error appeared.

**[#4269](https://github.com/huggingface/datasets/pull/4269) Add license and point of contact to big_patent dataset**
PR · closed · albertvillanova · 1 comment · created 2022-05-03T09:24:07 · updated 2022-05-06T08:38:09 · closed 2022-05-03T11:16:19 · id 1,223,865,145
> Update metadata of big_patent dataset with: - license - point of contact

**[#4268](https://github.com/huggingface/datasets/issues/4268) error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered**
Issue · closed · labels: dataset bug · i-am-neo · 10 comments · created 2022-05-02T20:34:25 · updated 2022-05-06T15:53:30 · closed 2022-05-03T11:23:48 · id 1,223,331,964
> ## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results...
**[#4267](https://github.com/huggingface/datasets/pull/4267) Replace data URL in SAMSum dataset within the same repository**
PR · closed · albertvillanova · 1 comment · created 2022-05-02T18:38:08 · updated 2022-05-06T08:38:13 · closed 2022-05-02T19:03:49 · id 1,223,214,275
> Replace data URL with one in the same repository.

**[#4266](https://github.com/huggingface/datasets/pull/4266) Add HF Speech Bench to Librispeech Dataset Card**
PR · closed · sanchit-gandhi · 1 comment · created 2022-05-02T16:59:31 · updated 2022-05-05T08:47:20 · closed 2022-05-05T08:40:09 · id 1,223,116,436
> Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/dat...

**[#4263](https://github.com/huggingface/datasets/pull/4263) Rename imagenet2012 -> imagenet-1k**
PR · closed · lhoestq · 4 comments · created 2022-05-02T10:26:21 · updated 2022-05-02T17:50:46 · closed 2022-05-02T16:32:57 · id 1,222,723,083
> On the Hugging Face Hub, users refer to imagenet2012 (from #4178 ) as imagenet-1k in their model tags. To correctly link models to imagenet, we should rename this dataset `imagenet-1k`. Later we can add `imagenet-21k` as a new dataset if we want. Once this one is merged we can delete the `imagenet2012` dataset...

**[#4262](https://github.com/huggingface/datasets/pull/4262) Add YAML tags to Dataset Card rotten tomatoes**
PR · closed · mo6zes · 1 comment · created 2022-05-01T11:59:08 · updated 2022-05-03T14:27:33 · closed 2022-05-03T14:20:35 · id 1,222,130,749
> The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other.

**[#4261](https://github.com/huggingface/datasets/issues/4261) data leakage in `webis/conclugen` dataset**
Issue · closed · labels: dataset bug · xflashxx · 5 comments · created 2022-04-30T17:43:37 · updated 2022-05-03T06:04:26 · closed 2022-05-03T06:04:26 · id 1,221,883,779
> ## Describe the bug Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results. Furthermore, all splits contain duplicate samples. ## Steps to reproduce the bug ```pyth...

**[#4260](https://github.com/huggingface/datasets/pull/4260) Add mr_polarity movie review sentiment classification**
PR · closed · mo6zes · 1 comment · created 2022-04-30T13:19:33 · updated 2022-04-30T14:16:25 · closed 2022-04-30T14:16:25 · id 1,221,830,292
> Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative". Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/) paperswithcode: [https://paperswithcode.com/d...
**[#4259](https://github.com/huggingface/datasets/pull/4259) Fix bug in choices labels in openbookqa dataset**
PR · closed · manandey · 1 comment · created 2022-04-30T07:41:39 · updated 2022-05-04T06:31:31 · closed 2022-05-03T15:14:21 · id 1,221,768,025
> This PR fixes the bug in the openbookqa dataset as mentioned in issue #3550. Fix #3550. cc @lhoestq @mariosasko

**[#4258](https://github.com/huggingface/datasets/pull/4258) Fix/start token mask issue and update documentation**
PR · closed · TristanThrush · 2 comments · created 2022-04-29T22:42:44 · updated 2022-05-02T16:33:20 · closed 2022-05-02T16:26:12 · id 1,221,637,727
> This PR fixes a couple of bugs: 1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing high perplexity scores that were not correct; 2) the documentation was not updated.

**[#4257](https://github.com/huggingface/datasets/pull/4257) Create metric card for Mahalanobis Distance**
PR · closed · sashavor · 1 comment · created 2022-04-29T18:37:27 · updated 2022-05-02T14:50:18 · closed 2022-05-02T14:43:24 · id 1,221,393,137
> Proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)

**[#4256](https://github.com/huggingface/datasets/pull/4256) Create metric card for MSE**
PR · closed · sashavor · 1 comment · created 2022-04-29T18:21:22 · updated 2022-05-02T14:55:42 · closed 2022-05-02T14:48:47 · id 1,221,379,625
> Proposing a metric card for Mean Squared Error

**[#4255](https://github.com/huggingface/datasets/pull/4255) No google drive URL for pubmed_qa**
PR · closed · lhoestq · 2 comments · created 2022-04-29T15:55:46 · updated 2022-04-29T16:24:55 · closed 2022-04-29T16:18:56 · id 1,221,142,899
> I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license. cc @stas00

**[#4254](https://github.com/huggingface/datasets/pull/4254) Replace data URL in SAMSum dataset and support streaming**
PR · closed · albertvillanova · 1 comment · created 2022-04-29T08:21:43 · updated 2022-05-06T08:38:16 · closed 2022-04-29T16:26:09 · id 1,220,204,395
> This PR replaces the data URL in the SAMSum dataset: - the original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
**[#4253](https://github.com/huggingface/datasets/pull/4253) Create metric cards for mean IOU**
PR · closed · sashavor · 1 comment · created 2022-04-28T20:58:27 · updated 2022-04-29T17:44:47 · closed 2022-04-29T17:38:06 · id 1,219,286,408
> Proposing a metric card for mIoU :rocket: sorry for spamming you with review requests, @albertvillanova ! :hugs:

**[#4252](https://github.com/huggingface/datasets/pull/4252) Creating metric card for MAE**
PR · closed · sashavor · 1 comment · created 2022-04-28T19:04:33 · updated 2022-04-29T16:59:11 · closed 2022-04-29T16:52:30 · id 1,219,151,100
> Initial proposal for MAE metric card

**[#4251](https://github.com/huggingface/datasets/pull/4251) Metric card for the XTREME-S dataset**
PR · closed · sashavor · 1 comment · created 2022-04-28T18:32:19 · updated 2022-04-29T16:46:11 · closed 2022-04-29T16:38:46 · id 1,219,116,354
> Proposing a metric card for the XTREME-S dataset :hugs:

**[#4250](https://github.com/huggingface/datasets/pull/4250) Bump PyArrow Version to 6**
PR · closed · dnaveenr · 4 comments · created 2022-04-28T18:10:50 · updated 2022-05-04T09:36:52 · closed 2022-05-04T09:29:46 · id 1,219,093,830
> Fixes #4152 This PR updates the PyArrow version to 6 in setup.py, CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml files. This will fix the ArrayND error which exists in pyarrow 5.

**[#4249](https://github.com/huggingface/datasets/pull/4249) Support streaming XGLUE dataset**
PR · closed · albertvillanova · 1 comment · created 2022-04-28T10:27:23 · updated 2022-05-06T08:38:21 · closed 2022-04-28T16:08:03 · id 1,218,524,424
> Support streaming XGLUE dataset. Fix #4247. CC: @severo

**[#4248](https://github.com/huggingface/datasets/issues/4248) conll2003 dataset loads original data.**
Issue · closed · labels: bug · sue991 · 1 comment · created 2022-04-28T09:33:31 · updated 2022-07-18T07:15:48 · closed 2022-07-18T07:15:48 · id 1,218,460,444
> ## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to...
**[#4247](https://github.com/huggingface/datasets/issues/4247) The data preview of XGLUE**
Issue · closed · czq1999 · 3 comments · created 2022-04-28T07:30:50 · updated 2022-04-29T08:23:28 · closed 2022-04-28T16:08:03 · id 1,218,320,882
> It seems that something is wrong with the data preview of XGLUE

**[#4246](https://github.com/huggingface/datasets/pull/4246) Support to load dataset with TSV files by passing only dataset name**
PR · closed · albertvillanova · 1 comment · created 2022-04-28T07:30:15 · updated 2022-05-06T08:38:28 · closed 2022-05-06T08:14:07 · id 1,218,320,293
> This PR implements support to load a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.

**[#4245](https://github.com/huggingface/datasets/pull/4245) Add code examples for DatasetDict**
PR · closed · labels: documentation · stevhliu · 1 comment · created 2022-04-27T22:52:22 · updated 2022-04-29T18:19:34 · closed 2022-04-29T18:13:03 · id 1,217,959,400
> This PR adds code examples for `DatasetDict` in the API reference :)

**[#4244](https://github.com/huggingface/datasets/pull/4244) task id update**
PR · closed · nazneenrajani · 2 comments · created 2022-04-27T18:28:14 · updated 2022-05-04T10:43:53 · closed 2022-05-04T10:36:37 · id 1,217,732,221
> Changed multi-input text classification to a task id instead of a category.

**[#4243](https://github.com/huggingface/datasets/pull/4243) WIP: Initial shades loading script and readme**
PR · closed · labels: dataset contribution · shayne-longpre · 1 comment · created 2022-04-27T17:45:43 · updated 2022-10-03T09:36:35 · closed 2022-10-03T09:36:35 · id 1,217,689,909
> (no body)

**[#4242](https://github.com/huggingface/datasets/pull/4242) Update auth when mirroring datasets on the hub**
PR · closed · lhoestq · 1 comment · created 2022-04-27T17:22:31 · updated 2022-04-27T17:37:04 · closed 2022-04-27T17:30:42 · id 1,217,665,960
> We no longer need to use extraHeaders for rate limits. Anyway, extraHeaders was not working with Git LFS because it was passing the wrong auth to S3.
1,217,423,686
https://api.github.com/repos/huggingface/datasets/issues/4241
https://github.com/huggingface/datasets/issues/4241
4,241
NonMatchingChecksumError when attempting to download GLUE
closed
2
2022-04-27T14:14:21
2022-04-28T07:45:27
2022-04-28T07:45:27
drussellmrichie
[ "bug" ]
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to ...
false
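The `NonMatchingChecksumError` in the issue above is raised when the hash recorded at dataset-creation time differs from the hash of the freshly downloaded file (typically because the hosted file changed). The check itself is simple to sketch in plain Python; file names and contents below are hypothetical, and this is not the library's actual verification code:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest used to fingerprint a downloaded file."""
    return hashlib.sha256(data).hexdigest()

# Checksums recorded when the dataset script was created (hypothetical values).
recorded = {"rte/train.tsv": sha256_hex(b"original contents")}

def verify(name: str, data: bytes) -> bool:
    """Return True when the downloaded bytes match the recorded checksum."""
    return recorded.get(name) == sha256_hex(data)

# The hosted file changed, so the freshly downloaded bytes no longer match;
# a mismatch like this is what surfaces as NonMatchingChecksumError.
ok = verify("rte/train.tsv", b"updated contents")
print(ok)
```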
1,217,287,594
https://api.github.com/repos/huggingface/datasets/issues/4240
https://github.com/huggingface/datasets/pull/4240
4,240
Fix yield for crd3
closed
2
2022-04-27T12:31:36
2022-04-29T12:41:41
2022-04-29T12:41:41
shanyas10
[]
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example. Modified the features accordingly: ``` "turns": [ { "names": datasets.features.Sequence(datasets.Value("string")), "utterances": ...
true
1,217,269,689
https://api.github.com/repos/huggingface/datasets/issues/4239
https://github.com/huggingface/datasets/pull/4239
4,239
Small fixes in ROC AUC docs
closed
1
2022-04-27T12:15:50
2022-05-02T13:28:57
2022-05-02T13:22:03
wschella
[]
The list of use cases did not render on GitHub because of the prepended spacing. Additionally, some typos were fixed.
true
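For context on the metric whose docs the PR above touches: ROC AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A small self-contained sketch of that rank-statistic view (not the library's implementation, which delegates to sklearn):

```python
def roc_auc(scores, labels):
    """ROC AUC via the rank statistic: fraction of (positive, negative) pairs
    where the positive example receives the higher score (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly ranked scores give an AUC of 1.0.
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))
```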
1,217,168,123
https://api.github.com/repos/huggingface/datasets/issues/4238
https://github.com/huggingface/datasets/issues/4238
4,238
Dataset caching policy
closed
3
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
loretoparisi
[ "bug" ]
## Describe the bug I cannot clean the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error ``` [/usr/local/lib/python3.7/dist-packages/d...
false
1,217,121,044
https://api.github.com/repos/huggingface/datasets/issues/4237
https://github.com/huggingface/datasets/issues/4237
4,237
Common Voice 8 doesn't show datasets viewer
closed
9
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
patrickvonplaten
[ "dataset-viewer" ]
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
false
1,217,115,691
https://api.github.com/repos/huggingface/datasets/issues/4236
https://github.com/huggingface/datasets/pull/4236
4,236
Replace data URL in big_patent dataset and support streaming
closed
5
2022-04-27T10:01:13
2022-06-10T08:10:55
2022-05-02T18:21:15
albertvillanova
[]
This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
true
1,216,952,640
https://api.github.com/repos/huggingface/datasets/issues/4235
https://github.com/huggingface/datasets/issues/4235
4,235
How to load VERY LARGE dataset?
closed
1
2022-04-27T07:50:13
2023-07-25T15:07:57
2023-07-25T15:07:57
CaoYiqingT
[ "bug" ]
### System Info ```shell I am using the transformer trainer when meeting the issue. The trainer requires torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use IterDataset, which loads samples of da...
false
1,216,818,846
https://api.github.com/repos/huggingface/datasets/issues/4234
https://github.com/huggingface/datasets/pull/4234
4,234
Autoeval config
closed
15
2022-04-27T05:32:10
2022-05-06T13:20:31
2022-05-05T18:20:58
nazneenrajani
[]
Added autoeval config to imdb as a pilot
true
1,216,665,044
https://api.github.com/repos/huggingface/datasets/issues/4233
https://github.com/huggingface/datasets/pull/4233
4,233
Autoeval
closed
1
2022-04-27T01:32:09
2022-04-27T05:29:30
2022-04-27T01:32:23
nazneenrajani
[]
null
true
1,216,659,444
https://api.github.com/repos/huggingface/datasets/issues/4232
https://github.com/huggingface/datasets/pull/4232
4,232
adding new tag to tasks.json and modified for existing datasets
closed
2
2022-04-27T01:21:09
2022-05-03T14:23:56
2022-05-03T14:16:39
nazneenrajani
[]
null
true
1,216,651,960
https://api.github.com/repos/huggingface/datasets/issues/4231
https://github.com/huggingface/datasets/pull/4231
4,231
Fix invalid url to CC-Aligned dataset
closed
1
2022-04-27T01:07:01
2022-05-16T17:01:13
2022-05-16T16:53:12
juntang-zhuang
[]
The CC-Aligned dataset URL has changed to https://data.statmt.org/cc-aligned/; the old address http://www.statmt.org/cc-aligned/ is no longer valid.
true
1,216,643,661
https://api.github.com/repos/huggingface/datasets/issues/4230
https://github.com/huggingface/datasets/issues/4230
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
closed
3
2022-04-27T00:53:52
2023-07-25T15:10:15
2023-07-25T15:10:15
beyondguo
[ "enhancement" ]
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
false
1,216,638,968
https://api.github.com/repos/huggingface/datasets/issues/4229
https://github.com/huggingface/datasets/pull/4229
4,229
new task tag
closed
0
2022-04-27T00:47:08
2022-04-27T00:48:28
2022-04-27T00:48:17
nazneenrajani
[]
multi-input-text-classification tag for classification datasets that take more than one input
true
1,216,523,043
https://api.github.com/repos/huggingface/datasets/issues/4228
https://github.com/huggingface/datasets/pull/4228
4,228
new task tag
closed
0
2022-04-26T22:00:33
2022-04-27T00:48:31
2022-04-27T00:46:31
nazneenrajani
[]
multi-input-text-classification tag for classification datasets that take more than one input
true
1,216,455,316
https://api.github.com/repos/huggingface/datasets/issues/4227
https://github.com/huggingface/datasets/pull/4227
4,227
Add f1 metric card, update docstring in py file
closed
1
2022-04-26T20:41:03
2022-05-03T12:50:23
2022-05-03T12:43:33
emibaylor
[]
null
true
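The PR above adds a metric card for F1. For reference, binary F1 is the harmonic mean of precision and recall over the positive class; a minimal sketch of the computation the card documents (not the library's own code, which delegates to sklearn):

```python
def f1_binary(predictions, references):
    """Binary F1: harmonic mean of precision and recall for label 1."""
    tp = sum(p == 1 and r == 1 for p, r in zip(predictions, references))
    fp = sum(p == 1 and r == 0 for p, r in zip(predictions, references))
    fn = sum(p == 0 and r == 1 for p, r in zip(predictions, references))
    if tp == 0:
        # No true positives means precision or recall is zero, hence F1 = 0.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_binary([1, 1, 0], [1, 0, 0]))
```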
1,216,331,073
https://api.github.com/repos/huggingface/datasets/issues/4226
https://github.com/huggingface/datasets/pull/4226
4,226
Add pearsonr mc, update functionality to match the original docs
closed
2
2022-04-26T18:30:46
2022-05-03T17:09:24
2022-05-03T17:02:28
emibaylor
[]
- adds pearsonr metric card - adds ability to return p-value - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.
true
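The p-value mentioned in the PR above comes from the underlying statistics routine; the correlation coefficient itself is straightforward to compute. A self-contained sketch of Pearson's r (the metric in `datasets` delegates to scipy, which also returns the p-value the PR exposes; this sketch covers only r):

```python
def pearson_r(x, y):
    """Pearson correlation: covariance normalized by both standard deviations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A perfectly linear relationship yields r = 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```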
1,216,213,464
https://api.github.com/repos/huggingface/datasets/issues/4225
https://github.com/huggingface/datasets/pull/4225
4,225
autoeval config
closed
0
2022-04-26T16:38:34
2022-04-27T00:48:31
2022-04-26T22:00:26
nazneenrajani
[]
add train eval index for autoeval
true
1,216,209,667
https://api.github.com/repos/huggingface/datasets/issues/4224
https://github.com/huggingface/datasets/pull/4224
4,224
autoeval config
closed
0
2022-04-26T16:35:19
2022-04-26T16:36:45
2022-04-26T16:36:45
nazneenrajani
[]
add train eval index for autoeval
true
1,216,107,082
https://api.github.com/repos/huggingface/datasets/issues/4223
https://github.com/huggingface/datasets/pull/4223
4,223
Add Accuracy Metric Card
closed
1
2022-04-26T15:10:46
2022-05-03T14:27:45
2022-05-03T14:20:47
emibaylor
[]
- adds accuracy metric card - updates docstring in accuracy.py - adds .json file with metric card and docstring information
true
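For reference on the metric the card above documents: accuracy is simply the fraction of predictions that match their references. A minimal sketch (not the library's own code):

```python
def accuracy(predictions, references):
    """Fraction of predictions equal to the corresponding reference."""
    if not references:
        raise ValueError("references must be non-empty")
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))
```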
1,216,056,439
https://api.github.com/repos/huggingface/datasets/issues/4222
https://github.com/huggingface/datasets/pull/4222
4,222
Fix description links in dataset cards
closed
2
2022-04-26T14:36:25
2022-05-06T08:38:38
2022-04-26T16:52:29
albertvillanova
[]
I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent This PR fixes all description links in dataset cards.
true
1,215,911,182
https://api.github.com/repos/huggingface/datasets/issues/4221
https://github.com/huggingface/datasets/issues/4221
4,221
Dictionary Feature
closed
2
2022-04-26T12:50:18
2022-04-29T14:52:19
2022-04-28T17:04:58
jordiae
[ "question" ]
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which, as far as I know, doesn't fit well with the values and structures supported by Value and Sequence. Is there any suggested workaround, or am I missing something? Thank you in advance.
false
1,215,225,802
https://api.github.com/repos/huggingface/datasets/issues/4220
https://github.com/huggingface/datasets/pull/4220
4,220
Altered faiss installation comment
closed
3
2022-04-26T01:20:43
2022-05-09T17:29:34
2022-05-09T17:22:09
vishalsrao
[]
null
true
1,214,934,025
https://api.github.com/repos/huggingface/datasets/issues/4219
https://github.com/huggingface/datasets/pull/4219
4,219
Add F1 Metric Card
closed
1
2022-04-25T19:14:56
2022-04-26T20:44:18
2022-04-26T20:37:46
emibaylor
[]
null
true
1,214,748,226
https://api.github.com/repos/huggingface/datasets/issues/4218
https://github.com/huggingface/datasets/pull/4218
4,218
Make code for image downloading from image urls cacheable
closed
1
2022-04-25T16:17:59
2022-04-26T17:00:24
2022-04-26T13:38:26
mariosasko
[]
Fix #4199
true
1,214,688,141
https://api.github.com/repos/huggingface/datasets/issues/4217
https://github.com/huggingface/datasets/issues/4217
4,217
Big_Patent dataset broken
closed
3
2022-04-25T15:31:45
2022-05-26T06:29:43
2022-05-02T18:21:15
Matthew-Larsen
[ "hosted-on-google-drive" ]
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound; also cannot download it through the Python API* Am I the one who added this dataset? No
false
1,214,614,029
https://api.github.com/repos/huggingface/datasets/issues/4216
https://github.com/huggingface/datasets/pull/4216
4,216
Avoid recursion error in map if example is returned as dict value
closed
1
2022-04-25T14:40:32
2022-05-04T17:20:06
2022-05-04T17:12:52
mariosasko
[]
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko). This code replicates the bug: ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: ...
true
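The recursion in the bug above comes from the mapped function returning a dict whose value is the (mutable) input example itself: when the output is merged back into the example, the example ends up nested inside itself. A pure-Python illustration of the cycle (the merge step is a simplification of what `map` does internally, not its actual code):

```python
example = {"en": "aa", "fr": "cc"}

# The mapped function returns a dict whose value IS the input example.
output = {"translation": example}

# Simplified merge of the function's output back into the example,
# as a map-style update would do.
example.update(output)

# The example now contains itself, so any deep traversal recurses forever.
print(example["translation"] is example)
```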
1,214,579,162
https://api.github.com/repos/huggingface/datasets/issues/4215
https://github.com/huggingface/datasets/pull/4215
4,215
Add `drop_last_batch` to `IterableDataset.map`
closed
1
2022-04-25T14:15:19
2022-05-03T15:56:07
2022-05-03T15:48:54
mariosasko
[]
Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921
true
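The semantics of the `drop_last_batch` flag added above are easy to sketch: when mapping in batches over a stream, the final short batch is discarded instead of being yielded. A minimal batching helper illustrating the flag (a hypothetical helper, not the library's code):

```python
def iter_batches(items, batch_size, drop_last_batch=False):
    """Yield fixed-size batches; optionally drop a trailing incomplete batch."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    # A leftover partial batch is yielded only when drop_last_batch is False.
    if batch and not drop_last_batch:
        yield batch

print(list(iter_batches(range(5), 2, drop_last_batch=True)))
```

This mirrors why the flag is useful with batched `map` functions that assume a fixed batch size.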
1,214,572,430
https://api.github.com/repos/huggingface/datasets/issues/4214
https://github.com/huggingface/datasets/pull/4214
4,214
Skip checksum computation in Imagefolder by default
closed
1
2022-04-25T14:10:41
2022-05-03T15:28:32
2022-05-03T15:21:29
mariosasko
[]
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
true