Column schema (ranges are value ranges for numeric and timestamp columns, and length ranges for string/list columns):

Column           Type          Range (values or lengths)
id               int64         599M to 3.29B
url              string        length 58 to 61
html_url         string        length 46 to 51
number           int64         1 to 7.72k
title            string        length 1 to 290
state            string        2 values (open / closed)
comments         int64         0 to 70
created_at       timestamp[s]  2020-04-14 10:18:02 to 2025-08-05 09:28:51
updated_at       timestamp[s]  2020-04-27 16:04:17 to 2025-08-05 11:39:56
closed_at        timestamp[s]  2020-04-14 12:01:40 to 2025-08-01 05:15:45
user_login       string        length 3 to 26
labels           list          length 0 to 4
body             string        length 0 to 228k
is_pull_request  bool          2 classes (true / false)
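The flattened schema above can be expressed as plain Python for quick sanity checks. A minimal sketch (column names and types are taken from the schema; the example record is abridged from the first row below, and the `validates` helper is our own illustration, not part of any library):

```python
# Schema of the table as plain Python types, one entry per column.
SCHEMA = {
    "id": int, "url": str, "html_url": str, "number": int, "title": str,
    "state": str, "comments": int, "created_at": str, "updated_at": str,
    "closed_at": str, "user_login": str, "labels": list, "body": str,
    "is_pull_request": bool,
}

record = {  # first row of the table, body abridged
    "id": 1_132_218_874,
    "url": "https://api.github.com/repos/huggingface/datasets/issues/3706",
    "html_url": "https://github.com/huggingface/datasets/issues/3706",
    "number": 3706, "title": "Unable to load dataset 'big_patent'",
    "state": "closed", "comments": 5,
    "created_at": "2022-02-11T09:48:34", "updated_at": "2022-02-14T15:26:03",
    "closed_at": "2022-02-14T15:26:03", "user_login": "ankitk2109",
    "labels": ["bug"], "body": "## Describe the bug ...",
    "is_pull_request": False,
}

def validates(rec, schema):
    """Return True if every schema column is present with the expected type."""
    return all(isinstance(rec.get(k), t) for k, t in schema.items())

print(validates(record, SCHEMA))  # -> True
```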
#3706 · issue · closed — Unable to load dataset 'big_patent'
  ankitk2109 · labels ["bug"] · 5 comments · created 2022-02-11T09:48:34 · updated 2022-02-14T15:26:03 · closed 2022-02-14T15:26:03 · id 1,132,218,874
  https://github.com/huggingface/datasets/issues/3706 (API: https://api.github.com/repos/huggingface/datasets/issues/3706)
  body: ## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patents' validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
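A side note on the reproduction snippet in the record above: the third positional parameter of `load_dataset` is not `split`, so `'validation'` passed positionally would not be interpreted as a split name. The stub below only imitates the leading parameters of the real function to illustrate the pitfall; it is a hand-written stand-in, not the library's code:

```python
# Stub imitating the leading parameters of datasets.load_dataset
# (path, name=None, data_dir=None, ..., split=None); illustration only.
def load_dataset_stub(path, name=None, data_dir=None, split=None):
    return {"path": path, "name": name, "data_dir": data_dir, "split": split}

# As written in the report: "validation" binds to data_dir, not split.
as_reported = load_dataset_stub("big_patent", "d", "validation")
# Passing the split as a keyword removes the ambiguity.
explicit = load_dataset_stub("big_patent", "d", split="validation")

print(as_reported["split"], explicit["split"])  # -> None validation
```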
#3705 · pull request · closed — Raise informative error when loading a save_to_disk dataset
  albertvillanova · labels [] · 0 comments · created 2022-02-11T08:21:03 · updated 2022-02-11T22:56:40 · closed 2022-02-11T22:56:39 · id 1,132,053,226
  https://github.com/huggingface/datasets/pull/3705 (API: https://api.github.com/repos/huggingface/datasets/issues/3705)
  body: People recurrently report error when trying to load a dataset (using `load_dataset`) that was previously saved using `save_to_disk`. This PR raises an informative error message telling them they should use `load_from_disk` instead. Close #3700.

#3704 · issue · closed — OSCAR-2109 datasets are misaligned and truncated
  adrianeboyd · labels ["bug"] · 10 comments · created 2022-02-11T08:14:59 · updated 2022-03-17T18:01:04 · closed 2022-03-16T16:21:28 · id 1,132,042,631
  https://github.com/huggingface/datasets/issues/3704 (API: https://api.github.com/repos/huggingface/datasets/issues/3704)
  body: ## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...

#3703 · issue · closed — ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
  zhangyifei1 · labels [] · 9 comments · created 2022-02-11T06:38:42 · updated 2023-07-11T09:31:59 · closed 2023-07-11T09:31:59 · id 1,131,882,772
  https://github.com/huggingface/datasets/issues/3703 (API: https://api.github.com/repos/huggingface/datasets/issues/3703)
  body: hi : I want to use the seqeval indicator because of direct load_ When metric ('seqeval '), it will prompt that the network connection fails. So I downloaded the seqeval Py to load locally. Loading code: metric = load_ metric(path='mymetric/seqeval/seqeval.py') But tips: Traceback (most recent call last): File...
#3702 · pull request · closed — Update data URL of lm1b dataset
  yazdanbakhsh · labels ["dataset contribution"] · 2 comments · created 2022-02-10T18:46:30 · updated 2022-09-23T11:52:39 · closed 2022-09-23T11:52:39 · id 1,130,666,707
  https://github.com/huggingface/datasets/pull/3702 (API: https://api.github.com/repos/huggingface/datasets/issues/3702)
  body: The http address doesn't work anymore

#3701 · pull request · closed — Pin ElasticSearch
  lhoestq · labels [] · 0 comments · created 2022-02-10T17:15:26 · updated 2022-02-10T17:31:13 · closed 2022-02-10T17:31:12 · id 1,130,498,738
  https://github.com/huggingface/datasets/pull/3701 (API: https://api.github.com/repos/huggingface/datasets/issues/3701)
  body: Until we manage to support ES 8.0, I'm setting the version to `<8.0.0` Currently we're getting this error on 8.0: ```python ValueError: Either 'hosts' or 'cloud_id' must be specified ``` When instantiating a `Elasticsearch()` object

#3699 · pull request · closed — Add dev-only config to Natural Questions dataset
  albertvillanova · labels [] · 2 comments · created 2022-02-10T14:42:24 · updated 2022-02-11T09:50:22 · closed 2022-02-11T09:50:21 · id 1,130,200,593
  https://github.com/huggingface/datasets/pull/3699 (API: https://api.github.com/repos/huggingface/datasets/issues/3699)
  body: As suggested by @lhoestq and @thomwolf, a new config has been added to Natural Questions dataset, so that only dev split can be downloaded. Fix #413.

#3698 · pull request · closed — Add finetune-data CodeFill
  rgismondi · labels ["dataset contribution"] · 1 comment · created 2022-02-10T11:12:51 · updated 2022-10-03T09:36:18 · closed 2022-10-03T09:36:18 · id 1,129,864,282
  https://github.com/huggingface/datasets/pull/3698 (API: https://api.github.com/repos/huggingface/datasets/issues/3698)
  body: null
#3697 · pull request · closed — Add code-fill datasets for pretraining/finetuning/evaluating
  rgismondi · labels [] · 1 comment · created 2022-02-10T10:31:48 · updated 2022-07-06T15:19:58 · closed 2022-07-06T15:19:58 · id 1,129,795,724
  https://github.com/huggingface/datasets/pull/3697 (API: https://api.github.com/repos/huggingface/datasets/issues/3697)
  body: null

#3696 · pull request · closed — Force unique keys in newsqa dataset
  albertvillanova · labels [] · 0 comments · created 2022-02-10T10:09:19 · updated 2022-02-14T08:37:20 · closed 2022-02-14T08:37:19 · id 1,129,764,534
  https://github.com/huggingface/datasets/pull/3696 (API: https://api.github.com/repos/huggingface/datasets/issues/3696)
  body: Currently, it may raise `DuplicatedKeysError`. Fix #3630.

#3695 · pull request · closed — Fix ClassLabel to/from dict when passed names_file
  albertvillanova · labels [] · 0 comments · created 2022-02-10T09:47:10 · updated 2022-02-11T23:02:32 · closed 2022-02-11T23:02:31 · id 1,129,730,148
  https://github.com/huggingface/datasets/pull/3695 (API: https://api.github.com/repos/huggingface/datasets/issues/3695)
  body: Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`. This PR, removes `names_file` as a field of the data class `ClassLabel`. - it is only used at ...

#3693 · pull request · closed — Standardize to `Example::`
  mishig25 · labels [] · 1 comment · created 2022-02-09T13:37:13 · updated 2022-02-17T10:20:55 · closed 2022-02-17T10:20:52 · id 1,128,554,365
  https://github.com/huggingface/datasets/pull/3693 (API: https://api.github.com/repos/huggingface/datasets/issues/3693)
  body: null
#3692 · pull request · closed — Update data URL in pubmed dataset
  albertvillanova · labels [] · 2 comments · created 2022-02-09T10:06:21 · updated 2022-02-14T14:15:42 · closed 2022-02-14T14:15:41 · id 1,128,320,004
  https://github.com/huggingface/datasets/pull/3692 (API: https://api.github.com/repos/huggingface/datasets/issues/3692)
  body: Fix #3655.

#3691 · pull request · closed — Upgrade black to version ~=22.0
  LysandreJik · labels [] · 0 comments · created 2022-02-08T18:45:19 · updated 2022-02-08T19:56:40 · closed 2022-02-08T19:56:39 · id 1,127,629,306
  https://github.com/huggingface/datasets/pull/3691 (API: https://api.github.com/repos/huggingface/datasets/issues/3691)
  body: Upgrades the `datasets` library quality tool `black` to use the first stable release of `black`, version 22.0.

#3690 · pull request · closed — Update docs to new frontend/UI
  mishig25 · labels [] · 17 comments · created 2022-02-08T16:38:09 · updated 2022-03-03T20:04:21 · closed 2022-03-03T20:04:20 · id 1,127,493,538
  https://github.com/huggingface/datasets/pull/3690 (API: https://api.github.com/repos/huggingface/datasets/issues/3690)
  body: ### TLDR: Update `datasets` `docs` to the new syntax (markdown and mdx files) & frontend (as how it looks on [hf.co/transformers](https://huggingface.co/docs/transformers/index)) | Light mode | Dark mode ...

#3689 · pull request · closed — Fix streaming for servers not supporting HTTP range requests
  albertvillanova · labels [] · 10 comments · created 2022-02-08T15:41:05 · updated 2022-02-10T16:51:25 · closed 2022-02-10T16:51:25 · id 1,127,422,478
  https://github.com/huggingface/datasets/pull/3689 (API: https://api.github.com/repos/huggingface/datasets/issues/3689)
  body: Some servers do not support HTTP range requests, whereas this is required to stream some file formats (like ZIP). ~~This PR implements a workaround for those cases, by download the files locally in a temporary directory (cleaned up by the OS once the process is finished).~~ This PR raises custom error explaining ...
#3688 · issue · closed — Pyarrow version error
  Zaker237 · labels ["bug"] · 3 comments · created 2022-02-08T12:53:59 · updated 2022-02-09T06:35:33 · closed 2022-02-09T06:35:32 · id 1,127,218,321
  https://github.com/huggingface/datasets/issues/3688 (API: https://api.github.com/repos/huggingface/datasets/issues/3688)
  body: ## Describe the bug I installed datasets(version 1.17.0, 1.18.0, 1.18.3) but i'm right now nor able to import it because of pyarrow. when i try to import it, i get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. i tryed w...

#3687 · issue · closed — Can't get the text data when calling to_tf_dataset
  phrasenmaeher · labels [] · 6 comments · created 2022-02-08T11:52:10 · updated 2023-01-19T14:55:18 · closed 2023-01-19T14:55:18 · id 1,127,154,766
  https://github.com/huggingface/datasets/issues/3687 (API: https://api.github.com/repos/huggingface/datasets/issues/3687)
  body: I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = Defa...

#3686 · issue · closed — `Translation` features cannot be `flatten`ed
  SBrandeis · labels ["bug"] · 1 comment · created 2022-02-08T11:33:48 · updated 2022-03-18T17:28:13 · closed 2022-03-18T17:28:13 · id 1,127,137,290
  https://github.com/huggingface/datasets/issues/3686 (API: https://api.github.com/repos/huggingface/datasets/issues/3686)
  body: ## Describe the bug (`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8] ## Steps to...

#3685 · pull request · closed — Add support for `Audio` and `Image` feature in `push_to_hub`
  mariosasko · labels [] · 3 comments · created 2022-02-07T16:47:16 · updated 2022-02-14T18:14:57 · closed 2022-02-14T18:04:58 · id 1,126,240,444
  https://github.com/huggingface/datasets/pull/3685 (API: https://api.github.com/repos/huggingface/datasets/issues/3685)
  body: Add support for the `Audio` and the `Image` feature in `push_to_hub`. The idea is to remove local path information and store file content under "bytes" in the Arrow table before the push. My initial approach (https://github.com/huggingface/datasets/commit/34c652afeff9686b6b8bf4e703c84d2205d670aa) was to use a ma...
#3684 · pull request · closed — [fix]: iwslt2017 download urls
  msarmi9 · labels ["dataset contribution"] · 7 comments · created 2022-02-06T07:56:55 · updated 2022-09-22T16:20:19 · closed 2022-09-22T16:20:18 · id 1,125,133,664
  https://github.com/huggingface/datasets/pull/3684 (API: https://api.github.com/repos/huggingface/datasets/issues/3684)
  body: Fixes #2076.

#3683 · pull request · closed — added told-br (brazilian hate speech) dataset
  joaoaleite · labels [] · 2 comments · created 2022-02-04T17:44:32 · updated 2022-02-07T21:14:52 · closed 2022-02-07T21:14:52 · id 1,124,458,371
  https://github.com/huggingface/datasets/pull/3683 (API: https://api.github.com/repos/huggingface/datasets/issues/3683)
  body: Hey, Adding ToLD-Br. Feel free to ask for modifications. Thanks!!

#3682 · pull request · closed — adding told-br for toxic/abusive hatespeech detection
  joaoaleite · labels [] · 2 comments · created 2022-02-04T17:18:29 · updated 2022-02-07T03:23:24 · closed 2022-02-04T17:36:40 · id 1,124,434,330
  https://github.com/huggingface/datasets/pull/3682 (API: https://api.github.com/repos/huggingface/datasets/issues/3682)
  body: Hey, I'm adding our dataset from our paper published at AACL 2020. Feel free to ask for modifications. Thanks!

#3681 · pull request · closed — Fix TestCommand to move dataset_infos instead of copying
  albertvillanova · labels [] · 6 comments · created 2022-02-04T14:01:52 · updated 2023-09-24T10:00:11 · closed 2023-09-24T09:59:55 · id 1,124,237,458
  https://github.com/huggingface/datasets/pull/3681 (API: https://api.github.com/repos/huggingface/datasets/issues/3681)
  body: Why do we copy instead of moving the file? CC: @lhoestq @lvwerra
#3680 · pull request · closed — Fix TestCommand to copy dataset_infos to local dir with only data files
  albertvillanova · labels [] · 0 comments · created 2022-02-04T13:36:46 · updated 2022-02-08T10:32:55 · closed 2022-02-08T10:32:55 · id 1,124,213,416
  https://github.com/huggingface/datasets/pull/3680 (API: https://api.github.com/repos/huggingface/datasets/issues/3680)
  body: Currently this case is missed. CC: @lvwerra

#3679 · issue · closed — Download datasets from a private hub
  juliensimon · labels ["enhancement", "private-hub"] · 3 comments · created 2022-02-04T10:49:06 · updated 2022-02-22T11:08:07 · closed 2022-02-22T11:08:07 · id 1,124,062,133
  https://github.com/huggingface/datasets/issues/3679 (API: https://api.github.com/repos/huggingface/datasets/issues/3679)
  body: In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local s...

#3678 · pull request · closed — Add code example in wikipedia card
  lhoestq · labels [] · 0 comments · created 2022-02-03T18:09:02 · updated 2022-02-21T09:14:56 · closed 2022-02-04T13:21:39 · id 1,123,402,426
  https://github.com/huggingface/datasets/pull/3678 (API: https://api.github.com/repos/huggingface/datasets/issues/3678)
  body: Close #3292.

#3677 · issue · closed — Discovery cannot be streamed anymore
  severo · labels ["bug"] · 2 comments · created 2022-02-03T15:02:03 · updated 2022-02-10T16:51:24 · closed 2022-02-10T16:51:24 · id 1,123,192,866
  https://github.com/huggingface/datasets/issues/3677 (API: https://api.github.com/repos/huggingface/datasets/issues/3677)
  body: ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first ...
#3676 · issue · closed — `None` replaced by `[]` after first batch in map
  lhoestq · labels [] · 8 comments · created 2022-02-03T13:36:48 · updated 2022-10-28T13:13:20 · closed 2022-10-28T13:13:20 · id 1,123,096,362
  https://github.com/huggingface/datasets/issues/3676 (API: https://api.github.com/repos/huggingface/datasets/issues/3676)
  body: Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...

#3675 · issue · closed — Add CodeContests dataset
  mariosasko · labels ["dataset request"] · 2 comments · created 2022-02-03T13:20:00 · updated 2022-07-20T11:07:05 · closed 2022-07-20T11:07:05 · id 1,123,078,408
  https://github.com/huggingface/datasets/issues/3675 (API: https://api.github.com/repos/huggingface/datasets/issues/3675)
  body: ## Adding a Dataset - **Name:** CodeContests - **Description:** CodeContests is a competitive programming dataset for machine-learning. - **Paper:** - **Data:** https://github.com/deepmind/code_contests - **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-...

#3674 · pull request · closed — Add FrugalScore metric
  moussaKam · labels [] · 5 comments · created 2022-02-03T12:28:52 · updated 2022-02-21T15:58:44 · closed 2022-02-21T15:58:44 · id 1,123,027,874
  https://github.com/huggingface/datasets/pull/3674 (API: https://api.github.com/repos/huggingface/datasets/issues/3674)
  body: This pull request add FrugalScore metric for NLG systems evaluation. FrugalScore is a reference-based metric for NLG models evaluation. It is based on a distillation approach that allows to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance. Paper: https:...

#3673 · issue · closed — `load_dataset("snli")` is different from dataset viewer
  pietrolesci · labels ["bug", "dataset-viewer"] · 11 comments · created 2022-02-03T12:10:43 · updated 2022-02-16T11:22:31 · closed 2022-02-11T17:01:21 · id 1,123,010,520
  https://github.com/huggingface/datasets/issues/3673 (API: https://api.github.com/repos/huggingface/datasets/issues/3673)
  body: ## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
#3672 · pull request · closed — Prioritize `module.builder_kwargs` over defaults in `TestCommand`
  lvwerra · labels [] · 0 comments · created 2022-02-03T11:38:42 · updated 2022-02-04T12:37:20 · closed 2022-02-04T12:37:19 · id 1,122,980,556
  https://github.com/huggingface/datasets/pull/3672 (API: https://api.github.com/repos/huggingface/datasets/issues/3672)
  body: This fixes a bug in the `TestCommand` where multiple kwargs for `name` were passed if it was set in both default and `module.builder_kwargs`. Example error: ```Python Traceback (most recent call last): File "create_metadata.py", line 96, in <module> main(**vars(args)) File "create_metadata.py", line 86, ...

#3671 · issue · open — Give an estimate of the dataset size in DatasetInfo
  severo · labels ["enhancement"] · 0 comments · created 2022-02-03T09:47:10 · updated 2022-02-03T09:47:10 · closed null · id 1,122,864,253
  https://github.com/huggingface/datasets/issues/3671 (API: https://api.github.com/repos/huggingface/datasets/issues/3671)
  body: **Is your feature request related to a problem? Please describe.** Currently, only part of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would want to get this information, or an estimation, for all the datasets. **Describe the soluti...

#3670 · pull request · closed — feat: 🎸 generate info if dataset_infos.json does not exist
  severo · labels [] · 3 comments · created 2022-02-02T22:11:56 · updated 2022-02-21T15:57:11 · closed 2022-02-21T15:57:10 · id 1,122,439,827
  https://github.com/huggingface/datasets/pull/3670 (API: https://api.github.com/repos/huggingface/datasets/issues/3670)
  body: in get_dataset_infos(). Also: add the `use_auth_token` parameter, and create get_dataset_config_info() ✅ Closes: #3013

#3669 · pull request · closed — Common voice validated partition
  shalymin-amzn · labels [] · 7 comments · created 2022-02-02T20:04:43 · updated 2022-02-08T17:26:52 · closed 2022-02-08T17:23:12 · id 1,122,335,622
  https://github.com/huggingface/datasets/pull/3669 (API: https://api.github.com/repos/huggingface/datasets/issues/3669)
  body: This patch adds access to the 'validated' partitions of CommonVoice datasets (provided by the dataset creators but not available in the HuggingFace interface yet). As 'validated' contains significantly more data than 'train' (although it contains both test and validation, so one needs to be careful there), it can be u...
#3668 · issue · closed — Couldn't cast array of type string error with cast_column
  R4ZZ3 · labels ["bug"] · 5 comments · created 2022-02-02T18:33:29 · updated 2022-07-19T13:36:24 · closed 2022-07-19T13:36:24 · id 1,122,261,736
  https://github.com/huggingface/datasets/issues/3668 (API: https://api.github.com/repos/huggingface/datasets/issues/3668)
  body: ## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264...

#3667 · pull request · closed — Process .opus files with torchaudio
  polinaeterna · labels [] · 4 comments · created 2022-02-02T15:23:14 · updated 2022-02-04T15:29:38 · closed 2022-02-04T15:29:38 · id 1,122,060,630
  https://github.com/huggingface/datasets/pull/3667 (API: https://api.github.com/repos/huggingface/datasets/issues/3667)
  body: @anton-l suggested to proccess .opus files with `torchaudio` instead of `soundfile` as it's faster: ![opus](https://user-images.githubusercontent.com/16348744/152177816-2df6076c-f28b-4aef-a08d-b499b921414d.png) (moreover, I didn't manage to load .opus files with `soundfile` / `librosa` locally on any my machine an...

#3666 · pull request · closed — process .opus files (for Multilingual Spoken Words)
  polinaeterna · labels [] · 3 comments · created 2022-02-02T15:21:48 · updated 2022-02-22T10:04:03 · closed 2022-02-22T10:03:53 · id 1,122,058,894
  https://github.com/huggingface/datasets/pull/3666 (API: https://api.github.com/repos/huggingface/datasets/issues/3666)
  body: Opus files requires `libsndfile>=1.0.30`. Add check for this version and tests. **outdated:** Add [Multillingual Spoken Words dataset](https://mlcommons.org/en/multilingual-spoken-words/) You can specify multiple languages for downloading 😌: ```python ds = load_dataset("datasets/ml_spoken_words", languages=...

#3665 · pull request · closed — Fix MP3 resampling when a dataset's audio files have different sampling rates
  lhoestq · labels [] · 0 comments · created 2022-02-02T10:31:45 · updated 2022-02-02T10:52:26 · closed 2022-02-02T10:52:26 · id 1,121,753,385
  https://github.com/huggingface/datasets/pull/3665 (API: https://api.github.com/repos/huggingface/datasets/issues/3665)
  body: The resampler needs to be updated if the `orig_freq` doesn't match the audio file sampling rate Fix https://github.com/huggingface/datasets/issues/3662
#3664 · pull request · closed — [WIP] Return local paths to Common Voice
  anton-l · labels [] · 19 comments · created 2022-02-01T21:48:27 · updated 2022-02-22T09:14:06 · closed 2022-02-22T09:14:06 · id 1,121,233,301
  https://github.com/huggingface/datasets/pull/3664 (API: https://api.github.com/repos/huggingface/datasets/issues/3664)
  body: Fixes https://github.com/huggingface/datasets/issues/3663 This is a proposed way of returning the old local file-based generator while keeping the new streaming generator intact. TODO: - [ ] brainstorm a bit more on https://github.com/huggingface/datasets/issues/3663 to see if we can do better - [ ] refactor th...

#3663 · issue · closed — [Audio] Path of Common Voice cannot be used for audio loading anymore
  patrickvonplaten · labels ["bug"] · 19 comments · created 2022-02-01T18:40:10 · updated 2022-09-21T15:03:09 · closed 2022-09-21T14:56:22 · id 1,121,067,647
  https://github.com/huggingface/datasets/issues/3663 (API: https://api.github.com/repos/huggingface/datasets/issues/3663)
  body: ## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...

#3662 · issue · closed — [Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
  lhoestq · labels [] · 6 comments · created 2022-02-01T17:55:04 · updated 2022-02-02T10:52:25 · closed 2022-02-02T10:52:25 · id 1,121,024,403
  https://github.com/huggingface/datasets/issues/3662 (API: https://api.github.com/repos/huggingface/datasets/issues/3662)
  body: The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...

#3661 · pull request · closed — Remove unnecessary 'r' arg in
  bryant1410 · labels [] · 1 comment · created 2022-02-01T17:29:27 · updated 2022-02-07T16:57:27 · closed 2022-02-07T16:02:42 · id 1,121,000,251
  https://github.com/huggingface/datasets/pull/3661 (API: https://api.github.com/repos/huggingface/datasets/issues/3661)
  body: Originally from #3489
#3660 · pull request · open — Change HTTP links to HTTPS
  bryant1410 · labels [] · 0 comments · created 2022-02-01T17:12:51 · updated 2022-09-21T15:16:32 · closed null · id 1,120,982,671
  https://github.com/huggingface/datasets/pull/3660 (API: https://api.github.com/repos/huggingface/datasets/issues/3660)
  body: I tested the links. I also fixed some typos. Originally from #3489

#3659 · issue · closed — push_to_hub but preview not working
  thomas-happify · labels ["dataset-viewer"] · 1 comment · created 2022-02-01T16:23:57 · updated 2022-02-09T08:00:37 · closed 2022-02-09T08:00:37 · id 1,120,913,672
  https://github.com/huggingface/datasets/issues/3659 (API: https://api.github.com/repos/huggingface/datasets/issues/3659)
  body: ## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes

#3658 · issue · closed — Dataset viewer issue for *P3*
  jeffistyping · labels [] · 4 comments · created 2022-02-01T15:57:56 · updated 2023-09-25T12:16:21 · closed 2023-09-25T12:16:21 · id 1,120,880,395
  https://github.com/huggingface/datasets/issues/3658 (API: https://api.github.com/repos/huggingface/datasets/issues/3658)
  body: ## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No

#3657 · pull request · closed — Extend dataset builder for streaming in `get_dataset_split_names`
  mariosasko · labels [] · 4 comments · created 2022-02-01T12:21:24 · updated 2022-02-03T22:49:06 · closed 2022-02-02T11:22:01 · id 1,120,602,620
  https://github.com/huggingface/datasets/pull/3657 (API: https://api.github.com/repos/huggingface/datasets/issues/3657)
  body: Currently, `get_dataset_split_names` doesn't extend a builder module to support streaming, even though it uses `StreamingDownloadManager` to download data. This PR fixes that. To test the change, run the following: ```bash pip install git+https://github.com/huggingface/datasets.git@fix-get_dataset_split_names-stre...
#3656 · issue · closed — checksum error subjqa dataset
  RensDimmendaal · labels ["bug"] · 2 comments · created 2022-02-01T10:53:33 · updated 2022-02-10T10:56:59 · closed 2022-02-10T10:56:38 · id 1,120,510,823
  https://github.com/huggingface/datasets/issues/3656 (API: https://api.github.com/repos/huggingface/datasets/issues/3656)
  body: ## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` ---...

#3655 · issue · closed — Pubmed dataset not reachable
  abhi-mosaic · labels ["bug"] · 6 comments · created 2022-01-31T18:45:47 · updated 2022-12-19T19:18:10 · closed 2022-02-14T14:15:41 · id 1,119,801,077
  https://github.com/huggingface/datasets/issues/3655 (API: https://api.github.com/repos/huggingface/datasets/issues/3655)
  body: ## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionEr...

#3654 · pull request · closed — Better TQDM output
  mariosasko · labels [] · 1 comment · created 2022-01-31T17:22:43 · updated 2022-02-03T15:55:34 · closed 2022-02-03T15:55:33 · id 1,119,717,475
  https://github.com/huggingface/datasets/pull/3654 (API: https://api.github.com/repos/huggingface/datasets/issues/3654)
  body: This PR does the following: * if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that needs to be generated (in `builder.py`) * fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tq...

#3653 · issue · open — `to_json` in multiprocessing fashion sometimes deadlock
  thomasw21 · labels ["bug"] · 0 comments · created 2022-01-31T09:35:07 · updated 2022-01-31T09:35:07 · closed null · id 1,119,186,952
  https://github.com/huggingface/datasets/issues/3653 (API: https://api.github.com/repos/huggingface/datasets/issues/3653)
  body: ## Describe the bug `to_json` in multiprocessing fashion sometimes deadlock, instead of raising exceptions. Temporary solution is to see that it deadlocks, and then reduce the number of processes or batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs....
#3652 · pull request · closed — sp. Columbia => Colombia
  serapio · labels [] · 2 comments · created 2022-01-31T00:41:03 · updated 2022-02-09T16:55:25 · closed 2022-01-31T08:29:07 · id 1,118,808,738
  https://github.com/huggingface/datasets/pull/3652 (API: https://api.github.com/repos/huggingface/datasets/issues/3652)
  body: "Columbia" is various places in North America. The country is "Colombia".

#3651 · pull request · closed — Update link in wiki_bio dataset
  jxmorris12 · labels [] · 2 comments · created 2022-01-30T16:28:54 · updated 2022-01-31T14:50:48 · closed 2022-01-31T08:38:09 · id 1,118,597,647
  https://github.com/huggingface/datasets/pull/3651 (API: https://api.github.com/repos/huggingface/datasets/issues/3651)
  body: Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket. @lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached some...

#3650 · pull request · closed — Allow 'to_json' to run in unordered fashion in order to lower memory footprint
  thomasw21 · labels [] · 6 comments · created 2022-01-30T13:23:19 · updated 2023-09-25T06:28:51 · closed 2023-09-24T16:45:48 · id 1,118,537,429
  https://github.com/huggingface/datasets/pull/3650 (API: https://api.github.com/repos/huggingface/datasets/issues/3650)
  body: I'm using `to_json(..., num_proc=num_proc, compressiong='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point. Eventually I see OOM. I'm guessing it's an issue where one process starts to take a long time for a specific batch, and so other process keep accumulating their results in...

#3649 · issue · open — Add IGLUE dataset
  lewtun · labels ["dataset request", "multimodal"] · 0 comments · created 2022-01-28T14:59:41 · updated 2022-01-28T15:02:35 · closed null · id 1,117,502,250
  https://github.com/huggingface/datasets/issues/3649 (API: https://api.github.com/repos/huggingface/datasets/issues/3649)
  body: ## Adding a Dataset - **Name:** IGLUE - **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w)) - **Paper:** https://arxiv.org/abs/2201.11732 - **Data:** https://github.com/e-...
#3648 · pull request · closed — Fix Windows CI: bump python to 3.7
  lhoestq · labels [] · 0 comments · created 2022-01-28T14:24:54 · updated 2022-01-28T14:40:39 · closed 2022-01-28T14:40:39 · id 1,117,465,505
  https://github.com/huggingface/datasets/pull/3648 (API: https://api.github.com/repos/huggingface/datasets/issues/3648)
  body: Python>=3.7 is needed to install `tokenizers` 0.11

#3647 · pull request · closed — Fix `add_column` on datasets with indices mapping
  mariosasko · labels [] · 2 comments · created 2022-01-28T13:06:29 · updated 2022-01-28T15:35:58 · closed 2022-01-28T15:35:58 · id 1,117,383,675
  https://github.com/huggingface/datasets/pull/3647 (API: https://api.github.com/repos/huggingface/datasets/issues/3647)
  body: My initial idea was to avoid the `flatten_indices` call and reorder a new column instead, but in the end I decided to follow `concatenate_datasets` and use `flatten_indices` to avoid padding when `dataset._indices.num_rows != dataset._data.num_rows`. Fix #3599

#3646 · pull request · closed — Fix streaming datasets that are not reset correctly
  lhoestq · labels [] · 1 comment · created 2022-01-27T17:21:02 · updated 2022-01-28T16:34:29 · closed 2022-01-28T16:34:28 · id 1,116,544,627
  https://github.com/huggingface/datasets/pull/3646 (API: https://api.github.com/repos/huggingface/datasets/issues/3646)
  body: Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed if you try to iterate over such dataset twice, then the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return...

#3645 · issue · closed — Streaming dataset based on dl_manager.iter_archive/iter_files are not reset correctly
  lhoestq · labels [] · 0 comments · created 2022-01-27T17:17:41 · updated 2022-01-28T16:34:28 · closed 2022-01-28T16:34:28 · id 1,116,541,298
  https://github.com/huggingface/datasets/issues/3645 (API: https://api.github.com/repos/huggingface/datasets/issues/3645)
  body: Hi ! When iterating over a streaming dataset once, it's not reset correctly because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed they are generator functions (so the iterator that is returned can be exhausted). They should be iterables instead, and be reset if we do a for loop again...
#3644 · issue · open — Add a GROUP BY operator
  felix-schneider · labels ["enhancement"] · 14 comments · created 2022-01-27T16:57:54 · updated 2025-01-28T11:39:48 · closed null · id 1,116,519,670
  https://github.com/huggingface/datasets/issues/3644 (API: https://api.github.com/repos/huggingface/datasets/issues/3644)
  body: **Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datas...

#3643 · pull request · closed — Fix sem_eval_2018_task_1 download location
  maxpel · labels [] · 1 comment · created 2022-01-27T15:45:00 · updated 2022-02-04T15:15:26 · closed 2022-02-04T15:15:26 · id 1,116,417,428
  https://github.com/huggingface/datasets/pull/3643 (API: https://api.github.com/repos/huggingface/datasets/issues/3643)
  body: As discussed with @lhoestq in https://github.com/huggingface/datasets/issues/3549#issuecomment-1020176931_ this is the new pull request to fix the download location.

#3642 · pull request · closed — Fix dataset slicing with negative bounds when indices mapping is not `None`
  mariosasko · labels [] · 0 comments · created 2022-01-27T14:45:53 · updated 2022-01-27T18:16:23 · closed 2022-01-27T18:16:22 · id 1,116,306,986
  https://github.com/huggingface/datasets/pull/3642 (API: https://api.github.com/repos/huggingface/datasets/issues/3642)
  body: Fix #3611

#3641 · pull request · closed — Fix numpy rngs when seed is None
  mariosasko · labels [] · 0 comments · created 2022-01-27T14:29:09 · updated 2022-01-27T18:16:08 · closed 2022-01-27T18:16:07 · id 1,116,284,268
  https://github.com/huggingface/datasets/pull/3641 (API: https://api.github.com/repos/huggingface/datasets/issues/3641)
  body: Fixes the NumPy RNG when `seed` is `None`. The problem becomes obvious after reading the NumPy notes on RNG (returned by `np.random.get_state()`): > The MT19937 state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position wi...
1,116,133,769
https://api.github.com/repos/huggingface/datasets/issues/3640
https://github.com/huggingface/datasets/issues/3640
3,640
Issues with custom dataset in Wav2Vec2
closed
1
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
peregilk
[ "bug" ]
We are training Vav2Vec using the run_speech_recognition_ctc_bnb.py-script. This is working fine with Common Voice, however using our custom dataset and data loader at [NbAiLab/NPSC]( https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace: ![image](https://us...
false
1,116,021,420
https://api.github.com/repos/huggingface/datasets/issues/3639
https://github.com/huggingface/datasets/issues/3639
3,639
same value of precision, recall, f1 score at each epoch for classification task.
closed
1
2022-01-27T10:14:16
2022-02-24T09:02:18
2022-02-24T09:02:17
Dhanachandra
[ "bug" ]
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:3...
false
1,115,725,703
https://api.github.com/repos/huggingface/datasets/issues/3638
https://github.com/huggingface/datasets/issues/3638
3,638
AutoTokenizer hash value got change after datasets.map
open
12
2022-01-27T03:19:03
2024-03-11T13:56:15
null
tshu-w
[ "bug" ]
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
false
1,115,526,438
https://api.github.com/repos/huggingface/datasets/issues/3637
https://github.com/huggingface/datasets/issues/3637
3,637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
closed
3
2022-01-26T21:38:02
2022-02-09T16:15:53
2022-02-09T16:15:53
lewtun
[ "bug" ]
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master...
false
1,115,362,702
https://api.github.com/repos/huggingface/datasets/issues/3636
https://github.com/huggingface/datasets/pull/3636
3,636
Update index.rst
closed
0
2022-01-26T18:43:09
2022-01-26T18:44:55
2022-01-26T18:44:54
VioletteLepercq
[]
null
true
1,115,333,219
https://api.github.com/repos/huggingface/datasets/issues/3635
https://github.com/huggingface/datasets/pull/3635
3,635
Make `ted_talks_iwslt` dataset streamable
closed
3
2022-01-26T18:07:56
2022-10-04T09:36:23
2022-10-03T09:44:47
mariosasko
[ "dataset contribution" ]
null
true
1,115,133,279
https://api.github.com/repos/huggingface/datasets/issues/3634
https://github.com/huggingface/datasets/issues/3634
3,634
Dataset.shuffle(seed=None) gives fixed row permutation
closed
2
2022-01-26T15:13:08
2022-01-27T18:16:07
2022-01-27T18:16:07
elisno
[ "bug" ]
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work...
false
1,115,040,174
https://api.github.com/repos/huggingface/datasets/issues/3633
https://github.com/huggingface/datasets/pull/3633
3,633
Mirror canonical datasets in prod
closed
0
2022-01-26T13:49:37
2022-01-26T13:56:21
2022-01-26T13:56:21
lhoestq
[]
Push the datasets changes to the Hub in production by setting `HF_USE_PROD=1` I also added a fix that makes the script ignore the json, csv, text, parquet and pandas dataset builders. cc @SBrandeis
true
1,115,027,185
https://api.github.com/repos/huggingface/datasets/issues/3632
https://github.com/huggingface/datasets/issues/3632
3,632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
closed
2
2022-01-26T13:35:37
2022-02-10T06:58:11
2022-02-10T06:58:11
AnzorGozalishvili
[ "bug" ]
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website which was keeping these files are no longer accessible and therefore this dataset became unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/) which isn't accessible. Also the URLs for dataset file ...
false
1,114,833,662
https://api.github.com/repos/huggingface/datasets/issues/3631
https://github.com/huggingface/datasets/issues/3631
3,631
Labels conflict when loading a local CSV file.
closed
1
2022-01-26T10:00:33
2022-02-11T23:02:31
2022-02-11T23:02:31
pichljan
[ "bug" ]
## Describe the bug I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded for the first time, but when I try to load it again, there is a conflict between provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_red...
false
1,114,578,625
https://api.github.com/repos/huggingface/datasets/issues/3630
https://github.com/huggingface/datasets/issues/3630
3,630
DuplicatedKeysError of NewsQA dataset
closed
1
2022-01-26T03:05:49
2022-02-14T08:37:19
2022-02-14T08:37:19
StevenTang1998
[ "dataset bug" ]
After processing the dataset following official [NewsQA](https://github.com/Maluuba/newsqa), I used datasets to load it: ``` a = load_dataset('newsqa', data_dir='news') ``` and the following error occurred: ``` Using custom data configuration default-data_dir=news Downloading and preparing dataset newsqa/defaul...
false
1,113,971,575
https://api.github.com/repos/huggingface/datasets/issues/3629
https://github.com/huggingface/datasets/pull/3629
3,629
Fix Hub repos update when there's a new release
closed
0
2022-01-25T14:39:45
2022-01-25T14:55:46
2022-01-25T14:55:46
lhoestq
[]
It was not listing the full list of datasets correctly cc @SBrandeis this is why it failed for 1.18.0 We should be good now !
true
1,113,930,644
https://api.github.com/repos/huggingface/datasets/issues/3628
https://github.com/huggingface/datasets/issues/3628
3,628
Dataset Card Creator drops information for "Additional Information" Section
open
0
2022-01-25T14:06:17
2022-01-25T14:09:01
null
dennlinger
[ "bug" ]
First of all, the card creator is a great addition and really helpful for streamlining dataset cards! ## Describe the bug I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Addition...
false
1,113,556,837
https://api.github.com/repos/huggingface/datasets/issues/3627
https://github.com/huggingface/datasets/pull/3627
3,627
Fix host URL in The Pile datasets
closed
4
2022-01-25T08:11:28
2022-07-20T20:54:42
2022-02-14T08:40:58
albertvillanova
[]
This PR fixes the host URL in The Pile datasets, once they have mirrored their data in another server. Fix #3626.
true
1,113,534,436
https://api.github.com/repos/huggingface/datasets/issues/3626
https://github.com/huggingface/datasets/issues/3626
3,626
The Pile cannot connect to host
closed
0
2022-01-25T07:43:33
2022-02-14T08:40:58
2022-02-14T08:40:58
albertvillanova
[ "bug" ]
## Describe the bug The Pile had issues with their previous host server and have mirrored its content to another server. The new URL server should be updated.
false
1,113,017,522
https://api.github.com/repos/huggingface/datasets/issues/3625
https://github.com/huggingface/datasets/issues/3625
3,625
Add a metadata field for when source data was produced
open
5
2022-01-24T18:52:39
2022-06-28T13:54:49
null
davanstrien
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests mak...
false
1,112,835,239
https://api.github.com/repos/huggingface/datasets/issues/3623
https://github.com/huggingface/datasets/pull/3623
3,623
Extend support for streaming datasets that use os.path.relpath
closed
0
2022-01-24T16:00:52
2022-02-04T14:03:55
2022-02-04T14:03:54
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `os.path.relpath`, by patching that function. This feature will also be useful to yield the relative path of audio or image files, within an archive or parent dir. Close #3622.
true
1,112,831,661
https://api.github.com/repos/huggingface/datasets/issues/3622
https://github.com/huggingface/datasets/issues/3622
3,622
Extend support for streaming datasets that use os.path.relpath
closed
0
2022-01-24T15:58:23
2022-02-04T14:03:54
2022-02-04T14:03:54
albertvillanova
[ "enhancement" ]
Extend support for streaming datasets that use `os.path.relpath`. This feature will also be useful to yield the relative path of audio or image files.
false
1,112,720,434
https://api.github.com/repos/huggingface/datasets/issues/3621
https://github.com/huggingface/datasets/issues/3621
3,621
Consider adding `ipywidgets` as a dependency.
closed
4
2022-01-24T14:27:11
2022-02-24T09:04:36
2022-02-24T09:04:36
koaning
[ "bug" ]
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to run shut down the jupyterlab ser...
false
1,112,677,252
https://api.github.com/repos/huggingface/datasets/issues/3620
https://github.com/huggingface/datasets/pull/3620
3,620
Add Fon language tag
closed
0
2022-01-24T13:52:26
2022-02-04T14:04:36
2022-02-04T14:04:35
albertvillanova
[]
Add Fon language tag to resources.
true
1,112,611,415
https://api.github.com/repos/huggingface/datasets/issues/3619
https://github.com/huggingface/datasets/pull/3619
3,619
fix meta in mls
closed
1
2022-01-24T12:54:38
2022-01-24T20:53:22
2022-01-24T20:53:22
polinaeterna
[]
`monolingual` value of `multilinguality` param in yaml meta was changed to `multilingual` :)
true
1,112,123,365
https://api.github.com/repos/huggingface/datasets/issues/3618
https://github.com/huggingface/datasets/issues/3618
3,618
TIMIT Dataset not working with GPU
closed
3
2022-01-24T03:26:03
2023-07-25T15:20:20
2023-07-25T15:20:20
TheSeamau5
[ "bug" ]
## Describe the bug I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4...
false
1,111,938,691
https://api.github.com/repos/huggingface/datasets/issues/3617
https://github.com/huggingface/datasets/pull/3617
3,617
PR for the CFPB Consumer Complaints dataset
closed
8
2022-01-23T17:47:12
2022-02-07T21:08:31
2022-02-07T21:08:31
kayvane1
[]
Think I followed all the steps but please let me know if anything needs changing or any improvements I can make to the code quality
true
1,111,587,861
https://api.github.com/repos/huggingface/datasets/issues/3616
https://github.com/huggingface/datasets/pull/3616
3,616
Make streamable the BnL Historical Newspapers dataset
closed
0
2022-01-22T14:52:36
2022-02-04T14:05:23
2022-02-04T14:05:21
albertvillanova
[]
I've refactored the code in order to make the dataset streamable and to avoid it takes too long: - I've used `iter_files` Close #3615
true
1,111,576,876
https://api.github.com/repos/huggingface/datasets/issues/3615
https://github.com/huggingface/datasets/issues/3615
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
closed
3
2022-01-22T14:12:59
2022-02-04T14:05:21
2022-02-04T14:05:21
albertvillanova
[ "bug" ]
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
false
1,110,736,657
https://api.github.com/repos/huggingface/datasets/issues/3614
https://github.com/huggingface/datasets/pull/3614
3,614
Minor fixes
closed
0
2022-01-21T17:48:44
2022-01-24T12:45:49
2022-01-24T12:45:49
mariosasko
[]
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
true
1,110,684,015
https://api.github.com/repos/huggingface/datasets/issues/3613
https://github.com/huggingface/datasets/issues/3613
3,613
Files not updating in dataset viewer
closed
2
2022-01-21T16:47:20
2022-01-22T08:13:13
2022-01-22T08:13:13
abidlabs
[ "dataset-viewer" ]
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and...
false
1,110,506,466
https://api.github.com/repos/huggingface/datasets/issues/3612
https://github.com/huggingface/datasets/pull/3612
3,612
wikifix
closed
4
2022-01-21T14:05:11
2022-02-03T17:58:16
2022-02-03T17:58:16
apergo-ai
[]
This should get the wikipedia dataloading script back up and running - at least I hope so (tested with language ff and ii)
true
1,110,399,096
https://api.github.com/repos/huggingface/datasets/issues/3611
https://github.com/huggingface/datasets/issues/3611
3,611
Indexing bug after dataset.select()
closed
1
2022-01-21T12:09:30
2022-01-27T18:16:22
2022-01-27T18:16:22
kamalkraj
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli":...
false
1,109,777,314
https://api.github.com/repos/huggingface/datasets/issues/3610
https://github.com/huggingface/datasets/issues/3610
3,610
Checksum error when trying to load amazon_review dataset
closed
1
2022-01-20T21:20:32
2022-01-21T13:22:31
2022-01-21T13:22:31
ghost
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug I am getting the issue when trying to load dataset using ``` dataset = load_dataset("amazon_polarity") ``` ## Expected results dataset loaded ## Actual results ``` -------------------------------------...
false
1,109,579,112
https://api.github.com/repos/huggingface/datasets/issues/3609
https://github.com/huggingface/datasets/pull/3609
3,609
Fixes to pubmed dataset download function
closed
3
2022-01-20T17:31:35
2022-03-03T16:18:52
2022-03-03T14:23:35
spacemanidol
[]
Pubmed has updated its settings for 2022 and thus existing download script does not work.
true
1,109,310,981
https://api.github.com/repos/huggingface/datasets/issues/3608
https://github.com/huggingface/datasets/issues/3608
3,608
Add support for continuous metrics (RMSE, MAE)
closed
3
2022-01-20T13:35:36
2022-03-09T17:18:20
2022-03-09T17:18:20
ck37
[ "enhancement", "good first issue" ]
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP m...
false
1,109,218,370
https://api.github.com/repos/huggingface/datasets/issues/3607
https://github.com/huggingface/datasets/pull/3607
3,607
Add MIT Scene Parsing Benchmark
closed
0
2022-01-20T12:03:07
2022-02-18T12:51:01
2022-02-18T12:51:00
mariosasko
[]
Add MIT Scene Parsing Benchmark (a subset of ADE20k). TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
true
1,108,918,701
https://api.github.com/repos/huggingface/datasets/issues/3606
https://github.com/huggingface/datasets/issues/3606
3,606
audio column not saved correctly after resampling
closed
3
2022-01-20T06:37:10
2022-01-23T01:41:01
2022-01-23T01:24:14
laphang
[ "bug" ]
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of common voice dataset (48Khz) - resample audio column to 16Khz - save with save_to_disk() - load with load_from_disk() ## Expected resul...
false
1,108,738,561
https://api.github.com/repos/huggingface/datasets/issues/3605
https://github.com/huggingface/datasets/pull/3605
3,605
Adding Turkic X-WMT evaluation set for machine translation
closed
5
2022-01-20T01:40:29
2022-01-31T09:50:57
2022-01-31T09:50:57
mirzakhalov
[]
This dataset is a human-translated evaluation set for MT crowdsourced and provided by the [Turkic Interlingua ](turkic-interlingua.org) community. It contains eval sets for 8 Turkic languages covering 88 language directions. Languages being covered are: Azerbaijani (az) Bashkir (ba) English (en) Karakalpak (kaa) ...
true
1,108,477,316
https://api.github.com/repos/huggingface/datasets/issues/3604
https://github.com/huggingface/datasets/issues/3604
3,604
Dataset Viewer not showing Previews for Private Datasets
closed
2
2022-01-19T19:29:26
2022-09-26T08:04:43
2022-09-26T08:04:43
abidlabs
[ "enhancement", "dataset-viewer" ]
## Dataset viewer issue for 'abidlabs/test-audio-13' It seems that the dataset viewer does not show previews for `private` datasets, even for the user who's private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private ...
false