| column | dtype | stats (min – max) |
|---|---|---|
| id | int64 | 599M – 3.29B |
| url | string | lengths 58 – 61 |
| html_url | string | lengths 46 – 51 |
| number | int64 | 1 – 7.72k |
| title | string | lengths 1 – 290 |
| state | string | 2 values |
| comments | int64 | 0 – 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| user_login | string | lengths 3 – 26 |
| labels | list | lengths 0 – 4 |
| body | string | lengths 0 – 228k |
| is_pull_request | bool | 2 classes |
1,362,193,587
https://api.github.com/repos/huggingface/datasets/issues/4930
https://github.com/huggingface/datasets/pull/4930
4,930
Add cc-by-nc-2.0 to list of licenses
closed
5
2022-09-05T15:37:32
2022-09-06T16:43:32
2022-09-05T17:01:04
albertvillanova
[]
This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md
true
1,361,508,366
https://api.github.com/repos/huggingface/datasets/issues/4929
https://github.com/huggingface/datasets/pull/4929
4,929
Fixes a typo in loading documentation
closed
0
2022-09-05T07:18:54
2022-09-06T02:11:03
2022-09-05T13:06:38
sighingnow
[]
As shown in the [documentation page](https://huggingface.co/docs/datasets/loading), the `"tr"in` should be `"train"`. ![image](https://user-images.githubusercontent.com/7144772/188390445-e1f04d54-e3e3-4762-8686-63ecbe4087e5.png)
true
1,360,941,172
https://api.github.com/repos/huggingface/datasets/issues/4928
https://github.com/huggingface/datasets/pull/4928
4,928
Add ability to read-write to SQL databases.
closed
14
2022-09-03T19:09:08
2022-10-03T16:34:36
2022-10-03T16:32:28
Dref360
[]
Fixes #3094 Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy. I didn't add SQLAlchemy as a dependency, as it is fairly big, so it remains optional. I also recorded a Loom to showcase the feature. https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541...
true
1,360,428,139
https://api.github.com/repos/huggingface/datasets/issues/4927
https://github.com/huggingface/datasets/pull/4927
4,927
fix BLEU metric card
closed
0
2022-09-02T17:00:56
2022-09-09T16:28:15
2022-09-09T16:28:15
antoniolanza1996
[]
I've fixed some typos in BLEU metric card.
true
1,360,384,484
https://api.github.com/repos/huggingface/datasets/issues/4926
https://github.com/huggingface/datasets/pull/4926
4,926
Dataset infos in yaml
closed
6
2022-09-02T16:10:05
2024-05-04T14:52:50
2022-10-03T09:11:12
lhoestq
[ "dataset contribution" ]
To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the README already contains dataset metadata, so we would have everything in one place. To be more specific, I moved these fie...
true
1,360,007,616
https://api.github.com/repos/huggingface/datasets/issues/4925
https://github.com/huggingface/datasets/pull/4925
4,925
Add note about loading image / audio files to docs
closed
9
2022-09-02T10:31:58
2022-09-26T12:21:30
2022-09-23T13:59:07
lewtun
[]
This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure. Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447 cc @NielsRogge
true
1,358,611,513
https://api.github.com/repos/huggingface/datasets/issues/4924
https://github.com/huggingface/datasets/issues/4924
4,924
Concatenate_datasets loads everything into RAM
closed
0
2022-09-01T10:25:17
2022-09-01T11:50:54
2022-09-01T11:50:54
louisdeneve
[ "bug" ]
## Describe the bug When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening or is this intended behaviour? Thanks in advance ## Steps to reproduce the bug ```...
false
1,357,735,287
https://api.github.com/repos/huggingface/datasets/issues/4923
https://github.com/huggingface/datasets/pull/4923
4,923
decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround
closed
5
2022-08-31T18:57:59
2022-11-02T11:54:33
2022-09-20T13:12:52
polinaeterna
[]
`torchaudio>0.12` fails to decode mp3 files if `ffmpeg<4`. Currently we ask users to downgrade torchaudio, but sometimes that's not possible as the torchaudio version is bound to the torch version. As a temporary workaround we can decode mp3 with librosa (though it is about 60 times slower, at least it works). Another option would...
true
1,357,684,018
https://api.github.com/repos/huggingface/datasets/issues/4922
https://github.com/huggingface/datasets/issues/4922
4,922
I/O error on Google Colab in streaming mode
closed
0
2022-08-31T18:08:26
2022-08-31T18:15:48
2022-08-31T18:15:48
jotterbach
[ "bug" ]
## Describe the bug When trying to load a streaming dataset in Google Colab the loading fails with an I/O error ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) list(hf_ds....
false
1,357,609,003
https://api.github.com/repos/huggingface/datasets/issues/4921
https://github.com/huggingface/datasets/pull/4921
4,921
Fix missing tags in dataset cards
closed
1
2022-08-31T16:52:27
2022-09-22T14:34:11
2022-09-01T05:04:53
albertvillanova
[]
Fix missing tags in dataset cards: - eraser_multi_rc - hotpot_qa - metooma - movie_rationales - qanta - quora - quoref - race - ted_hrlr - ted_talks_iwslt This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 ...
true
1,357,564,589
https://api.github.com/repos/huggingface/datasets/issues/4920
https://github.com/huggingface/datasets/issues/4920
4,920
Unable to load local tsv files through load_dataset method
closed
1
2022-08-31T16:13:39
2022-09-01T05:31:30
2022-09-01T05:31:30
DataNoob0723
[ "bug" ]
## Describe the bug Unable to load local tsv files through load_dataset method. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug data_files = { 'train': 'train.tsv', 'test': 'test.tsv' } raw_datasets = load_dataset('tsv', data_files=data_files) ## Expected results I am p...
false
1,357,441,599
https://api.github.com/repos/huggingface/datasets/issues/4919
https://github.com/huggingface/datasets/pull/4919
4,919
feat: improve error message on Keys mismatch. closes #4917
closed
2
2022-08-31T14:41:36
2022-09-05T08:46:01
2022-09-05T08:43:33
PaulLerner
[]
Hi @lhoestq what do you think? Let me give you a code sample: ```py >>> import datasets >>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]}) >>> foo.save_to_disk('foo') # edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz' >>> datasets.load_from_disk('foo') --------------------------...
true
1,357,242,757
https://api.github.com/repos/huggingface/datasets/issues/4918
https://github.com/huggingface/datasets/issues/4918
4,918
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
closed
2
2022-08-31T12:09:07
2022-09-05T21:36:34
2022-09-05T16:32:44
finiteautomata
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines ### Description After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist. ### Owner _No response_
false
1,357,193,841
https://api.github.com/repos/huggingface/datasets/issues/4917
https://github.com/huggingface/datasets/issues/4917
4,917
Keys mismatch: make error message more informative
closed
4
2022-08-31T11:24:34
2022-09-05T08:43:38
2022-09-05T08:43:38
PaulLerner
[ "enhancement", "good first issue" ]
**Is your feature request related to a problem? Please describe.** When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like: `ValueError: Keys mismatch: between {'bar': V...
false
1,357,076,940
https://api.github.com/repos/huggingface/datasets/issues/4916
https://github.com/huggingface/datasets/issues/4916
4,916
Apache Beam unable to write the downloaded wikipedia dataset
closed
1
2022-08-31T09:39:25
2022-08-31T10:53:19
2022-08-31T10:53:19
Shilpac20
[ "bug" ]
## Describe the bug Hi, I am currently trying to download the wikipedia dataset using `load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner')`. However, I end up getting a FileNotFoundError. I get this error for any language I try to download. It downloads the file, but while s...
false
1,356,009,042
https://api.github.com/repos/huggingface/datasets/issues/4915
https://github.com/huggingface/datasets/issues/4915
4,915
FileNotFoundError while downloading wikipedia dataset for any language
open
5
2022-08-30T16:15:46
2022-12-04T22:20:33
null
Shilpac20
[ "bug" ]
## Describe the bug Hi, I am currently trying to download the wikipedia dataset using `load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner')`. However, I end up getting a FileNotFoundError. I get this error for any language I try to download. Environment: ## Step...
false
1,355,482,624
https://api.github.com/repos/huggingface/datasets/issues/4914
https://github.com/huggingface/datasets/pull/4914
4,914
Support streaming swda dataset
closed
1
2022-08-30T09:46:28
2022-08-30T11:16:33
2022-08-30T11:14:16
albertvillanova
[]
Support streaming swda dataset.
true
1,355,232,007
https://api.github.com/repos/huggingface/datasets/issues/4913
https://github.com/huggingface/datasets/pull/4913
4,913
Add license and citation information to cosmos_qa dataset
closed
1
2022-08-30T06:23:19
2022-08-30T09:49:31
2022-08-30T09:47:35
albertvillanova
[]
This PR adds the license information to the `cosmos_qa` dataset, as reported via email by Yejin Choi: the dataset is licensed under CC BY 4.0. This PR also updates the citation information.
true
1,355,078,864
https://api.github.com/repos/huggingface/datasets/issues/4912
https://github.com/huggingface/datasets/issues/4912
4,912
datasets map() handles all data at a stroke and takes long time
closed
7
2022-08-30T02:25:56
2023-04-06T09:43:58
2022-09-06T09:23:35
BruceStayHungry
[]
**1. Background** The Huggingface datasets package advises using `map()` to process data in batches. In the example code on pretraining a masked language model, they use `map()` to tokenize all the data in one go before the training loop. The corresponding code: ``` with accelerator.main_process_first(): tokenized_...
false
1,354,426,978
https://api.github.com/repos/huggingface/datasets/issues/4911
https://github.com/huggingface/datasets/issues/4911
4,911
[Tests] Ensure `datasets` supports renamed repositories
open
2
2022-08-29T14:46:14
2025-06-19T06:10:52
null
lhoestq
[ "good second issue" ]
On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well. However it would be nice to have an integration test to make sure we don't break support for renamed datasets. To implement this we can use t...
false
1,354,374,328
https://api.github.com/repos/huggingface/datasets/issues/4910
https://github.com/huggingface/datasets/issues/4910
4,910
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()
open
7
2022-08-29T14:11:48
2022-09-13T11:58:46
null
bablf
[ "bug", "good first issue" ]
## Describe the bug In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords, leading to a `TypeError: type object got multiple values for keyword argument 'xyz'`. I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix...
false
1,353,997,788
https://api.github.com/repos/huggingface/datasets/issues/4909
https://github.com/huggingface/datasets/pull/4909
4,909
Update GLUE evaluation metadata
closed
1
2022-08-29T09:43:44
2022-08-29T14:53:29
2022-08-29T14:51:18
lewtun
[]
This PR updates the evaluation metadata for GLUE to: * Include defaults for all configs except `ax` (which only has a `test` split with no known labels) * Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private) * Fix the `task_id` for some existing defaults ...
true
1,353,995,574
https://api.github.com/repos/huggingface/datasets/issues/4908
https://github.com/huggingface/datasets/pull/4908
4,908
Fix missing tags in dataset cards
closed
1
2022-08-29T09:41:53
2022-09-22T14:35:56
2022-08-29T16:13:07
albertvillanova
[]
Fix missing tags in dataset cards: - asnq - clue - common_gen - cosmos_qa - guardian_authorship - hindi_discourse - py_ast - x_stance This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896
true
1,353,808,348
https://api.github.com/repos/huggingface/datasets/issues/4907
https://github.com/huggingface/datasets/issues/4907
4,907
None Type error for swda datasets
closed
3
2022-08-29T07:05:20
2022-08-30T14:43:41
2022-08-30T14:43:41
hannan72
[ "bug" ]
## Describe the bug I got `'NoneType' object is not callable` error while calling the swda datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("swda") ``` ## Expected results Run without error ## Environment info <!-- You can run the command `datase...
false
1,353,223,925
https://api.github.com/repos/huggingface/datasets/issues/4906
https://github.com/huggingface/datasets/issues/4906
4,906
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
closed
7
2022-08-28T02:23:24
2024-11-16T08:59:17
2022-10-03T12:22:50
OPterminator
[ "bug" ]
## Describe the bug Not able to import datasets. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import os os.environ["WANDB_API_KEY"] = "0" ## to silence warning import numpy as np import random import sklearn import matplotlib.p...
false
1,353,002,837
https://api.github.com/repos/huggingface/datasets/issues/4904
https://github.com/huggingface/datasets/pull/4904
4,904
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config
closed
2
2022-08-27T10:04:57
2022-08-30T10:06:21
2022-08-30T10:03:25
sanchit-gandhi
[]
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61 These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`. How...
true
1,352,539,075
https://api.github.com/repos/huggingface/datasets/issues/4903
https://github.com/huggingface/datasets/pull/4903
4,903
Fix CI reporting
closed
1
2022-08-26T17:16:30
2022-08-26T17:49:33
2022-08-26T17:46:59
albertvillanova
[]
Fix CI so that it reports defaults (failed and error) besides the custom (xfailed and xpassed) in the test summary. This PR fixes a regression introduced by: - #4845 This introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and error.
true
1,352,469,196
https://api.github.com/repos/huggingface/datasets/issues/4902
https://github.com/huggingface/datasets/issues/4902
4,902
Name the default config `default`
closed
1
2022-08-26T16:16:22
2023-07-24T21:15:31
2023-07-24T21:15:31
severo
[ "enhancement", "question" ]
Currently, if a dataset has no configuration, a default configuration is created from the dataset name. For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`. It might be easier...
false
1,352,438,915
https://api.github.com/repos/huggingface/datasets/issues/4901
https://github.com/huggingface/datasets/pull/4901
4,901
Raise ManualDownloadError from get_dataset_config_info
closed
1
2022-08-26T15:45:56
2022-08-30T10:42:21
2022-08-30T10:40:04
albertvillanova
[]
This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download. Related to: - #4898 CC: @severo
true
1,352,405,855
https://api.github.com/repos/huggingface/datasets/issues/4900
https://github.com/huggingface/datasets/issues/4900
4,900
Dataset Viewer issue for asaxena1990/Dummy_dataset
closed
3
2022-08-26T15:15:44
2023-07-24T15:42:09
2023-07-24T15:42:09
ankurcl
[]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,352,031,286
https://api.github.com/repos/huggingface/datasets/issues/4899
https://github.com/huggingface/datasets/pull/4899
4,899
Re-add code and und language tags
closed
1
2022-08-26T09:48:57
2022-08-26T10:27:18
2022-08-26T10:24:20
albertvillanova
[]
This PR fixes the removal of 2 language tags done by: - #4882 The tags are: - "code": this is not an IANA tag but needed - "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af - used in "mc4" and "udhr" datasets
true
1,351,851,254
https://api.github.com/repos/huggingface/datasets/issues/4898
https://github.com/huggingface/datasets/issues/4898
4,898
Dataset Viewer issue for timit_asr
closed
5
2022-08-26T07:12:05
2022-10-03T12:40:28
2022-10-03T12:40:27
InayatUllah932
[]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,351,784,727
https://api.github.com/repos/huggingface/datasets/issues/4897
https://github.com/huggingface/datasets/issues/4897
4,897
datasets generate large arrow file
closed
2
2022-08-26T05:51:16
2022-09-18T05:07:52
2022-09-18T05:07:52
jax11235
[ "bug" ]
While checking large files on disk, I found a large cache file in the cifar10 data directory: ![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png) As we know, the size of the cifar10 dataset is ~130MB, but the cache file is almost 30GB in size; there may be so...
false
1,351,180,409
https://api.github.com/repos/huggingface/datasets/issues/4896
https://github.com/huggingface/datasets/pull/4896
4,896
Fix missing tags in dataset cards
closed
1
2022-08-25T16:41:43
2022-09-22T14:37:16
2022-08-26T04:41:48
albertvillanova
[]
Fix missing tags in dataset cards: - anli - coarse_discourse - commonsense_qa - cos_e - ilist - lc_quad - web_questions - xsum This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891
true
1,350,798,527
https://api.github.com/repos/huggingface/datasets/issues/4895
https://github.com/huggingface/datasets/issues/4895
4,895
load_dataset method returns Unknown split "validation" even if this dir exists
closed
18
2022-08-25T12:11:00
2024-03-26T16:47:48
2022-09-29T08:07:50
SamSamhuns
[ "bug" ]
## Describe the bug The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path. The data directories are as follows and a...
false
1,350,667,270
https://api.github.com/repos/huggingface/datasets/issues/4894
https://github.com/huggingface/datasets/pull/4894
4,894
Add citation information to makhzan dataset
closed
1
2022-08-25T10:16:40
2022-08-30T06:21:54
2022-08-25T13:19:41
albertvillanova
[]
This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information: - https://github.com/zeerakahmed/makhzan/issues/43
true
1,350,655,674
https://api.github.com/repos/huggingface/datasets/issues/4893
https://github.com/huggingface/datasets/issues/4893
4,893
Oversampling strategy for iterable datasets in `interleave_datasets`
closed
9
2022-08-25T10:06:55
2022-10-03T12:37:46
2022-10-03T12:37:46
lhoestq
[ "good second issue" ]
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects. It would be nice to expand `interleave_datasets` for iterable ...
false
1,350,636,499
https://api.github.com/repos/huggingface/datasets/issues/4892
https://github.com/huggingface/datasets/pull/4892
4,892
Add citation to ro_sts and ro_sts_parallel datasets
closed
1
2022-08-25T09:51:06
2022-08-25T10:49:56
2022-08-25T10:49:56
albertvillanova
[]
This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, once they have replied to our request for that information: - https://github.com/dumitrescustefan/RO-STS/issues/4
true
1,350,589,813
https://api.github.com/repos/huggingface/datasets/issues/4891
https://github.com/huggingface/datasets/pull/4891
4,891
Fix missing tags in dataset cards
closed
0
2022-08-25T09:14:17
2022-09-22T14:39:02
2022-08-25T13:43:34
albertvillanova
[]
Fix missing tags in dataset cards: - aslg_pc12 - librispeech_lm - mwsc - opus100 - qasc - quail - squadshifts - winograd_wsc This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833
true
1,350,578,029
https://api.github.com/repos/huggingface/datasets/issues/4890
https://github.com/huggingface/datasets/pull/4890
4,890
add Dataset.from_list
closed
2
2022-08-25T09:05:58
2022-09-02T10:22:59
2022-09-02T10:20:33
sanderland
[]
As discussed in #4885 I initially added this bit at the end, thinking filling this field was necessary as it is done in from_dict. However, it seems the constructor takes care of filling info when it is empty. ``` if info.features is None: info.features = Features( { col: generate_from_arro...
true
1,349,758,525
https://api.github.com/repos/huggingface/datasets/issues/4889
https://github.com/huggingface/datasets/issues/4889
4,889
torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3
closed
5
2022-08-24T16:54:43
2023-03-02T15:33:05
2023-03-02T15:33:04
patrickvonplaten
[ "bug" ]
## Describe the bug When loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https://github.com/huggingface/transformers/pull/18749 ## Steps to reproduce the bug If you run the following code once with `torchaudio==0.11.0+cu102` and `torc...
false
1,349,447,521
https://api.github.com/repos/huggingface/datasets/issues/4888
https://github.com/huggingface/datasets/issues/4888
4,888
Dataset Viewer issue for subjqa
closed
2
2022-08-24T13:26:20
2022-09-08T08:23:42
2022-09-08T08:23:42
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/subjqa ### Description Getting the following error for this dataset: ``` Status code: 500 Exception: Status500Error Message: 2 or more items returned, instead of 1 ``` Not sure what's causing it though 🤔 ### Owner Yes
false
1,349,426,693
https://api.github.com/repos/huggingface/datasets/issues/4887
https://github.com/huggingface/datasets/pull/4887
4,887
Add "cc-by-nc-sa-2.0" to list of licenses
closed
2
2022-08-24T13:11:49
2022-08-26T10:31:32
2022-08-26T10:29:20
osanseviero
[]
Datasets side of https://github.com/huggingface/hub-docs/pull/285
true
1,349,285,569
https://api.github.com/repos/huggingface/datasets/issues/4886
https://github.com/huggingface/datasets/issues/4886
4,886
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
open
9
2022-08-24T11:24:21
2023-02-02T02:40:53
null
JeanKaddour
[ "bug" ]
## Describe the bug Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('huggan/CelebA-HQ') ``` ## Expected results See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#...
false
1,349,181,448
https://api.github.com/repos/huggingface/datasets/issues/4885
https://github.com/huggingface/datasets/issues/4885
4,885
Create dataset from list of dicts
closed
3
2022-08-24T10:01:24
2022-09-08T16:02:52
2022-09-08T16:02:52
sanderland
[ "enhancement" ]
I often find myself with data from a variety of sources, and a list of dicts is very common among these. However, converting this to a Dataset is a little awkward, requiring either ```Dataset.from_pandas(pd.DataFrame(formatted_training_data))``` which can error out on some more exotic values such as 2-d arrays, for reas...
false
1,349,105,946
https://api.github.com/repos/huggingface/datasets/issues/4884
https://github.com/huggingface/datasets/pull/4884
4,884
Fix documentation card of math_qa dataset
closed
1
2022-08-24T09:00:56
2022-08-24T11:33:17
2022-08-24T11:33:16
albertvillanova
[]
Fix documentation card of math_qa dataset.
true
1,349,083,235
https://api.github.com/repos/huggingface/datasets/issues/4883
https://github.com/huggingface/datasets/issues/4883
4,883
With dataloader RSS memory consumed by HF datasets monotonically increases
open
44
2022-08-24T08:42:54
2024-01-23T12:42:40
null
apsdehal
[ "bug" ]
## Describe the bug When the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant. ## Steps to reproduce the bug Run and observe the output of this snippet which logs RSS memory. ```python import psutil import os from transf...
false
1,348,913,665
https://api.github.com/repos/huggingface/datasets/issues/4882
https://github.com/huggingface/datasets/pull/4882
4,882
Fix language tags resource file
closed
1
2022-08-24T06:06:01
2022-08-24T13:58:33
2022-08-24T13:58:30
albertvillanova
[]
This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08). This PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See: - #4753
true
1,348,495,777
https://api.github.com/repos/huggingface/datasets/issues/4881
https://github.com/huggingface/datasets/issues/4881
4,881
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list)
open
49
2022-08-23T20:14:24
2024-04-22T15:57:28
null
alexis-michaud
[ "enhancement" ]
**The problem:** Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial. Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datase...
false
1,348,452,776
https://api.github.com/repos/huggingface/datasets/issues/4880
https://github.com/huggingface/datasets/pull/4880
4,880
Added names of less-studied languages
closed
2
2022-08-23T19:32:38
2022-08-24T12:52:46
2022-08-24T12:52:46
BenjaminGalliot
[]
Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets.
true
1,348,346,407
https://api.github.com/repos/huggingface/datasets/issues/4879
https://github.com/huggingface/datasets/pull/4879
4,879
Fix Citation Information section in dataset cards
closed
1
2022-08-23T18:06:43
2022-09-27T14:04:45
2022-08-24T04:09:07
albertvillanova
[]
Fix Citation Information section in dataset cards: - cc_news - conllpp - datacommons_factcheck - gnad10 - id_panl_bppt - jigsaw_toxicity_pred - kinnews_kirnews - kor_sarcasm - makhzan - reasoning_bg - ro_sts - ro_sts_parallel - sanskrit_classic - telugu_news - thaiqa_squad - wiki_movies This PR parti...
true
1,348,270,141
https://api.github.com/repos/huggingface/datasets/issues/4878
https://github.com/huggingface/datasets/issues/4878
4,878
[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file`
closed
1
2022-08-23T17:09:55
2022-09-13T14:00:06
2022-09-13T14:00:05
severo
[ "help wanted", "question" ]
In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon) See https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169 It's used here: https://github.com/huggingfac...
false
1,348,246,755
https://api.github.com/repos/huggingface/datasets/issues/4877
https://github.com/huggingface/datasets/pull/4877
4,877
Fix documentation card of covid_qa_castorini dataset
closed
1
2022-08-23T16:52:33
2022-08-23T18:05:01
2022-08-23T18:05:00
albertvillanova
[]
Fix documentation card of covid_qa_castorini dataset.
true
1,348,202,678
https://api.github.com/repos/huggingface/datasets/issues/4876
https://github.com/huggingface/datasets/issues/4876
4,876
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`
closed
15
2022-08-23T16:16:41
2022-10-03T09:11:13
2022-10-03T09:11:13
lhoestq
[]
Currently there are two places to find metadata for datasets: - datasets_infos.json, which contains **per dataset config** - description - citation - license - splits and sizes - checksums of the data files - feature types - and more - YAML tags, which contain - license - language - trai...
false
1,348,095,686
https://api.github.com/repos/huggingface/datasets/issues/4875
https://github.com/huggingface/datasets/issues/4875
4,875
`_resolve_features` ignores the token
open
11
2022-08-23T14:57:36
2022-10-17T13:45:47
null
severo
[]
## Describe the bug When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `...
false
1,347,618,197
https://api.github.com/repos/huggingface/datasets/issues/4874
https://github.com/huggingface/datasets/pull/4874
4,874
[docs] Some tiny doc tweaks
closed
1
2022-08-23T09:19:40
2022-08-24T17:27:57
2022-08-24T17:27:56
julien-c
[]
null
true
1,347,592,022
https://api.github.com/repos/huggingface/datasets/issues/4873
https://github.com/huggingface/datasets/issues/4873
4,873
Multiple dataloader memory error
open
3
2022-08-23T08:59:50
2023-01-26T02:01:11
null
cyk1337
[ "bug" ]
For multiple datasets and tasks, we use more than 200 dataloaders, then pass them into `dataloader1, dataloader2, ..., dataloader200 = accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`. This causes a memory error when generating batches. Any solutions? ```bash File "/home/xxx/...
false
1,347,180,765
https://api.github.com/repos/huggingface/datasets/issues/4872
https://github.com/huggingface/datasets/pull/4872
4,872
Docs for creating an audio dataset
closed
6
2022-08-23T01:07:09
2022-09-22T17:19:13
2022-09-21T10:27:04
stevhliu
[ "documentation" ]
This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂
true
1,346,703,568
https://api.github.com/repos/huggingface/datasets/issues/4871
https://github.com/huggingface/datasets/pull/4871
4,871
Fix: wmt datasets - fix CWMT zh subsets
closed
1
2022-08-22T16:42:09
2022-08-23T10:00:20
2022-08-23T10:00:19
lhoestq
[]
Fix https://github.com/huggingface/datasets/issues/4575 TODO: run `datasets-cli test`: - [x] wmt17 - [x] wmt18 - [x] wmt19
true
1,346,160,498
https://api.github.com/repos/huggingface/datasets/issues/4870
https://github.com/huggingface/datasets/pull/4870
4,870
audio folder check CI
closed
1
2022-08-22T10:15:53
2022-11-02T11:54:35
2022-08-22T12:19:40
polinaeterna
[]
null
true
1,345,513,758
https://api.github.com/repos/huggingface/datasets/issues/4869
https://github.com/huggingface/datasets/pull/4869
4,869
Fix typos in documentation
closed
1
2022-08-21T15:10:03
2022-08-22T09:25:39
2022-08-22T09:09:58
fl-lo
[]
null
true
1,345,191,322
https://api.github.com/repos/huggingface/datasets/issues/4868
https://github.com/huggingface/datasets/pull/4868
4,868
adding mafand to datasets
closed
6
2022-08-20T15:26:14
2022-08-22T11:00:50
2022-08-22T08:52:23
dadelani
[ "wontfix" ]
I'm adding the MAFAND dataset by Masakhane based on the paper/repository below: Paper: https://aclanthology.org/2022.naacl-main.223/ Code: https://github.com/masakhane-io/lafand-mt Please help merge this. Everything works except for creating the dummy data file.
true
1,344,982,646
https://api.github.com/repos/huggingface/datasets/issues/4867
https://github.com/huggingface/datasets/pull/4867
4,867
Complete tags of superglue dataset card
closed
1
2022-08-19T23:44:39
2022-08-22T09:14:03
2022-08-22T08:58:31
richarddwang
[]
Related to #4479 .
true
1,344,809,132
https://api.github.com/repos/huggingface/datasets/issues/4866
https://github.com/huggingface/datasets/pull/4866
4,866
amend docstring for dunder
open
1
2022-08-19T19:09:15
2022-09-09T16:33:11
null
schafsam
[]
display dunder method in docstring with underscores and not bold markdown.
true
1,344,552,626
https://api.github.com/repos/huggingface/datasets/issues/4865
https://github.com/huggingface/datasets/issues/4865
4,865
Dataset Viewer issue for MoritzLaurer/multilingual_nli
closed
4
2022-08-19T14:55:20
2022-08-22T14:47:14
2022-08-22T06:13:20
MoritzLaurer
[ "dataset-viewer" ]
### Link _No response_ ### Description I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli It displays the error: ``` Status code: 400 Exception: Status400Error Message: The dataset...
false
1,344,410,043
https://api.github.com/repos/huggingface/datasets/issues/4864
https://github.com/huggingface/datasets/issues/4864
4,864
Allow pathlib PoxisPath in Dataset.read_json
open
7
2022-08-19T12:59:17
2025-04-11T17:22:48
null
changjonathanc
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** ``` from pathlib import Path from datasets import Dataset ds = Dataset.read_json(Path('data.json')) ``` causes an error ``` AttributeError: 'PosixPath' object has no attribute 'decode' ``` **Describe the solution you'd like** It should be...
false
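A minimal workaround sketch for the `PosixPath` error reported in issue 4864 above: convert the `Path` to a plain string before handing it to the JSON reader. This is a hypothetical illustration only — the actual `datasets` reader is not invoked here, and the filename is a stand-in.

```python
from pathlib import Path

# Sketch: the reported AttributeError comes from passing a PosixPath where a
# string path is expected; str() yields a value the reader can decode.
path = Path("data.json")  # hypothetical filename
path_str = str(path)
assert isinstance(path_str, str)
```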
1,343,737,668
https://api.github.com/repos/huggingface/datasets/issues/4863
https://github.com/huggingface/datasets/issues/4863
4,863
TFDS wiki_dialog dataset to Huggingface dataset
closed
4
2022-08-18T23:06:30
2022-08-22T09:41:45
2022-08-22T05:18:53
djaym7
[ "dataset request" ]
## Adding a Dataset - **Name:** *Wiki_dialog* - **Description: https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A - **Paper: https://arxiv.org/abs/2205.09073 - **Data: https://github.com/google-research/dialog-inpainting - **Motivation:** *Research and Development on ...
false
1,343,464,699
https://api.github.com/repos/huggingface/datasets/issues/4862
https://github.com/huggingface/datasets/issues/4862
4,862
Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code
closed
5
2022-08-18T18:36:14
2022-08-31T09:25:08
2022-08-31T09:25:08
yana-xuyan
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug # The dataset function is as follows: from pathlib import Path from typing import Dict, List, Tuple import datasets import pandas as pd _CITATION = """\ """...
false
1,343,260,220
https://api.github.com/repos/huggingface/datasets/issues/4861
https://github.com/huggingface/datasets/issues/4861
4,861
Using disk for memory with the method `from_dict`
open
1
2022-08-18T15:18:18
2023-01-26T18:36:28
null
HugoLaurencon
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one at the previous iteration. After some iterations, I have an OOM ...
false
1,342,311,540
https://api.github.com/repos/huggingface/datasets/issues/4860
https://github.com/huggingface/datasets/pull/4860
4,860
Add collection3 dataset
closed
7
2022-08-17T21:31:42
2022-08-23T20:02:45
2022-08-22T09:08:59
pefimov
[ "wontfix" ]
null
true
1,342,231,016
https://api.github.com/repos/huggingface/datasets/issues/4859
https://github.com/huggingface/datasets/issues/4859
4,859
can't install using conda on Windows 10
open
0
2022-08-17T19:57:37
2022-08-17T19:57:37
null
xoffey
[ "bug" ]
## Describe the bug I wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip. ## Steps to reproduce the bug conda install -c huggingface -c conda-forge datasets ## Expected results Should have indicated successful installation. ## Actual results Solving environ...
false
1,340,859,853
https://api.github.com/repos/huggingface/datasets/issues/4858
https://github.com/huggingface/datasets/issues/4858
4,858
map() function removes columns when input_columns is not None
closed
3
2022-08-16T20:42:30
2022-09-22T13:55:24
2022-09-22T13:55:24
pramodith
[ "bug" ]
## Describe the bug The map function, removes features from the dataset that are not present in the _input_columns_ list of columns, despite the columns being removed not mentioned in the _remove_columns_ argument. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"a" : [...
false
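The behavior reported in issue 4858 above can be sketched without the `datasets` dependency. This is a pure-Python simulation of the reported (buggy) semantics, not the library's actual implementation: mapping over only `input_columns` silently drops the other columns, whereas the user expected untouched columns to be kept.

```python
# Simulated column store: column name -> list of values.
rows = {"a": [1, 2], "b": [3, 4]}

def map_with_input_columns(columns, fn, input_columns):
    # Buggy behavior per the report: columns outside `input_columns` vanish,
    # even though they were never listed in any remove_columns argument.
    return {k: [fn(v) for v in vals]
            for k, vals in columns.items()
            if k in input_columns}

out = map_with_input_columns(rows, lambda x: x + 1, input_columns=["a"])
assert out == {"a": [2, 3]}  # "b" was silently dropped
```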
1,340,397,153
https://api.github.com/repos/huggingface/datasets/issues/4857
https://github.com/huggingface/datasets/issues/4857
4,857
No preprocessed wikipedia is working on huggingface/datasets
closed
2
2022-08-16T13:55:33
2022-08-17T13:35:08
2022-08-17T13:35:08
aninrusimha
[ "bug" ]
## Describe the bug 20220301 wikipedia dump has been deprecated, so now there is no working wikipedia dump on huggingface https://huggingface.co/datasets/wikipedia https://dumps.wikimedia.org/enwiki/
false
1,339,779,957
https://api.github.com/repos/huggingface/datasets/issues/4856
https://github.com/huggingface/datasets/issues/4856
4,856
file missing when load_dataset with openwebtext on windows
closed
1
2022-08-16T04:04:22
2023-01-04T03:39:12
2023-01-04T03:39:12
xi-loong
[ "bug" ]
## Describe the bug 0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I check the cache_path and can not find 0015896-b1054262f7da52a0518521e29c8e352c.txt. but I can find this file in the 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_da...
false
1,339,699,975
https://api.github.com/repos/huggingface/datasets/issues/4855
https://github.com/huggingface/datasets/issues/4855
4,855
Dataset Viewer issue for super_glue
closed
1
2022-08-16T01:34:56
2022-08-22T10:08:01
2022-08-22T10:07:45
wzsxxa
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/super_glue ### Description can't view super_glue dataset on the web page ### Owner _No response_
false
1,339,456,490
https://api.github.com/repos/huggingface/datasets/issues/4853
https://github.com/huggingface/datasets/pull/4853
4,853
Fix bug and checksums in exams dataset
closed
1
2022-08-15T20:17:57
2022-08-16T06:43:57
2022-08-16T06:29:06
albertvillanova
[]
Fix #4852.
true
1,339,450,991
https://api.github.com/repos/huggingface/datasets/issues/4852
https://github.com/huggingface/datasets/issues/4852
4,852
Bug in multilingual_with_para config of exams dataset and checksums error
closed
2
2022-08-15T20:14:52
2022-09-16T09:50:55
2022-08-16T06:29:07
albertvillanova
[ "bug" ]
## Describe the bug There is a bug for "multilingual_with_para" config in exams dataset: ```python ds = load_dataset("./datasets/exams", split="train") ``` raises: ``` KeyError: 'choices' ``` Moreover, there is a NonMatchingChecksumError: ``` NonMatchingChecksumError: Checksums didn't match for dataset so...
false
1,339,085,917
https://api.github.com/repos/huggingface/datasets/issues/4851
https://github.com/huggingface/datasets/pull/4851
4,851
Fix license tag and Source Data section in billsum dataset card
closed
2
2022-08-15T14:37:00
2022-08-22T13:56:24
2022-08-22T13:40:59
kashif
[]
Fixed the data source and license fields
true
1,338,702,306
https://api.github.com/repos/huggingface/datasets/issues/4850
https://github.com/huggingface/datasets/pull/4850
4,850
Fix test of _get_extraction_protocol for TAR files
closed
1
2022-08-15T08:37:58
2022-08-15T09:42:56
2022-08-15T09:28:46
albertvillanova
[]
While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true ``` XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_e...
true
1,338,273,900
https://api.github.com/repos/huggingface/datasets/issues/4849
https://github.com/huggingface/datasets/pull/4849
4,849
1.18.x
closed
0
2022-08-14T15:09:19
2022-08-14T15:10:02
2022-08-14T15:10:02
Mr-Robot-001
[]
null
true
1,338,271,833
https://api.github.com/repos/huggingface/datasets/issues/4848
https://github.com/huggingface/datasets/pull/4848
4,848
a
closed
0
2022-08-14T15:01:16
2022-08-14T15:09:59
2022-08-14T15:09:59
Mr-Robot-001
[]
null
true
1,338,270,636
https://api.github.com/repos/huggingface/datasets/issues/4847
https://github.com/huggingface/datasets/pull/4847
4,847
Test win ci
closed
0
2022-08-14T14:57:00
2023-09-24T10:04:13
2022-08-14T14:57:45
Mr-Robot-001
[]
aa
true
1,337,979,897
https://api.github.com/repos/huggingface/datasets/issues/4846
https://github.com/huggingface/datasets/pull/4846
4,846
Update documentation card of miam dataset
closed
4
2022-08-13T14:38:55
2022-08-17T00:50:04
2022-08-14T10:26:08
PierreColombo
[]
Hi ! Paper has been published at EMNLP.
true
1,337,928,283
https://api.github.com/repos/huggingface/datasets/issues/4845
https://github.com/huggingface/datasets/pull/4845
4,845
Mark CI tests as xfail if Hub HTTP error
closed
1
2022-08-13T10:45:11
2022-08-23T04:57:12
2022-08-23T04:42:26
albertvillanova
[]
In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors. This PR: - marks tests as xfailed only if the Hub raises a 500 error for: - test_upstream_hub - makes pytest report the xfailed/xpa...
true
1,337,878,249
https://api.github.com/repos/huggingface/datasets/issues/4844
https://github.com/huggingface/datasets/pull/4844
4,844
Add 'val' to VALIDATION_KEYWORDS.
closed
5
2022-08-13T06:49:41
2022-08-30T10:17:35
2022-08-30T10:14:54
akt42
[]
This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably, some other directives as well) reads folders named `"val"` as well. I think the supported keywords have to be mentioned in the documentation as well, but I couldn't think ...
true
1,337,668,699
https://api.github.com/repos/huggingface/datasets/issues/4843
https://github.com/huggingface/datasets/pull/4843
4,843
Fix typo in streaming docs
closed
1
2022-08-12T20:18:21
2022-08-14T11:43:30
2022-08-14T11:02:09
flozi00
[]
null
true
1,337,527,764
https://api.github.com/repos/huggingface/datasets/issues/4842
https://github.com/huggingface/datasets/pull/4842
4,842
Update stackexchange license
closed
1
2022-08-12T17:39:06
2022-08-14T10:43:18
2022-08-14T10:28:49
cakiki
[]
The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can for example be seen here: https://stackoverflow.com/help/licensing
true
1,337,401,243
https://api.github.com/repos/huggingface/datasets/issues/4841
https://github.com/huggingface/datasets/pull/4841
4,841
Update ted_talks_iwslt license to include ND
closed
1
2022-08-12T16:14:52
2022-08-14T11:15:22
2022-08-14T11:00:22
cakiki
[]
Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community"
true
1,337,342,672
https://api.github.com/repos/huggingface/datasets/issues/4840
https://github.com/huggingface/datasets/issues/4840
4,840
Dataset Viewer issue for darragh/demo_data_raw3
open
5
2022-08-12T15:22:58
2022-09-08T07:55:44
null
severo
[]
### Link https://huggingface.co/datasets/darragh/demo_data_raw3 ### Description ``` Exception: ValueError Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent. ``` reported by @NielsRogge ### Owner No
false
1,337,206,377
https://api.github.com/repos/huggingface/datasets/issues/4839
https://github.com/huggingface/datasets/issues/4839
4,839
ImageFolder dataset builder does not read the validation data set if it is named as "val"
closed
1
2022-08-12T13:26:00
2022-08-30T10:14:55
2022-08-30T10:14:55
akt42
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Currently, the `'imagefolder'` data set builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541] ) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca9940...
false
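The fix described in issue 4839 and PR 4844 above amounts to extending a keyword list used to map folder names to splits. Below is a hypothetical sketch of that matching logic — the function name, keyword lists, and split names are illustrative assumptions, not the library's actual code.

```python
# Hypothetical split-resolution sketch: folder names are matched against
# keyword lists; adding "val" makes directories named "val" resolve to the
# validation split, per PR 4844.
VALIDATION_KEYWORDS = ["validation", "valid", "dev", "val"]

def resolve_split(folder_name):
    name = folder_name.lower()
    if name in ("train", "training"):
        return "train"
    if name in VALIDATION_KEYWORDS:
        return "validation"
    if name in ("test", "testing"):
        return "test"
    return None  # unrecognized folder: no split assigned

assert resolve_split("val") == "validation"
```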
1,337,194,918
https://api.github.com/repos/huggingface/datasets/issues/4838
https://github.com/huggingface/datasets/pull/4838
4,838
Fix documentation card of adv_glue dataset
closed
2
2022-08-12T13:15:26
2022-08-15T10:17:14
2022-08-15T10:02:11
albertvillanova
[]
Fix documentation card of adv_glue dataset.
true
1,337,079,723
https://api.github.com/repos/huggingface/datasets/issues/4837
https://github.com/huggingface/datasets/pull/4837
4,837
Add support for CSV metadata files to ImageFolder
closed
4
2022-08-12T11:19:18
2022-08-31T12:01:27
2022-08-31T11:59:07
mariosasko
[]
Fix #4814
true
1,337,067,632
https://api.github.com/repos/huggingface/datasets/issues/4836
https://github.com/huggingface/datasets/issues/4836
4,836
Is it possible to pass multiple links to a split in load script?
open
0
2022-08-12T11:06:11
2022-08-12T11:06:11
null
sadrasabouri
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I wanted to use a python loading script in hugging face datasets that use different sources of text (it's somehow a compilation of multiple datasets + my own dataset) based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading) I a...
false
1,336,994,835
https://api.github.com/repos/huggingface/datasets/issues/4835
https://github.com/huggingface/datasets/pull/4835
4,835
Fix documentation card of ethos dataset
closed
1
2022-08-12T09:51:06
2022-08-12T13:13:55
2022-08-12T12:59:39
albertvillanova
[]
Fix documentation card of ethos dataset.
true
1,336,993,511
https://api.github.com/repos/huggingface/datasets/issues/4834
https://github.com/huggingface/datasets/pull/4834
4,834
Fix documentation card of recipe_nlg dataset
closed
1
2022-08-12T09:49:39
2022-08-12T11:28:18
2022-08-12T11:13:40
albertvillanova
[]
Fix documentation card of recipe_nlg dataset
true
1,336,946,965
https://api.github.com/repos/huggingface/datasets/issues/4833
https://github.com/huggingface/datasets/pull/4833
4,833
Fix missing tags in dataset cards
closed
1
2022-08-12T09:04:52
2022-09-22T14:41:23
2022-08-12T09:45:55
albertvillanova
[]
Fix missing tags in dataset cards: - boolq - break_data - definite_pronoun_resolution - emo - kor_nli - pg19 - quartz - sciq - squad_es - wmt14 - wmt15 - wmt16 - wmt17 - wmt18 - wmt19 - wmt_t2t This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task...
true
1,336,727,389
https://api.github.com/repos/huggingface/datasets/issues/4832
https://github.com/huggingface/datasets/pull/4832
4,832
Fix tags in dataset cards
closed
2
2022-08-12T04:11:23
2022-08-12T04:41:55
2022-08-12T04:27:24
albertvillanova
[]
Fix wrong tags in dataset cards.
true
1,336,199,643
https://api.github.com/repos/huggingface/datasets/issues/4831
https://github.com/huggingface/datasets/pull/4831
4,831
Add oversampling strategies to interleave datasets
closed
5
2022-08-11T16:24:51
2023-07-11T15:57:48
2022-08-24T16:46:07
ylacombe
[]
Hello everyone, Here is a proposal to improve `interleave_datasets` function. Following Issue #3064, and @lhoestq [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose here a code that performs oversampling when interleaving a `Dataset` list. I have myself encountered t...
true
1,336,177,937
https://api.github.com/repos/huggingface/datasets/issues/4830
https://github.com/huggingface/datasets/pull/4830
4,830
Fix task tags in dataset cards
closed
2
2022-08-11T16:06:06
2022-08-11T16:37:27
2022-08-11T16:23:00
albertvillanova
[]
null
true
1,336,068,068
https://api.github.com/repos/huggingface/datasets/issues/4829
https://github.com/huggingface/datasets/issues/4829
4,829
Misalignment between card tag validation and docs
open
2
2022-08-11T14:44:45
2023-07-21T15:38:02
null
albertvillanova
[ "bug" ]
## Describe the bug As pointed out in other issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284 the validation of the dataset card tags is not aligned with its documentation: e.g. - implementation: `license: List[str]` - docs: `license: Union[str, List[str]]` They should be aligned. ...
false