Dataset schema (column name, type, observed range):

| Column | Type | Range |
|---|---|---|
| `id` | int64 | 599M to 3.29B |
| `url` | string | lengths 58 to 61 |
| `html_url` | string | lengths 46 to 51 |
| `number` | int64 | 1 to 7.72k |
| `title` | string | lengths 1 to 290 |
| `state` | string | 2 values |
| `comments` | int64 | 0 to 70 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| `closed_at` | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| `user_login` | string | lengths 3 to 26 |
| `labels` | list | lengths 0 to 4 |
| `body` | string | lengths 0 to 228k |
| `is_pull_request` | bool | 2 classes |

First rows (most recent first):
#### [#3806](https://github.com/huggingface/datasets/pull/3806) Fix Spanish data file URL in wiki_lingua dataset
Pull request · closed · albertvillanova · 0 comments · created 2022-03-02T17:43:42 · updated 2022-03-03T08:38:17 · closed 2022-03-03T08:38:16

> This PR fixes the URL for Spanish data file. Previously, Spanish had the same URL as Vietnamese data file.

#### [#3805](https://github.com/huggingface/datasets/pull/3805) Remove decode: true for image feature in head_qa
Pull request · closed · craffel · 0 comments · created 2022-03-02T16:58:34 · updated 2022-03-07T12:13:36 · closed 2022-03-07T12:13:35

> This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it.

#### [#3804](https://github.com/huggingface/datasets/issues/3804) Text builder with custom separator line boundaries
Issue · open · cronoik · 6 comments · labels: enhancement · created 2022-03-02T14:50:16 · updated 2022-03-16T15:53:59

> **Is your feature request related to a problem? Please describe.** The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line bound...
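The issue above hinges on a Python detail: `str.splitlines()` treats many characters (vertical tab, form feed, file separator, etc.) as line boundaries, not just `"\n"`. A minimal pure-Python sketch of the difference, with a hypothetical `split_on` helper standing in for the requested option:

```python
# str.splitlines() splits on many Unicode line boundaries, not only "\n".
text = "one\ntwo\x0bthree\x1cfour"

def split_on(text, sep="\n"):
    """Hypothetical helper: split only on an explicit separator,
    as the requested Text-builder option would."""
    return text.split(sep)

print(text.splitlines())  # four pieces: \n, \x0b, and \x1c all split
print(split_on(text))     # two pieces: only \n splits
```

This is why a fixed `splitlines()` call can over-segment documents that legitimately contain such control characters.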
#### [#3803](https://github.com/huggingface/datasets/pull/3803) Remove deprecated methods/params (preparation for v2.0)
Pull request · closed · mariosasko · 0 comments · created 2022-03-02T14:29:12 · updated 2022-03-02T14:53:21 · closed 2022-03-02T14:53:21

> This PR removes the following deprecated methos/params: * `Dataset.cast_`/`DatasetDict.cast_` * `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_` * `Dataset.remove_columns_`/`DatasetDict.remove_columns_` * `Dataset.rename_columns_`/`DatasetDict.rename_columns_` * `prepare_module` * param...

#### [#3802](https://github.com/huggingface/datasets/pull/3802) Release of FairLex dataset
Pull request · closed · iliaschalkidis · 11 comments · created 2022-03-02T10:40:18 · updated 2022-03-02T15:21:10 · closed 2022-03-02T15:18:54

> **FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing** We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Cou...

#### [#3801](https://github.com/huggingface/datasets/pull/3801) [Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters
Pull request · closed · lhoestq · 1 comment · created 2022-03-01T18:06:43 · updated 2022-03-07T16:30:30 · closed 2022-03-07T16:30:29

> Currently the datasets in streaming mode and in non-streaming mode have two distinct API for `map` processing. In this PR I'm aligning the two by changing `map` in streamign mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0** In particular, `Dataset.ma...

#### [#3800](https://github.com/huggingface/datasets/pull/3800) Added computer vision tasks
Pull request · closed · merveenoyan · 0 comments · created 2022-03-01T17:37:46 · updated 2022-03-04T07:15:55 · closed 2022-03-04T07:15:55

> Previous PR was in my fork so thought it'd be easier if I do it from a branch. Added computer vision task datasets according to HF tasks.

#### [#3799](https://github.com/huggingface/datasets/pull/3799) Xtreme-S Metrics
Pull request · closed · patrickvonplaten · 3 comments · created 2022-03-01T13:42:28 · updated 2022-03-16T14:40:29 · closed 2022-03-16T14:40:26

> **Added datasets (TODO)**: - [x] MLS - [x] Covost2 - [x] Minds-14 - [x] Voxpopuli - [x] FLoRes (need data) **Metrics**: Done

#### [#3798](https://github.com/huggingface/datasets/pull/3798) Fix error message in CSV loader for newer Pandas versions
Pull request · closed · mariosasko · 0 comments · created 2022-02-28T18:24:10 · updated 2022-02-28T18:51:39 · closed 2022-02-28T18:51:38

> Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this: ```python csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f ``` CC: @...

#### [#3797](https://github.com/huggingface/datasets/pull/3797) Reddit dataset card contribution
Pull request · closed · anna-kay · 0 comments · created 2022-02-28T17:53:18 · updated 2023-03-09T22:08:58 · closed 2022-03-01T12:58:57

> Description tags for webis-tldr-17 added.

#### [#3796](https://github.com/huggingface/datasets/pull/3796) Skip checksum computation if `ignore_verifications` is `True`
Pull request · closed · mariosasko · 0 comments · created 2022-02-28T16:28:45 · updated 2022-02-28T17:03:46 · closed 2022-02-28T17:03:46

> This will speed up the loading of the datasets where the number of data files is large (can easily happen with `imagefoler`, for instance)
#### [#3795](https://github.com/huggingface/datasets/issues/3795) can not flatten natural_questions dataset
Issue · closed · Hannibal046 · 2 comments · labels: bug · created 2022-02-27T13:57:40 · updated 2022-03-21T14:36:12 · closed 2022-03-21T14:36:12

> ## Describe the bug after downloading the natural_questions dataset, can not flatten the dataset considering there are `long answer` and `short answer` in `annotations`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('natural_questions',cache_dir = 'data/datase...

#### [#3794](https://github.com/huggingface/datasets/pull/3794) Add Mahalanobis distance metric
Pull request · closed · JoaoLages · 0 comments · created 2022-02-27T10:56:31 · updated 2022-03-02T14:46:15 · closed 2022-03-02T14:46:15

> Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P. In this PR I implement the metric in a simple way with the help of numpy only. Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can mak...
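A numpy-only Mahalanobis distance, in the spirit of what #3794 describes (a sketch, not the code actually merged in the PR): the distance from a point x to a distribution is the Euclidean distance after whitening by the inverse covariance of the reference data.

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance from point(s) x to the distribution of `data`.
    `data` is an (n_samples, n_features) array; returns one distance per row of x."""
    mu = data.mean(axis=0)
    # pinv is used instead of inv so a singular covariance doesn't raise.
    cov_inv = np.linalg.pinv(np.cov(data, rowvar=False))
    delta = np.atleast_2d(x) - mu
    # Quadratic form delta @ cov_inv @ delta.T, taken row-wise.
    return np.sqrt(np.einsum("ij,jk,ik->i", delta, cov_inv, delta))
```

By construction, the distance from the sample mean is zero, and directions with larger variance count for less.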
#### [#3793](https://github.com/huggingface/datasets/pull/3793) Docs new UI actions no self hosted
Pull request · closed · LysandreJik · 8 comments · created 2022-02-25T23:48:55 · updated 2022-03-01T15:55:29 · closed 2022-03-01T15:55:28

> Removes the need to have a self-hosted runner for the dev documentation

#### [#3792](https://github.com/huggingface/datasets/issues/3792) Checksums didn't match for dataset source
Issue · closed · rafikg · 26 comments · labels: dataset-viewer · created 2022-02-25T19:55:09 · updated 2024-03-13T12:25:08 · closed 2022-02-28T08:44:18

> ## Dataset viewer issue for 'wiki_lingua*' **Link:** *link to the dataset viewer page* `data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]") ` *short description of the issue* ``` [NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.co...

#### [#3791](https://github.com/huggingface/datasets/pull/3791) Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem
Pull request · closed · mariosasko · 0 comments · created 2022-02-25T18:26:35 · updated 2022-03-01T13:10:43 · closed 2022-03-01T13:10:42

> As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Addition...

#### [#3790](https://github.com/huggingface/datasets/pull/3790) Add doc builder scripts
Pull request · closed · lhoestq · 3 comments · created 2022-02-25T16:38:47 · updated 2022-03-01T15:55:42 · closed 2022-03-01T15:55:41

> I added the three scripts: - build_dev_documentation.yml - build_documentation.yml - delete_dev_documentation.yml I got them from `transformers` and did a few changes: - I removed the `transformers`-specific dependencies - I changed all the paths to be "datasets" instead of "transformers" - I passed the `--lib...

#### [#3789](https://github.com/huggingface/datasets/pull/3789) Add URL and ID fields to Wikipedia dataset
Pull request · closed · albertvillanova · 3 comments · created 2022-02-25T15:34:37 · updated 2022-03-04T08:24:24 · closed 2022-03-04T08:24:23

> This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using. About the conversion from title to URL, I found that apart from replacing blanks with underscores, som...

#### [#3788](https://github.com/huggingface/datasets/issues/3788) Only-data dataset loaded unexpectedly as validation split
Issue · open · albertvillanova · 7 comments · labels: bug · created 2022-02-25T12:11:39 · updated 2022-02-28T11:22:22

> ## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`.

#### [#3787](https://github.com/huggingface/datasets/pull/3787) Fix Google Drive URL to avoid Virus scan warning
Pull request · closed · albertvillanova · 3 comments · created 2022-02-25T09:35:12 · updated 2022-03-04T20:43:32 · closed 2022-02-25T11:56:35

> This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs. Fix #3786, fix #3784.

#### [#3786](https://github.com/huggingface/datasets/issues/3786) Bug downloading Virus scan warning page from Google Drive URLs
Issue · closed · albertvillanova · 1 comment · labels: bug · created 2022-02-25T09:32:23 · updated 2022-03-03T09:25:59 · closed 2022-02-25T11:56:35

> ## Describe the bug Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself. See: - #3758 - #3773 - #3784

#### [#3785](https://github.com/huggingface/datasets/pull/3785) Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset)
Pull request · closed · AngadSethi · 8 comments · created 2022-02-25T05:48:57 · updated 2022-03-03T16:43:47 · closed 2022-03-03T14:03:37

> This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets. So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ The new link now looks li...
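The workaround #3785 describes (appending an extra query parameter so Google Drive serves the file instead of the virus-scan interstitial) can be sketched with the standard library. The parameter name `confirm=t` and the helper name are assumptions for illustration; they are not taken from the PR's diff:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def add_confirm_param(url):
    """Hypothetical helper: append a `confirm=t` query parameter to a
    Google Drive download URL, the kind of tweak this PR applies."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["confirm"] = "t"
    return urlunparse(parts._replace(query=urlencode(query)))
```

Parsing and re-encoding the query string (rather than naive string concatenation) keeps the URL valid even if it already contains a `confirm` parameter.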
#### [#3784](https://github.com/huggingface/datasets/issues/3784) Unable to Download CNN-Dailymail Dataset
Issue · closed · AngadSethi · 4 comments · labels: bug · created 2022-02-25T05:24:47 · updated 2022-03-03T14:05:17 · closed 2022-03-03T14:05:17

> ## Describe the bug I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening: - The dataset sits in Google Drive, and both the CNN and DM datasets are large. - Google is unable to scan the folder for viruses, **so the link which would originally download the dat...

#### [#3783](https://github.com/huggingface/datasets/pull/3783) Support passing str to iter_files
Pull request · closed · albertvillanova · 1 comment · created 2022-02-24T12:58:15 · updated 2022-02-24T16:01:40 · closed 2022-02-24T16:01:40

#### [#3782](https://github.com/huggingface/datasets/pull/3782) Error of writing with different schema, due to nonpreservation of nullability
Pull request · closed · richarddwang · 1 comment · created 2022-02-24T08:23:07 · updated 2022-03-03T14:54:39 · closed 2022-03-03T14:54:39

> ## 1. Case ``` dataset.map( batched=True, disable_nullable=True, ) ``` will get the following error at here https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516 `pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema` ...

#### [#3781](https://github.com/huggingface/datasets/pull/3781) Reddit dataset card additions
Pull request · closed · anna-kay · 1 comment · created 2022-02-23T21:29:16 · updated 2022-02-28T18:00:40 · closed 2022-02-28T11:21:14

> The changes proposed are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper & https://zenodo.org/record/1043504#.YhaKHpbQC38 It is a Reddit dataset indeed, but the name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps it should be modified as well. The task at which t...

#### [#3780](https://github.com/huggingface/datasets/pull/3780) Add ElkarHizketak v1.0 dataset
Pull request · closed · antxa · 1 comment · created 2022-02-23T14:44:17 · updated 2022-03-04T19:04:29 · closed 2022-03-04T19:04:29

#### [#3779](https://github.com/huggingface/datasets/pull/3779) Update manual download URL in newsroom dataset
Pull request · closed · albertvillanova · 0 comments · created 2022-02-23T12:49:07 · updated 2022-02-23T13:26:41 · closed 2022-02-23T13:26:40

> Fix #3778.

#### [#3778](https://github.com/huggingface/datasets/issues/3778) Not be able to download dataset - "Newsroom"
Issue · closed · Darshan2104 · 2 comments · labels: dataset bug · created 2022-02-23T10:15:50 · updated 2022-02-23T17:05:04 · closed 2022-02-23T13:26:40

> Hello, I tried to download the **newsroom** dataset but it didn't work out for me. it said me to **download it manually**! For manually, Link is also didn't work! It is sawing some ad or something! If anybody has solved this issue please help me out or if somebody has this dataset please share your google driv...

#### [#3777](https://github.com/huggingface/datasets/pull/3777) Start removing canonical datasets logic
Pull request · closed · lhoestq · 3 comments · created 2022-02-22T18:23:30 · updated 2022-02-24T15:04:37 · closed 2022-02-24T15:04:36

> I updated the source code and the documentation to start removing the "canonical datasets" logic. Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly. ### Changes - the documentation about dataset ...
#### [#3776](https://github.com/huggingface/datasets/issues/3776) Allow download only some files from the Wikipedia dataset
Issue · open · jvanz · 1 comment · labels: enhancement · created 2022-02-22T13:46:41 · updated 2022-02-22T14:50:02

> **Is your feature request related to a problem? Please describe.** The Wikipedia dataset can be really big. This is a problem if you want to use it locally in a laptop with the Apache Beam `DirectRunner`. Even if your laptop have a considerable amount of memory (e.g. 32gb). **Describe the solution you'd like** I...

#### [#3775](https://github.com/huggingface/datasets/pull/3775) Update gigaword card and info
Pull request · closed · mariosasko · 3 comments · created 2022-02-22T12:27:16 · updated 2022-02-28T11:35:24 · closed 2022-02-28T11:35:24

> Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999

#### [#3774](https://github.com/huggingface/datasets/pull/3774) Fix reddit_tifu data URL
Pull request · closed · albertvillanova · 0 comments · created 2022-02-22T12:21:15 · updated 2022-02-22T12:38:45 · closed 2022-02-22T12:38:44

> Fix #3773.

#### [#3773](https://github.com/huggingface/datasets/issues/3773) Checksum mismatch for the reddit_tifu dataset
Issue · closed · anna-kay · 4 comments · labels: bug · created 2022-02-22T10:57:07 · updated 2022-02-25T19:27:49 · closed 2022-02-22T12:38:44

> ## Describe the bug A checksum occurs when downloading the reddit_tifu data (both long & short). ## Steps to reproduce the bug reddit_tifu_dataset = load_dataset('reddit_tifu', 'long') ## Expected results The expected result is for the dataset to be downloaded and cached locally. ## Actual results File "...

#### [#3772](https://github.com/huggingface/datasets/pull/3772) Fix: dataset name is stored in keys
Pull request · closed · thomasw21 · 0 comments · created 2022-02-22T10:20:37 · updated 2022-02-22T11:08:34 · closed 2022-02-22T11:08:33

#### [#3771](https://github.com/huggingface/datasets/pull/3771) Fix DuplicatedKeysError on msr_sqa dataset
Pull request · closed · albertvillanova · 0 comments · created 2022-02-22T07:44:24 · updated 2022-02-22T08:12:40 · closed 2022-02-22T08:12:39

> Fix #3770.

#### [#3770](https://github.com/huggingface/datasets/issues/3770) DuplicatedKeysError on msr_sqa dataset
Issue · closed · kolk · 1 comment · created 2022-02-22T00:43:33 · updated 2022-02-22T08:12:39 · closed 2022-02-22T08:12:39

> ### Describe the bug Failure to generate dataset msr_sqa because of duplicate keys. ### Steps to reproduce the bug ``` from datasets import load_dataset load_dataset("msr_sqa") ``` ### Expected results The examples keys should be unique. **Actual results** ``` >>> load_dataset("msr_sqa") Downloading: 6...

#### [#3769](https://github.com/huggingface/datasets/issues/3769) `dataset = dataset.map()` causes faiss index lost
Issue · open · Oaklight · 3 comments · labels: bug · created 2022-02-21T21:59:23 · updated 2022-06-27T14:56:29

> ## Describe the bug assigning the resulted dataset to original dataset causes lost of the faiss index ## Steps to reproduce the bug `my_dataset` is a regular loaded dataset. It's a part of a customed dataset structure ```python self.dataset.add_faiss_index('embeddings') self.dataset.list_indexes() # ['embeddin...
#### [#3768](https://github.com/huggingface/datasets/pull/3768) Fix HfFileSystem docstring
Pull request · closed · lhoestq · 0 comments · created 2022-02-21T18:14:40 · updated 2022-02-22T09:13:03 · closed 2022-02-22T09:13:02

#### [#3767](https://github.com/huggingface/datasets/pull/3767) Expose method and fix param
Pull request · closed · severo · 0 comments · created 2022-02-21T16:57:47 · updated 2022-02-22T08:35:03 · closed 2022-02-22T08:35:02

> A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670

#### [#3766](https://github.com/huggingface/datasets/pull/3766) Fix head_qa data URL
Pull request · closed · albertvillanova · 0 comments · created 2022-02-21T13:52:50 · updated 2022-02-21T14:39:20 · closed 2022-02-21T14:39:19

> Fix #3758.

#### [#3765](https://github.com/huggingface/datasets/pull/3765) Update URL for tagging app
Pull request · closed · lewtun · 1 comment · created 2022-02-20T20:34:31 · updated 2022-02-20T20:36:10 · closed 2022-02-20T20:36:06

> This PR updates the URL for the tagging app to be the one on Spaces.

#### [#3764](https://github.com/huggingface/datasets/issues/3764) !
Issue · closed · LesiaFedorenko · 0 comments · labels: dataset-viewer · created 2022-02-20T19:05:43 · updated 2022-02-21T08:55:58 · closed 2022-02-21T08:55:58

> ## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No

#### [#3763](https://github.com/huggingface/datasets/issues/3763) It's not possible download `20200501.pt` dataset
Issue · closed · jvanz · 2 comments · labels: bug · created 2022-02-20T18:34:58 · updated 2022-02-21T12:06:12 · closed 2022-02-21T09:25:06

> ## Describe the bug The dataset `20200501.pt` is broken. The available datasets: https://dumps.wikimedia.org/ptwiki/ ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') ``` ## Expected results I expect t...

#### [#3762](https://github.com/huggingface/datasets/issues/3762) `Dataset.class_encode` should support custom class names
Issue · closed · Dref360 · 3 comments · labels: enhancement · created 2022-02-19T21:21:45 · updated 2022-02-21T12:16:35 · closed 2022-02-21T12:16:35

> I can make a PR, just wanted approval before starting. **Is your feature request related to a problem? Please describe.** It is often the case that classes are not ordered in alphabetical order. Current `class_encode_column` sort the classes before indexing. https://github.com/huggingface/datasets/blob/master/sr...
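What #3762 asks for can be illustrated in a few lines of plain Python: encode labels to integer ids using a caller-supplied ordering, falling back to the alphabetical sort that `class_encode_column` applies today. The helper is hypothetical, not the `datasets` API:

```python
def encode_labels(values, class_names=None):
    """Hypothetical sketch of a `names` argument for class encoding:
    use the caller's ordering instead of sorting classes alphabetically."""
    if class_names is None:
        class_names = sorted(set(values))  # current behavior: alphabetical
    index = {name: i for i, name in enumerate(class_names)}
    return [index[v] for v in values], class_names
```

With an explicit ordering like `["negative", "neutral", "positive"]`, the integer ids preserve the semantic order of the classes rather than their lexicographic one.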
#### [#3761](https://github.com/huggingface/datasets/issues/3761) Know your data for HF hub
Issue · closed · Muhtasham · 1 comment · labels: enhancement · created 2022-02-19T19:48:47 · updated 2022-02-21T14:15:23 · closed 2022-02-21T14:15:23

> **Is your feature request related to a problem? Please describe.** Would be great to see be able to understand datasets with the goal of improving data quality, and helping mitigate fairness and bias issues. **Describe the solution you'd like** Something like https://knowyourdata.withgoogle.com/ for HF hub

#### [#3760](https://github.com/huggingface/datasets/issues/3760) Unable to view the Gradio flagged call back dataset
Issue · closed · kingabzpro · 5 comments · labels: dataset-viewer · created 2022-02-19T17:45:08 · updated 2022-03-22T07:12:11 · closed 2022-03-22T07:12:11

> ## Dataset viewer issue for '*savtadepth-flags*' **Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)* *with the Gradio 2.8.1 the dataset viers stopped working. I tried to add values manually but its not working. The dataset is also not showing the link with the app https://h...

#### [#3759](https://github.com/huggingface/datasets/pull/3759) Rename GenerateMode to DownloadMode
Pull request · closed · albertvillanova · 1 comment · created 2022-02-18T16:53:53 · updated 2022-02-22T13:57:24 · closed 2022-02-22T12:22:52

> This PR: - Renames `GenerateMode` to `DownloadMode` - Implements `DeprecatedEnum` - Deprecates `GenerateMode` Close #769.

#### [#3758](https://github.com/huggingface/datasets/issues/3758) head_qa file missing
Issue · closed · severo · 2 comments · labels: bug · created 2022-02-18T16:32:43 · updated 2022-02-28T14:29:18 · closed 2022-02-21T14:39:19

> ## Describe the bug A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json) ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("head_qa", name="en") ``` ## Expec...

#### [#3757](https://github.com/huggingface/datasets/pull/3757) Add perplexity to metrics
Pull request · closed · emibaylor · 2 comments · created 2022-02-18T15:52:23 · updated 2022-02-25T17:13:34 · closed 2022-02-25T17:13:34

> Adding perplexity metric This code differs from the code in [this](https://huggingface.co/docs/transformers/perplexity) HF blog post because the blogpost code fails in at least the following circumstances: - returns nans whenever the stride = 1 - hits a runtime error when the stride is significantly larger than th...
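The quantity #3757 adds is standard: perplexity is the exponential of the negative mean per-token log-likelihood. A generic sketch of the definition (not the stride-based implementation the PR contributes):

```python
import math

def perplexity(log_probs):
    """Perplexity from per-token natural log-probabilities:
    exp(-mean(log p)). Lower is better; a uniform model over k
    symbols has perplexity exactly k."""
    return math.exp(-sum(log_probs) / len(log_probs))
```

For example, a model assigning probability 1/4 to every token yields perplexity 4, matching the intuition that the model is "choosing among 4 options" at each step.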
#### [#3756](https://github.com/huggingface/datasets/issues/3756) Images get decoded when using `map()` with `input_columns` argument on a dataset
Issue · closed · kklemon · 2 comments · labels: bug · created 2022-02-18T15:35:38 · updated 2022-12-13T16:59:06 · closed 2022-12-13T16:59:06

> ## Describe the bug The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances. However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image ...

#### [#3755](https://github.com/huggingface/datasets/issues/3755) Cannot preview dataset
Issue · closed · frascuchon · 3 comments · labels: dataset-viewer · created 2022-02-18T13:06:45 · updated 2022-02-19T14:30:28 · closed 2022-02-18T15:41:33

> ## Dataset viewer issue for '*rubrix/news*' **Link:https://huggingface.co/datasets/rubrix/news** *link to the dataset viewer page* Cannot see the dataset preview: ``` Status code: 400 Exception: Status400Error Message: Not found. Cache is waiting to be refreshed. ``` Am I the one who added thi...

#### [#3754](https://github.com/huggingface/datasets/issues/3754) Overflowing indices in `select`
Issue · closed · lvwerra · 2 comments · labels: bug · created 2022-02-18T11:30:52 · updated 2022-02-18T11:38:23 · closed 2022-02-18T11:38:23

> ## Describe the bug The `Dataset.select` function seems to accept indices that are larger than the dataset size and seems to effectively use `index %len(ds)`. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"test": [1,2,3]}) ds = ds.select(range(5)) print(ds) p...
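The fix #3754 implies (reject out-of-range indices instead of silently wrapping them modulo the dataset size) can be shown with plain lists; `safe_select` is an illustrative helper, not the library's `Dataset.select`:

```python
def safe_select(rows, indices):
    """Bounds-checked selection: raise instead of silently computing
    index % len(rows), which is the surprising behavior the issue reports."""
    n = len(rows)
    bad = [i for i in indices if not 0 <= i < n]
    if bad:
        raise IndexError(f"indices out of range for dataset of size {n}: {bad}")
    return [rows[i] for i in indices]
```

Failing loudly here is the safer contract: silent wrap-around duplicates rows and hides off-by-one bugs in the caller.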
#### [#3753](https://github.com/huggingface/datasets/issues/3753) Expanding streaming capabilities
Issue · open · lvwerra · 8 comments · labels: enhancement · created 2022-02-18T10:45:41 · updated 2025-03-19T14:50:14

> Some ideas for a few features that could be useful when working with large datasets in streaming mode. ## `filter` for `IterableDataset` Adding filtering to streaming datasets would be useful in several scenarios: - filter a dataset with many languages for a subset of languages - filter a dataset for specific li...

#### [#3752](https://github.com/huggingface/datasets/pull/3752) Update metadata JSON for cats_vs_dogs dataset
Pull request · closed · albertvillanova · 0 comments · created 2022-02-18T08:32:53 · updated 2022-02-18T14:56:12 · closed 2022-02-18T14:56:11

> Note that the number of examples in the train split was already fixed in the dataset card. Fix #3750.

#### [#3751](https://github.com/huggingface/datasets/pull/3751) Fix typo in train split name
Pull request · closed · albertvillanova · 0 comments · created 2022-02-18T08:18:04 · updated 2022-02-18T14:28:52 · closed 2022-02-18T14:28:52

> In the README guide (and consequently in many datasets) there was a typo in the train split name: ``` | Tain | Valid | Test | ``` This PR: - fixes the typo in the train split name - fixes the column alignment of the split tables in the README guide and in all datasets.

#### [#3750](https://github.com/huggingface/datasets/issues/3750) `NonMatchingSplitsSizesError` for cats_vs_dogs dataset
Issue · closed · jaketae · 1 comment · labels: bug · created 2022-02-18T05:46:39 · updated 2022-02-18T14:56:11 · closed 2022-02-18T14:56:11

> ## Describe the bug Cannot download cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results Loading is successful. ## Actual results ``` NonMatchingSplitsSiz...

#### [#3749](https://github.com/huggingface/datasets/pull/3749) Add tqdm arguments
Pull request · closed · penguinwang96825 · 6 comments · created 2022-02-18T01:34:46 · updated 2022-03-08T09:38:48 · closed 2022-03-08T09:38:48

> In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible.
1,142,128,763
https://api.github.com/repos/huggingface/datasets/issues/3748
https://github.com/huggingface/datasets/pull/3748
3,748
Add tqdm arguments
closed
0
2022-02-18T00:47:55
2022-02-18T00:59:15
2022-02-18T00:59:15
penguinwang96825
[]
In this PR, there are two changes. 1. It is able to show the progress bar by adding the length of the iterator. 2. Pass in tqdm_kwargs so that can enable more feasibility for the control of tqdm library.
true
1,141,688,854
https://api.github.com/repos/huggingface/datasets/issues/3747
https://github.com/huggingface/datasets/issues/3747
3,747
Passing invalid subset should throw an error
open
0
2022-02-17T18:16:11
2022-02-17T18:16:11
null
jxmorris12
[ "bug" ]
## Describe the bug Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('rotten_tomatoes', 'asdfasdfa') ``` ## Expected results This should break, since ...
false
1,141,612,810
https://api.github.com/repos/huggingface/datasets/issues/3746
https://github.com/huggingface/datasets/pull/3746
3,746
Use the same seed to shuffle shards and metadata in streaming mode
closed
0
2022-02-17T17:06:31
2022-02-23T15:00:59
2022-02-23T15:00:58
lhoestq
[]
When shuffling in streaming mode, those two entangled lists are shuffled independently. In this PR I changed this to shuffle the lists of same length with the exact same seed, in order for the files and metadata to still be aligned. ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename...
true
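The alignment trick in #3746 relies on a property of seeded shuffling: two equal-length lists shuffled by generators seeded identically undergo the same permutation. A standard-library sketch of the idea (the helper name is illustrative, not the PR's code):

```python
import random

def shuffle_aligned(files, metadata, seed):
    """Shuffle two parallel lists with the same seed so corresponding
    entries stay aligned, as the PR does for shard files and metadata."""
    shuffled_files = list(files)
    random.Random(seed).shuffle(shuffled_files)
    shuffled_meta = list(metadata)
    random.Random(seed).shuffle(shuffled_meta)  # fresh generator, same seed
    return shuffled_files, shuffled_meta
```

The key detail is constructing a *fresh* generator per list: reusing one generator for both `shuffle` calls would advance its state and produce two different permutations.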
#### [#3745](https://github.com/huggingface/datasets/pull/3745) Add mIoU metric
Pull request · closed · NielsRogge · 3 comments · created 2022-02-17T15:52:17 · updated 2022-03-08T13:20:26 · closed 2022-03-08T13:20:26

> This PR adds the mean Intersection-over-Union metric to the library, useful for tasks like semantic segmentation. It is entirely based on mmseg's [implementation](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/core/evaluation/metrics.py). I've removed any PyTorch dependency, and rely on Numpy only...
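The metric #3745 adds can be sketched in numpy alone, in the same spirit as the PR (simplified: no `ignore_index`, no per-class reduction options, and not the mmseg code the PR is actually based on):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over classes for flat label arrays.
    Classes absent from both prediction and target are skipped."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```

Each class contributes `|pred ∩ target| / |pred ∪ target|`, and the mean over present classes keeps rare classes from being drowned out by the background class.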
#### [#3744](https://github.com/huggingface/datasets/issues/3744) Better shards shuffling in streaming mode
Issue · closed · lhoestq · 0 comments · labels: enhancement, streaming · created 2022-02-17T15:07:21 · updated 2022-02-23T15:00:58 · closed 2022-02-23T15:00:58

> Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`: ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename in all_files], "me...

#### [#3743](https://github.com/huggingface/datasets/pull/3743) initial monash time series forecasting repository
Pull request · closed · kashif · 3 comments · created 2022-02-17T10:51:31 · updated 2022-03-21T09:54:41 · closed 2022-03-21T09:50:16

#### [#3742](https://github.com/huggingface/datasets/pull/3742) Fix ValueError message formatting in int2str
Pull request · closed · aaakulchyk · 0 comments · created 2022-02-17T10:50:08 · updated 2022-02-17T15:32:02 · closed 2022-02-17T15:32:02

> Hi! I bumped into this particular `ValueError` during my work (because an instance of `np.int64` was passed instead of regular Python `int`), and so I had to `print(type(values))` myself. Apparently, it's just the missing `f` to make message an f-string. It ain't much for a contribution, but it's honest work. Hop...

#### [#3741](https://github.com/huggingface/datasets/pull/3741) Rm sphinx doc
Pull request · closed · mishig25 · 0 comments · created 2022-02-17T10:11:37 · updated 2022-02-17T10:15:17 · closed 2022-02-17T10:15:12

> Checklist - [x] Update circle ci yaml - [x] Delete sphinx static & python files in docs dir - [x] Update readme in docs dir - [ ] Update docs config in setup.py

#### [#3740](https://github.com/huggingface/datasets/pull/3740) Support streaming for pubmed
Pull request · closed · abhi-mosaic · 3 comments · created 2022-02-17T00:18:22 · updated 2022-02-18T14:42:13 · closed 2022-02-18T14:42:13

> This PR makes some minor changes to the `pubmed` dataset to allow for `streaming=True`. Fixes #3739. Basically, I followed the C4 dataset which works in streaming mode as an example, and made the following changes: * Change URL prefix from `ftp://` to `https://` * Explicilty `open` the filename and pass the XML ...

#### [#3739](https://github.com/huggingface/datasets/issues/3739) Pubmed dataset does not work in streaming mode
Issue · closed · abhi-mosaic · 1 comment · labels: bug · created 2022-02-16T17:13:37 · updated 2022-02-18T14:42:13 · closed 2022-02-18T14:42:13

> ## Describe the bug Trying to use the `pubmed` dataset with `streaming=True` fails. ## Steps to reproduce the bug ```python import datasets pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True) print (next(iter(pubmed_train))) ``` ## Expected results I would expect to see the first ...

#### [#3738](https://github.com/huggingface/datasets/issues/3738) For data-only datasets, streaming and non-streaming don't behave the same
Issue · open · severo · 9 comments · labels: bug · created 2022-02-16T15:20:57 · updated 2022-02-21T14:24:55

> See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files. In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys: ```python import datasets as ds iterable_dataset = ds.load_dataset("huggingface/transformers-metadat...
false
1,140,148,050
https://api.github.com/repos/huggingface/datasets/issues/3737
https://github.com/huggingface/datasets/pull/3737
3,737
Make RedCaps streamable
closed
0
2022-02-16T15:12:23
2022-02-16T15:28:38
2022-02-16T15:28:37
mariosasko
[]
Make RedCaps streamable. @lhoestq Using `data/redcaps_v1.0_annotations.zip` as a download URL gives an error locally when running `datasets-cli test` (will investigate this another time)
true
1,140,134,483
https://api.github.com/repos/huggingface/datasets/issues/3736
https://github.com/huggingface/datasets/pull/3736
3,736
Local paths in common voice
closed
2
2022-02-16T15:01:29
2022-09-21T14:58:38
2022-02-22T09:13:43
lhoestq
[]
Continuation of https://github.com/huggingface/datasets/pull/3664: - pass the `streaming` parameter to _split_generator - update @anton-l's code to use this parameter for `common_voice` - add a comment to explain why we use `download_and_extract` in non-streaming and `iter_archive` in streaming Now the `common_...
true
1,140,087,891
https://api.github.com/repos/huggingface/datasets/issues/3735
https://github.com/huggingface/datasets/issues/3735
3,735
Performance of `datasets` at scale
open
6
2022-02-16T14:23:32
2024-06-27T01:17:48
null
lvwerra
[]
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The da...
false
1,140,050,336
https://api.github.com/repos/huggingface/datasets/issues/3734
https://github.com/huggingface/datasets/pull/3734
3,734
Fix bugs in NewsQA dataset
closed
0
2022-02-16T13:51:28
2022-02-17T07:54:26
2022-02-17T07:54:25
albertvillanova
[]
Fix #3733.
true
1,140,011,378
https://api.github.com/repos/huggingface/datasets/issues/3733
https://github.com/huggingface/datasets/issues/3733
3,733
Bugs in NewsQA dataset
closed
0
2022-02-16T13:17:37
2022-02-17T07:54:25
2022-02-17T07:54:25
albertvillanova
[ "bug" ]
## Describe the bug NewsQA dataset has the following bugs: - the field `validated_answers` is an exact copy of the field `answers` but with the addition of `'count': [0]` to each dict - the field `badQuestion` does not appear in `answers` nor `validated_answers` ## Steps to reproduce the bug By inspecting the da...
false
1,140,004,022
https://api.github.com/repos/huggingface/datasets/issues/3732
https://github.com/huggingface/datasets/pull/3732
3,732
Support streaming in size estimation function in `push_to_hub`
closed
2
2022-02-16T13:10:48
2022-02-21T18:18:45
2022-02-21T18:18:44
mariosasko
[]
This PR adds the streamable version of `os.path.getsize` (`fsspec` can return `None`, so we fall back to `fs.open` to make it more robust) to account for possible streamable paths in the nested `extra_nbytes_visitor` function inside `push_to_hub`.
true
1,139,626,362
https://api.github.com/repos/huggingface/datasets/issues/3731
https://github.com/huggingface/datasets/pull/3731
3,731
Fix Multi-News dataset metadata and card
closed
0
2022-02-16T07:14:57
2022-02-16T08:48:47
2022-02-16T08:48:47
albertvillanova
[]
Fix #3730.
true
1,139,545,613
https://api.github.com/repos/huggingface/datasets/issues/3730
https://github.com/huggingface/datasets/issues/3730
3,730
Checksum Error when loading multi-news dataset
closed
1
2022-02-16T05:11:08
2022-02-16T20:05:06
2022-02-16T08:48:46
byw2
[ "bug" ]
## Describe the bug When using the load_dataset function from datasets module to load the Multi-News dataset, does not load the dataset but throws Checksum Error instead. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("multi_news") ``` ## Expected results ...
false
1,139,398,442
https://api.github.com/repos/huggingface/datasets/issues/3729
https://github.com/huggingface/datasets/issues/3729
3,729
Wrong number of examples when loading a text dataset
closed
2
2022-02-16T01:13:31
2022-03-15T16:16:09
2022-03-15T16:16:09
kg-nlp
[ "bug" ]
## Describe the bug when I use load_dataset to read a txt file I find that the number of the samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming...
false
1,139,303,614
https://api.github.com/repos/huggingface/datasets/issues/3728
https://github.com/huggingface/datasets/issues/3728
3,728
VoxPopuli
closed
1
2022-02-15T23:04:55
2022-02-16T18:49:12
2022-02-16T18:49:12
VictorSanh
[ "dataset request" ]
## Adding a Dataset - **Name:** VoxPopuli - **Description:** A Large-Scale Multilingual Speech Corpus - **Paper:** https://arxiv.org/pdf/2101.00390.pdf - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** one of the largest (if not the largest) multilingual speech corpus: 400K hours of multi...
false
1,138,979,732
https://api.github.com/repos/huggingface/datasets/issues/3727
https://github.com/huggingface/datasets/pull/3727
3,727
Patch all module attributes in its namespace
closed
0
2022-02-15T17:12:27
2022-02-17T17:06:18
2022-02-17T17:06:17
albertvillanova
[]
When patching module attributes, only those defined in its `__all__` variable were considered by default (only falling back to `__dict__` if `__all__` was None). However those are only a subset of all the module attributes in its namespace (`__dict__` variable). This PR fixes the problem of modules that have non-...
true
1,138,870,362
https://api.github.com/repos/huggingface/datasets/issues/3726
https://github.com/huggingface/datasets/pull/3726
3,726
Use config pandas version in CSV dataset builder
closed
0
2022-02-15T15:47:49
2022-02-15T16:55:45
2022-02-15T16:55:44
albertvillanova
[]
Fix #3724.
true
1,138,835,625
https://api.github.com/repos/huggingface/datasets/issues/3725
https://github.com/huggingface/datasets/pull/3725
3,725
Pin pandas to avoid bug in streaming mode
closed
0
2022-02-15T15:21:00
2022-02-15T15:52:38
2022-02-15T15:52:37
albertvillanova
[]
Temporarily pin pandas version to avoid bug in streaming mode (patching no longer works). Related to #3724.
true
1,138,827,681
https://api.github.com/repos/huggingface/datasets/issues/3724
https://github.com/huggingface/datasets/issues/3724
3,724
Bug while streaming CSV dataset with pandas 1.4
closed
0
2022-02-15T15:16:19
2022-02-15T16:55:44
2022-02-15T16:55:44
albertvillanova
[ "bug" ]
## Describe the bug If we upgrade to pandas `1.4`, the patching of the pandas module is no longer working ``` AttributeError: '_PatchedModuleObj' object has no attribute '__version__' ``` ## Steps to reproduce the bug ``` pip install pandas==1.4 ``` ```python from datasets import load_dataset ds = load_dat...
false
1,138,789,493
https://api.github.com/repos/huggingface/datasets/issues/3723
https://github.com/huggingface/datasets/pull/3723
3,723
Fix flatten of complex feature types
closed
2
2022-02-15T14:45:33
2022-03-18T17:32:26
2022-03-18T17:28:14
mariosasko
[]
Fix `flatten` for the following feature types: Image/Audio, Translation, and TranslationVariableLanguages. Inspired by `cast`/`table_cast`, I've introduced a `table_flatten` function to handle the Image/Audio types. CC: @SBrandeis Fix #3686.
true
1,138,770,211
https://api.github.com/repos/huggingface/datasets/issues/3722
https://github.com/huggingface/datasets/pull/3722
3,722
added electricity load diagram dataset
closed
0
2022-02-15T14:29:29
2022-02-16T18:53:21
2022-02-16T18:48:07
kashif
[]
Initial Electricity Load Diagram time series dataset.
true
1,137,617,108
https://api.github.com/repos/huggingface/datasets/issues/3721
https://github.com/huggingface/datasets/pull/3721
3,721
Multi-GPU support for `FaissIndex`
closed
5
2022-02-14T17:26:51
2022-03-07T16:28:57
2022-03-07T16:28:56
rentruewang
[]
Per #3716 , current implementation does not take into consideration that `faiss` can run on multiple GPUs. In this commit, I provided multi-GPU support for `FaissIndex` by modifying the device management in `IndexableMixin.add_faiss_index` and `FaissIndex.load`. Now users are able to pass in 1. a positive intege...
true
1,137,537,080
https://api.github.com/repos/huggingface/datasets/issues/3720
https://github.com/huggingface/datasets/issues/3720
3,720
Builder Configuration Update Required on Common Voice Dataset
closed
7
2022-02-14T16:21:41
2024-04-28T18:03:08
2024-04-28T18:03:08
aasem
[ "bug" ]
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't due to builder configuration not found. I checked the source file here for the languages support: ht...
false
1,137,237,622
https://api.github.com/repos/huggingface/datasets/issues/3719
https://github.com/huggingface/datasets/pull/3719
3,719
Check if indices values in `Dataset.select` are within bounds
closed
0
2022-02-14T12:31:41
2022-02-14T19:19:22
2022-02-14T19:19:22
mariosasko
[]
Fix #3707 Instead of reusing `_check_valid_index_key` from `datasets.formatting`, I defined a new function to provide a more meaningful error message.
true
1,137,196,388
https://api.github.com/repos/huggingface/datasets/issues/3718
https://github.com/huggingface/datasets/pull/3718
3,718
Fix Evidence Infer Treatment dataset
closed
0
2022-02-14T11:58:07
2022-02-14T13:21:45
2022-02-14T13:21:44
albertvillanova
[]
This PR: - fixes a bug in the script, by removing an unnamed column with the row index: fix KeyError - fix the metadata JSON, by adding both configurations (1.1 and 2.0): fix ExpectedMoreDownloadedFiles - updates the dataset card Fix #3515.
true
1,137,183,015
https://api.github.com/repos/huggingface/datasets/issues/3717
https://github.com/huggingface/datasets/issues/3717
3,717
wrong condition in `Features ClassLabel encode_example`
closed
1
2022-02-14T11:44:35
2022-02-14T15:09:36
2022-02-14T15:07:43
Tudyx
[ "bug" ]
## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}") ``` ## Expected results The `not - 1` co...
false
1,136,831,092
https://api.github.com/repos/huggingface/datasets/issues/3716
https://github.com/huggingface/datasets/issues/3716
3,716
`FaissIndex` to support multiple GPU and `custom_index`
closed
2
2022-02-14T06:21:43
2022-03-07T16:28:56
2022-03-07T16:28:56
rentruewang
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Currently, because `device` is of the type `int | None`, to leverage `faiss-gpu`'s multi-gpu support, you need to create a `custom_index`. However, if using a `custom_index` created by e.g. `faiss.index_cpu_to_all_gpus`, then `FaissIndex.save` does not ...
false
1,136,107,879
https://api.github.com/repos/huggingface/datasets/issues/3715
https://github.com/huggingface/datasets/pull/3715
3,715
Fix bugs in msr_sqa dataset
closed
5
2022-02-13T16:37:30
2022-10-03T09:10:02
2022-10-03T09:08:06
Timothyxxx
[ "dataset contribution" ]
The last version has many problems, 1) Errors in table load-in. Split by a single comma instead of using pandas is wrong. 2) id reduplicated in _generate_examples function. 3) Missing information of history questions which make it hard to use. I fix it refer to https://github.com/HKUNLP/UnifiedSKG. And we test ...
true
1,136,105,530
https://api.github.com/repos/huggingface/datasets/issues/3714
https://github.com/huggingface/datasets/issues/3714
3,714
tatoeba_mt: File not found error and key error
closed
1
2022-02-13T16:35:45
2022-02-13T20:44:04
2022-02-13T20:44:04
jorgtied
[ "dataset-viewer" ]
## Dataset viewer issue for 'tatoeba_mt' **Link:** https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt My data loader script does not seem to work. The files are part of the local repository but cannot be found. An example where it should work is the subset for "afr-eng". Another problem is that I do not ...
false
1,135,692,572
https://api.github.com/repos/huggingface/datasets/issues/3713
https://github.com/huggingface/datasets/pull/3713
3,713
Rm sphinx doc
closed
2
2022-02-13T11:26:31
2022-02-17T10:18:46
2022-02-17T10:12:09
mishig25
[]
Checklist - [x] Update circle ci yaml - [x] Delete sphinx static & python files in docs dir - [x] Update readme in docs dir - [ ] Update docs config in setup.py
true
1,134,252,505
https://api.github.com/repos/huggingface/datasets/issues/3712
https://github.com/huggingface/datasets/pull/3712
3,712
Fix the error of msr_sqa dataset
closed
0
2022-02-12T16:27:54
2022-02-13T11:21:05
2022-02-13T11:21:05
Timothyxxx
[]
Fix the error of _load_table_data function in msr_sqa dataset, it is wrong to use comma to split each row.
true
1,134,050,545
https://api.github.com/repos/huggingface/datasets/issues/3711
https://github.com/huggingface/datasets/pull/3711
3,711
Fix the error of _load_table_data function in msr_sqa dataset
closed
0
2022-02-12T13:20:53
2022-02-12T13:30:43
2022-02-12T13:30:43
Timothyxxx
[]
The _load_table_data function from the last version is wrong, it is wrong to use comma to split each row.
true
1,133,955,393
https://api.github.com/repos/huggingface/datasets/issues/3710
https://github.com/huggingface/datasets/pull/3710
3,710
Fix CI code quality issue
closed
0
2022-02-12T12:05:39
2022-02-12T12:58:05
2022-02-12T12:58:04
albertvillanova
[]
Fix CI code quality issue introduced by #3695.
true
1,132,997,904
https://api.github.com/repos/huggingface/datasets/issues/3709
https://github.com/huggingface/datasets/pull/3709
3,709
Set base path to hub url for canonical datasets
closed
1
2022-02-11T19:23:20
2022-02-16T14:02:28
2022-02-16T14:02:27
lhoestq
[]
This should allow canonical datasets to use relative paths to download data files from the Hub cc @polinaeterna this will be useful if we have audio datasets that are canonical and for which you'd like to host data files
true
1,132,968,402
https://api.github.com/repos/huggingface/datasets/issues/3708
https://github.com/huggingface/datasets/issues/3708
3,708
Loading JSON gets stuck with many workers/threads
open
8
2022-02-11T18:50:48
2023-06-16T11:24:12
null
lvwerra
[ "bug" ]
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from dat...
false
1,132,741,903
https://api.github.com/repos/huggingface/datasets/issues/3707
https://github.com/huggingface/datasets/issues/3707
3,707
`.select`: unexpected behavior with `indices`
closed
2
2022-02-11T15:20:01
2022-02-14T19:19:21
2022-02-14T19:19:21
gabegma
[ "bug" ]
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": [...
false