Schema (one row per issue):

column           type             min                  max
id               int64            599M                 3.29B
url              string (length)  58                   61
html_url         string (length)  46                   51
number           int64            1                    7.72k
title            string (length)  1                    290
state            string           2 values
comments         int64            0                    70
created_at       timestamp[s]     2020-04-14 10:18:02  2025-08-05 09:28:51
updated_at       timestamp[s]     2020-04-27 16:04:17  2025-08-05 11:39:56
closed_at        timestamp[s]     2020-04-14 12:01:40  2025-08-01 05:15:45
user_login       string (length)  3                    26
labels           list (length)    0                    4
body             string (length)  0                    228k
is_pull_request  bool             2 classes
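The column layout above can be modeled as a small typed record. A minimal sketch in plain Python, with field names taken from the schema and everything else (types as strings for timestamps, the toy `validate` check, the sample values) assumed for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IssueRow:
    """One row of the issues dump; field names mirror the schema above."""
    id: int
    url: str                  # API URL
    html_url: str             # web URL
    number: int
    title: str
    state: str                # one of 2 values: "open" or "closed"
    comments: int
    created_at: str           # kept as ISO-8601 strings for simplicity
    updated_at: str
    closed_at: Optional[str]  # null while the issue is still open
    user_login: str
    labels: list = field(default_factory=list)
    body: str = ""
    is_pull_request: bool = False

    def validate(self) -> bool:
        # The schema constrains state to two classes and comments to >= 0.
        return self.state in ("open", "closed") and self.comments >= 0

# Sample values copied from the first record below.
row = IssueRow(
    id=1_013_266_373,
    url="https://api.github.com/repos/huggingface/datasets/issues/2996",
    html_url="https://github.com/huggingface/datasets/pull/2996",
    number=2996,
    title="Remove all query parameters when extracting protocol",
    state="closed",
    comments=4,
    created_at="2021-10-01T12:05:34",
    updated_at="2021-10-04T08:48:13",
    closed_at="2021-10-04T08:48:13",
    user_login="albertvillanova",
    is_pull_request=True,
)
print(row.validate())  # True
```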
#2996 [PR · closed] Remove all query parameters when extracting protocol
id: 1,013,266,373 | user: albertvillanova | comments: 4 | labels: []
created: 2021-10-01T12:05:34 | updated: 2021-10-04T08:48:13 | closed: 2021-10-04T08:48:13
url: https://github.com/huggingface/datasets/pull/2996 | api: https://api.github.com/repos/huggingface/datasets/issues/2996
body: Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`,...
#2995 [PR · closed] Fix trivia_qa unfiltered
id: 1,013,143,868 | user: lhoestq | comments: 1 | labels: []
created: 2021-10-01T09:53:43 | updated: 2021-10-01T10:04:11 | closed: 2021-10-01T10:04:10
url: https://github.com/huggingface/datasets/pull/2995 | api: https://api.github.com/repos/huggingface/datasets/issues/2995
body: Fix https://github.com/huggingface/datasets/issues/2993
#2994 [PR · closed] Fix loading compressed CSV without streaming
id: 1,013,000,475 | user: albertvillanova | comments: 0 | labels: []
created: 2021-10-01T07:28:59 | updated: 2021-10-01T15:53:16 | closed: 2021-10-01T15:53:16
url: https://github.com/huggingface/datasets/pull/2994 | api: https://api.github.com/repos/huggingface/datasets/issues/2994
body: When implementing support to stream CSV files (https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782), a regression was introduced preventing loading compressed CSV files in non-streaming mode. This PR fixes it, a...
#2993 [issue · closed] Can't download `trivia_qa/unfiltered`
id: 1,012,702,665 | user: VictorSanh | comments: 3 | labels: [bug]
created: 2021-09-30T23:00:18 | updated: 2021-10-01T19:07:23 | closed: 2021-10-01T19:07:22
url: https://github.com/huggingface/datasets/issues/2993 | api: https://api.github.com/repos/huggingface/datasets/issues/2993
body: ## Describe the bug For some reason, I can't download `trivia_qa/unfilted`. A file seems to be missing... I am able to see it fine though the viewer tough... ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("trivia_qa", "unfiltered") Downloading and preparing data...
#2992 [PR · closed] Fix f1 metric with None average
id: 1,012,325,594 | user: albertvillanova | comments: 0 | labels: []
created: 2021-09-30T15:31:57 | updated: 2021-10-01T14:17:39 | closed: 2021-10-01T14:17:38
url: https://github.com/huggingface/datasets/pull/2992 | api: https://api.github.com/repos/huggingface/datasets/issues/2992
body: Fix #2979.
#2991 [issue · open] add docmentation for the `Unix style pattern` matching feature that can be leverage for `data_files` into `load_dataset`
id: 1,012,174,823 | user: SaulLu | comments: 0 | labels: [enhancement]
created: 2021-09-30T13:22:01 | updated: 2021-09-30T13:22:01 | closed: null
url: https://github.com/huggingface/datasets/issues/2991 | api: https://api.github.com/repos/huggingface/datasets/issues/2991
body: Unless I'm mistaken, it seems that in the new documentation it is no longer mentioned that you can use Unix style pattern matching in the `data_files` argument of the `load_dataset` method. This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-h...
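The Unix-style pattern matching that issue #2991 refers to can be illustrated with the standard library alone. A minimal sketch, with the file names invented for the example (they are not from any real repository); in `load_dataset`, a `data_files` glob behaves in the same spirit as this `fnmatch` filter:

```python
import fnmatch

# Hypothetical repository contents.
files = [
    "data/train-00000.csv",
    "data/train-00001.csv",
    "data/validation-00000.csv",
    "README.md",
]

# A pattern like data_files="data/train-*.csv" selects only the train shards.
train_files = [f for f in files if fnmatch.fnmatch(f, "data/train-*.csv")]
print(train_files)  # ['data/train-00000.csv', 'data/train-00001.csv']
```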
#2990 [PR · closed] Make Dataset.map accept list of np.array
id: 1,012,097,418 | user: albertvillanova | comments: 0 | labels: []
created: 2021-09-30T12:08:54 | updated: 2021-10-01T13:57:46 | closed: 2021-10-01T13:57:46
url: https://github.com/huggingface/datasets/pull/2990 | api: https://api.github.com/repos/huggingface/datasets/issues/2990
body: Fix #2987.
#2989 [PR · closed] Add CommonLanguage
id: 1,011,220,375 | user: anton-l | comments: 0 | labels: []
created: 2021-09-29T17:21:30 | updated: 2021-10-01T17:36:39 | closed: 2021-10-01T17:00:03
url: https://github.com/huggingface/datasets/pull/2989 | api: https://api.github.com/repos/huggingface/datasets/issues/2989
body: This PR adds the Common Language dataset (https://zenodo.org/record/5036977) The dataset is intended for language-identification speech classifiers and is already used by models on the Hub: * https://huggingface.co/speechbrain/lang-id-commonlanguage_ecapa * https://huggingface.co/anton-l/wav2vec2-base-langid cc @...
#2988 [issue · closed] IndexError: Invalid key: 14 is out of bounds for size 0
id: 1,011,148,017 | user: dorost1234 | comments: 13 | labels: [bug]
created: 2021-09-29T16:04:24 | updated: 2022-04-10T14:49:49 | closed: 2022-04-10T14:49:49
url: https://github.com/huggingface/datasets/issues/2988 | api: https://api.github.com/repos/huggingface/datasets/issues/2988
body: ## Describe the bug A clear and concise description of what the bug is. Hi. I am trying to implement stochastic weighted averaging optimizer with transformer library as described here https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ , for this I am using a run_clm.py codes which is wor...
#2987 [issue · closed] ArrowInvalid: Can only convert 1-dimensional array values
id: 1,011,026,141 | user: NielsRogge | comments: 1 | labels: [bug]
created: 2021-09-29T14:18:52 | updated: 2021-10-01T13:57:45 | closed: 2021-10-01T13:57:45
url: https://github.com/huggingface/datasets/issues/2987 | api: https://api.github.com/repos/huggingface/datasets/issues/2987
body: ## Describe the bug For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset: ``` def preprocess_data(examples): images = [Image.open(path).conve...
#2986 [PR · closed] Refac module factory + avoid etag requests for hub datasets
id: 1,010,792,783 | user: lhoestq | comments: 6 | labels: []
created: 2021-09-29T10:42:00 | updated: 2021-10-11T11:05:53 | closed: 2021-10-11T11:05:52
url: https://github.com/huggingface/datasets/pull/2986 | api: https://api.github.com/repos/huggingface/datasets/issues/2986
body: ## Refactor the module factory When trying to extend the `data_files` logic to avoid doing unnecessary ETag requests, I noticed that the module preparation mechanism needed a refactor: - the function was 600 lines long - it was not readable - it contained many different cases that made it complex to maintain - i...
#2985 [PR · closed] add new dataset kan_hope
id: 1,010,500,433 | user: adeepH | comments: 0 | labels: []
created: 2021-09-29T05:20:28 | updated: 2021-10-01T16:55:19 | closed: 2021-10-01T16:55:19
url: https://github.com/huggingface/datasets/pull/2985 | api: https://api.github.com/repos/huggingface/datasets/issues/2985
body: ## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Task:** *Binary Text Classification* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/kan_hope/tree/main/dataset* - **Motivation:** *The dataset ...
#2984 [issue · closed] Exceeded maximum rows when reading large files
id: 1,010,484,326 | user: zijwang | comments: 1 | labels: [bug]
created: 2021-09-29T04:49:22 | updated: 2021-10-12T06:05:42 | closed: 2021-10-12T06:05:42
url: https://github.com/huggingface/datasets/issues/2984 | api: https://api.github.com/repos/huggingface/datasets/issues/2984
body: ## Describe the bug A clear and concise description of what the bug is. When using `load_dataset` with json files, if the files are too large, there will be "Exceeded maximum rows" error. ## Steps to reproduce the bug ```python dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a ...
#2983 [PR · closed] added SwissJudgmentPrediction dataset
id: 1,010,263,058 | user: JoelNiklaus | comments: 0 | labels: []
created: 2021-09-28T22:17:56 | updated: 2021-10-01T16:03:05 | closed: 2021-10-01T16:03:05
url: https://github.com/huggingface/datasets/pull/2983 | api: https://api.github.com/repos/huggingface/datasets/issues/2983
body: null
#2982 [PR · closed] Add the Math Aptitude Test of Heuristics dataset.
id: 1,010,118,418 | user: hacobe | comments: 0 | labels: []
created: 2021-09-28T19:18:37 | updated: 2021-10-01T19:51:23 | closed: 2021-10-01T12:21:00
url: https://github.com/huggingface/datasets/pull/2982 | api: https://api.github.com/repos/huggingface/datasets/issues/2982
body: null
#2981 [PR · closed] add wit dataset
id: 1,009,969,310 | user: nateraw | comments: 5 | labels: []
created: 2021-09-28T16:34:49 | updated: 2022-05-05T14:26:41 | closed: 2022-05-05T14:26:41
url: https://github.com/huggingface/datasets/pull/2981 | api: https://api.github.com/repos/huggingface/datasets/issues/2981
body: Resolves #2902 based on conversation there - would also close #2810. Open to suggestions/help 😀 CC @hassiahk @lhoestq @yjernite
#2980 [issue · open] OpenSLR 25: ASR data for Amharic, Swahili and Wolof
id: 1,009,873,482 | user: cdleong | comments: 3 | labels: [dataset request]
created: 2021-09-28T15:04:36 | updated: 2021-09-29T17:25:14 | closed: null
url: https://github.com/huggingface/datasets/issues/2980 | api: https://api.github.com/repos/huggingface/datasets/issues/2980
body: ## Adding a Dataset - **Name:** *SLR25* - **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr, 25 covers Amharic, Swahili and Wolof data* - **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsubsets. * - **Data:** *Currently ...
#2979 [issue · closed] ValueError when computing f1 metric with average None
id: 1,009,634,147 | user: asofiaoliveira | comments: 1 | labels: [bug]
created: 2021-09-28T11:34:53 | updated: 2021-10-01T14:17:38 | closed: 2021-10-01T14:17:38
url: https://github.com/huggingface/datasets/issues/2979 | api: https://api.github.com/repos/huggingface/datasets/issues/2979
body: ## Describe the bug When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py: ```python return { ...
#2978 [issue · open] Run CI tests against non-production server
id: 1,009,521,419 | user: albertvillanova | comments: 2 | labels: []
created: 2021-09-28T09:41:26 | updated: 2021-09-28T15:23:50 | closed: null
url: https://github.com/huggingface/datasets/issues/2978 | api: https://api.github.com/repos/huggingface/datasets/issues/2978
body: Currently, the CI test suite performs requests to the HF production server. As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
#2977 [issue · closed] Impossible to load compressed csv
id: 1,009,378,692 | user: Valahaar | comments: 1 | labels: [bug]
created: 2021-09-28T07:18:54 | updated: 2021-10-01T15:53:16 | closed: 2021-10-01T15:53:15
url: https://github.com/huggingface/datasets/issues/2977 | api: https://api.github.com/repos/huggingface/datasets/issues/2977
body: ## Describe the bug It is not possible to load from a compressed csv anymore. ## Steps to reproduce the bug ```python load_dataset('csv', data_files=['/path/to/csv.bz2']) ``` ## Problem and possible solution This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets...
#2976 [issue · closed] Can't load dataset
id: 1,008,647,889 | user: mskovalova | comments: 4 | labels: [bug]
created: 2021-09-27T21:38:14 | updated: 2024-04-08T03:27:29 | closed: 2021-09-28T06:53:01
url: https://github.com/huggingface/datasets/issues/2976 | api: https://api.github.com/repos/huggingface/datasets/issues/2976
body: I'm trying to load a wikitext dataset ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext") ``` ValueError: Config name is missing. Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1'] Example of usage: `load_d...
#2975 [PR · closed] ignore dummy folder and dataset_infos.json
id: 1,008,444,654 | user: Ishan-Kumar2 | comments: 0 | labels: []
created: 2021-09-27T18:09:03 | updated: 2021-09-29T09:45:38 | closed: 2021-09-29T09:05:38
url: https://github.com/huggingface/datasets/pull/2975 | api: https://api.github.com/repos/huggingface/datasets/issues/2975
body: Fixes #2877 Added the `dataset_infos.json` to the ignored files list and also added check to ignore files which have parent directory as `dummy`. Let me know if it is correct. Thanks :)
#2974 [PR · closed] Actually disable dummy labels by default
id: 1,008,247,787 | user: Rocketknight1 | comments: 0 | labels: []
created: 2021-09-27T14:50:20 | updated: 2021-09-29T09:04:42 | closed: 2021-09-29T09:04:41
url: https://github.com/huggingface/datasets/pull/2974 | api: https://api.github.com/repos/huggingface/datasets/issues/2974
body: So I might have just changed the docstring instead of the actual default argument value and not realized. @lhoestq I'm sorry >.>
#2973 [PR · closed] Fix JSON metadata of masakhaner dataset
id: 1,007,894,592 | user: albertvillanova | comments: 0 | labels: []
created: 2021-09-27T09:09:08 | updated: 2021-09-27T12:59:59 | closed: 2021-09-27T12:59:59
url: https://github.com/huggingface/datasets/pull/2973 | api: https://api.github.com/repos/huggingface/datasets/issues/2973
body: Fix #2971.
#2972 [issue · closed] OSError: Not enough disk space.
id: 1,007,808,714 | user: qqaatw | comments: 6 | labels: [bug]
created: 2021-09-27T07:41:22 | updated: 2024-12-04T02:56:19 | closed: 2021-09-28T06:43:15
url: https://github.com/huggingface/datasets/issues/2972 | api: https://api.github.com/repos/huggingface/datasets/issues/2972
body: ## Describe the bug I'm trying to download `natural_questions` dataset from the Internet, and I've specified the cache_dir which locates in a mounted disk and has enough disk space. However, even though the space is enough, the disk space checking function still reports the space of root `/` disk having no enough spac...
#2971 [issue · closed] masakhaner dataset load problem
id: 1,007,696,522 | user: huu4ontocord | comments: 1 | labels: [bug]
created: 2021-09-27T04:59:07 | updated: 2021-09-27T12:59:59 | closed: 2021-09-27T12:59:59
url: https://github.com/huggingface/datasets/issues/2971 | api: https://api.github.com/repos/huggingface/datasets/issues/2971
body: ## Describe the bug Masakhaner dataset is not loading ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("masakhaner",'amh') ``` ## Expected results Expected the return of a dataset ## Actual results ``` NonMatchingSplitsSizesError Traceback (mo...
#2970 [issue · closed] Magnet’s
id: 1,007,340,089 | user: rcacho172 | comments: 0 | labels: [dataset request]
created: 2021-09-26T09:50:29 | updated: 2021-09-26T10:38:59 | closed: 2021-09-26T10:38:59
url: https://github.com/huggingface/datasets/issues/2970 | api: https://api.github.com/repos/huggingface/datasets/issues/2970
body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
#2969 [issue · closed] medical-dialog error
id: 1,007,217,867 | user: smeyerhot | comments: 3 | labels: [bug]
created: 2021-09-25T23:08:44 | updated: 2024-01-08T09:55:12 | closed: 2021-10-11T07:46:42
url: https://github.com/huggingface/datasets/issues/2969 | api: https://api.github.com/repos/huggingface/datasets/issues/2969
body: ## Describe the bug A clear and concise description of what the bug is. When I attempt to download the huggingface datatset medical_dialog it errors out midway through ## Steps to reproduce the bug ```python raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_d...
#2968 [issue · closed] `DatasetDict` cannot be exported to parquet if the splits have different features
id: 1,007,209,488 | user: LysandreJik | comments: 9 | labels: [bug]
created: 2021-09-25T22:18:39 | updated: 2021-10-07T22:47:42 | closed: 2021-10-07T22:47:26
url: https://github.com/huggingface/datasets/issues/2968 | api: https://api.github.com/repos/huggingface/datasets/issues/2968
body: ## Describe the bug I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folder...
#2967 [issue · closed] Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
id: 1,007,194,837 | user: WadeYin9712 | comments: 0 | labels: [enhancement]
created: 2021-09-25T20:58:15 | updated: 2021-10-03T20:34:22 | closed: 2021-10-03T20:34:22
url: https://github.com/huggingface/datasets/issues/2967 | api: https://api.github.com/repos/huggingface/datasets/issues/2967
body: **Is your feature request related to a problem? Please describe.** Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets? **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A **Additional context** This is Da Yin at UCLA. Recentl...
#2966 [PR · closed] Upload greek-legal-code dataset
id: 1,007,142,233 | user: christospi | comments: 1 | labels: []
created: 2021-09-25T16:52:15 | updated: 2021-10-13T13:37:30 | closed: 2021-10-13T13:37:30
url: https://github.com/huggingface/datasets/pull/2966 | api: https://api.github.com/repos/huggingface/datasets/issues/2966
body: null
#2965 [issue · closed] Invalid download URL of WMT17 `zh-en` data
id: 1,007,084,153 | user: Ririkoo | comments: 1 | labels: [bug, dataset bug]
created: 2021-09-25T13:17:32 | updated: 2022-08-31T06:47:11 | closed: 2022-08-31T06:47:10
url: https://github.com/huggingface/datasets/issues/2965 | api: https://api.github.com/repos/huggingface/datasets/issues/2965
body: ## Describe the bug Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wmt17','zh-en') ``` ## Expected results ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/pa...
#2964 [issue · closed] Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
id: 1,006,605,904 | user: alvarobartt | comments: 1 | labels: [bug]
created: 2021-09-24T15:55:21 | updated: 2024-02-16T10:14:35 | closed: 2021-09-25T08:06:07
url: https://github.com/huggingface/datasets/issues/2964 | api: https://api.github.com/repos/huggingface/datasets/issues/2964
body: ## Describe the bug After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if re...
#2963 [issue · closed] raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
id: 1,006,588,605 | user: keloemma | comments: 0 | labels: [bug]
created: 2021-09-24T15:35:11 | updated: 2021-09-24T15:38:24 | closed: 2021-09-24T15:38:24
url: https://github.com/huggingface/datasets/issues/2963 | api: https://api.github.com/repos/huggingface/datasets/issues/2963
body: ## Describe the bug A clear and concise description of what the bug is. I am trying to use Dataset to load my file in order to use Bert embeddings model baut when I finished loading using dataset and I want to pass to the tokenizer using the function map; I get the following error : raise TypeError( TypeError: Provi...
#2962 [issue · open] Enable splits during streaming the dataset
id: 1,006,557,666 | user: merveenoyan | comments: 1 | labels: [enhancement]
created: 2021-09-24T15:01:29 | updated: 2025-07-17T04:53:20 | closed: null
url: https://github.com/huggingface/datasets/issues/2962 | api: https://api.github.com/repos/huggingface/datasets/issues/2962
body: ## Describe the Problem I'd like to stream only a specific percentage or part of the dataset. I want to do splitting when I'm streaming dataset as well. ## Solution Enabling splits when `streaming = True` as well. `e.g. dataset = load_dataset('dataset', split='train[:100]', streaming = True)` ## Alternativ...
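The `train[:100]` behaviour requested in issue #2962 amounts to slicing a stream. With an iterable (streamed) source, taking the first N examples can be sketched with `itertools.islice`; the generator below is a stand-in invented for the example, not the real streaming implementation:

```python
from itertools import islice

def stream_examples():
    # Stand-in for an iterable/streamed dataset split.
    for i in range(10_000):
        yield {"idx": i, "text": f"example {i}"}

# Equivalent in spirit to split='train[:100]' on a streaming dataset:
# consume only the first 100 examples without materializing the rest.
first_100 = list(islice(stream_examples(), 100))
print(len(first_100))  # 100
```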
#2961 [PR · closed] Fix CI doc build
id: 1,006,453,781 | user: albertvillanova | comments: 0 | labels: []
created: 2021-09-24T13:13:28 | updated: 2021-09-24T13:18:07 | closed: 2021-09-24T13:18:07
url: https://github.com/huggingface/datasets/pull/2961 | api: https://api.github.com/repos/huggingface/datasets/issues/2961
body: Pin `fsspec`. Before the issue: 'fsspec-2021.8.1', 's3fs-2021.8.1' Generating the issue: 'fsspec-2021.9.0', 's3fs-0.5.1'
#2960 [PR · closed] Support pandas 1.3 new `read_csv` parameters
id: 1,006,222,850 | user: SBrandeis | comments: 0 | labels: []
created: 2021-09-24T08:37:24 | updated: 2021-09-24T11:22:31 | closed: 2021-09-24T11:22:30
url: https://github.com/huggingface/datasets/pull/2960 | api: https://api.github.com/repos/huggingface/datasets/issues/2960
body: Support two new arguments introduced in pandas v1.3.0: - `encoding_errors` - `on_bad_lines` `read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
#2959 [PR · closed] Added computer vision tasks
id: 1,005,547,632 | user: merveenoyan | comments: 5 | labels: []
created: 2021-09-23T15:07:27 | updated: 2022-03-01T17:41:51 | closed: 2022-03-01T17:41:51
url: https://github.com/huggingface/datasets/pull/2959 | api: https://api.github.com/repos/huggingface/datasets/issues/2959
body: Added various image processing/computer vision tasks.
#2958 [PR · closed] Add security policy to the project
id: 1,005,144,601 | user: albertvillanova | comments: 0 | labels: []
created: 2021-09-23T08:20:55 | updated: 2021-10-21T15:16:44 | closed: 2021-10-21T15:16:43
url: https://github.com/huggingface/datasets/pull/2958 | api: https://api.github.com/repos/huggingface/datasets/issues/2958
body: Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository Close #2953.
#2957 [issue · closed] MultiWOZ Dataset NonMatchingChecksumError
id: 1,004,868,337 | user: bradyneal | comments: 1 | labels: [bug]
created: 2021-09-22T23:45:00 | updated: 2022-03-15T16:07:02 | closed: 2022-03-15T16:07:02
url: https://github.com/huggingface/datasets/issues/2957 | api: https://api.github.com/repos/huggingface/datasets/issues/2957
body: ## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = loa...
#2956 [issue · open] Cache problem in the `load_dataset` method for local compressed file(s)
id: 1,004,306,367 | user: SaulLu | comments: 1 | labels: [bug]
created: 2021-09-22T13:34:32 | updated: 2023-08-31T16:49:01 | closed: null
url: https://github.com/huggingface/datasets/issues/2956 | api: https://api.github.com/repos/huggingface/datasets/issues/2956
body: ## Describe the bug Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder `load_dataset` doesn't detect the change and load the previous version. ## Steps to reproduce the bug To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.g...
#2955 [PR · closed] Update legacy Python image for CI tests in Linux
id: 1,003,999,469 | user: albertvillanova | comments: 1 | labels: []
created: 2021-09-22T08:25:27 | updated: 2021-09-24T10:36:05 | closed: 2021-09-24T10:36:05
url: https://github.com/huggingface/datasets/pull/2955 | api: https://api.github.com/repos/huggingface/datasets/issues/2955
body: Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights: - Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to fas...
#2954 [PR · closed] Run tests in parallel
id: 1,003,904,803 | user: albertvillanova | comments: 2 | labels: []
created: 2021-09-22T07:00:44 | updated: 2021-09-28T06:55:51 | closed: 2021-09-28T06:55:51
url: https://github.com/huggingface/datasets/pull/2954 | api: https://api.github.com/repos/huggingface/datasets/issues/2954
body: Run CI tests in parallel to speed up the test suite. Speed up results: - Linux: from `7m 30s` to `5m 32s` - Windows: from `13m 52s` to `11m 10s`
#2953 [issue · closed] Trying to get in touch regarding a security issue
id: 1,002,766,517 | user: JamieSlome | comments: 1 | labels: []
created: 2021-09-21T15:58:13 | updated: 2021-10-21T15:16:43 | closed: 2021-10-21T15:16:43
url: https://github.com/huggingface/datasets/issues/2953 | api: https://api.github.com/repos/huggingface/datasets/issues/2953
body: Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-rep...
#2952 [PR · closed] Fix missing conda deps
id: 1,002,704,096 | user: lhoestq | comments: 0 | labels: []
created: 2021-09-21T15:23:01 | updated: 2021-09-22T04:39:59 | closed: 2021-09-21T15:30:44
url: https://github.com/huggingface/datasets/pull/2952 | api: https://api.github.com/repos/huggingface/datasets/issues/2952
body: `aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 to fail. Fix #2932.
#2951 [PR · closed] Dummy labels no longer on by default in `to_tf_dataset`
id: 1,001,267,888 | user: Rocketknight1 | comments: 2 | labels: []
created: 2021-09-20T18:26:59 | updated: 2021-09-21T14:00:57 | closed: 2021-09-21T10:14:32
url: https://github.com/huggingface/datasets/pull/2951 | api: https://api.github.com/repos/huggingface/datasets/issues/2951
body: After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
#2950 [PR · closed] Fix fn kwargs in filter
id: 1,001,085,353 | user: lhoestq | comments: 0 | labels: []
created: 2021-09-20T15:10:26 | updated: 2021-09-20T16:22:59 | closed: 2021-09-20T15:28:01
url: https://github.com/huggingface/datasets/pull/2950 | api: https://api.github.com/repos/huggingface/datasets/issues/2950
body: #2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927 I fixed that and added a test to make sure it doesn't happen again (for either map or filter) Fix #2927
#2949 [PR · closed] Introduce web and wiki config in triviaqa dataset
id: 1,001,026,680 | user: shirte | comments: 3 | labels: []
created: 2021-09-20T14:17:23 | updated: 2021-10-05T13:20:52 | closed: 2021-10-01T15:39:29
url: https://github.com/huggingface/datasets/pull/2949 | api: https://api.github.com/repos/huggingface/datasets/issues/2949
body: The TriviaQA paper suggests that the two subsets (Wikipedia and Web) should be treated differently. There are also different leaderboards for the two sets on CodaLab. For that reason, introduce additional builder configs in the trivia_qa dataset.
#2948 [PR · closed] Fix minor URL format in scitldr dataset
id: 1,000,844,077 | user: albertvillanova | comments: 0 | labels: []
created: 2021-09-20T11:11:32 | updated: 2021-09-20T13:18:28 | closed: 2021-09-20T13:18:28
url: https://github.com/huggingface/datasets/pull/2948 | api: https://api.github.com/repos/huggingface/datasets/issues/2948
body: While investigating issue #2918, I found this minor format issues in the URLs (if runned in a Windows machine).
#2947 [PR · closed] Don't use old, incompatible cache for the new `filter`
id: 1,000,798,338 | user: lhoestq | comments: 0 | labels: []
created: 2021-09-20T10:18:59 | updated: 2021-09-20T16:25:09 | closed: 2021-09-20T13:43:02
url: https://github.com/huggingface/datasets/pull/2947 | api: https://api.github.com/repos/huggingface/datasets/issues/2947
body: #2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation. However the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into...
#2946 [PR · closed] Update meteor score from nltk update
id: 1,000,754,824 | user: lhoestq | comments: 0 | labels: []
created: 2021-09-20T09:28:46 | updated: 2021-09-20T09:35:59 | closed: 2021-09-20T09:35:59
url: https://github.com/huggingface/datasets/pull/2946 | api: https://api.github.com/repos/huggingface/datasets/issues/2946
body: It looks like there were issues in NLTK on the way the METEOR score was computed. A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values. I updated the score of the example in the docs
#2945 [issue · closed] Protect master branch
id: 1,000,624,883 | user: albertvillanova | comments: 2 | labels: [enhancement]
created: 2021-09-20T06:47:01 | updated: 2021-09-20T12:01:27 | closed: 2021-09-20T12:00:16
url: https://github.com/huggingface/datasets/issues/2945 | api: https://api.github.com/repos/huggingface/datasets/issues/2945
body: After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propo...
#2944 [issue · closed] Add `remove_columns` to `IterableDataset `
id: 1,000,544,370 | user: changjonathanc | comments: 1 | labels: [enhancement, good first issue]
created: 2021-09-20T04:01:00 | updated: 2021-10-08T15:31:53 | closed: 2021-10-08T15:31:53
url: https://github.com/huggingface/datasets/issues/2944 | api: https://api.github.com/repos/huggingface/datasets/issues/2944
body: **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. ```python from datasets import load_dataset dataset = load_dataset("c4", 'realnewslike', streaming =True, split='train') dataset = dataset.remove_columns('url') ``` ``` AttributeError: 'I...
#2943 [issue · closed] Backwards compatibility broken for cached datasets that use `.filter()`
id: 1,000,355,115 | user: anton-l | comments: 6 | labels: [bug]
created: 2021-09-19T16:16:37 | updated: 2021-09-20T16:25:43 | closed: 2021-09-20T16:25:42
url: https://github.com/huggingface/datasets/issues/2943 | api: https://api.github.com/repos/huggingface/datasets/issues/2943
body: ## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
#2942 [PR · closed] Add SEDE dataset
id: 1,000,309,765 | user: Hazoom | comments: 4 | labels: []
created: 2021-09-19T13:11:24 | updated: 2021-09-24T10:39:55 | closed: 2021-09-24T10:39:54
url: https://github.com/huggingface/datasets/pull/2942 | api: https://api.github.com/repos/huggingface/datasets/issues/2942
body: This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
#2941 [issue · open] OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
id: 1,000,000,711 | user: ayaka14732 | comments: 1 | labels: [bug, dataset bug]
created: 2021-09-18T10:39:13 | updated: 2022-01-19T14:10:07 | closed: null
url: https://github.com/huggingface/datasets/issues/2941 | api: https://api.github.com/repos/huggingface/datasets/issues/2941
body: ## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num...
#2940 [PR · closed] add swedish_medical_ner dataset
id: 999,680,796 | user: bwang482 | comments: 0 | labels: []
created: 2021-09-17T20:03:05 | updated: 2021-10-05T12:13:34 | closed: 2021-10-05T12:13:33
url: https://github.com/huggingface/datasets/pull/2940 | api: https://api.github.com/repos/huggingface/datasets/issues/2940
body: Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
#2939 [PR · closed] MENYO-20k repo has moved, updating URL
id: 999,639,630 | user: cdleong | comments: 0 | labels: []
created: 2021-09-17T19:01:54 | updated: 2021-09-21T15:31:37 | closed: 2021-09-21T15:31:36
url: https://github.com/huggingface/datasets/pull/2939 | api: https://api.github.com/repos/huggingface/datasets/issues/2939
body: Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for
#2938 [PR · closed] Take namespace into account in caching
id: 999,552,263 | user: lhoestq | comments: 7 | labels: []
created: 2021-09-17T16:57:33 | updated: 2021-12-17T10:52:18 | closed: 2021-09-29T13:01:31
url: https://github.com/huggingface/datasets/pull/2938 | api: https://api.github.com/repos/huggingface/datasets/issues/2938
body: Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I...
#2937 [issue · closed] load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
id: 999,548,277 | user: daqieq | comments: 4 | labels: [bug]
created: 2021-09-17T16:52:10 | updated: 2022-08-24T13:09:08 | closed: 2022-08-24T13:09:08
url: https://github.com/huggingface/datasets/issues/2937 | api: https://api.github.com/repos/huggingface/datasets/issues/2937
body: ## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any er...
#2936 [PR · closed] Check that array is not Float as nan != nan
id: 999,521,647 | user: Iwontbecreative | comments: 0 | labels: []
created: 2021-09-17T16:16:41 | updated: 2021-09-21T09:39:05 | closed: 2021-09-21T09:39:04
url: https://github.com/huggingface/datasets/pull/2936 | api: https://api.github.com/repos/huggingface/datasets/issues/2936
body: The Exception wants to check for issues with StructArrays/ListArrays but catches FloatArrays with value nan as nan != nan. Pass on FloatArrays as we should not raise an Exception for them.
#2935 [PR · closed] Add Jigsaw unintended Bias
id: 999,518,469 | user: Iwontbecreative | comments: 3 | labels: []
created: 2021-09-17T16:12:31 | updated: 2021-09-24T10:41:52 | closed: 2021-09-24T10:41:52
url: https://github.com/huggingface/datasets/pull/2935 | api: https://api.github.com/repos/huggingface/datasets/issues/2935
body: Hi, Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff. This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.
#2934 [issue · closed] to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
id: 999,477,413 | user: lhoestq | comments: 2 | labels: [bug]
created: 2021-09-17T15:26:53 | updated: 2021-10-13T09:03:23 | closed: 2021-10-13T09:03:23
url: https://github.com/huggingface/datasets/issues/2934 | api: https://api.github.com/repos/huggingface/datasets/issues/2934
body: To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one refe...
999,392,566
https://api.github.com/repos/huggingface/datasets/issues/2933
https://github.com/huggingface/datasets/pull/2933
2,933
Replace script_version with revision
closed
1
2021-09-17T14:04:39
2021-09-20T09:52:10
2021-09-20T09:52:10
albertvillanova
[]
As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are ...
true
999,317,750
https://api.github.com/repos/huggingface/datasets/issues/2932
https://github.com/huggingface/datasets/issues/2932
2,932
Conda build fails
closed
2
2021-09-17T12:49:22
2021-09-21T15:31:10
2021-09-21T15:31:10
albertvillanova
[ "bug" ]
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
false
998,326,359
https://api.github.com/repos/huggingface/datasets/issues/2931
https://github.com/huggingface/datasets/pull/2931
2,931
Fix bug in to_tf_dataset
closed
1
2021-09-16T15:08:03
2021-09-16T17:01:38
2021-09-16T17:01:37
Rocketknight1
[]
Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`
true
998,154,311
https://api.github.com/repos/huggingface/datasets/issues/2930
https://github.com/huggingface/datasets/issues/2930
2,930
Mutable columns argument breaks set_format
closed
1
2021-09-16T12:27:22
2021-09-16T13:50:53
2021-09-16T13:50:53
Rocketknight1
[ "bug" ]
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] datas...
false
997,960,024
https://api.github.com/repos/huggingface/datasets/issues/2929
https://github.com/huggingface/datasets/pull/2929
2,929
Add regression test for null Sequence
closed
0
2021-09-16T08:58:33
2021-09-17T08:23:59
2021-09-17T08:23:59
albertvillanova
[]
Relates to #2892 and #2900.
true
997,941,506
https://api.github.com/repos/huggingface/datasets/issues/2928
https://github.com/huggingface/datasets/pull/2928
2,928
Update BibTeX entry
closed
0
2021-09-16T08:39:20
2021-09-16T12:35:34
2021-09-16T12:35:34
albertvillanova
[]
Update BibTeX entry.
true
997,654,680
https://api.github.com/repos/huggingface/datasets/issues/2927
https://github.com/huggingface/datasets/issues/2927
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
closed
2
2021-09-16T01:14:02
2021-09-20T16:23:22
2021-09-20T16:23:21
timothyjlaurent
[ "bug" ]
## Describe the bug Upgrading to 1.12 caused `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```python def filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[st...
false
997,463,277
https://api.github.com/repos/huggingface/datasets/issues/2926
https://github.com/huggingface/datasets/issues/2926
2,926
Error when downloading datasets to non-traditional cache directories
open
1
2021-09-15T19:59:46
2021-11-24T21:42:31
null
dar-tau
[ "bug" ]
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading IMDB dataset ## Actual...
false
997,407,034
https://api.github.com/repos/huggingface/datasets/issues/2925
https://github.com/huggingface/datasets/pull/2925
2,925
Add tutorial for no-code dataset upload
closed
3
2021-09-15T18:54:42
2021-09-27T17:51:55
2021-09-27T17:51:55
stevhliu
[ "documentation" ]
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dat...
true
997,378,113
https://api.github.com/repos/huggingface/datasets/issues/2924
https://github.com/huggingface/datasets/issues/2924
2,924
"File name too long" error for file locks
closed
12
2021-09-15T18:16:50
2023-12-08T13:39:51
2021-10-29T09:42:24
gar1t
[ "bug" ]
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc...
false
997,351,590
https://api.github.com/repos/huggingface/datasets/issues/2923
https://github.com/huggingface/datasets/issues/2923
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
closed
1
2021-09-15T17:44:38
2022-04-12T10:09:40
2022-04-12T10:09:39
severo
[ "bug", "dataset-viewer" ]
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an err...
false
997,332,662
https://api.github.com/repos/huggingface/datasets/issues/2922
https://github.com/huggingface/datasets/pull/2922
2,922
Fix conversion of multidim arrays in list to arrow
closed
0
2021-09-15T17:21:36
2021-09-15T17:22:52
2021-09-15T17:21:45
lhoestq
[]
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation. However in #2361 we started to keep numpy arrays in order to keep their dtypes. It works when we pass any multi-dim numpy array (the conversion to arrow ...
true
997,325,424
https://api.github.com/repos/huggingface/datasets/issues/2921
https://github.com/huggingface/datasets/issues/2921
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
closed
0
2021-09-15T17:12:11
2021-09-15T17:21:45
2021-09-15T17:21:45
lhoestq
[]
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <mod...
false
997,323,014
https://api.github.com/repos/huggingface/datasets/issues/2920
https://github.com/huggingface/datasets/pull/2920
2,920
Fix unwanted tqdm bar when accessing examples
closed
0
2021-09-15T17:09:11
2021-09-15T17:18:24
2021-09-15T17:18:24
lhoestq
[]
A change in #2814 added bad progress bars in `map_nested`. Now they're disabled by default. Fix #2919
true
997,127,487
https://api.github.com/repos/huggingface/datasets/issues/2919
https://github.com/huggingface/datasets/issues/2919
2,919
Unwanted progress bars when accessing examples
closed
1
2021-09-15T14:05:10
2021-09-15T17:21:49
2021-09-15T17:18:23
lhoestq
[]
When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") ...
false
997,063,347
https://api.github.com/repos/huggingface/datasets/issues/2918
https://github.com/huggingface/datasets/issues/2918
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
closed
3
2021-09-15T13:06:07
2021-12-01T08:15:00
2021-12-01T08:15:00
SBrandeis
[ "bug", "streaming" ]
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_...
false
997,041,658
https://api.github.com/repos/huggingface/datasets/issues/2917
https://github.com/huggingface/datasets/issues/2917
2,917
windows download abnormal
closed
3
2021-09-15T12:45:35
2021-09-16T17:17:48
2021-09-16T17:17:48
wei1826676931
[ "bug" ]
## Describe the bug The script clearly exists (accessible from the browser), but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. why?? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-43...
false
997,003,661
https://api.github.com/repos/huggingface/datasets/issues/2916
https://github.com/huggingface/datasets/pull/2916
2,916
Add OpenAI's pass@k code evaluation metric
closed
4
2021-09-15T12:05:43
2021-11-12T14:19:51
2021-11-12T14:19:50
lvwerra
[]
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references`...
true
996,870,071
https://api.github.com/repos/huggingface/datasets/issues/2915
https://github.com/huggingface/datasets/pull/2915
2,915
Fix fsspec AbstractFileSystem access
closed
0
2021-09-15T09:39:20
2021-09-15T11:35:24
2021-09-15T11:35:24
pierre-godard
[]
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
true
996,770,168
https://api.github.com/repos/huggingface/datasets/issues/2914
https://github.com/huggingface/datasets/issues/2914
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
closed
1
2021-09-15T07:54:06
2021-09-15T16:49:17
2021-09-15T16:49:16
pierre-godard
[ "bug" ]
## Describe the bug In one of my project, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https://github.com/intake/filesys...
false
996,436,368
https://api.github.com/repos/huggingface/datasets/issues/2913
https://github.com/huggingface/datasets/issues/2913
2,913
timit_asr dataset only includes one text phrase
closed
2
2021-09-14T21:06:07
2021-09-15T08:05:19
2021-09-15T08:05:18
margotwagner
[ "bug" ]
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis...
false
996,256,005
https://api.github.com/repos/huggingface/datasets/issues/2912
https://github.com/huggingface/datasets/pull/2912
2,912
Update link to Blog in docs footer
closed
0
2021-09-14T17:23:14
2021-09-15T07:59:23
2021-09-15T07:59:23
albertvillanova
[]
Update link.
true
996,202,598
https://api.github.com/repos/huggingface/datasets/issues/2911
https://github.com/huggingface/datasets/pull/2911
2,911
Fix exception chaining
closed
0
2021-09-14T16:19:29
2021-09-16T15:04:44
2021-09-16T15:04:44
albertvillanova
[]
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
true
996,149,632
https://api.github.com/repos/huggingface/datasets/issues/2910
https://github.com/huggingface/datasets/pull/2910
2,910
feat: 🎸 pass additional arguments to get private configs + info
closed
1
2021-09-14T15:24:19
2021-09-15T16:19:09
2021-09-15T16:19:06
severo
[]
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
true
996,002,180
https://api.github.com/repos/huggingface/datasets/issues/2909
https://github.com/huggingface/datasets/pull/2909
2,909
fix anli splits
closed
0
2021-09-14T13:10:35
2021-10-13T11:27:49
2021-10-13T11:27:49
zaidalyafeai
[]
I can't run the tests for dummy data, facing this error `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
true
995,970,612
https://api.github.com/repos/huggingface/datasets/issues/2908
https://github.com/huggingface/datasets/pull/2908
2,908
Update Zenodo metadata with creator names and affiliation
closed
0
2021-09-14T12:39:37
2021-09-14T14:29:25
2021-09-14T14:29:25
albertvillanova
[]
This PR helps in prefilling author data when automatically generating the DOI after each release.
true
995,968,152
https://api.github.com/repos/huggingface/datasets/issues/2907
https://github.com/huggingface/datasets/pull/2907
2,907
add story_cloze dataset
closed
1
2021-09-14T12:36:53
2021-10-08T21:41:42
2021-10-08T21:41:41
zaidalyafeai
[]
@lhoestq I have spent some time but I still can't succeed in correctly testing the dummy_data.
true
995,962,905
https://api.github.com/repos/huggingface/datasets/issues/2906
https://github.com/huggingface/datasets/pull/2906
2,906
feat: 🎸 add a function to get a dataset config's split names
closed
1
2021-09-14T12:31:22
2021-10-04T09:55:38
2021-10-04T09:55:37
severo
[]
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub Questions: - [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct? -> no: reverted - [x] Should I add a section in https://github.com/huggingface/datasets/blo...
true
995,843,964
https://api.github.com/repos/huggingface/datasets/issues/2905
https://github.com/huggingface/datasets/pull/2905
2,905
Update BibTeX entry
closed
0
2021-09-14T10:16:17
2021-09-14T12:25:37
2021-09-14T12:25:37
albertvillanova
[]
Update BibTeX entry.
true
995,814,222
https://api.github.com/repos/huggingface/datasets/issues/2904
https://github.com/huggingface/datasets/issues/2904
2,904
FORCE_REDOWNLOAD does not work
open
3
2021-09-14T09:45:26
2021-10-06T09:37:19
null
anoopkatti
[ "bug" ]
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says +------------------------------------+-----------+---------+ | | Downloads | Dataset | +====================================+===========+=========+ | `REUSE_DATASET_IF_EXISTS` (default...
false
995,715,191
https://api.github.com/repos/huggingface/datasets/issues/2903
https://github.com/huggingface/datasets/pull/2903
2,903
Fix xpathopen to accept positional arguments
closed
1
2021-09-14T08:02:50
2021-09-14T08:51:21
2021-09-14T08:40:47
albertvillanova
[]
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
true
995,254,216
https://api.github.com/repos/huggingface/datasets/issues/2902
https://github.com/huggingface/datasets/issues/2902
2,902
Add WIT Dataset
closed
6
2021-09-13T19:38:49
2024-10-02T15:37:48
2022-06-01T17:28:40
nateraw
[ "dataset request" ]
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (e...
false
995,232,844
https://api.github.com/repos/huggingface/datasets/issues/2901
https://github.com/huggingface/datasets/issues/2901
2,901
Incompatibility with pytest
closed
1
2021-09-13T19:12:17
2021-09-14T08:40:47
2021-09-14T08:40:47
severo
[ "bug" ]
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pyt...
false
994,922,580
https://api.github.com/repos/huggingface/datasets/issues/2900
https://github.com/huggingface/datasets/pull/2900
2,900
Fix null sequence encoding
closed
0
2021-09-13T13:55:08
2021-09-13T14:17:43
2021-09-13T14:17:42
lhoestq
[]
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fix https://github.com/huggingface/datasets/issues/2892
true
994,082,432
https://api.github.com/repos/huggingface/datasets/issues/2899
https://github.com/huggingface/datasets/issues/2899
2,899
Dataset
closed
0
2021-09-12T07:38:53
2021-09-12T16:12:15
2021-09-12T16:12:15
rcacho172
[ "dataset request" ]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
false
994,032,814
https://api.github.com/repos/huggingface/datasets/issues/2898
https://github.com/huggingface/datasets/issues/2898
2,898
Hug emoji
closed
0
2021-09-12T03:27:51
2021-09-12T16:13:13
2021-09-12T16:13:13
Jackg-08
[ "dataset request" ]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
false
993,798,386
https://api.github.com/repos/huggingface/datasets/issues/2897
https://github.com/huggingface/datasets/pull/2897
2,897
Add OpenAI's HumanEval dataset
closed
1
2021-09-11T09:37:47
2021-09-16T15:02:11
2021-09-16T15:02:11
lvwerra
[]
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models.
true