column           type          range / values
---------------  ------------  -----------------------------------------------
id               int64         599M – 3.29B
url              string        length 58 – 61
html_url         string        length 46 – 51
number           int64         1 – 7.72k
title            string        length 1 – 290
state            string        2 values
comments         int64         0 – 70
created_at       timestamp[s]  2020-04-14 10:18:02 – 2025-08-05 09:28:51
updated_at       timestamp[s]  2020-04-27 16:04:17 – 2025-08-05 11:39:56
closed_at        timestamp[s]  2020-04-14 12:01:40 – 2025-08-01 05:15:45 (nullable)
user_login       string        length 3 – 26
labels           list          length 0 – 4
body             string        length 0 – 228k (nullable)
is_pull_request  bool          2 classes
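The records below are flattened one value per line, cycling through the fourteen columns in the schema above. As a minimal stdlib-only sketch (not part of the dataset itself), this is how one such record maps back onto the column names; the sample values are copied from the first record below, with the `body` field truncated as in the source:

```python
# Column names, in the order they appear in the schema header above.
COLUMNS = [
    "id", "url", "html_url", "number", "title", "state", "comments",
    "created_at", "updated_at", "closed_at", "user_login", "labels",
    "body", "is_pull_request",
]

# One flattened record, copied verbatim from the table (body truncated).
raw_record = [
    "1,028,210,790",
    "https://api.github.com/repos/huggingface/datasets/issues/3098",
    "https://github.com/huggingface/datasets/pull/3098",
    "3,098",
    "Push to hub capabilities for `Dataset` and `DatasetDict`",
    "closed",
    "9",
    "2021-10-17T04:12:44",
    "2021-12-08T16:04:50",
    "2021-11-24T11:25:36",
    "LysandreJik",
    "[]",
    "This PR implements a `push_to_hub` method on `Dataset` and ...",
    "true",
]

# Pair each value with its column name.
record = dict(zip(COLUMNS, raw_record))

# Integer columns are rendered with thousands separators; strip them to parse.
issue_number = int(record["number"].replace(",", ""))
print(issue_number)     # prints 3098
print(record["state"])  # prints closed
```

Note that all values arrive as strings in this rendering; the schema's `int64`, `timestamp[s]`, `list`, and `bool` types describe the underlying dataset, so any real consumer would parse each field accordingly.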
1,028,210,790
https://api.github.com/repos/huggingface/datasets/issues/3098
https://github.com/huggingface/datasets/pull/3098
3,098
Push to hub capabilities for `Dataset` and `DatasetDict`
closed
9
2021-10-17T04:12:44
2021-12-08T16:04:50
2021-11-24T11:25:36
LysandreJik
[]
This PR implements a `push_to_hub` method on `Dataset` and `DatasetDict`. This does not currently work in `IterableDatasetDict` nor `IterableDataset` as those are simple dicts and I would like your opinion on how you would like to implement this before going ahead and doing it. This implementation needs to be used w...
true
1,027,750,811
https://api.github.com/repos/huggingface/datasets/issues/3097
https://github.com/huggingface/datasets/issues/3097
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
closed
1
2021-10-15T19:34:38
2021-10-18T07:51:54
2021-10-18T07:51:54
VictorSanh
[ "bug" ]
## Describe the bug I keep runnig into a fsspec ModuleNotFound error ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_infos 2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudar...
false
1,027,535,685
https://api.github.com/repos/huggingface/datasets/issues/3096
https://github.com/huggingface/datasets/pull/3096
3,096
Fix Audio feature mp3 resampling
closed
0
2021-10-15T15:05:19
2021-10-15T15:38:30
2021-10-15T15:38:30
albertvillanova
[]
Issue #3095 is related to mp3 resampling, not to `cast_column`. This PR fixes Audio feature mp3 resampling. Fix #3095.
true
1,027,453,146
https://api.github.com/repos/huggingface/datasets/issues/3095
https://github.com/huggingface/datasets/issues/3095
3,095
`cast_column` makes audio decoding fail
closed
2
2021-10-15T13:36:58
2023-04-07T09:43:20
2021-10-15T15:38:30
patrickvonplaten
[ "bug" ]
## Describe the bug After changing the sampling rate automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) pr...
false
1,027,328,633
https://api.github.com/repos/huggingface/datasets/issues/3094
https://github.com/huggingface/datasets/issues/3094
3,094
Support loading a dataset from SQLite files
closed
2
2021-10-15T10:58:41
2022-10-03T16:32:29
2022-10-03T16:32:29
albertvillanova
[ "enhancement", "good second issue" ]
As requested by @julien-c, we could eventually support loading a dataset from SQLite files, like it is the case for JSON/CSV files.
false
1,027,262,124
https://api.github.com/repos/huggingface/datasets/issues/3093
https://github.com/huggingface/datasets/issues/3093
3,093
Error loading json dataset with multiple splits if keys in nested dicts have a different order
closed
2
2021-10-15T09:33:25
2022-04-10T14:06:29
2022-04-10T14:06:29
dthulke
[ "bug" ]
## Describe the bug Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below. If the keys in the nested dicts always have the same order or even if you just load a single split in which the nested dicts don't have the same order, everything works fin...
false
1,027,260,383
https://api.github.com/repos/huggingface/datasets/issues/3092
https://github.com/huggingface/datasets/pull/3092
3,092
Fix JNLBA dataset
closed
2
2021-10-15T09:31:14
2022-07-10T14:36:49
2021-10-22T08:23:57
bhavitvyamalik
[]
As mentioned in #3089, I've added more tags and also updated the link for dataset which was earlier using a Google Drive link. I'm having problem with generating dummy data as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` is giving `datasets.keyhash.DuplicatedKeysError: FAIL...
true
1,027,251,530
https://api.github.com/repos/huggingface/datasets/issues/3091
https://github.com/huggingface/datasets/issues/3091
3,091
`blog_authorship_corpus` is broken
closed
3
2021-10-15T09:20:40
2021-10-19T13:06:10
2021-10-19T12:50:39
fdtomasi
[ "bug" ]
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ...
false
1,027,100,371
https://api.github.com/repos/huggingface/datasets/issues/3090
https://github.com/huggingface/datasets/pull/3090
3,090
Update BibTeX entry
closed
0
2021-10-15T05:39:27
2021-10-15T07:35:57
2021-10-15T07:35:57
albertvillanova
[]
Update BibTeX entry.
true
1,026,973,360
https://api.github.com/repos/huggingface/datasets/issues/3089
https://github.com/huggingface/datasets/issues/3089
3,089
JNLPBA Dataset
closed
2
2021-10-15T01:16:02
2021-10-22T08:23:57
2021-10-22T08:23:57
sciarrilli
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in ...
false
1,026,920,369
https://api.github.com/repos/huggingface/datasets/issues/3088
https://github.com/huggingface/datasets/pull/3088
3,088
Use template column_mapping to transmit_format instead of template features
closed
1
2021-10-14T23:49:40
2021-10-15T14:40:05
2021-10-15T10:11:04
mariosasko
[]
Use `template.column_mapping` to check for modified columns since `template.features` represent a generic template/column mapping. Fix #3087 TODO: - [x] Add a test
true
1,026,780,469
https://api.github.com/repos/huggingface/datasets/issues/3087
https://github.com/huggingface/datasets/issues/3087
3,087
Removing label column in a text classification dataset yields to errors
closed
0
2021-10-14T20:12:50
2021-10-15T10:11:04
2021-10-15T10:11:04
sgugger
[ "bug" ]
## Describe the bug This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error. To reproduce: ```py from datasets import load_dataset from transformers import AutoTokenizer raw_da...
false
1,026,481,905
https://api.github.com/repos/huggingface/datasets/issues/3086
https://github.com/huggingface/datasets/pull/3086
3,086
Remove _resampler from Audio fields
closed
0
2021-10-14T14:38:50
2021-10-14T15:13:41
2021-10-14T15:13:40
albertvillanova
[]
The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached. This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`. Fix #3083.
true
1,026,467,384
https://api.github.com/repos/huggingface/datasets/issues/3085
https://github.com/huggingface/datasets/pull/3085
3,085
Fixes to `to_tf_dataset`
closed
2
2021-10-14T14:25:56
2021-10-21T15:05:29
2021-10-21T15:05:28
Rocketknight1
[]
null
true
1,026,428,992
https://api.github.com/repos/huggingface/datasets/issues/3084
https://github.com/huggingface/datasets/issues/3084
3,084
VisibleDeprecationWarning when using `set_format("numpy")`
closed
1
2021-10-14T13:53:01
2021-10-22T16:04:14
2021-10-22T16:04:14
Rocketknight1
[ "bug" ]
Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return tokenizer(dataset['premise']) tokenized_datasets = dataset....
false
1,026,397,062
https://api.github.com/repos/huggingface/datasets/issues/3083
https://github.com/huggingface/datasets/issues/3083
3,083
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
closed
0
2021-10-14T13:23:53
2021-10-14T15:13:40
2021-10-14T15:13:40
albertvillanova
[ "bug" ]
## Describe the bug As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise TypeError. ## Steps to reproduce the bug ```python from datasets import load_dataset # load first time works ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") # ...
false
1,026,388,994
https://api.github.com/repos/huggingface/datasets/issues/3082
https://github.com/huggingface/datasets/pull/3082
3,082
Fix error related to huggingface_hub timeout parameter
closed
0
2021-10-14T13:17:47
2021-10-14T14:39:52
2021-10-14T14:39:51
albertvillanova
[]
The `huggingface_hub` package added the parameter `timeout` from version 0.0.19. This PR bumps this minimal version. Fix #3080.
true
1,026,383,749
https://api.github.com/repos/huggingface/datasets/issues/3081
https://github.com/huggingface/datasets/pull/3081
3,081
[Audio datasets] Adapting all audio datasets
closed
4
2021-10-14T13:13:45
2021-10-15T12:52:03
2021-10-15T12:22:33
patrickvonplaten
[]
This PR adds the new `Audio(...)` features - see: https://github.com/huggingface/datasets/pull/2324 to the most important audio datasets: - Librispeech - Timit - Common Voice - AMI - ... (others I'm forgetting now) The PR is curently blocked because the following leads to a problem: ```python from dataset...
true
1,026,380,626
https://api.github.com/repos/huggingface/datasets/issues/3080
https://github.com/huggingface/datasets/issues/3080
3,080
Error related to timeout keyword argument
closed
0
2021-10-14T13:10:58
2021-10-14T14:39:51
2021-10-14T14:39:51
albertvillanova
[ "bug" ]
## Describe the bug As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: dataset_info() got ...
false
1,026,150,362
https://api.github.com/repos/huggingface/datasets/issues/3077
https://github.com/huggingface/datasets/pull/3077
3,077
Fix loading a metric with internal import
closed
0
2021-10-14T09:06:58
2021-10-14T09:14:56
2021-10-14T09:14:55
albertvillanova
[]
After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports. This PR adds a new test case and fixes this bug. Fix #3076. CC: @sgugger @merveenoyan
true
1,026,113,484
https://api.github.com/repos/huggingface/datasets/issues/3076
https://github.com/huggingface/datasets/issues/3076
3,076
Error when loading a metric
closed
0
2021-10-14T08:29:27
2021-10-14T09:14:55
2021-10-14T09:14:55
albertvillanova
[ "bug" ]
## Describe the bug As reported by @sgugger, after last release, exception is thrown when loading a metric. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("squad_v2") ``` ## Actual results ``` FileNotFoundError Traceback (most recent ...
false
1,026,103,388
https://api.github.com/repos/huggingface/datasets/issues/3075
https://github.com/huggingface/datasets/pull/3075
3,075
Updates LexGLUE and MultiEURLEX README.md files
closed
0
2021-10-14T08:19:16
2021-10-18T10:13:40
2021-10-18T10:13:40
iliaschalkidis
[]
Updates LexGLUE and MultiEURLEX README.md files - Fix leaderboard in LexGLUE. - Fix an error in the CaseHOLD data example. - Turn MultiEURLEX dataset statistics table into HTML to nicely render in HF website.
true
1,025,940,085
https://api.github.com/repos/huggingface/datasets/issues/3074
https://github.com/huggingface/datasets/pull/3074
3,074
add XCSR dataset
closed
2
2021-10-14T04:39:59
2021-11-08T13:52:36
2021-11-08T13:52:36
yangxqiao
[]
Hi, I wanted to add the [XCSR ](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :) I followed the instructions of adding new dataset to huggingface and have all the required files ready now! It would be super helpful if you can take a look and review them. Thanks in advance for your time and ...
true
1,025,718,469
https://api.github.com/repos/huggingface/datasets/issues/3073
https://github.com/huggingface/datasets/issues/3073
3,073
Import error installing with ppc64le
closed
1
2021-10-13T21:37:23
2021-10-14T16:35:46
2021-10-14T16:33:28
gcervantes8
[ "bug" ]
## Describe the bug Installing the datasets library with a computer running with ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for...
false
1,025,233,152
https://api.github.com/repos/huggingface/datasets/issues/3072
https://github.com/huggingface/datasets/pull/3072
3,072
Fix pathlib patches for streaming
closed
0
2021-10-13T13:11:15
2021-10-13T13:31:05
2021-10-13T13:31:05
lhoestq
[]
Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time) `counter` now works in both streaming and non-streaming mode. And the `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of Path.open is fixed as well Note : the patches should only affect the datasets...
true
1,024,893,493
https://api.github.com/repos/huggingface/datasets/issues/3071
https://github.com/huggingface/datasets/issues/3071
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are remove from datasets template folder
closed
1
2021-10-13T07:32:10
2021-10-13T08:27:04
2021-10-13T08:27:03
zixiliuUSC
[ "dataset request" ]
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly about my custom dataset is seperate into many files and I only find a dataset loading template in [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](ht...
false
1,024,856,745
https://api.github.com/repos/huggingface/datasets/issues/3070
https://github.com/huggingface/datasets/pull/3070
3,070
Fix Windows CI with FileNotFoundError when stting up s3_base fixture
closed
1
2021-10-13T06:49:01
2021-10-13T08:55:13
2021-10-13T06:49:48
albertvillanova
[]
Fix #3069.
true
1,024,818,680
https://api.github.com/repos/huggingface/datasets/issues/3069
https://github.com/huggingface/datasets/issues/3069
3,069
CI fails on Windows with FileNotFoundError when stting up s3_base fixture
closed
0
2021-10-13T05:52:26
2021-10-13T08:05:49
2021-10-13T06:49:48
albertvillanova
[ "bug" ]
## Describe the bug After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when stting up s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321 Error summary: ``` ERROR tes...
false
1,024,681,264
https://api.github.com/repos/huggingface/datasets/issues/3068
https://github.com/huggingface/datasets/pull/3068
3,068
feat: increase streaming retry config
closed
1
2021-10-13T02:00:50
2021-10-13T09:25:56
2021-10-13T09:25:54
borisdayma
[]
Increase streaming config parameters: * retry interval set to 5 seconds * max retries set to 20 (so 1mn 40s)
true
1,024,023,185
https://api.github.com/repos/huggingface/datasets/issues/3067
https://github.com/huggingface/datasets/pull/3067
3,067
add story_cloze
closed
4
2021-10-12T16:36:53
2021-10-13T13:48:13
2021-10-13T13:48:13
zaidalyafeai
[]
null
true
1,024,005,311
https://api.github.com/repos/huggingface/datasets/issues/3066
https://github.com/huggingface/datasets/pull/3066
3,066
Add iter_archive
closed
0
2021-10-12T16:17:16
2022-09-21T14:10:10
2021-10-18T09:12:46
lhoestq
[]
Added the `iter_archive` method for the StreamingDownloadManager. It was already implemented in the regular DownloadManager. Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829 I also updated the `food101` dataset as an example. Any image/audio data...
true
1,023,951,322
https://api.github.com/repos/huggingface/datasets/issues/3065
https://github.com/huggingface/datasets/pull/3065
3,065
Fix test command after refac
closed
0
2021-10-12T15:23:30
2021-10-12T15:28:47
2021-10-12T15:28:46
lhoestq
[]
Fix the `datasets-cli` test command after the `prepare_module` change in #2986
true
1,023,900,075
https://api.github.com/repos/huggingface/datasets/issues/3064
https://github.com/huggingface/datasets/issues/3064
3,064
Make `interleave_datasets` more robust
open
3
2021-10-12T14:34:53
2022-07-30T08:47:26
null
sbmaruf
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Right now there are few hiccups using `interleave_datasets`. Interleaved dataset iterates until the smallest dataset completes it's iterator. In this way larger datasets may not complete full epoch of iteration. It creates new problems in calculation...
false
1,023,588,297
https://api.github.com/repos/huggingface/datasets/issues/3063
https://github.com/huggingface/datasets/issues/3063
3,063
Windows CI is unable to test streaming properly because of SSL issues
closed
2
2021-10-12T09:33:40
2022-08-24T14:59:29
2022-08-24T14:59:29
lhoestq
[ "streaming" ]
In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443 The issue appears only on windows with asyncio. On Linux it works. With requests it works as well. And with the production environment huggingface.co it also works. to rep...
false
1,023,209,592
https://api.github.com/repos/huggingface/datasets/issues/3062
https://github.com/huggingface/datasets/pull/3062
3,062
Update summary on PyPi beyond NLP
closed
0
2021-10-11T23:27:46
2021-10-13T08:55:54
2021-10-13T08:55:54
thomwolf
[]
More than just NLP now
true
1,023,103,119
https://api.github.com/repos/huggingface/datasets/issues/3061
https://github.com/huggingface/datasets/issues/3061
3,061
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)
open
2
2021-10-11T20:49:49
2021-10-22T09:34:10
null
BenoitDalFerro
[ "enhancement" ]
**A clear and concise description of what you want to happen.** It would be so nice to be able to nest HuggingFace `Datasets.map() ` progress bars in the grander scheme of things and whilst we're at it why not other functions. **Describe alternatives you've considered** By the way is there not a way to directl...
false
1,022,936,396
https://api.github.com/repos/huggingface/datasets/issues/3060
https://github.com/huggingface/datasets/issues/3060
3,060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
closed
2
2021-10-11T17:05:27
2021-10-28T05:52:21
2021-10-28T05:52:21
RylanSchaeffer
[ "bug" ]
## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `datas...
false
1,022,620,057
https://api.github.com/repos/huggingface/datasets/issues/3059
https://github.com/huggingface/datasets/pull/3059
3,059
Fix task reloading from cache
closed
0
2021-10-11T12:03:04
2021-10-11T12:23:39
2021-10-11T12:23:39
lhoestq
[]
When reloading a dataset from the cache when doing `map`, the tasks templates were kept instead of being updated regarding the output of the `map` function. This is an issue because we drop the tasks templates that are not compatible anymore after `map`, for example if a column of the template was removed. This PR f...
true
1,022,612,664
https://api.github.com/repos/huggingface/datasets/issues/3058
https://github.com/huggingface/datasets/issues/3058
3,058
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader.
closed
2
2021-10-11T11:54:59
2022-01-19T14:03:49
2022-01-19T14:03:49
hobbitlzy
[ "bug" ]
## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgrade them to the newest version and find it raises errors. I also tried other datasets. The `wikitext` works and the `bookcorpusopen` raises the same errors as `wikipe...
false
1,022,508,315
https://api.github.com/repos/huggingface/datasets/issues/3057
https://github.com/huggingface/datasets/issues/3057
3,057
Error in per class precision computation
closed
1
2021-10-11T10:05:19
2021-10-11T10:17:44
2021-10-11T10:16:16
tidhamecha2
[ "bug" ]
## Describe the bug When trying to get the per class precision values by providing `average=None`, following error is thrown `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric precision_metric = load_metric("...
false
1,022,345,564
https://api.github.com/repos/huggingface/datasets/issues/3056
https://github.com/huggingface/datasets/pull/3056
3,056
Fix meteor metric for version >= 3.6.4
closed
0
2021-10-11T07:11:44
2021-10-11T07:29:20
2021-10-11T07:29:19
albertvillanova
[]
After `nltk` update, the meteor metric expects pre-tokenized inputs (breaking change). This PR fixes this issue, while maintaining compatibility with older versions.
true
1,022,319,238
https://api.github.com/repos/huggingface/datasets/issues/3055
https://github.com/huggingface/datasets/issues/3055
3,055
CI test suite fails after meteor metric update
closed
0
2021-10-11T06:37:12
2021-10-11T07:30:31
2021-10-11T07:30:31
albertvillanova
[ "bug" ]
## Describe the bug CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010 Stack trace: ``` ___________________ LocalMetricTest.test_load_metric_meteor ____________________ [gw1] linux -- Python 3.6.15 /home/circleci/.pye...
false
1,022,108,186
https://api.github.com/repos/huggingface/datasets/issues/3054
https://github.com/huggingface/datasets/pull/3054
3,054
Update Biosses
closed
0
2021-10-10T22:25:12
2021-10-13T09:04:27
2021-10-13T09:04:27
bwang482
[]
Fix variable naming
true
1,022,076,905
https://api.github.com/repos/huggingface/datasets/issues/3053
https://github.com/huggingface/datasets/issues/3053
3,053
load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type
closed
5
2021-10-10T19:55:21
2023-02-24T14:02:20
2023-02-24T14:02:20
davidbau
[ "bug" ]
## Describe the bug When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type` ## Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset('the_pile_openwebtext2') ``` ## Expected results Should download the dataset...
false
1,021,944,435
https://api.github.com/repos/huggingface/datasets/issues/3052
https://github.com/huggingface/datasets/issues/3052
3,052
load_dataset cannot download the data and hangs on forever if cache dir specified
closed
1
2021-10-10T10:31:36
2021-10-11T10:57:09
2021-10-11T10:56:36
BenoitDalFerro
[ "bug" ]
## Describe the bug After updating datasets, a code that ran just fine for ages began to fail. Specifying _datasets.load_dataset_'s _cache_dir_ optional argument on Windows 10 machine results in data download to hang on forever. Same call without cache_dir works just fine. Surprisingly exact same code just runs perfec...
false
1,021,852,234
https://api.github.com/repos/huggingface/datasets/issues/3051
https://github.com/huggingface/datasets/issues/3051
3,051
Non-Matching Checksum Error with crd3 dataset
closed
2
2021-10-10T01:32:43
2022-03-15T15:54:26
2022-03-15T15:54:26
RylanSchaeffer
[ "bug" ]
## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ## Actual results A non-matching checksum err...
false
1,021,772,622
https://api.github.com/repos/huggingface/datasets/issues/3050
https://github.com/huggingface/datasets/pull/3050
3,050
Fix streaming: catch Timeout error
closed
5
2021-10-09T18:19:20
2021-10-12T15:28:18
2021-10-11T09:35:38
borisdayma
[]
Catches Timeout error during streaming. fix #3049
true
1,021,770,008
https://api.github.com/repos/huggingface/datasets/issues/3049
https://github.com/huggingface/datasets/issues/3049
3,049
TimeoutError during streaming
closed
0
2021-10-09T18:06:51
2021-10-11T09:35:38
2021-10-11T09:35:38
borisdayma
[ "bug" ]
## Describe the bug I got a TimeoutError after streaming for about 10h. ## Steps to reproduce the bug Very long code but we could do a test of streaming indefinitely data, though error may take a while to appear. ## Expected results This error was not expected in the code which considers only `ClientError` but...
false
1,021,765,661
https://api.github.com/repos/huggingface/datasets/issues/3048
https://github.com/huggingface/datasets/issues/3048
3,048
Identify which shard data belongs to
open
1
2021-10-09T17:46:35
2021-10-09T20:24:17
null
borisdayma
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-...
false
1,021,360,616
https://api.github.com/repos/huggingface/datasets/issues/3047
https://github.com/huggingface/datasets/issues/3047
3,047
Loading from cache a dataset for LM built from a text classification dataset sometimes errors
closed
1
2021-10-08T18:23:11
2021-11-03T17:13:08
2021-11-03T17:13:08
sgugger
[ "bug" ]
## Describe the bug Yes, I know, that description sucks. So the problem is arising in the course when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try since it's a bit fickle). Create a dataset for masled-language modeling from the IMDB dataset. ```python from datasets ...
false
1,021,021,368
https://api.github.com/repos/huggingface/datasets/issues/3046
https://github.com/huggingface/datasets/pull/3046
3,046
Fix MedDialog metadata JSON
closed
0
2021-10-08T12:04:40
2021-10-11T07:46:43
2021-10-11T07:46:42
albertvillanova
[]
Fix #2969.
true
1,020,968,704
https://api.github.com/repos/huggingface/datasets/issues/3045
https://github.com/huggingface/datasets/pull/3045
3,045
Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044
closed
8
2021-10-08T10:59:21
2021-10-21T16:58:32
2021-10-21T14:22:44
vlievin
[]
Fix #3044 1. A rough unit test that fails without the fix. It probably doesn't comply with your code standards, but that just to draft the idea. 2. A one liner fix
true
1,020,869,778
https://api.github.com/repos/huggingface/datasets/issues/3044
https://github.com/huggingface/datasets/issues/3044
3,044
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
open
4
2021-10-08T09:07:10
2025-03-04T07:16:00
null
vlievin
[ "bug" ]
## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fingerprint`. This means that the dataset will be mapped with the function for each and every call, w...
false
1,020,252,114
https://api.github.com/repos/huggingface/datasets/issues/3043
https://github.com/huggingface/datasets/issues/3043
3,043
Add PASS dataset
closed
0
2021-10-07T16:43:43
2022-01-20T16:50:47
2022-01-20T16:50:47
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** PASS - **Description:** An ImageNet replacement for self-supervised pretraining without humans - **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS Instructions to add a new dataset can be found [here](https://github.com/huggingface/dataset...
false
1,020,047,289
https://api.github.com/repos/huggingface/datasets/issues/3042
https://github.com/huggingface/datasets/pull/3042
3,042
Improving elasticsearch integration
open
1
2021-10-07T13:28:35
2022-07-06T15:19:48
null
ggdupont
[]
- adding murmurhash signature to sample in index - adding optional credentials for remote elasticsearch server - enabling sample update in index - upgrade the elasticsearch 7.10.1 python client - adding ElasticsearchBulider to instantiate a dataset from an index and a filtering query
true
1,018,911,385
https://api.github.com/repos/huggingface/datasets/issues/3041
https://github.com/huggingface/datasets/pull/3041
3,041
Load private data files + use glob on ZIP archives for json/csv/etc. module inference
closed
4
2021-10-06T18:16:36
2021-10-12T15:25:48
2021-10-12T15:25:46
lhoestq
[]
As mentioned in https://github.com/huggingface/datasets/issues/3032 loading data files from private repository isn't working correctly because of the data files resolved. #2986 did a refactor of the data files resolver. I added authentication to it. I also improved it to glob inside ZIP archives to look for json/...
true
1,018,782,475
https://api.github.com/repos/huggingface/datasets/issues/3040
https://github.com/huggingface/datasets/issues/3040
3,040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
closed
5
2021-10-06T17:08:47
2021-11-02T15:41:08
2021-11-02T15:41:08
patrickvonplaten
[ "bug" ]
## Describe the bug When only keeping a dummy size of a dataset (say the first 100 samples), and then saving it to disk to upload it in the following to the hub for easy demo/use - not just the small dataset is saved but the whole dataset with an indices file. The problem with this is that the dataset is still very...
false
1,018,219,800
https://api.github.com/repos/huggingface/datasets/issues/3039
https://github.com/huggingface/datasets/pull/3039
3,039
Add sberquad dataset
closed
0
2021-10-06T12:32:02
2021-10-13T10:19:11
2021-10-13T10:16:04
Alenush
[]
null
true
1,018,113,499
https://api.github.com/repos/huggingface/datasets/issues/3038
https://github.com/huggingface/datasets/pull/3038
3,038
add sberquad dataset
closed
0
2021-10-06T11:33:39
2021-10-06T11:58:01
2021-10-06T11:58:01
Alenush
[]
null
true
1,018,091,919
https://api.github.com/repos/huggingface/datasets/issues/3037
https://github.com/huggingface/datasets/pull/3037
3,037
SberQuad
closed
0
2021-10-06T11:21:08
2021-10-06T11:33:08
2021-10-06T11:33:08
Alenush
[]
null
true
1,017,687,944
https://api.github.com/repos/huggingface/datasets/issues/3036
https://github.com/huggingface/datasets/issues/3036
3,036
Protect master branch to force contributions via Pull Requests
closed
3
2021-10-06T07:34:17
2021-10-07T06:51:47
2021-10-07T06:49:52
albertvillanova
[ "enhancement" ]
In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows to give context, discuss any potential issues and improve the quality of the contribution - The Pull...
false
1,016,770,071
https://api.github.com/repos/huggingface/datasets/issues/3035
https://github.com/huggingface/datasets/issues/3035
3,035
`load_dataset` does not work with uploaded arrow file
open
2
2021-10-05T20:15:10
2021-10-06T17:01:37
null
patrickvonplaten
[ "enhancement" ]
## Describe the bug I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format. The dataset can correctly be loaded when doing: ```bash git lfs install git clone https://huggingface.co/datasets/ami-wav2vec2/a...
false
1,016,759,202
https://api.github.com/repos/huggingface/datasets/issues/3034
https://github.com/huggingface/datasets/issues/3034
3,034
Errors loading dataset using fs = a gcsfs.GCSFileSystem
open
0
2021-10-05T20:07:08
2021-10-05T20:26:39
null
dconatha
[ "bug" ]
## Describe the bug Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here... Basically what seems to be happening is that since datasets saves datasets as folders and folders aren't "real objects" in gcs, gcsfs raises a 404 error. There are workarounds if you...
false
1,016,619,572
https://api.github.com/repos/huggingface/datasets/issues/3033
https://github.com/huggingface/datasets/pull/3033
3,033
Actual "proper" install of ruamel.yaml in the windows CI
closed
0
2021-10-05T17:52:07
2021-10-05T17:54:57
2021-10-05T17:54:57
lhoestq
[]
It was impossible to update the package directly with `pip`. Indeed it was installed with `distutils` which prevents `pip` or `conda` to uninstall it. I had to `rm` a directory from the `site-packages` python directory, and then do `pip install ruamel.yaml` It's not that "proper" but I couldn't find better soluti...
true
1,016,488,475
https://api.github.com/repos/huggingface/datasets/issues/3032
https://github.com/huggingface/datasets/issues/3032
3,032
Error when loading private dataset with "data_files" arg
closed
1
2021-10-05T15:46:27
2021-10-12T15:26:22
2021-10-12T15:25:46
borisdayma
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. Private datasets with no loading script can't be loaded using `data_files` parameter. ## Steps to reproduce the bug ```python from datasets import load_dataset data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"} d...
false
1,016,458,496
https://api.github.com/repos/huggingface/datasets/issues/3031
https://github.com/huggingface/datasets/pull/3031
3,031
Align tqdm control with cache control
closed
1
2021-10-05T15:18:49
2021-10-18T15:00:21
2021-10-18T14:59:30
mariosasko
[]
Currently, once disabled with `disable_progress_bar`, progress bars cannot be re-enabled again. To overcome this limitation, this PR introduces the `set_progress_bar_enabled` function that accepts a boolean indicating whether to display progress bars. The goal is to provide a similar API to the existing cache control A...
true
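The API this PR proposes can be sketched with a module-level flag, mirroring the existing cache-control API. This is an illustrative sketch, not the actual `datasets` source; in the real library the flag would be consulted before wrapping an iterable with `tqdm`.

```python
# Illustrative sketch of the proposed progress-bar control flag.
_progress_bar_enabled = True  # on by default, matching the cache-control API

def set_progress_bar_enabled(enabled: bool) -> None:
    """Globally enable or disable progress bars."""
    global _progress_bar_enabled
    _progress_bar_enabled = bool(enabled)

def is_progress_bar_enabled() -> bool:
    return _progress_bar_enabled

set_progress_bar_enabled(False)
assert not is_progress_bar_enabled()
set_progress_bar_enabled(True)  # re-enabling now works, unlike a one-way disable
assert is_progress_bar_enabled()
```

Unlike a one-shot `disable_progress_bar`, a boolean setter lets callers toggle the behavior in both directions.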
1,016,435,324
https://api.github.com/repos/huggingface/datasets/issues/3030
https://github.com/huggingface/datasets/pull/3030
3,030
Add `remove_columns` to `IterableDataset`
closed
4
2021-10-05T14:58:33
2021-10-08T15:33:15
2021-10-08T15:31:53
changjonathanc
[]
Fixes #2944 WIP * Not tested yet. * We might want to allow batched remove for efficiency. @lhoestq Do you think it should have `batched=` and `batch_size=`?
true
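The core of a streaming `remove_columns` can be sketched as a lazy per-example key drop; this is a simplified illustration, not the PR's implementation, and it sidesteps the batching question raised above.

```python
from typing import Dict, Iterable, Iterator, List

def remove_columns(examples: Iterable[Dict], column_names: List[str]) -> Iterator[Dict]:
    """Lazily drop the given keys from each example so iteration stays streaming."""
    drop = set(column_names)
    for example in examples:
        yield {k: v for k, v in example.items() if k not in drop}

stream = ({"id": i, "text": f"t{i}", "label": i % 2} for i in range(3))
first = next(remove_columns(stream, ["label"]))
assert first == {"id": 0, "text": "t0"}
```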
1,016,389,901
https://api.github.com/repos/huggingface/datasets/issues/3029
https://github.com/huggingface/datasets/pull/3029
3,029
Use standard open-domain validation split in nq_open
closed
2
2021-10-05T14:19:27
2021-10-05T14:56:46
2021-10-05T14:56:45
craffel
[]
The nq_open dataset originally drew the validation set from this file: https://github.com/google-research-datasets/natural-questions/blob/master/nq_open/NQ-open.efficientqa.dev.1.1.sample.jsonl However, that's the dev set used specifically and only for the efficientqa competition, and it's not the same dev set as is ...
true
1,016,230,272
https://api.github.com/repos/huggingface/datasets/issues/3028
https://github.com/huggingface/datasets/pull/3028
3,028
Properly install ruamel-yaml for windows CI
closed
3
2021-10-05T11:51:15
2021-10-05T14:02:12
2021-10-05T11:51:22
lhoestq
[]
null
true
1,016,150,117
https://api.github.com/repos/huggingface/datasets/issues/3027
https://github.com/huggingface/datasets/issues/3027
3,027
Resolve data_files by split name
closed
3
2021-10-05T10:24:36
2021-11-05T17:49:58
2021-11-05T17:49:57
lhoestq
[]
This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── test.csv ``` Currently it returns ...
false
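The split resolution discussed in this issue can be sketched with filename pattern matching; the pattern table below is hypothetical (the actual rules were settled in the linked discussion).

```python
import fnmatch

# Hypothetical split patterns, for illustration only.
SPLIT_PATTERNS = {
    "train": ["*train*"],
    "test": ["*test*", "*eval*"],
    "validation": ["*valid*", "*dev*"],
}

def infer_splits(filenames):
    """Group data files into splits by filename, as the issue proposes."""
    splits = {}
    for name in filenames:
        for split, patterns in SPLIT_PATTERNS.items():
            if any(fnmatch.fnmatch(name, pattern) for pattern in patterns):
                splits.setdefault(split, []).append(name)
                break
    return splits

assert infer_splits(["data/train.csv", "data/test.csv"]) == {
    "train": ["data/train.csv"],
    "test": ["data/test.csv"],
}
```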
1,016,067,794
https://api.github.com/repos/huggingface/datasets/issues/3026
https://github.com/huggingface/datasets/pull/3026
3,026
added arxiv paper in swiss_judgment_prediction dataset card

closed
0
2021-10-05T09:02:01
2021-10-08T16:01:44
2021-10-08T16:01:24
JoelNiklaus
[]
null
true
1,016,061,222
https://api.github.com/repos/huggingface/datasets/issues/3025
https://github.com/huggingface/datasets/pull/3025
3,025
Fix Windows test suite
closed
0
2021-10-05T08:55:22
2021-10-05T09:58:28
2021-10-05T09:58:27
albertvillanova
[]
Try a hotfix to restore Windows test suite. Fix #3024.
true
1,016,052,911
https://api.github.com/repos/huggingface/datasets/issues/3024
https://github.com/huggingface/datasets/issues/3024
3,024
Windows test suite fails
closed
0
2021-10-05T08:46:46
2021-10-05T09:58:27
2021-10-05T09:58:27
albertvillanova
[ "bug" ]
## Describe the bug There is an error during installation of tests dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206 ``` ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we can...
false
1,015,923,031
https://api.github.com/repos/huggingface/datasets/issues/3023
https://github.com/huggingface/datasets/pull/3023
3,023
Fix typo
closed
0
2021-10-05T06:06:11
2021-10-05T11:56:55
2021-10-05T11:56:55
qqaatw
[]
null
true
1,015,750,221
https://api.github.com/repos/huggingface/datasets/issues/3022
https://github.com/huggingface/datasets/pull/3022
3,022
MeDAL dataset: Add further description and update download URL
closed
4
2021-10-05T00:13:28
2021-10-13T09:03:09
2021-10-13T09:03:09
xhluca
[]
Added more details in the following sections: * Dataset Structure * Data Instances * Data Splits * Source Data * Annotations * Discussions of Biases * Licensing Information
true
1,015,444,094
https://api.github.com/repos/huggingface/datasets/issues/3021
https://github.com/huggingface/datasets/pull/3021
3,021
Support loading dataset from multiple zipped CSV data files
closed
0
2021-10-04T17:33:57
2021-10-06T08:36:46
2021-10-06T08:36:45
albertvillanova
[]
Fix partially #3018. CC: @lewtun
true
1,015,406,105
https://api.github.com/repos/huggingface/datasets/issues/3020
https://github.com/huggingface/datasets/pull/3020
3,020
Add a metric for the MATH dataset (competition_math).
closed
4
2021-10-04T16:52:16
2021-10-22T10:29:31
2021-10-22T10:29:31
hacobe
[]
This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}").
true
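The canonicalization step described above can be illustrated with a toy normalizer; the real metric handles many more LaTeX forms than the single bare-fraction rewrite shown here.

```python
import re

def canonicalize(expr: str) -> str:
    """Toy normalization in the spirit of the metric: rewrite a bare
    fraction a/b as \\frac{a}{b}. The actual metric covers far more forms."""
    expr = expr.strip()
    match = re.fullmatch(r"(\d+)\s*/\s*(\d+)", expr)
    if match:
        return r"\frac{%s}{%s}" % (match.group(1), match.group(2))
    return expr

def is_equiv(prediction: str, reference: str) -> bool:
    """Compare a prediction and a reference after canonicalizing both."""
    return canonicalize(prediction) == canonicalize(reference)

assert is_equiv("1/2", r"\frac{1}{2}")
assert not is_equiv("1/3", r"\frac{1}{2}")
```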
1,015,339,983
https://api.github.com/repos/huggingface/datasets/issues/3019
https://github.com/huggingface/datasets/pull/3019
3,019
Fix filter leaking
closed
0
2021-10-04T15:42:58
2022-06-03T08:28:14
2021-10-05T08:33:07
lhoestq
[]
If filter is called after using a first transform `shuffle`, `select`, `shard`, `train_test_split`, or `filter`, then it could not work as expected and return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep when doing the filtering...
true
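The bug described in this PR can be illustrated with a minimal stand-in class: a `filter` that goes through the indices mapping left by a prior `select` cannot leak rows from before the first transform. This is a hypothetical sketch, not the real Arrow-backed implementation.

```python
class ArrowLikeDataset:
    """Minimal stand-in showing why `filter` must compose with the indices
    mapping left by a prior `select`/`shuffle` (illustrative only)."""

    def __init__(self, rows, indices=None):
        self._rows = rows
        self._indices = list(range(len(rows))) if indices is None else indices

    def select(self, positions):
        # Compose with the existing mapping instead of touching raw rows.
        return ArrowLikeDataset(self._rows, [self._indices[i] for i in positions])

    def filter(self, predicate):
        # Only consider rows reachable through the current indices mapping.
        kept = [i for i in self._indices if predicate(self._rows[i])]
        return ArrowLikeDataset(self._rows, kept)

    def to_list(self):
        return [self._rows[i] for i in self._indices]

ds = ArrowLikeDataset([0, 1, 2, 3, 4, 5])
subset = ds.select([0, 1, 2])                  # keep only the first three rows
filtered = subset.filter(lambda x: x % 2 == 0)
assert filtered.to_list() == [0, 2]            # rows 4 and 5 cannot leak back in
```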
1,015,311,877
https://api.github.com/repos/huggingface/datasets/issues/3018
https://github.com/huggingface/datasets/issues/3018
3,018
Support multiple zipped CSV data files
open
3
2021-10-04T15:16:59
2021-10-05T14:32:57
null
albertvillanova
[ "enhancement" ]
As requested by @lewtun, support loading multiple zipped CSV data files. ```python from datasets import load_dataset url = "https://domain.org/filename.zip" data_files = {"train": "train_filename.csv", "test": "test_filename.csv"} dataset = load_dataset("csv", data_dir=url, data_files=data_files) ```
false
1,015,215,528
https://api.github.com/repos/huggingface/datasets/issues/3017
https://github.com/huggingface/datasets/pull/3017
3,017
Remove unused parameter in xdirname
closed
0
2021-10-04T13:55:53
2021-10-05T11:37:01
2021-10-05T11:37:00
albertvillanova
[]
Minor fix to remove unused args `*p` in `xdirname`.
true
1,015,208,654
https://api.github.com/repos/huggingface/datasets/issues/3016
https://github.com/huggingface/datasets/pull/3016
3,016
Fix Windows paths in LJ Speech dataset
closed
0
2021-10-04T13:49:37
2021-10-04T15:23:05
2021-10-04T15:23:04
albertvillanova
[]
Minor fix in LJ Speech dataset for Windows pathname component separator. Related to #1878.
true
1,015,130,845
https://api.github.com/repos/huggingface/datasets/issues/3015
https://github.com/huggingface/datasets/pull/3015
3,015
Extend support for streaming datasets that use glob.glob
closed
0
2021-10-04T12:42:37
2021-10-05T13:46:39
2021-10-05T13:46:38
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `glob`, by patching the function `glob.glob`. Related to #2880, #2876, #2874
true
1,015,070,751
https://api.github.com/repos/huggingface/datasets/issues/3014
https://github.com/huggingface/datasets/pull/3014
3,014
Fix Windows path in MATH dataset
closed
0
2021-10-04T11:41:07
2021-10-04T12:46:44
2021-10-04T12:46:44
albertvillanova
[]
Minor fix in MATH dataset for Windows pathname component separator. Related to #2982.
true
1,014,960,419
https://api.github.com/repos/huggingface/datasets/issues/3013
https://github.com/huggingface/datasets/issues/3013
3,013
Improve `get_dataset_infos`?
closed
1
2021-10-04T09:47:04
2022-02-21T15:57:10
2022-02-21T15:57:10
severo
[ "question", "dataset-viewer" ]
Using the dedicated function `get_dataset_infos` on a dataset that has no dataset-info.json file returns an empty info: ``` >>> from datasets import get_dataset_infos >>> get_dataset_infos('wit') {} ``` While it's totally possible to get it (regenerate it) with: ``` >>> from datasets import load_dataset_b...
false
1,014,958,931
https://api.github.com/repos/huggingface/datasets/issues/3012
https://github.com/huggingface/datasets/pull/3012
3,012
Replace item with float in metrics
closed
0
2021-10-04T09:45:28
2021-10-04T11:30:34
2021-10-04T11:30:33
albertvillanova
[]
As pointed out by @mariosasko in #3001, calling `float()` instead of `.item()` is faster. Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`. Related to #3001.
true
1,014,935,713
https://api.github.com/repos/huggingface/datasets/issues/3011
https://github.com/huggingface/datasets/issues/3011
3,011
load_dataset_builder should error if "name" does not exist?
open
1
2021-10-04T09:20:46
2022-09-20T13:05:07
null
severo
[ "bug", "dataset-viewer" ]
``` import datasets as ds builder = ds.load_dataset_builder('sent_comp', name="doesnotexist") builder.info.config_name ``` returns ``` 'doesnotexist' ``` Shouldn't it raise an error instead? For this dataset, the only valid values for `name` should be: `"default"` or `None` (ie. argument not passed)
false
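The validation this issue asks for can be sketched as a small config-name check; this is an illustrative helper, not the real loader code, and the tuple of available names is an assumption.

```python
def resolve_config_name(name, available=("default",)):
    """Validate a requested config name: unknown names raise instead of
    being silently echoed back (illustrative sketch)."""
    if name is None:
        return available[0]  # no name passed -> the default config
    if name not in available:
        raise ValueError(
            f"Unknown config name {name!r}; expected one of {list(available)}"
        )
    return name

assert resolve_config_name(None) == "default"
assert resolve_config_name("default") == "default"
```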
1,014,918,470
https://api.github.com/repos/huggingface/datasets/issues/3010
https://github.com/huggingface/datasets/issues/3010
3,010
Chain filtering is leaking
closed
4
2021-10-04T09:04:55
2022-06-01T17:36:44
2022-06-01T17:36:44
DrMatters
[ "bug" ]
## Describe the bug As there's no support for lists within dataset fields, I convert my lists to json-string format. However, the bug described is occurring even when the data format is 'string'. These samples show that filtering behavior diverges from what's expected when chaining filterings. On sample 2 the second...
false
1,014,868,235
https://api.github.com/repos/huggingface/datasets/issues/3009
https://github.com/huggingface/datasets/pull/3009
3,009
Fix Windows paths in SUPERB benchmark datasets
closed
0
2021-10-04T08:13:49
2021-10-04T13:43:25
2021-10-04T13:43:25
albertvillanova
[]
Minor fix in SUPERB benchmark datasets for Windows pathname component separator. Related to #2884, #2783 and #2619.
true
1,014,849,163
https://api.github.com/repos/huggingface/datasets/issues/3008
https://github.com/huggingface/datasets/pull/3008
3,008
Fix precision/recall metrics with None average
closed
0
2021-10-04T07:54:15
2021-10-04T09:29:37
2021-10-04T09:29:36
albertvillanova
[]
Related to issue #2979 and PR #2992.
true
1,014,775,450
https://api.github.com/repos/huggingface/datasets/issues/3007
https://github.com/huggingface/datasets/pull/3007
3,007
Correct a typo
closed
0
2021-10-04T06:15:47
2021-10-04T09:27:57
2021-10-04T09:27:57
Yann21
[]
null
true
1,014,770,821
https://api.github.com/repos/huggingface/datasets/issues/3006
https://github.com/huggingface/datasets/pull/3006
3,006
Fix Windows paths in CommonLanguage dataset
closed
0
2021-10-04T06:08:58
2021-10-04T09:07:58
2021-10-04T09:07:58
albertvillanova
[]
Minor fix in CommonLanguage dataset for Windows pathname component separator. Related to #2989.
true
1,014,615,420
https://api.github.com/repos/huggingface/datasets/issues/3005
https://github.com/huggingface/datasets/issues/3005
3,005
DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument
closed
2
2021-10-04T00:49:29
2021-10-11T10:18:01
2021-10-04T08:46:13
DrMatters
[ "bug" ]
## Describe the bug The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument ## Steps to reproduce the bug ```python import datasets example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]}) def filter_value(example, value): return example['a'] == value...
false
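The intended behavior of `fn_kwargs` (the issue's predicate, working correctly) can be sketched with plain lists; `filter_rows` here is an illustrative helper, not the library code.

```python
def filter_rows(rows, function, fn_kwargs=None):
    """Forward `fn_kwargs` to the predicate, as `Dataset.filter` is
    expected to do (illustrative helper)."""
    fn_kwargs = fn_kwargs or {}
    return [row for row in rows if function(row, **fn_kwargs)]

def filter_value(example, value):
    return example["a"] == value

rows = [{"a": 1}, {"a": 2}, {"a": 3}]
assert filter_rows(rows, filter_value, fn_kwargs={"value": 2}) == [{"a": 2}]
```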
1,014,336,617
https://api.github.com/repos/huggingface/datasets/issues/3004
https://github.com/huggingface/datasets/pull/3004
3,004
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.
closed
4
2021-10-03T10:03:25
2021-10-13T13:37:02
2021-10-13T13:37:01
iliaschalkidis
[]
Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we i...
true
1,014,137,933
https://api.github.com/repos/huggingface/datasets/issues/3003
https://github.com/huggingface/datasets/pull/3003
3,003
common_language: Fix license in README.md
closed
0
2021-10-02T18:47:37
2021-10-04T09:27:01
2021-10-04T09:27:01
jimregan
[]
...it's correct elsewhere
true
1,014,120,524
https://api.github.com/repos/huggingface/datasets/issues/3002
https://github.com/huggingface/datasets/pull/3002
3,002
Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset
closed
2
2021-10-02T17:44:09
2021-10-13T11:48:00
2021-10-13T09:03:23
mariosasko
[]
This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback, and additionally: * renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency * correctly indents `TensorflowDatasetMixin`'s docstring * repl...
true
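The `weakref` callback approach mentioned in this PR can be demonstrated with a toy handle class: `weakref.finalize` keeps only a weak reference to the owner, so the owner can be garbage-collected (no `__del__`, no reference cycle) and the file is closed automatically. `DatasetHandle` is a hypothetical stand-in, not the real mixin.

```python
import gc
import os
import tempfile
import weakref

class DatasetHandle:
    """Toy object holding an open file, released via a weakref callback."""

    def __init__(self, path):
        self._file = open(path, "rb")
        # `finalize` holds the owner weakly; when `self` is collected,
        # the stored callback closes the file for us.
        self._finalizer = weakref.finalize(self, self._file.close)

fd, path = tempfile.mkstemp()
os.close(fd)
handle = DatasetHandle(path)
f = handle._file
del handle      # dropping the last reference triggers the finalizer
gc.collect()    # make collection deterministic across interpreters
assert f.closed
os.remove(path)
```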
1,014,024,982
https://api.github.com/repos/huggingface/datasets/issues/3001
https://github.com/huggingface/datasets/pull/3001
3,001
Fix cast to Python scalar in Matthews Correlation metric
closed
0
2021-10-02T11:44:59
2021-10-04T09:54:04
2021-10-04T09:26:12
mariosasko
[]
This PR is motivated by issue #2964. The Matthews Correlation metric relies on sklearn's `matthews_corrcoef` function to compute the result. This function returns either `float` or `np.float64` (see the [source](https://github.com/scikit-learn/scikit-learn/blob/844b4be24d20fc42cc13b957374c718956a0db39/sklearn/metric...
true
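The fix boils down to a one-line cast; the helper name below is illustrative. `float(x)` accepts both plain Python floats and NumPy scalars, whereas `.item()` would fail on a plain float.

```python
def to_python_float(x):
    """Cast a metric result to a plain float, as the PR does with `float(x)`."""
    return float(x)

assert to_python_float(0.5) == 0.5
assert not hasattr(0.5, "item")  # plain floats have no `.item()` method
```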
1,013,613,219
https://api.github.com/repos/huggingface/datasets/issues/3000
https://github.com/huggingface/datasets/pull/3000
3,000
Fix json loader when conversion not implemented
closed
2
2021-10-01T17:47:22
2021-10-01T18:05:00
2021-10-01T17:54:23
lhoestq
[]
Sometimes the arrow json parser fails if the `block_size` is too small and returns an `ArrowNotImplementedError: JSON conversion to struct...` error. By increasing the block size it makes it work again. Hopefully it should help with https://github.com/huggingface/datasets/issues/2799 I tried with the file ment...
true
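The retry idea described in this PR can be sketched as a block-size doubling loop; `read_fn` stands in for pyarrow's JSON reader (hypothetical signature), and the size constants are assumptions for illustration.

```python
def read_json_with_growing_block_size(read_fn, block_size=64 << 10, max_block_size=16 << 20):
    """If parsing fails because a JSON object straddles a block boundary,
    double `block_size` and retry (sketch of the pattern, not the PR code)."""
    while True:
        try:
            return read_fn(block_size=block_size)
        except ValueError:
            if block_size >= max_block_size:
                raise
            block_size *= 2

calls = []
def fake_reader(block_size):
    """Pretend small blocks split a JSON object, as in the reported error."""
    calls.append(block_size)
    if block_size < 256 << 10:
        raise ValueError("JSON conversion to struct not implemented")
    return "table"

assert read_json_with_growing_block_size(fake_reader) == "table"
assert calls == [64 << 10, 128 << 10, 256 << 10]
```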
1,013,536,933
https://api.github.com/repos/huggingface/datasets/issues/2999
https://github.com/huggingface/datasets/pull/2999
2,999
Set trivia_qa writer batch size
closed
0
2021-10-01T16:23:26
2021-10-01T16:34:55
2021-10-01T16:34:55
lhoestq
[]
Save some RAM when generating trivia_qa
true
1,013,372,871
https://api.github.com/repos/huggingface/datasets/issues/2998
https://github.com/huggingface/datasets/issues/2998
2,998
cannot shuffle dataset loaded from disk
open
0
2021-10-01T13:49:52
2021-10-01T13:49:52
null
pya25
[ "bug" ]
## Describe the bug dataset loaded from disk cannot be shuffled. ## Steps to reproduce the bug ``` my_dataset = load_from_disk('s3://my_file/validate', fs=s3) sample = my_dataset.select(range(100)).shuffle(seed=1234) ``` ## Actual results ``` sample = my_dataset .select(range(100)).shuffle(seed=1234) ...
false
1,013,270,069
https://api.github.com/repos/huggingface/datasets/issues/2997
https://github.com/huggingface/datasets/issues/2997
2,997
Dataset has incorrect labels
closed
3
2021-10-01T12:09:06
2021-10-01T15:32:00
2021-10-01T13:54:34
heiko-hotz
[]
The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached: ![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3...
false