Dataset schema (column, dtype, observed range in this preview):

column           dtype          observed range
id               int64          599M to 3.29B
url              string         length 58 to 61
html_url         string         length 46 to 51
number           int64          1 to 7.72k
title            string         length 1 to 290
state            string         2 classes
comments         int64          0 to 70
created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-08-05 09:28:51
updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-08-05 11:39:56
closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-08-01 05:15:45
user_login       string         length 3 to 26
labels           list           length 0 to 4
body             string         length 0 to 228k
is_pull_request  bool           2 classes
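Rows with this schema are easy to slice once loaded into a dataframe. A minimal sketch, assuming the records below have been exported as dicts (the three sample rows are taken from this preview; in practice one would load the full dataset rather than hand-build it):

```python
import pandas as pd

# A few rows shaped like the schema above (values copied from this preview).
rows = [
    {"number": 3906, "title": "NonMatchingChecksumError on Spider dataset",
     "state": "closed", "comments": 1, "user_login": "kolk",
     "labels": ["bug"], "is_pull_request": False},
    {"number": 3905, "title": "Perplexity Metric Card",
     "state": "closed", "comments": 3, "user_login": "emibaylor",
     "labels": [], "is_pull_request": True},
    {"number": 3888, "title": "IterableDataset columns and feature types",
     "state": "open", "comments": 8, "user_login": "lhoestq",
     "labels": ["generic discussion", "streaming"], "is_pull_request": False},
]
df = pd.DataFrame(rows)

# Keep only true issues (exclude pull requests) that are still open.
open_issues = df[(~df["is_pull_request"]) & (df["state"] == "open")]
print(open_issues["number"].tolist())  # → [3888]
```

The same boolean-mask filtering applies to any of the columns in the schema, e.g. `df["labels"].map(lambda ls: "bug" in ls)` to select bug reports.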
#3906 NonMatchingChecksumError on Spider dataset
  id: 1,168,496,328 | state: closed | comments: 1 | is_pull_request: false | user_login: kolk | labels: [ "bug" ]
  created_at: 2022-03-14T14:54:53 | updated_at: 2022-03-15T07:09:51 | closed_at: 2022-03-15T07:09:51
  url: https://api.github.com/repos/huggingface/datasets/issues/3906
  html_url: https://github.com/huggingface/datasets/issues/3906
  body: ## Describe the bug Failure to generate dataset ```spider``` because of checksums error for dataset source files. ## Steps to reproduce the bug ``` from datasets import load_dataset spider = load_dataset("spider") ``` ## Expected results Checksums should match for files from url ['https://drive.google.com...

#3905 Perplexity Metric Card
  id: 1,168,320,568 | state: closed | comments: 3 | is_pull_request: true | user_login: emibaylor | labels: []
  created_at: 2022-03-14T12:39:40 | updated_at: 2022-03-16T19:38:56 | closed_at: 2022-03-16T19:38:56
  url: https://api.github.com/repos/huggingface/datasets/issues/3905
  html_url: https://github.com/huggingface/datasets/pull/3905
  body: Add Perplexity metric card Note that it is currently still missing the citation, but I plan to add it later today.

#3904 CONLL2003 Dataset not available
  id: 1,167,730,095 | state: closed | comments: 4 | is_pull_request: false | user_login: omarespejel | labels: [ "dataset bug" ]
  created_at: 2022-03-13T23:46:15 | updated_at: 2023-06-28T18:08:16 | closed_at: 2022-03-17T08:21:32
  url: https://api.github.com/repos/huggingface/datasets/issues/3904
  html_url: https://github.com/huggingface/datasets/issues/3904
  body: ## Describe the bug [CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip' ![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png) ## Steps to reproduce the bug ```python from datasets impo...

#3903 Add Biwi Kinect Head Pose dataset.
  id: 1,167,521,627 | state: closed | comments: 17 | is_pull_request: true | user_login: dnaveenr | labels: []
  created_at: 2022-03-13T08:59:21 | updated_at: 2022-05-31T17:02:19 | closed_at: 2022-05-31T12:15:58
  url: https://api.github.com/repos/huggingface/datasets/issues/3903
  html_url: https://github.com/huggingface/datasets/pull/3903
  body: This PR adds the Biwi Kinect Head Pose dataset. Dataset Request : Add Biwi Kinect Head Pose Database [#3822](https://github.com/huggingface/datasets/issues/3822) The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device.It contains 15K images of 20 people with 6...

#3902 Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
  id: 1,167,403,377 | state: closed | comments: 5 | is_pull_request: false | user_login: arunasank | labels: [ "bug" ]
  created_at: 2022-03-12T21:22:03 | updated_at: 2023-02-09T14:53:49 | closed_at: 2022-03-22T07:10:41
  url: https://api.github.com/repos/huggingface/datasets/issues/3902
  html_url: https://github.com/huggingface/datasets/issues/3902
  body: ## Describe the bug Unable to import datasets ## Steps to reproduce the bug ```python from datasets import Dataset, DatasetDict ``` ## Expected results The import works without errors ## Actual results ``` AttributeError Traceback (most recent call last) <ipython-input-37-c8c...

#3901 Dataset viewer issue for IndicParaphrase- the preview doesn't show
  id: 1,167,339,773 | state: closed | comments: 1 | is_pull_request: false | user_login: ratishsp | labels: [ "dataset-viewer" ]
  created_at: 2022-03-12T16:56:05 | updated_at: 2022-04-12T12:10:50 | closed_at: 2022-04-12T12:10:49
  url: https://api.github.com/repos/huggingface/datasets/issues/3901
  html_url: https://github.com/huggingface/datasets/issues/3901
  body: ## Dataset viewer issue for '*IndicParaphrase*' **Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)* *The preview of the dataset doesn't come up. The error on the console is: Status code: 400 Exception: FileNotFoundError Message: [Errno 2] ...
#3900 Add MetaShift dataset
  id: 1,167,224,903 | state: closed | comments: 16 | is_pull_request: true | user_login: dnaveenr | labels: []
  created_at: 2022-03-12T08:44:18 | updated_at: 2022-04-01T16:59:48 | closed_at: 2022-04-01T15:16:30
  url: https://api.github.com/repos/huggingface/datasets/issues/3900
  html_url: https://github.com/huggingface/datasets/pull/3900
  body: This PR adds the MetaShift dataset. Dataset Request : Add MetaShift dataset [#3813](https://github.com/huggingface/datasets/issues/3813) @lhoestq As discussed, - I have copied the preprocessing script and modified it as required to not create new directories and folders and instead yield the images. - I do ...

#3899 Add exact match metric
  id: 1,166,931,812 | state: closed | comments: 1 | is_pull_request: true | user_login: emibaylor | labels: []
  created_at: 2022-03-11T22:21:40 | updated_at: 2022-03-21T16:10:03 | closed_at: 2022-03-21T16:05:35
  url: https://api.github.com/repos/huggingface/datasets/issues/3899
  html_url: https://github.com/huggingface/datasets/pull/3899
  body: Adding the exact match metric and its metric card. Note: Some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into + work on fixing the failed tests when I'm back online after the weekend

#3898 Create README.md for WER metric
  id: 1,166,778,250 | state: closed | comments: 4 | is_pull_request: true | user_login: sashavor | labels: []
  created_at: 2022-03-11T19:29:09 | updated_at: 2022-03-15T17:05:00 | closed_at: 2022-03-15T17:04:59
  url: https://api.github.com/repos/huggingface/datasets/issues/3898
  html_url: https://github.com/huggingface/datasets/pull/3898
  body: Proposing a draft WER metric card, @lhoestq I'm not very certain about "Values from popular papers" -- I don't know ASR very well, what do you think of the examples I found?

#3897 Align tqdm control/cache control with Transformers
  id: 1,166,715,104 | state: closed | comments: 1 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-11T18:12:22 | updated_at: 2022-03-14T15:01:10 | closed_at: 2022-03-14T15:01:08
  url: https://api.github.com/repos/huggingface/datasets/issues/3897
  html_url: https://github.com/huggingface/datasets/pull/3897
  body: This PR: * aligns the `tqdm` logic with Transformers (follows https://github.com/huggingface/transformers/pull/15167) by moving the code to `utils/logging.py`, adding `enable_progres_bar`/`disable_progres_bar` and removing `set_progress_bar_enabled` (a note for @lhoestq: I'm not adding `logging.tqdm` to the public nam...

#3896 Missing google file for `multi_news` dataset
  id: 1,166,628,270 | state: closed | comments: 5 | is_pull_request: false | user_login: severo | labels: [ "dataset-viewer" ]
  created_at: 2022-03-11T16:38:10 | updated_at: 2022-03-15T12:30:23 | closed_at: 2022-03-15T12:30:23
  url: https://api.github.com/repos/huggingface/datasets/issues/3896
  html_url: https://github.com/huggingface/datasets/issues/3896
  body: ## Dataset viewer issue for '*multi_news*' **Link:** https://huggingface.co/datasets/multi_news ``` Server error Status code: 400 Exception: FileNotFoundError Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src ``` Am I the ...

#3895 Fix code examples indentation
  id: 1,166,619,182 | state: closed | comments: 4 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-11T16:29:04 | updated_at: 2022-03-11T17:34:30 | closed_at: 2022-03-11T17:34:29
  url: https://api.github.com/repos/huggingface/datasets/issues/3895
  html_url: https://github.com/huggingface/datasets/pull/3895
  body: Some code examples are currently not rendered correctly. I think this is because they are over-indented cc @mariosasko
#3894 [docs] make dummy data creation optional
  id: 1,166,611,270 | state: closed | comments: 3 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-11T16:21:34 | updated_at: 2022-03-11T17:27:56 | closed_at: 2022-03-11T17:27:55
  url: https://api.github.com/repos/huggingface/datasets/issues/3894
  html_url: https://github.com/huggingface/datasets/pull/3894
  body: Related to #3507 : dummy data for datasets created on the Hugging Face Hub are optional. We can discuss later to make them optional for datasets in this repository as well

#3893 Add default branch for doc building
  id: 1,166,551,684 | state: closed | comments: 2 | is_pull_request: true | user_login: sgugger | labels: []
  created_at: 2022-03-11T15:24:27 | updated_at: 2022-03-11T15:34:35 | closed_at: 2022-03-11T15:34:34
  url: https://api.github.com/repos/huggingface/datasets/issues/3893
  html_url: https://github.com/huggingface/datasets/pull/3893
  body: Since other libraries use `main` as their default branch and it's now the standard default, you have to specify a different name in the doc config if you're using `master` like datasets (`doc-builder` tries to guess it, but in the job, we have weird checkout of merge commits so it doesn't always manage to get it right)...

#3892 Fix CLI test checksums
  id: 1,166,227,003 | state: closed | comments: 4 | is_pull_request: true | user_login: albertvillanova | labels: []
  created_at: 2022-03-11T10:04:04 | updated_at: 2022-03-15T12:28:24 | closed_at: 2022-03-15T12:28:23
  url: https://api.github.com/repos/huggingface/datasets/issues/3892
  html_url: https://github.com/huggingface/datasets/pull/3892
  body: Previous PR: - #3796 introduced a side effect: `datasets-cli test` generates `dataset_infos.json` with `None` checksum values. See: - #3805 This PR introduces a way for `datasets-cli test` to force to record infos, even if `verify_infos=False` Close #3848. CC: @craffel

#3891 Fix race condition in doc build
  id: 1,165,503,732 | state: closed | comments: 1 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-10T17:17:10 | updated_at: 2022-03-10T17:23:00 | closed_at: 2022-03-10T17:17:30
  url: https://api.github.com/repos/huggingface/datasets/issues/3891
  html_url: https://github.com/huggingface/datasets/pull/3891
  body: Following https://github.com/huggingface/datasets/runs/5499386744 it seems that race conditions that create issues when updating the doc. I took the same approach as in `transformers` to fix race conditions

#3890 Update beans download urls
  id: 1,165,502,838 | state: closed | comments: 2 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-10T17:16:16 | updated_at: 2022-03-15T16:47:30 | closed_at: 2022-03-15T15:26:48
  url: https://api.github.com/repos/huggingface/datasets/issues/3890
  html_url: https://github.com/huggingface/datasets/pull/3890
  body: Replace the old URLs with the Hub [URLs](https://huggingface.co/datasets/beans/tree/main/data). Also reported by @stevhliu. Fix #3889

#3889 Cannot load beans dataset (Couldn't reach the dataset)
  id: 1,165,456,083 | state: closed | comments: 1 | is_pull_request: false | user_login: ivsanro1 | labels: [ "dataset bug" ]
  created_at: 2022-03-10T16:34:08 | updated_at: 2022-03-15T15:26:47 | closed_at: 2022-03-15T15:26:47
  url: https://api.github.com/repos/huggingface/datasets/issues/3889
  html_url: https://github.com/huggingface/datasets/issues/3889
  body: ## Describe the bug The beans dataset is unavailable to download. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('beans') ``` ## Expected results The dataset would be downloaded with no issue. ## Actual results ``` ConnectionError: Couldn't reach https://s...
#3888 IterableDataset columns and feature types
  id: 1,165,435,529 | state: open | comments: 8 | is_pull_request: false | user_login: lhoestq | labels: [ "generic discussion", "streaming" ]
  created_at: 2022-03-10T16:19:12 | updated_at: 2022-11-29T11:39:24 | closed_at: null
  url: https://api.github.com/repos/huggingface/datasets/issues/3888
  html_url: https://github.com/huggingface/datasets/issues/3888
  body: Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None` However it's often interesting to know the column types and types. This helps knowing what's inside your dataset without having to manua...

#3887 ImageFolder improvements
  id: 1,165,380,852 | state: closed | comments: 1 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-10T15:34:46 | updated_at: 2022-03-11T15:06:11 | closed_at: 2022-03-11T15:06:11
  url: https://api.github.com/repos/huggingface/datasets/issues/3887
  html_url: https://github.com/huggingface/datasets/pull/3887
  body: This PR adds the following improvements to the `imagefolder` dataset: * skip the extract step for image files (as discussed in https://github.com/huggingface/datasets/pull/2830#discussion_r816817919) * option to drop labels by setting `drop_labels=True` (useful for image pretraining cc @NielsRogge). This is faster th...

#3886 Retry HfApi call inside push_to_hub when 504 error
  id: 1,165,223,319 | state: closed | comments: 8 | is_pull_request: true | user_login: albertvillanova | labels: []
  created_at: 2022-03-10T13:24:40 | updated_at: 2022-03-16T09:00:56 | closed_at: 2022-03-15T16:19:50
  url: https://api.github.com/repos/huggingface/datasets/issues/3886
  html_url: https://github.com/huggingface/datasets/pull/3886
  body: Ass suggested by @lhoestq in #3872, this PR: - Implements a retry function - Retries HfApi call inside `push_to_hub` when 504 error. To be agreed: - max_retries = 2 (at 0.5 and 1 seconds) Fix #3872.

#3885 Fix some shuffle docs
  id: 1,165,102,209 | state: closed | comments: 1 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-10T11:29:15 | updated_at: 2022-03-10T14:16:29 | closed_at: 2022-03-10T14:16:28
  url: https://api.github.com/repos/huggingface/datasets/issues/3885
  html_url: https://github.com/huggingface/datasets/pull/3885
  body: Following #3842 some docs were still outdated (with `buffer_size` as the first argument)

#3884 Fix bug in METEOR metric due to nltk version
  id: 1,164,924,314 | state: closed | comments: 1 | is_pull_request: true | user_login: albertvillanova | labels: []
  created_at: 2022-03-10T08:44:20 | updated_at: 2022-03-10T09:03:40 | closed_at: 2022-03-10T09:03:39
  url: https://api.github.com/repos/huggingface/datasets/issues/3884
  html_url: https://github.com/huggingface/datasets/pull/3884
  body: Fix #3883.

#3883 The metric Meteor doesn't work for nltk ==3.6.4
  id: 1,164,663,229 | state: closed | comments: 1 | is_pull_request: false | user_login: zhaowei-wang-nlp | labels: [ "bug" ]
  created_at: 2022-03-10T02:28:27 | updated_at: 2022-03-10T09:03:39 | closed_at: 2022-03-10T09:03:39
  url: https://api.github.com/repos/huggingface/datasets/issues/3883
  html_url: https://github.com/huggingface/datasets/issues/3883
  body: ## Describe the bug Using the metric Meteor with nltk == 3.6.4 gives a TypeError: TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object ## Steps to reproduce the bug ```python import datasets metric = datasets.load_metric("meteor") predictions = ["hello world"] references = ["hello ...
#3882 Image process doc
  id: 1,164,595,388 | state: closed | comments: 1 | is_pull_request: true | user_login: stevhliu | labels: [ "documentation" ]
  created_at: 2022-03-10T00:32:10 | updated_at: 2022-03-15T15:24:16 | closed_at: 2022-03-15T15:24:09
  url: https://api.github.com/repos/huggingface/datasets/issues/3882
  html_url: https://github.com/huggingface/datasets/pull/3882
  body: This PR is a first draft of how to process image data. It adds: - Load an image dataset with `image` and `path` (adds tip about `decode=False` param to access the path and bytes, thanks to @mariosasko). - Load an image using the `ImageFolder` builder. I know there is an [example](https://huggingface.co/docs/dataset...

#3881 How to use Image folder
  id: 1,164,452,005 | state: closed | comments: 8 | is_pull_request: false | user_login: rozeappletree | labels: [ "question" ]
  created_at: 2022-03-09T21:18:52 | updated_at: 2022-03-11T08:45:52 | closed_at: 2022-03-11T08:45:52
  url: https://api.github.com/repos/huggingface/datasets/issues/3881
  html_url: https://github.com/huggingface/datasets/issues/3881
  body: Ran this code ``` load_dataset("imagefolder", data_dir="./my-dataset") ``` `https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing ``` --------------------------------------------------------------------------- FileNotFoundError Trace...

#3880 Change the framework switches to the new syntax
  id: 1,164,406,008 | state: closed | comments: 2 | is_pull_request: true | user_login: sgugger | labels: []
  created_at: 2022-03-09T20:29:10 | updated_at: 2022-03-15T14:13:28 | closed_at: 2022-03-15T14:13:27
  url: https://api.github.com/repos/huggingface/datasets/issues/3880
  html_url: https://github.com/huggingface/datasets/pull/3880
  body: This PR updates the syntax of the framework-specific code samples. With this new syntax, you'll be able to: - have paragraphs of text be framework-specific instead of just code samples - have support for Flax code samples if you want. This should be merged after https://github.com/huggingface/doc-builder/pull/63...

#3879 SQuAD v2 metric: create README.md
  id: 1,164,311,612 | state: closed | comments: 1 | is_pull_request: true | user_login: sashavor | labels: []
  created_at: 2022-03-09T18:47:56 | updated_at: 2022-03-10T16:48:59 | closed_at: 2022-03-10T16:48:59
  url: https://api.github.com/repos/huggingface/datasets/issues/3879
  html_url: https://github.com/huggingface/datasets/pull/3879
  body: Proposing SQuAD v2 metric card

#3878 Update cats_vs_dogs size
  id: 1,164,305,335 | state: closed | comments: 5 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-09T18:40:56 | updated_at: 2022-09-30T08:47:43 | closed_at: 2022-03-10T14:21:23
  url: https://api.github.com/repos/huggingface/datasets/issues/3878
  html_url: https://github.com/huggingface/datasets/pull/3878
  body: It seems like 12 new examples have been added to the `cats_vs_dogs`. This PR updates the size in the card and the info file to avoid a verification error (reported by @stevhliu).

#3877 Align metadata to DCAT/DCAT-AP
  id: 1,164,146,311 | state: open | comments: 0 | is_pull_request: false | user_login: EmidioStani | labels: [ "enhancement" ]
  created_at: 2022-03-09T16:12:25 | updated_at: 2022-03-09T16:33:42 | closed_at: null
  url: https://api.github.com/repos/huggingface/datasets/issues/3877
  html_url: https://github.com/huggingface/datasets/issues/3877
  body: **Is your feature request related to a problem? Please describe.** Align to DCAT metadata to describe datasets **Describe the solution you'd like** Reuse terms and structure from DCAT in the metadata file, ideally generate a json-ld file dcat compliant **Describe alternatives you've considered** **Addition...
#3876 Fix download_mode in dataset_module_factory
  id: 1,164,045,075 | state: closed | comments: 1 | is_pull_request: true | user_login: albertvillanova | labels: []
  created_at: 2022-03-09T14:54:33 | updated_at: 2022-03-10T08:47:00 | closed_at: 2022-03-10T08:46:59
  url: https://api.github.com/repos/huggingface/datasets/issues/3876
  html_url: https://github.com/huggingface/datasets/pull/3876
  body: Fix `download_mode` value set in `dataset_module_factory`. Before the fix, it was set to `bool` (default to `False`). Also set properly its default value in all public functions.

#3875 Module namespace cleanup for v2.0
  id: 1,164,029,673 | state: closed | comments: 4 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-09T14:43:07 | updated_at: 2022-03-11T15:42:06 | closed_at: 2022-03-11T15:42:05
  url: https://api.github.com/repos/huggingface/datasets/issues/3875
  html_url: https://github.com/huggingface/datasets/pull/3875
  body: This is an attempt to make the user-facing `datasets`' submodule namespace cleaner: In particular, this PR does the following: * removes the unused `zip_nested` and `flatten_nest_dict` and their accompanying tests * removes `pyarrow` from the top-level namespace * properly uses `__all__` and the `from <module> i...

#3874 add MSE and MAE metrics - V2
  id: 1,164,013,511 | state: closed | comments: 4 | is_pull_request: true | user_login: dnaveenr | labels: []
  created_at: 2022-03-09T14:30:16 | updated_at: 2022-03-09T17:20:42 | closed_at: 2022-03-09T17:18:20
  url: https://api.github.com/repos/huggingface/datasets/issues/3874
  html_url: https://github.com/huggingface/datasets/pull/3874
  body: Created a new pull request to resolve unrelated changes in PR caused due to rebasing. Ref Older PR : [#3845](https://github.com/huggingface/datasets/pull/3845) Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)

#3873 Create SQuAD metric README.md
  id: 1,163,961,578 | state: closed | comments: 2 | is_pull_request: true | user_login: sashavor | labels: []
  created_at: 2022-03-09T13:47:08 | updated_at: 2022-03-10T16:45:57 | closed_at: 2022-03-10T16:45:57
  url: https://api.github.com/repos/huggingface/datasets/issues/3873
  html_url: https://github.com/huggingface/datasets/pull/3873
  body: Proposal for a metrics card structure (with an example based on the SQuAD metric). @thomwolf @lhoestq @douwekiela @lewtun -- feel free to comment on structure or content (it's an initial draft, so I realize there's stuff missing!).

#3872 HTTP error 504 Server Error: Gateway Time-out
  id: 1,163,853,026 | state: closed | comments: 6 | is_pull_request: false | user_login: illiyas-sha | labels: []
  created_at: 2022-03-09T12:03:37 | updated_at: 2022-03-15T16:19:50 | closed_at: 2022-03-15T16:19:50
  url: https://api.github.com/repos/huggingface/datasets/issues/3872
  html_url: https://github.com/huggingface/datasets/issues/3872
  body: I am trying to push a large dataset(450000+) records with the help of `push_to_hub()` While pushing, it gives some error like this. ``` Traceback (most recent call last): File "data_split_speech.py", line 159, in <module> data_new_2.push_to_hub("user-name/dataset-name",private=True) File "/opt/conda/lib...

#3871 add pandas to env command
  id: 1,163,714,113 | state: closed | comments: 2 | is_pull_request: true | user_login: patrickvonplaten | labels: []
  created_at: 2022-03-09T09:48:51 | updated_at: 2022-03-09T11:21:38 | closed_at: 2022-03-09T11:21:37
  url: https://api.github.com/repos/huggingface/datasets/issues/3871
  html_url: https://github.com/huggingface/datasets/pull/3871
  body: Pandas is a required packages and used quite a bit. I don't see any downside with adding its version to the `datasets-cli env` command.
#3870 Add wikitablequestions dataset
  id: 1,163,633,239 | state: closed | comments: 4 | is_pull_request: true | user_login: SivilTaram | labels: []
  created_at: 2022-03-09T08:27:43 | updated_at: 2022-03-14T11:19:24 | closed_at: 2022-03-14T11:16:19
  url: https://api.github.com/repos/huggingface/datasets/issues/3870
  html_url: https://github.com/huggingface/datasets/pull/3870
  body: null

#3869 Making the Hub the place for datasets in Portuguese
  id: 1,163,434,800 | state: open | comments: 1 | is_pull_request: false | user_login: omarespejel | labels: [ "dataset request" ]
  created_at: 2022-03-09T03:06:18 | updated_at: 2022-03-09T09:04:09 | closed_at: null
  url: https://api.github.com/repos/huggingface/datasets/issues/3869
  html_url: https://github.com/huggingface/datasets/issues/3869
  body: Let's make Hugging Face Datasets the central hub for datasets in Portuguese :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community. What are some datasets in Portuguese worth ...

#3868 Ignore duplicate keys if `ignore_verifications=True`
  id: 1,162,914,114 | state: closed | comments: 2 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-08T17:14:56 | updated_at: 2022-03-09T13:50:45 | closed_at: 2022-03-09T13:50:44
  url: https://api.github.com/repos/huggingface/datasets/issues/3868
  html_url: https://github.com/huggingface/datasets/pull/3868
  body: Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`.

#3867 Update for the rename doc-builder -> hf-doc-utils
  id: 1,162,896,605 | state: closed | comments: 4 | is_pull_request: true | user_login: sgugger | labels: []
  created_at: 2022-03-08T16:58:25 | updated_at: 2023-09-24T09:54:44 | closed_at: 2022-03-08T17:30:45
  url: https://api.github.com/repos/huggingface/datasets/issues/3867
  html_url: https://github.com/huggingface/datasets/pull/3867
  body: This PR adapts the job to the upcoming change of name of `doc-builder`.

#3866 Bring back imgs so that forsk dont get broken
  id: 1,162,833,848 | state: closed | comments: 3 | is_pull_request: true | user_login: mishig25 | labels: []
  created_at: 2022-03-08T16:01:31 | updated_at: 2022-03-08T17:37:02 | closed_at: 2022-03-08T17:37:01
  url: https://api.github.com/repos/huggingface/datasets/issues/3866
  html_url: https://github.com/huggingface/datasets/pull/3866
  body: null

#3865 Add logo img
  id: 1,162,821,908 | state: closed | comments: 2 | is_pull_request: true | user_login: mishig25 | labels: []
  created_at: 2022-03-08T15:50:59 | updated_at: 2023-09-24T09:54:31 | closed_at: 2022-03-08T16:01:59
  url: https://api.github.com/repos/huggingface/datasets/issues/3865
  html_url: https://github.com/huggingface/datasets/pull/3865
  body: null
#3864 Update image dataset tags
  id: 1,162,804,942 | state: closed | comments: 1 | is_pull_request: true | user_login: mariosasko | labels: []
  created_at: 2022-03-08T15:36:32 | updated_at: 2022-03-08T17:04:47 | closed_at: 2022-03-08T17:04:46
  url: https://api.github.com/repos/huggingface/datasets/issues/3864
  html_url: https://github.com/huggingface/datasets/pull/3864
  body: Align the existing image datasets' tags with new tags introduced in #3800.

#3863 Update code blocks
  id: 1,162,802,857 | state: closed | comments: 1 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-08T15:34:43 | updated_at: 2022-03-09T16:45:30 | closed_at: 2022-03-09T16:45:29
  url: https://api.github.com/repos/huggingface/datasets/issues/3863
  html_url: https://github.com/huggingface/datasets/pull/3863
  body: Following https://github.com/huggingface/datasets/pull/3860#issuecomment-1061756712 and https://github.com/huggingface/datasets/pull/3690 we need to update the code blocks to use markdown instead of sphinx

#3862 Manipulate columns on IterableDataset (rename columns, cast, etc.)
  id: 1,162,753,733 | state: closed | comments: 2 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-08T14:53:57 | updated_at: 2022-03-10T16:40:22 | closed_at: 2022-03-10T16:40:21
  url: https://api.github.com/repos/huggingface/datasets/issues/3862
  html_url: https://github.com/huggingface/datasets/pull/3862
  body: I added: - add_column - cast - rename_column - rename_columns related to https://github.com/huggingface/datasets/issues/3444 TODO: - [x] docs - [x] tests

#3861 big_patent cased version
  id: 1,162,702,044 | state: closed | comments: 2 | is_pull_request: false | user_login: slvcsl | labels: [ "dataset request" ]
  created_at: 2022-03-08T14:08:55 | updated_at: 2023-04-21T14:32:03 | closed_at: 2023-04-21T14:32:03
  url: https://api.github.com/repos/huggingface/datasets/issues/3861
  html_url: https://github.com/huggingface/datasets/issues/3861
  body: Hi! I am interested in working with the big_patent dataset. In Tensorflow, there are a number of versions of the dataset: - 1.0.0 : lower cased tokenized words - 2.0.0 : Update to use cased raw strings - 2.1.2 (default): Fix update to cased raw strings. The version in the huggingface `datasets` library is th...

#3860 Small doc fixes
  id: 1,162,623,329 | state: closed | comments: 2 | is_pull_request: true | user_login: mishig25 | labels: []
  created_at: 2022-03-08T12:55:39 | updated_at: 2022-03-08T17:37:13 | closed_at: 2022-03-08T17:37:13
  url: https://api.github.com/repos/huggingface/datasets/issues/3860
  html_url: https://github.com/huggingface/datasets/pull/3860
  body: null

#3859 Unable to dowload big_patent (FileNotFoundError)
  id: 1,162,559,333 | state: closed | comments: 1 | is_pull_request: false | user_login: slvcsl | labels: [ "bug", "duplicate" ]
  created_at: 2022-03-08T11:47:12 | updated_at: 2022-03-08T13:04:09 | closed_at: 2022-03-08T13:04:04
  url: https://api.github.com/repos/huggingface/datasets/issues/3859
  html_url: https://github.com/huggingface/datasets/issues/3859
  body: ## Describe the bug I am trying to download some splits of the big_patent dataset, using the following code: `ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload") ` However, this leads to a FileNotFoundError. FileNotFoundError Traceback (most recent...
#3858 Udpate index.mdx margins
  id: 1,162,526,688 | state: closed | comments: 1 | is_pull_request: true | user_login: gary149 | labels: []
  created_at: 2022-03-08T11:11:52 | updated_at: 2022-03-08T12:57:57 | closed_at: 2022-03-08T12:57:56
  url: https://api.github.com/repos/huggingface/datasets/issues/3858
  html_url: https://github.com/huggingface/datasets/pull/3858
  body: null

#3857 Order of dataset changes due to glob.glob.
  id: 1,162,525,353 | state: open | comments: 1 | is_pull_request: false | user_login: patrickvonplaten | labels: [ "generic discussion" ]
  created_at: 2022-03-08T11:10:30 | updated_at: 2022-03-14T11:08:22 | closed_at: null
  url: https://api.github.com/repos/huggingface/datasets/issues/3857
  html_url: https://github.com/huggingface/datasets/issues/3857
  body: ## Describe the bug After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS system. There are currently multiple datasets that use `glob.g...

#3856 Fix push_to_hub with null images
  id: 1,162,522,034 | state: closed | comments: 1 | is_pull_request: true | user_login: lhoestq | labels: []
  created_at: 2022-03-08T11:07:09 | updated_at: 2022-03-08T15:22:17 | closed_at: 2022-03-08T15:22:16
  url: https://api.github.com/repos/huggingface/datasets/issues/3856
  html_url: https://github.com/huggingface/datasets/pull/3856
  body: This code currently raises an error because of the null image: ```python import datasets dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] } features = datasets.Features({ 'name': datasets.Value('string'), 'image': datasets.Image(), }) dataset = datasets.Dataset.fro...

#3855 Bad error message when loading private dataset
  id: 1,162,448,589 | state: closed | comments: 2 | is_pull_request: false | user_login: patrickvonplaten | labels: [ "bug" ]
  created_at: 2022-03-08T09:55:17 | updated_at: 2022-07-11T15:06:40 | closed_at: 2022-07-11T15:06:40
  url: https://api.github.com/repos/huggingface/datasets/issues/3855
  html_url: https://github.com/huggingface/datasets/issues/3855
  body: ## Describe the bug A pretty common behavior of an interaction between the Hub and datasets is the following. An organization adds a dataset in private mode and wants to load it afterward. ```python from transformers import load_dataset ds = load_dataset("NewT5/dummy_data", "dummy") ``` This command th...

#3854 load only England English dataset from common voice english dataset
  id: 1,162,434,199 | state: closed | comments: 2 | is_pull_request: false | user_login: amanjaiswal777 | labels: [ "question" ]
  created_at: 2022-03-08T09:40:52 | updated_at: 2024-03-23T12:40:58 | closed_at: 2022-03-09T08:13:33
  url: https://api.github.com/repos/huggingface/datasets/issues/3854
  html_url: https://github.com/huggingface/datasets/issues/3854
  body: training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]') testing_data = load_dataset("common_voice", "en", split="test[:200]") I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this? **Typical Voice Accent Prop...

#3853 add ontonotes_conll dataset
  id: 1,162,386,592 | state: closed | comments: 2 | is_pull_request: true | user_login: richarddwang | labels: []
  created_at: 2022-03-08T08:53:42 | updated_at: 2022-03-15T10:48:02 | closed_at: 2022-03-15T10:48:02
  url: https://api.github.com/repos/huggingface/datasets/issues/3853
  html_url: https://github.com/huggingface/datasets/pull/3853
  body: # Introduction of the dataset OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task , inclu...
#3852 Redundant add dataset information and dead link.
  id: 1,162,252,337 | state: closed | comments: 1 | is_pull_request: true | user_login: dnaveenr | labels: []
  created_at: 2022-03-08T05:57:05 | updated_at: 2022-03-08T16:54:36 | closed_at: 2022-03-08T16:54:36
  url: https://api.github.com/repos/huggingface/datasets/issues/3852
  html_url: https://github.com/huggingface/datasets/pull/3852
  body: > Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation. The "add a dataset link" gives 404 Error, and the share_dataset link has changed. I feel this inform...

#3851 Load audio dataset error
  id: 1,162,137,998 | state: closed | comments: 8 | is_pull_request: false | user_login: lemoner20 | labels: [ "bug" ]
  created_at: 2022-03-08T02:16:04 | updated_at: 2022-09-27T12:13:55 | closed_at: 2022-03-08T11:20:06
  url: https://api.github.com/repos/huggingface/datasets/issues/3851
  html_url: https://github.com/huggingface/datasets/issues/3851
  body: ## Load audio dataset error Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb, ``` from datasets import load_dataset, load_metric, Audio raw_datasets = load_dataset("superb", "ks", split="train") prin...

#3850 [feat] Add tqdm arguments
  id: 1,162,126,030 | state: closed | comments: 0 | is_pull_request: true | user_login: penguinwang96825 | labels: []
  created_at: 2022-03-08T01:53:25 | updated_at: 2022-12-16T05:34:07 | closed_at: 2022-12-16T05:34:07
  url: https://api.github.com/repos/huggingface/datasets/issues/3850
  html_url: https://github.com/huggingface/datasets/pull/3850
  body: In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible.

#3849 Add "Adversarial GLUE" dataset to datasets library
  id: 1,162,091,075 | state: closed | comments: 5 | is_pull_request: true | user_login: jxmorris12 | labels: []
  created_at: 2022-03-08T00:47:11 | updated_at: 2022-03-28T11:17:14 | closed_at: 2022-03-28T11:12:04
  url: https://api.github.com/repos/huggingface/datasets/issues/3849
  html_url: https://github.com/huggingface/datasets/pull/3849
  body: Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/ ```python >>> import datasets >>> >>> datasets.load_dataset('adv_glue') Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0...

#3848 NonMatchingChecksumError when checksum is None
  id: 1,162,076,902 | state: closed | comments: 7 | is_pull_request: false | user_login: jxmorris12 | labels: [ "bug" ]
  created_at: 2022-03-08T00:24:12 | updated_at: 2022-03-15T14:37:26 | closed_at: 2022-03-15T12:28:23
  url: https://api.github.com/repos/huggingface/datasets/issues/3848
  html_url: https://github.com/huggingface/datasets/issues/3848
  body: I ran into the following error when adding a new dataset: ```bash expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}} recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64...

#3847 Datasets' cache not re-used
  id: 1,161,856,417 | state: open | comments: 28 | is_pull_request: false | user_login: gejinchen | labels: [ "bug" ]
  created_at: 2022-03-07T19:55:15 | updated_at: 2025-05-19T11:58:55 | closed_at: null
  url: https://api.github.com/repos/huggingface/datasets/issues/3847
  html_url: https://github.com/huggingface/datasets/issues/3847
  body: ## Describe the bug For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory. ## Steps to reproduce the bug Here is a reproducer. The GPT2 tokenizer works perfectly with ca...
1,161,810,226
https://api.github.com/repos/huggingface/datasets/issues/3846
https://github.com/huggingface/datasets/pull/3846
3,846
Update faiss device docstring
closed
1
2022-03-07T19:06:59
2022-03-07T19:21:23
2022-03-07T19:21:22
lhoestq
[]
Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset`
true
1,161,739,483
https://api.github.com/repos/huggingface/datasets/issues/3845
https://github.com/huggingface/datasets/pull/3845
3,845
add RMSE and MAE metrics.
closed
6
2022-03-07T17:53:24
2022-03-09T16:50:03
2022-03-09T16:50:03
dnaveenr
[]
This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API. Both implementations are based on usage of sciket-learn. Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Please suggest any changes if r...
true
1,161,686,754
https://api.github.com/repos/huggingface/datasets/issues/3844
https://github.com/huggingface/datasets/pull/3844
3,844
Add rmse and mae metrics.
closed
2
2022-03-07T17:06:38
2022-03-07T17:24:32
2022-03-07T17:15:06
dnaveenr
[]
This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API. Both implementations are based on usage of sciket-learn. Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Any suggestions and changes req...
true
1,161,397,812
https://api.github.com/repos/huggingface/datasets/issues/3843
https://github.com/huggingface/datasets/pull/3843
3,843
Fix Google Drive URL to avoid Virus scan warning in streaming mode
closed
2
2022-03-07T13:09:19
2022-03-15T12:30:25
2022-03-15T12:30:23
mariosasko
[]
The streaming version of https://github.com/huggingface/datasets/pull/3787. Fix #3835 CC: @albertvillanova
true
1,161,336,483
https://api.github.com/repos/huggingface/datasets/issues/3842
https://github.com/huggingface/datasets/pull/3842
3,842
Align IterableDataset.shuffle with Dataset.shuffle
closed
3
2022-03-07T12:10:46
2022-03-07T19:03:43
2022-03-07T19:03:42
lhoestq
[]
From #3444 , Dataset.shuffle can have the same API than IterableDataset.shuffle (i.e. in streaming mode). Currently you can pass an optional seed to both if you want, BUT currently IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe...
true
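The `buffer_size` mentioned above drives approximate shuffling: a streamed dataset can't be fully shuffled without materializing it, so elements are swapped through a fixed-size buffer. A rough sketch of the idea (not the library's actual implementation):

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=None):
    # Approximate shuffling for a stream: keep a fixed-size buffer and, for
    # each new element, emit a randomly chosen buffered element and replace it.
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            i = rng.randrange(buffer_size)
            yield buffer[i]
            buffer[i] = item
    # Flush whatever is left in the buffer, in random order.
    rng.shuffle(buffer)
    yield from buffer

print(list(buffered_shuffle(range(10), 4, seed=42)))
```

A larger `buffer_size` gives a shuffle closer to uniform at the cost of memory, which is why picking a sensible default matters.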
1,161,203,842
https://api.github.com/repos/huggingface/datasets/issues/3841
https://github.com/huggingface/datasets/issues/3841
3,841
Pyright reportPrivateImportUsage when `from datasets import load_dataset`
closed
6
2022-03-07T10:24:04
2023-02-18T19:14:03
2023-02-13T13:48:41
lkhphuc
[ "bug" ]
## Describe the bug Pyright complains about the module not being exported. ## Steps to reproduce the bug Use an editor/IDE with the Pyright language server with the default configuration: ```python from datasets import load_dataset ``` ## Expected results No complaint from Pyright ## Actual results Pyright complaint below...
false
1,161,183,773
https://api.github.com/repos/huggingface/datasets/issues/3840
https://github.com/huggingface/datasets/pull/3840
3,840
Pin responses to fix CI for Windows
closed
1
2022-03-07T10:06:53
2022-03-07T10:12:36
2022-03-07T10:07:24
albertvillanova
[]
Temporarily fix CI for Windows by pinning `responses`. See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 Fix: #3839
true
1,161,183,482
https://api.github.com/repos/huggingface/datasets/issues/3839
https://github.com/huggingface/datasets/issues/3839
3,839
CI is broken for Windows
closed
0
2022-03-07T10:06:42
2022-05-20T14:13:43
2022-03-07T10:07:24
albertvillanova
[ "bug" ]
## Describe the bug See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 ``` ___________________ test_datasetdict_from_text_split[test] ____________________ [gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe split...
false
1,161,137,406
https://api.github.com/repos/huggingface/datasets/issues/3838
https://github.com/huggingface/datasets/issues/3838
3,838
Add a data type for labeled images (image segmentation)
open
0
2022-03-07T09:38:15
2024-05-29T16:50:55
null
severo
[ "enhancement" ]
It might be a mix of Image and ClassLabel, and the color palette might be generated automatically. --- ### Example every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette ...
false
1,161,109,031
https://api.github.com/repos/huggingface/datasets/issues/3837
https://github.com/huggingface/datasets/pull/3837
3,837
Release: 1.18.4
closed
0
2022-03-07T09:13:29
2022-03-07T11:07:35
2022-03-07T11:07:02
albertvillanova
[]
null
true
1,161,072,531
https://api.github.com/repos/huggingface/datasets/issues/3836
https://github.com/huggingface/datasets/pull/3836
3,836
Logo float left
closed
3
2022-03-07T08:38:34
2022-03-07T20:21:11
2022-03-07T09:14:11
mishig25
[]
<img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
true
1,161,029,205
https://api.github.com/repos/huggingface/datasets/issues/3835
https://github.com/huggingface/datasets/issues/3835
3,835
The link given on the gigaword does not work
closed
0
2022-03-07T07:56:42
2022-03-15T12:30:23
2022-03-15T12:30:23
martin6336
[ "bug" ]
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
false
1,160,657,937
https://api.github.com/repos/huggingface/datasets/issues/3834
https://github.com/huggingface/datasets/pull/3834
3,834
Fix dead dataset scripts creation link.
closed
0
2022-03-06T16:45:48
2022-03-07T12:12:07
2022-03-07T12:12:07
dnaveenr
[]
Previous link gives 404 error. Updated with a new dataset scripts creation link.
true
1,160,543,713
https://api.github.com/repos/huggingface/datasets/issues/3833
https://github.com/huggingface/datasets/pull/3833
3,833
Small typos in How-to-train tutorial.
closed
0
2022-03-06T07:49:49
2022-03-07T12:35:33
2022-03-07T12:13:17
lkhphuc
[]
null
true
1,160,503,446
https://api.github.com/repos/huggingface/datasets/issues/3832
https://github.com/huggingface/datasets/issues/3832
3,832
Making Hugging Face the place to go for Graph NNs datasets
open
4
2022-03-06T03:02:58
2022-03-14T07:45:38
null
omarespejel
[ "dataset request", "graph" ]
Let's make Hugging Face Datasets the central hub for GNN datasets :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field. What are some datasets worth integrating into the Hugging Face hub? In...
false
1,160,501,000
https://api.github.com/repos/huggingface/datasets/issues/3831
https://github.com/huggingface/datasets/issues/3831
3,831
when using to_tf_dataset with shuffle is true, not all completed batches are made
closed
4
2022-03-06T02:43:50
2022-03-08T15:18:56
2022-03-08T15:18:56
greenned
[ "bug" ]
## Describe the bug When converting a dataset to a tf_dataset by using to_tf_dataset with shuffle=True, the remainder is not converted into a final batch ## Steps to reproduce the bug This is the sample code below https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing ## Expected resul...
false
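The bug above is about the last, smaller batch being dropped when shuffling. A pure-Python sketch of the two batching behaviors (`tf.data` exposes this switch as `drop_remainder`; whether `to_tf_dataset` sets it when shuffling is what this issue is about):

```python
def batched(items, batch_size, drop_remainder=False):
    # Split a sequence into batches; the final short batch is kept unless
    # drop_remainder is set.
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

data = list(range(10))
print(batched(data, 4))                       # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(batched(data, 4, drop_remainder=True))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```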
1,160,181,404
https://api.github.com/repos/huggingface/datasets/issues/3830
https://github.com/huggingface/datasets/issues/3830
3,830
Got error when load cnn_dailymail dataset
closed
2
2022-03-05T01:43:12
2022-03-07T06:53:41
2022-03-07T06:53:41
wgong0510
[ "duplicate" ]
When using the datasets.load_dataset method to load the cnn_dailymail dataset, I got the error below: - windows os: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories' - google colab: NotADirec...
false
1,160,154,352
https://api.github.com/repos/huggingface/datasets/issues/3829
https://github.com/huggingface/datasets/issues/3829
3,829
[📄 Docs] Create a `datasets` performance guide.
open
1
2022-03-05T00:28:06
2022-03-10T16:24:27
null
dynamicwebpaige
[ "enhancement" ]
## Brief Overview Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with...
false
1,160,064,029
https://api.github.com/repos/huggingface/datasets/issues/3828
https://github.com/huggingface/datasets/issues/3828
3,828
The Pile's _FEATURE spec seems to be incorrect
closed
1
2022-03-04T21:25:32
2022-03-08T09:30:49
2022-03-08T09:30:48
dlwh
[ "bug" ]
## Describe the bug If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py: For "all" * the pile_set_name is never set for data * there's actually an id field inside of "meta" For subcorpora pubmed_central and hacker_news: * the meta is specified to be a string, but it's actually a di...
false
1,159,878,436
https://api.github.com/repos/huggingface/datasets/issues/3827
https://github.com/huggingface/datasets/pull/3827
3,827
Remove deprecated `remove_columns` param in `filter`
closed
1
2022-03-04T17:23:26
2022-03-07T12:37:52
2022-03-07T12:37:51
mariosasko
[]
A leftover from #3803.
true
1,159,851,110
https://api.github.com/repos/huggingface/datasets/issues/3826
https://github.com/huggingface/datasets/pull/3826
3,826
Add IterableDataset.filter
closed
2
2022-03-04T16:57:23
2022-03-09T17:23:13
2022-03-09T17:23:11
lhoestq
[]
_Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_ I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`: ```python def filter(self, function, batched=False, batch_size=1000, with_indices=False, input_columns=None): ``` TODO: - [x] tests - [x] docs rel...
true
1,159,802,345
https://api.github.com/repos/huggingface/datasets/issues/3825
https://github.com/huggingface/datasets/pull/3825
3,825
Update version and date in Wikipedia dataset
closed
1
2022-03-04T16:05:27
2022-03-04T17:24:37
2022-03-04T17:24:36
albertvillanova
[]
CC: @geohci
true
1,159,574,186
https://api.github.com/repos/huggingface/datasets/issues/3824
https://github.com/huggingface/datasets/pull/3824
3,824
Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute`
closed
1
2022-03-04T12:04:40
2022-03-04T18:04:22
2022-03-04T18:04:21
mariosasko
[]
Fix #3818
true
1,159,497,844
https://api.github.com/repos/huggingface/datasets/issues/3823
https://github.com/huggingface/datasets/issues/3823
3,823
500 internal server error when trying to open a dataset composed of Zarr stores
closed
4
2022-03-04T10:37:14
2022-03-08T09:47:39
2022-03-08T09:47:39
jacobbieker
[ "bug" ]
## Describe the bug The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code. The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of da...
false
1,159,395,728
https://api.github.com/repos/huggingface/datasets/issues/3822
https://github.com/huggingface/datasets/issues/3822
3,822
Add Biwi Kinect Head Pose Database
closed
10
2022-03-04T08:48:39
2025-04-07T13:04:25
2022-06-01T13:00:47
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** Biwi Kinect Head Pose Database - **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and rgb images are provided, together with the ground truth in the form of the 3D location of the head and its rotation angles. - ...
false
1,159,371,927
https://api.github.com/repos/huggingface/datasets/issues/3821
https://github.com/huggingface/datasets/pull/3821
3,821
Update Wikipedia dataset
closed
3
2022-03-04T08:19:21
2022-03-21T12:35:23
2022-03-21T12:31:00
albertvillanova
[]
This PR combines all updates to Wikipedia dataset. Once approved, this will be used to generate the pre-processed Wikipedia datasets. Finally, this PR will be able to be merged into master: - NOT using squash - BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved TODO: - [x] #3435 - [x]...
true
1,159,106,603
https://api.github.com/repos/huggingface/datasets/issues/3820
https://github.com/huggingface/datasets/issues/3820
3,820
`pubmed_qa` checksum mismatch
closed
1
2022-03-04T00:28:08
2022-03-04T09:42:32
2022-03-04T09:42:32
jon-tow
[ "bug", "duplicate" ]
## Describe the bug Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets try: datasets.load_dataset("pubmed_qa", "pqa_labeled") except Exception as e: print(e...
false
1,158,848,288
https://api.github.com/repos/huggingface/datasets/issues/3819
https://github.com/huggingface/datasets/pull/3819
3,819
Fix typo in doc build yml
closed
1
2022-03-03T20:08:44
2022-03-04T13:07:41
2022-03-04T13:07:41
mishig25
[]
cc: @lhoestq
true
1,158,788,545
https://api.github.com/repos/huggingface/datasets/issues/3818
https://github.com/huggingface/datasets/issues/3818
3,818
Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
closed
3
2022-03-03T18:57:54
2022-03-04T18:04:21
2022-03-04T18:04:21
lmvasque
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) does not work with [SARI](https://github.com/huggingface/datasets/blob/master/metr...
false
1,158,592,335
https://api.github.com/repos/huggingface/datasets/issues/3817
https://github.com/huggingface/datasets/pull/3817
3,817
Simplify Common Voice code
closed
1
2022-03-03T16:01:21
2022-03-04T14:51:48
2022-03-04T12:39:23
lhoestq
[]
In #3736 we introduced one method to generate examples when streaming, that is different from the one when not streaming. In this PR I propose a new implementation which is simpler: it only has one function, based on `iter_archive`. And you still have access to local audio files when loading the dataset in non-strea...
true
1,158,589,913
https://api.github.com/repos/huggingface/datasets/issues/3816
https://github.com/huggingface/datasets/pull/3816
3,816
Doc new UI test workflows2
closed
1
2022-03-03T15:59:14
2022-10-04T09:35:53
2022-03-03T16:42:15
mishig25
[]
null
true
1,158,589,512
https://api.github.com/repos/huggingface/datasets/issues/3815
https://github.com/huggingface/datasets/pull/3815
3,815
Fix iter_archive getting reset
closed
0
2022-03-03T15:58:52
2022-03-03T18:06:37
2022-03-03T18:06:13
lhoestq
[]
The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you iterate over it once. This means you can't pass the same archive iterator to several splits. To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times...
true
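The fix described above boils down to returning an object that restarts iteration on each pass, rather than a one-shot generator. A minimal sketch of the pattern (illustrative only, not the actual datasets code):

```python
class ReiterableArchive:
    # Wrap a generator *factory* instead of a generator, so iterating the
    # object a second time starts over from the beginning.
    def __init__(self, make_generator):
        self._make_generator = make_generator

    def __iter__(self):
        return self._make_generator()

def read_members():
    # Stand-in for reading (path, file) pairs out of an archive.
    yield ("file1.txt", b"a")
    yield ("file2.txt", b"b")

archive = ReiterableArchive(read_members)
print([name for name, _ in archive])  # first pass
print([name for name, _ in archive])  # second pass yields the same members
```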
1,158,518,995
https://api.github.com/repos/huggingface/datasets/issues/3814
https://github.com/huggingface/datasets/pull/3814
3,814
Handle Nones in PyArrow struct
closed
1
2022-03-03T15:03:35
2022-03-03T16:37:44
2022-03-03T16:37:43
mariosasko
[]
This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimal PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
true
1,158,474,859
https://api.github.com/repos/huggingface/datasets/issues/3813
https://github.com/huggingface/datasets/issues/3813
3,813
Add MetaShift dataset
closed
7
2022-03-03T14:26:45
2022-04-10T13:39:59
2022-04-10T13:39:59
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** MetaShift - **Description:** collection of 12,868 sets of natural images across 410 classes- - **Paper:** https://arxiv.org/abs/2202.06523v1 - **Data:** https://github.com/weixin-liang/metashift Instructions to add a new dataset can be found [here](https://github.com/huggingface/...
false
1,158,369,995
https://api.github.com/repos/huggingface/datasets/issues/3812
https://github.com/huggingface/datasets/pull/3812
3,812
benchmark streaming speed with tar vs zip archives
closed
1
2022-03-03T12:48:41
2022-03-03T14:55:34
2022-03-03T14:55:33
polinaeterna
[]
# do not merge ## Hypothesis Packing data into a single zip archive could allow us not to care about splitting data into several tar archives for efficient streaming, which is annoying (since data creators usually host the data in a single tar) ## Data I host it [here](https://huggingface.co/datasets/polinaeter...
true
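A toy, in-memory version of the comparison this draft PR benchmarks: build the same members as a tar and as a zip, then time a full sequential read of every member. (Real streaming speed additionally depends on network round-trips and seek patterns, which this sketch ignores.)

```python
import io
import tarfile
import time
import zipfile

# Identical members for both archive formats.
members = {f"file{i}.txt": b"x" * 1024 for i in range(100)}

tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    for name, data in members.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    for name, data in members.items():
        zf.writestr(name, data)

def read_tar(buf):
    # Sequentially read every regular file in the tar, returning total bytes.
    buf.seek(0)
    with tarfile.open(fileobj=buf) as tar:
        return sum(len(tar.extractfile(m).read()) for m in tar if m.isfile())

def read_zip(buf):
    # Sequentially read every member of the zip, returning total bytes.
    buf.seek(0)
    with zipfile.ZipFile(buf) as zf:
        return sum(len(zf.read(name)) for name in zf.namelist())

for label, fn, buf in [("tar", read_tar, tar_buf), ("zip", read_zip, zip_buf)]:
    t0 = time.perf_counter()
    total = fn(buf)
    print(label, total, f"{time.perf_counter() - t0:.4f}s")
```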
1,158,234,407
https://api.github.com/repos/huggingface/datasets/issues/3811
https://github.com/huggingface/datasets/pull/3811
3,811
Update dev doc gh workflows
closed
0
2022-03-03T10:29:01
2022-10-04T09:35:54
2022-03-03T10:45:54
mishig25
[]
Reflect changes from https://github.com/huggingface/transformers/pull/15891
true
1,158,202,093
https://api.github.com/repos/huggingface/datasets/issues/3810
https://github.com/huggingface/datasets/pull/3810
3,810
Update version of xcopa dataset
closed
0
2022-03-03T09:58:25
2022-03-03T10:44:30
2022-03-03T10:44:29
albertvillanova
[]
Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases We updated our loading script, but we did not bump a new version number: - #3254 This PR updates our loading script version from `1.0.0` to `1.1.0`.
true
1,158,143,480
https://api.github.com/repos/huggingface/datasets/issues/3809
https://github.com/huggingface/datasets/issues/3809
3,809
Checksums didn't match for datasets on Google Drive
closed
1
2022-03-03T09:01:10
2022-03-03T09:24:58
2022-03-03T09:24:05
muelletm
[ "bug", "duplicate" ]
## Describe the bug Datasets hosted on Google Drive do not seem to work right now. Loading them fails with a checksum error. ## Steps to reproduce the bug ```python from datasets import load_dataset for dataset in ["head_qa", "yelp_review_full"]: try: load_dataset(dataset) except Exception as excep...
false
1,157,650,043
https://api.github.com/repos/huggingface/datasets/issues/3808
https://github.com/huggingface/datasets/issues/3808
3,808
Pre-Processing Cache Fails when using a Factory pattern
closed
3
2022-03-02T20:18:43
2022-03-10T23:01:47
2022-03-10T23:01:47
Helw150
[ "bug" ]
## Describe the bug If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be re-processed each time. ## Steps to reproduce the bug ```python def preprocess_function_factory(augmenta...
false
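The mechanism behind the issue above: `datasets` fingerprints the preprocessing function to decide whether the cache can be reused, and a factory returns a fresh closure object on every run. A sketch of the distinction, and of a stabler fingerprint built from the code object plus the closed-over values (illustrative only, not datasets' actual dill-based hashing):

```python
import hashlib
import marshal

def preprocess_factory(scale):
    # Each call returns a brand-new function object, even for the same scale.
    def preprocess(x):
        return x * scale
    return preprocess

def code_fingerprint(fn):
    # Hash the compiled bytecode plus the closure's captured values, rather
    # than the function object itself (whose identity changes every call).
    payload = marshal.dumps(fn.__code__) + repr(
        [cell.cell_contents for cell in (fn.__closure__ or ())]
    ).encode()
    return hashlib.sha256(payload).hexdigest()

f1 = preprocess_factory(2)
f2 = preprocess_factory(2)
print(f1 is f2)                                      # False: distinct objects
print(code_fingerprint(f1) == code_fingerprint(f2))  # True: same code + closure
```

A common workaround on the user side is simply to define the preprocessing function at module level, so no closure is involved at all.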
1,157,531,812
https://api.github.com/repos/huggingface/datasets/issues/3807
https://github.com/huggingface/datasets/issues/3807
3,807
NonMatchingChecksumError in xcopa dataset
closed
6
2022-03-02T18:10:19
2022-05-20T06:00:42
2022-03-03T17:40:31
afcruzs-ms
[ "bug" ]
## Describe the bug Loading the xcopa dataset doesn't work; it fails due to a mismatch in the checksum. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("xcopa", "it") ``` ## Expected results The dataset should be loaded correctly. ## Actual results Fails ...
false
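The checksum issues in this batch (#3807, #3809, #3820) share one mechanism: `datasets` records an expected digest per downloaded file and raises `NonMatchingChecksumError` on mismatch, typically because the file changed upstream or an HTML warning page was downloaded instead of the data. A minimal sketch of that kind of verification, using SHA-256 (the exact digest algorithm and error type in the library are not shown here):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # Hex digest of the downloaded bytes, to compare with a recorded value.
    return hashlib.sha256(data).hexdigest()

expected = sha256_of(b"original file contents")

downloaded = b"original file contents"
if sha256_of(downloaded) != expected:
    raise ValueError("checksum mismatch: file changed upstream or download failed")
print("checksum OK")
```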