Columns (14, with observed value ranges):

    id               int64          599M to 3.29B
    url              string         length 58 to 61
    html_url         string         length 46 to 51
    number           int64          1 to 7.72k
    title            string         length 1 to 290
    state            string         2 classes
    comments         int64          0 to 70
    created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-08-05 09:28:51
    updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-08-05 11:39:56
    closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-08-01 05:15:45 (nullable)
    user_login       string         length 3 to 26
    labels           list           length 0 to 4
    body             string         length 0 to 228k (nullable)
    is_pull_request  bool           2 classes
#4007 set_format does not work with multi dimension tensor
issue | closed | 4 comments | by phihung | labels ["bug"] | id 1,179,381,021
created 2022-03-24T11:27:43 | updated 2022-03-30T07:28:57 | closed 2022-03-24T14:39:29
https://github.com/huggingface/datasets/issues/4007 | https://api.github.com/repos/huggingface/datasets/issues/4007
body: ## Describe the bug set_format only transforms the last dimension of a multi-dimension list to tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result...
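The symptom in this report (only the innermost dimension converted) is what a formatter produces when it maps leaf lists to tensors one at a time instead of converting the whole nested structure at once. A torch-free toy illustration of that shape mismatch; `Tensor` is a stand-in type and nothing below is the actual datasets formatter:

```python
Tensor = tuple  # stand-in for torch.Tensor


def naive_format(value):
    """Convert only the innermost lists: a 2x2 input comes back as a
    list of two Tensors instead of one 2x2 Tensor (the reported bug)."""
    if isinstance(value, list) and value and isinstance(value[0], list):
        return [naive_format(v) for v in value]
    return Tensor(value)


out = naive_format([[1, 2], [3, 4]])
print(type(out).__name__, [type(row).__name__ for row in out])
# → list ['tuple', 'tuple']
```

A correct formatter would instead hand the entire nested list to the tensor constructor in one call, yielding a single 2x2 tensor.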
#4006 Use audio feature in ASR task template
pull request | closed | 1 comment | by lhoestq | labels [] | id 1,179,367,195
created 2022-03-24T11:15:22 | updated 2022-03-24T17:19:29 | closed 2022-03-24T16:48:02
https://github.com/huggingface/datasets/pull/4006 | https://api.github.com/repos/huggingface/datasets/issues/4006
body: The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column. I changed that and updated all the datasets as well as the tests. The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero us...

#4005 Yelp not working
issue | closed | 6 comments | by patrickvonplaten | labels [] | id 1,179,365,663
created 2022-03-24T11:14:00 | updated 2022-03-25T14:59:57 | closed 2022-03-25T14:56:10
https://github.com/huggingface/datasets/issues/4005 | https://api.github.com/repos/huggingface/datasets/issues/4005
body: ## Dataset viewer issue for '*name of the dataset*' **Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset ? No A seamingly...

#4004 ASSIN 2 dataset: replace broken Google Drive _URLS by links on github
pull request | closed | 1 comment | by ruanchaves | labels [] | id 1,179,320,795
created 2022-03-24T10:37:39 | updated 2022-03-28T14:01:46 | closed 2022-03-28T13:56:39
https://github.com/huggingface/datasets/pull/4004 | https://api.github.com/repos/huggingface/datasets/issues/4004
body: Closes #4003 . Fixes checksum error. Replaces Google Drive urls by the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin)

#4003 ASSIN2 dataset checksum bug
issue | closed | 6 comments | by ruanchaves | labels ["bug"] | id 1,179,286,877
created 2022-03-24T10:08:50 | updated 2022-04-27T14:14:45 | closed 2022-03-28T13:56:39
https://github.com/huggingface/datasets/issues/4003 | https://api.github.com/repos/huggingface/datasets/issues/4003
body: ## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` ----------------------------------------------------------------------...

#4002 Support streaming conll2012_ontonotesv5 dataset
pull request | closed | 1 comment | by albertvillanova | labels [] | id 1,179,263,787
created 2022-03-24T09:49:56 | updated 2022-03-24T10:53:41 | closed 2022-03-24T10:48:47
https://github.com/huggingface/datasets/pull/4002 | https://api.github.com/repos/huggingface/datasets/issues/4002
body: Use another URL whit a single ZIP file (instead of previous one with a ZIP file inside another ZIP file).

#4001 How to use generate this multitask dataset for SQUAD? I am getting a value error.
issue | closed | 4 comments | by gsk1692 | labels [] | id 1,179,231,418
created 2022-03-24T09:21:51 | updated 2022-03-26T09:48:21 | closed 2022-03-26T03:35:43
https://github.com/huggingface/datasets/issues/4001 | https://api.github.com/repos/huggingface/datasets/issues/4001
body: ## Dataset viewer issue for 'squad_multitask*' **Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask *short description of the issue* I am trying to generate the multitask dataset for squad dataset. However, gives the error in dataset explorer as well as my local machine. I tried the comma...

#4000 load_dataset error: sndfile library not found
issue | closed | 4 comments | by i-am-neo | labels ["bug"] | id 1,178,844,616
created 2022-03-24T01:52:32 | updated 2022-03-25T17:53:33 | closed 2022-03-25T17:53:33
https://github.com/huggingface/datasets/issues/4000 | https://api.github.com/repos/huggingface/datasets/issues/4000
body: ## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71...

#3999 Docs maintenance
pull request | closed | 1 comment | by stevhliu | labels ["documentation"] | id 1,178,685,280
created 2022-03-23T21:27:33 | updated 2022-03-30T17:01:45 | closed 2022-03-30T16:56:38
https://github.com/huggingface/datasets/pull/3999 | https://api.github.com/repos/huggingface/datasets/issues/3999
body: This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect.
#3998 Fix Audio.encode_example() when writing an array
pull request | closed | 2 comments | by polinaeterna | labels [] | id 1,178,631,986
created 2022-03-23T20:32:13 | updated 2022-03-29T14:21:44 | closed 2022-03-29T14:16:13
https://github.com/huggingface/datasets/pull/3998 | https://api.github.com/repos/huggingface/datasets/issues/3998
body: Closes #3996

#3997 Sync Features dictionaries
pull request | closed | 1 comment | by mariosasko | labels [] | id 1,178,566,568
created 2022-03-23T19:23:51 | updated 2022-04-13T15:52:27 | closed 2022-04-13T15:46:19
https://github.com/huggingface/datasets/pull/3997 | https://api.github.com/repos/huggingface/datasets/issues/3997
body: This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731). A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delit...

#3996 Audio.encode_example() throws an error when writing example from array
issue | closed | 3 comments | by polinaeterna | labels ["bug"] | id 1,178,415,905
created 2022-03-23T17:11:47 | updated 2022-03-29T14:16:13 | closed 2022-03-29T14:16:13
https://github.com/huggingface/datasets/issues/3996 | https://api.github.com/repos/huggingface/datasets/issues/3996
body: ## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesI...

#3995 Close `PIL.Image` file handler in `Image.decode_example`
pull request | closed | 1 comment | by mariosasko | labels [] | id 1,178,232,623
created 2022-03-23T14:51:48 | updated 2022-03-23T18:24:52 | closed 2022-03-23T18:19:27
https://github.com/huggingface/datasets/pull/3995 | https://api.github.com/repos/huggingface/datasets/issues/3995
body: Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error. To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `...

#3994 Change audio column from string path to Audio feature in ASR task
pull request | closed | 0 comments | by polinaeterna | labels [] | id 1,178,211,138
created 2022-03-23T14:34:52 | updated 2022-03-23T15:43:43 | closed 2022-03-23T15:43:43
https://github.com/huggingface/datasets/pull/3994 | https://api.github.com/repos/huggingface/datasets/issues/3994
body: Will fix #3990

#3993 Streaming dataset + interleave + DataLoader hangs with multiple workers
issue | open | 5 comments | by jpilaul | labels ["bug"] | id 1,178,201,495
created 2022-03-23T14:27:29 | updated 2023-02-28T14:14:24 | closed: null
https://github.com/huggingface/datasets/issues/3993 | https://api.github.com/repos/huggingface/datasets/issues/3993
body: ## Describe the bug Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers. ## Steps to reproduce the bug ```python from datasets import interleave_datasets, load_dataset from torch.utils.data import DataLoader ...

#3992 Image column is not decoded in map when using with with_transform
issue | closed | 1 comment | by phihung | labels ["bug"] | id 1,177,946,153
created 2022-03-23T10:51:13 | updated 2022-12-13T16:59:06 | closed 2022-12-13T16:59:06
https://github.com/huggingface/datasets/issues/3992 | https://api.github.com/repos/huggingface/datasets/issues/3992
body: ## Describe the bug Image column is not _decoded_ in **map** when using with `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ...

#3991 Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
issue | open | 0 comments | by omarespejel | labels ["dataset request", "vision"] | id 1,177,362,901
created 2022-03-22T22:16:05 | updated 2022-03-23T12:57:16 | closed: null
https://github.com/huggingface/datasets/issues/3991 | https://api.github.com/repos/huggingface/datasets/issues/3991
body: ## Adding a Dataset - **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)* - **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and ev...

#3990 Improve AutomaticSpeechRecognition task template
issue | closed | 2 comments | by polinaeterna | labels ["enhancement"] | id 1,176,976,247
created 2022-03-22T15:41:08 | updated 2022-03-23T17:12:40 | closed 2022-03-23T17:12:40
https://github.com/huggingface/datasets/issues/3990 | https://api.github.com/repos/huggingface/datasets/issues/3990
body: **Is your feature request related to a problem? Please describe.** [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses path to audiofile as an audio column instead of a Audio feature itself (I guess it...
#3989 Remove old wikipedia leftovers
pull request | closed | 3 comments | by albertvillanova | labels [] | id 1,176,955,078
created 2022-03-22T15:25:46 | updated 2022-03-31T15:35:26 | closed 2022-03-31T15:30:16
https://github.com/huggingface/datasets/pull/3989 | https://api.github.com/repos/huggingface/datasets/issues/3989
body: After updating Wikipedia dataset, remove old wikipedia leftovers from doc.

#3988 More consistent references in docs
pull request | closed | 2 comments | by mariosasko | labels [] | id 1,176,858,540
created 2022-03-22T14:18:41 | updated 2022-03-22T17:06:32 | closed 2022-03-22T16:50:44
https://github.com/huggingface/datasets/pull/3988 | https://api.github.com/repos/huggingface/datasets/issues/3988
body: Aligns the internal references with style discussed in https://github.com/huggingface/datasets/pull/3980. cc @stevhliu

#3987 Fix Faiss custom_index device
pull request | closed | 1 comment | by albertvillanova | labels [] | id 1,176,481,659
created 2022-03-22T09:11:24 | updated 2022-03-24T12:18:59 | closed 2022-03-24T12:14:12
https://github.com/huggingface/datasets/pull/3987 | https://api.github.com/repos/huggingface/datasets/issues/3987
body: Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored. This PR fixes this by raising a ValueError if both arguments are passed. Alternatively, the `custom_index` could be transferred to the target `device`.
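The guard this PR describes is plain argument validation. A standalone sketch of the pattern; the function name and message are illustrative, not the actual `FaissIndex` signature:

```python
def check_index_args(custom_index=None, device=None):
    # Passing both is ambiguous: a prebuilt custom_index already lives
    # on some device, so raise instead of silently ignoring `device`.
    if custom_index is not None and device is not None:
        raise ValueError(
            "Pass either custom_index or device, not both; "
            "to use a GPU, move custom_index to the device yourself.")


check_index_args(device=0)               # fine: build a fresh index on GPU 0
check_index_args(custom_index=object())  # fine: use the prebuilt index as-is
```

Raising early turns a silent misconfiguration into an immediate, explainable failure.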
#3986 Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
issue | open | 5 comments | by kelvinAI | labels ["bug"] | id 1,176,429,565
created 2022-03-22T08:23:21 | updated 2023-03-06T16:55:04 | closed: null
https://github.com/huggingface/datasets/issues/3986 | https://api.github.com/repos/huggingface/datasets/issues/3986
body: ## Describe the bug Dataset loads indefinitely after modifying cache path (~/.cache/huggingface) If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script) ** Update: Transformer modules faces the same issue as well during loading ## A clear ...
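Part of why relocating the cache is sensitive is that the cache root is resolved from a chain of environment variables. The sketch below approximates the commonly documented precedence for illustration; it is not lifted from the datasets source:

```python
import os


def resolve_datasets_cache(env):
    # Approximate precedence: an explicit HF_DATASETS_CACHE wins,
    # then HF_HOME/datasets, then the ~/.cache/huggingface default.
    if env.get("HF_DATASETS_CACHE"):
        return env["HF_DATASETS_CACHE"]
    hf_home = env.get("HF_HOME") or os.path.join(
        os.path.expanduser("~"), ".cache", "huggingface")
    return os.path.join(hf_home, "datasets")


print(resolve_datasets_cache({"HF_HOME": "/mnt/big/hf"}))  # "/mnt/big/hf/datasets" on POSIX
```

Setting one variable but not the others can leave different libraries (datasets vs. transformers, as the reporter notes) resolving to different roots.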
#3985 [image feature] Too many files open error when image feature is returned as a path
issue | closed | 0 comments | by apsdehal | labels ["bug"] | id 1,175,982,937
created 2022-03-21T21:54:05 | updated 2022-03-23T18:19:27 | closed 2022-03-23T18:19:27
https://github.com/huggingface/datasets/issues/3985 | https://api.github.com/repos/huggingface/datasets/issues/3985
body: ## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA), and do a simple list comprehension on the dataset, I get `Too many open files error`. This is happening due to the way we are loading the image feature when a str path is returned from the `_generate_examples`. Specifically at http...

#3984 Local and automatic tests fail
issue | closed | 1 comment | by MarkusSagen | labels ["bug"] | id 1,175,822,117
created 2022-03-21T19:07:37 | updated 2023-07-25T15:18:40 | closed 2023-07-25T15:18:40
https://github.com/huggingface/datasets/issues/3984 | https://api.github.com/repos/huggingface/datasets/issues/3984
body: ## Describe the bug Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py` ## Steps to reproduce the bug ```shell git clone https://huggingface/datasets.git cd datasets ``` ```python python -m pip install -e . pytest ``` ## Expected...

#3983 Infinitely attempting lock
issue | closed | 4 comments | by jyrr | labels [] | id 1,175,759,412
created 2022-03-21T18:11:57 | updated 2024-05-09T08:24:34 | closed 2022-05-06T16:12:18
https://github.com/huggingface/datasets/issues/3983 | https://api.github.com/repos/huggingface/datasets/issues/3983
body: I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summariz...

#3982 Exclude Google Drive tests of the CI
pull request | closed | 2 comments | by lhoestq | labels [] | id 1,175,478,099
created 2022-03-21T14:34:16 | updated 2022-03-31T16:38:02 | closed 2022-03-21T14:51:35
https://github.com/huggingface/datasets/pull/3982 | https://api.github.com/repos/huggingface/datasets/issues/3982
body: These tests make the CI spam the Google Drive API, the CI now gets banned by Google Drive very often. I think we can just skip these tests from the CI for now. In the future we could have a CI job that runs only once a day or once a week for such cases cc @albertvillanova @mariosasko @severo Close #3415 ...

#3981 Add TER metric card
pull request | closed | 1 comment | by emibaylor | labels [] | id 1,175,423,517
created 2022-03-21T13:54:36 | updated 2022-03-29T13:57:11 | closed 2022-03-29T13:51:40
https://github.com/huggingface/datasets/pull/3981 | https://api.github.com/repos/huggingface/datasets/issues/3981
body: Add TER metric card This card is still missing content for the following sections: - **Limitations & Biases** - **Values from Papers** If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them!
#3980 Add tip on how to speed up loading with ImageFolder
pull request | closed | 5 comments | by mariosasko | labels [] | id 1,175,412,905
created 2022-03-21T13:45:58 | updated 2022-03-22T13:39:45 | closed 2022-03-22T13:34:56
https://github.com/huggingface/datasets/pull/3980 | https://api.github.com/repos/huggingface/datasets/issues/3980
body: This PR does two things: * adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960)) * replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (...

#3979 Fix google drive streaming for small files
pull request | closed | 4 comments | by lhoestq | labels [] | id 1,175,258,969
created 2022-03-21T11:38:46 | updated 2023-09-24T09:55:19 | closed 2022-03-21T14:25:58
https://github.com/huggingface/datasets/pull/3979 | https://api.github.com/repos/huggingface/datasets/issues/3979
body: Google drive did another change recently, following #3787 #3843 . In particular Google Drive now returns 403 for GET requests with `confirm=t` when a files doesn't have a virus warning message. I fixed this by passing `confirm=t` if and only if when there is one (i.e. when status code is 200 for HEAD)

#3978 I can't view HFcallback dataset for ASR Space
issue | open | 4 comments | by kingabzpro | labels [] | id 1,175,226,456
created 2022-03-21T11:07:49 | updated 2023-09-25T12:19:53 | closed: null
https://github.com/huggingface/datasets/issues/3978 | https://api.github.com/repos/huggingface/datasets/issues/3978
body: ## Dataset viewer issue for '*Urdu-ASR-flags*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)* *I think dataset should show some thing and if you want me to add script, please show me the documentation. I thought this was suppose to be automatic task.* A...

#3977 Adapt `docs/README.md` for datasets
issue | closed | 1 comment | by qqaatw | labels ["documentation"] | id 1,175,049,927
created 2022-03-21T08:26:49 | updated 2023-02-27T10:32:37 | closed 2023-02-27T10:32:37
https://github.com/huggingface/datasets/issues/3977 | https://api.github.com/repos/huggingface/datasets/issues/3977
body: ## Describe the bug Currently `docs/README.md` is a direct copy from `transformers`, we should probably adapt this file for `datasets`.

#3976 Fix main classes reference in docs
pull request | closed | 3 comments | by qqaatw | labels [] | id 1,175,043,780
created 2022-03-21T08:19:46 | updated 2022-04-12T14:19:39 | closed 2022-04-12T14:19:38
https://github.com/huggingface/datasets/pull/3976 | https://api.github.com/repos/huggingface/datasets/issues/3976
body: Currently the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`, this PR fixes this issue by wrapping code examples in this page with markdown code block. There are other exam...

#3975 Update many missing tags to dataset README's
pull request | closed | 0 comments | by MarkusSagen | labels [] | id 1,174,678,942
created 2022-03-20T20:42:27 | updated 2022-03-21T18:39:52 | closed 2022-03-21T18:39:52
https://github.com/huggingface/datasets/pull/3975 | https://api.github.com/repos/huggingface/datasets/issues/3975
body: I've started to go through the datasets available and noticed that there are 127 datasets that does not have all the tags so I started filling them in; starting with some of the most common and QA datasets Not 100% certain that the task_id is correct for SuperGLUE If anyone is browsing the issues and would like t...

#3974 Add XFUN dataset
pull request | closed | 8 comments | by qqaatw | labels ["dataset contribution"] | id 1,174,485,044
created 2022-03-20T09:24:54 | updated 2022-10-03T09:38:16 | closed 2022-10-03T09:36:22
https://github.com/huggingface/datasets/pull/3974 | https://api.github.com/repos/huggingface/datasets/issues/3974
body: This PR adds XFUN dataset. Home page and repository: https://github.com/doc-analysis/XFUND Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py

#3973 ConnectionError and SSLError
issue | closed | 6 comments | by yanyu2015 | labels ["bug"] | id 1,174,455,431
created 2022-03-20T06:45:37 | updated 2022-03-30T08:13:32 | closed 2022-03-30T08:13:32
https://github.com/huggingface/datasets/issues/3973 | https://api.github.com/repos/huggingface/datasets/issues/3973
body: code ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') ``` bug report ``` --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_2978...

#3972 Adding Roman Urdu Hate Speech dataset
pull request | closed | 3 comments | by bp-high | labels [] | id 1,174,402,033
created 2022-03-20T00:19:26 | updated 2022-03-25T15:56:19 | closed 2022-03-25T15:51:20
https://github.com/huggingface/datasets/pull/3972 | https://api.github.com/repos/huggingface/datasets/issues/3972
body: This Pull request will add the Roman Urdu Hate speech Dataset.
#3971 Applied index-filters on scores in search.py.
pull request | closed | 1 comment | by vishalsrao | labels [] | id 1,174,329,442
created 2022-03-19T18:43:42 | updated 2022-04-12T14:48:23 | closed 2022-04-12T14:41:58
https://github.com/huggingface/datasets/pull/3971 | https://api.github.com/repos/huggingface/datasets/issues/3971
body: Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961. Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py.

#3970 Apply index-filters on scores in get_nearest_examples and get_nearest…
pull request | closed | 0 comments | by vishalsrao | labels [] | id 1,174,327,367
created 2022-03-19T18:32:31 | updated 2022-03-19T18:38:12 | closed 2022-03-19T18:38:12
https://github.com/huggingface/datasets/pull/3970 | https://api.github.com/repos/huggingface/datasets/issues/3970
body: Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961. Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py.

#3969 Cannot preview cnn_dailymail dataset
issue | closed | 10 comments | by hasan-besh | labels [] | id 1,174,273,824
created 2022-03-19T14:08:57 | updated 2022-04-20T15:52:49 | closed 2022-04-20T15:52:49
https://github.com/huggingface/datasets/issues/3969 | https://api.github.com/repos/huggingface/datasets/issues/3969
body: ## Dataset viewer issue for '*cnn_dailymail*' **Link:** https://huggingface.co/datasets/cnn_dailymail *short description of the issue* Am I the one who added this dataset ? Yes-No

#3968 Cannot preview 'indonesian-nlp/eli5_id' dataset
issue | closed | 5 comments | by cahya-wirawan | labels ["dataset-viewer"] | id 1,174,193,962
created 2022-03-19T06:54:09 | updated 2022-03-24T16:34:24 | closed 2022-03-24T16:34:24
https://github.com/huggingface/datasets/issues/3968 | https://api.github.com/repos/huggingface/datasets/issues/3968
body: ## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exis...

#3967 [feat] Add TextVQA dataset
pull request | closed | 3 comments | by apsdehal | labels [] | id 1,174,107,128
created 2022-03-18T23:29:39 | updated 2022-05-05T06:51:31 | closed 2022-05-05T06:44:29
https://github.com/huggingface/datasets/pull/3967 | https://api.github.com/repos/huggingface/datasets/issues/3967
body: This would be the first classification-based vision-and-language dataset in the datasets library. Currently, the dataset downloads everything you need beforehand. See the [paper](https://arxiv.org/abs/1904.08920) for more details. Test Plan: - Ran the full and the dummy data test locally

#3966 Create metric card for BERTScore
pull request | closed | 1 comment | by sashavor | labels [] | id 1,173,883,084
created 2022-03-18T18:21:56 | updated 2022-03-22T13:35:28 | closed 2022-03-22T13:30:56
https://github.com/huggingface/datasets/pull/3966 | https://api.github.com/repos/huggingface/datasets/issues/3966
body: Proposing a metric card for BERTScore

#3965 TypeError: Couldn't cast array of type for JSONLines dataset
issue | closed | 1 comment | by lewtun | labels ["bug"] | id 1,173,708,739
created 2022-03-18T15:17:53 | updated 2022-05-06T16:13:51 | closed 2022-05-06T16:13:51
https://github.com/huggingface/datasets/issues/3965 | https://api.github.com/repos/huggingface/datasets/issues/3965
body: ## Describe the bug One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below). This reminds me a bit of #2799 where one can load the dataset in `pan...

#3964 Add default Audio Loader
issue | closed | 0 comments | by polinaeterna | labels ["enhancement"] | id 1,173,564,993
created 2022-03-18T12:58:55 | updated 2022-08-22T14:20:46 | closed 2022-08-22T14:20:46
https://github.com/huggingface/datasets/issues/3964 | https://api.github.com/repos/huggingface/datasets/issues/3964
body: **Is your feature request related to a problem? Please describe.** Writing a custom loading dataset script might be a bit challenging for users. **Describe the solution you'd like** Add default Audio loader (analogous to ImageFolder) for small datasets with standard directory structure. **Describe alternatives ...

#3963 Add Audio Folder
pull request | closed | 14 comments | by polinaeterna | labels [] | id 1,173,492,562
created 2022-03-18T11:40:09 | updated 2022-06-15T16:33:19 | closed 2022-06-15T16:33:19
https://github.com/huggingface/datasets/pull/3963 | https://api.github.com/repos/huggingface/datasets/issues/3963
body: Would resolve #3964 AudioFolder loads a .txt file with transcriptions and creates a dataset with all audiofiles in provided directory that has a transcription (independently of the directory structure) as a single split (train). Can be loaded via: ```python # for local dirs dataset = load_dataset("audiofolder...

#3962 Fix flatten of Sequence feature type
pull request | closed | 1 comment | by lhoestq | labels [] | id 1,173,482,291
created 2022-03-18T11:27:42 | updated 2022-03-21T14:40:47 | closed 2022-03-21T14:36:12
https://github.com/huggingface/datasets/pull/3962 | https://api.github.com/repos/huggingface/datasets/issues/3962
body: The `Sequence` features type is not correctly flattened if it contains a dictionary. This PR fixes this, and I added a test case for this. Close https://github.com/huggingface/datasets/issues/3795
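Flattening in this context means turning a nested feature dictionary into top-level dotted column names. A toy version of the idea; this is an illustration of the concept, not the `Features.flatten` implementation:

```python
def flatten_features(features, prefix=""):
    """Toy flattening of a nested feature dict into dotted column
    names, e.g. {"a": {"b": ...}} -> {"a.b": ...}."""
    flat = {}
    for name, feat in features.items():
        key = f"{prefix}{name}"
        if isinstance(feat, dict):
            flat.update(flatten_features(feat, prefix=key + "."))
        else:
            flat[key] = feat
    return flat


print(flatten_features({"answers": {"text": "string", "start": "int32"}}))
# → {'answers.text': 'string', 'answers.start': 'int32'}
```

The bug fixed here was that a `Sequence` wrapping such a dictionary was not recursed into, so the nested columns never got their dotted names.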
#3961 Scores from Index at extra positions are not filtered out
issue | closed | 2 comments | by vishalsrao | labels ["bug"] | id 1,173,223,086
created 2022-03-18T06:13:23 | updated 2022-04-12T14:41:58 | closed 2022-04-12T14:41:58
https://github.com/huggingface/datasets/issues/3961 | https://api.github.com/repos/huggingface/datasets/issues/3961
body: If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too. Reference: https://github.com/hu...
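The fix this issue asks for amounts to filtering scores and indices together rather than only the samples. A minimal stdlib sketch of that idea; the function name and data shapes are hypothetical, not the actual `search.py` code:

```python
def filter_padded_results(scores, indices):
    """FAISS pads results with index -1 when the index holds fewer
    records than the requested k; drop the scores at those padded
    positions too, not just the corresponding dataset samples."""
    kept = [(s, i) for s, i in zip(scores, indices) if i != -1]
    return [s for s, _ in kept], [i for _, i in kept]


print(filter_padded_results([0.9, 0.5, -1.0], [12, 4, -1]))
# → ([0.9, 0.5], [12, 4])
```

Filtering both lists in one pass keeps them the same length, which is the invariant callers of a nearest-examples API reasonably expect.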
#3960 Load local dataset error
issue | open | 13 comments | by TXacs | labels ["bug", "dataset bug"] | id 1,173,148,884
created 2022-03-18T03:32:49 | updated 2023-08-02T17:12:20 | closed: null
https://github.com/huggingface/datasets/issues/3960 | https://api.github.com/repos/huggingface/datasets/issues/3960
body: When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_dataset('nateraw/image-folder...

#3959 Medium-sized dataset conversion from pandas causes a crash
issue | closed | 3 comments | by Antymon | labels ["bug"] | id 1,172,872,695
created 2022-03-17T20:20:35 | updated 2022-12-12T17:14:06 | closed 2022-04-20T12:35:37
https://github.com/huggingface/datasets/issues/3959 | https://api.github.com/repos/huggingface/datasets/issues/3959
body: Hi, I am suffering from the following issue: ## Describe the bug Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash: ``` File "/home/datasets_crash.py", line 7, in <module> arrow=datasets.Dataset.from_pandas(d) File "/home/.conda/envs/tools...

#3958 Update Wikipedia metadata
pull request | closed | 2 comments | by albertvillanova | labels [] | id 1,172,657,981
created 2022-03-17T17:50:05 | updated 2022-03-21T12:26:48 | closed 2022-03-21T12:26:47
https://github.com/huggingface/datasets/pull/3958 | https://api.github.com/repos/huggingface/datasets/issues/3958
body: This PR updates: - dataset card - metadata JSON

#3957 Fix xtreme s metrics
pull request | closed | 2 comments | by patrickvonplaten | labels [] | id 1,172,401,455
created 2022-03-17T13:39:04 | updated 2022-03-18T13:46:19 | closed 2022-03-18T13:42:16
https://github.com/huggingface/datasets/pull/3957 | https://api.github.com/repos/huggingface/datasets/issues/3957
body: We in fact do need BABEL in xtreme-s

#3956 TypeError: __init__() missing 1 required positional argument: 'scheme'
issue | closed | 8 comments | by amirj | labels ["bug"] | id 1,172,272,327
created 2022-03-17T11:43:13 | updated 2023-11-21T04:26:20 | closed 2022-03-28T08:00:01
https://github.com/huggingface/datasets/issues/3956 | https://api.github.com/repos/huggingface/datasets/issues/3956
body: ## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting El...

#3955 Remove unncessary 'pylint disable' message in ReadMe
pull request | closed | 0 comments | by Datta0 | labels [] | id 1,172,246,647
created 2022-03-17T11:16:55 | updated 2022-04-12T14:28:35 | closed 2022-04-12T14:28:35
https://github.com/huggingface/datasets/pull/3955 | https://api.github.com/repos/huggingface/datasets/issues/3955
body: null

#3954 The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
issue | closed | 6 comments | by MatanBenChorin | labels [] | id 1,172,141,664
created 2022-03-17T09:38:11 | updated 2022-04-20T12:39:07 | closed 2022-04-20T12:39:07
https://github.com/huggingface/datasets/issues/3954 | https://api.github.com/repos/huggingface/datasets/issues/3954
body: ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1' **Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true The dataset preview is not available for this dataset. Am I the one who added this dataset ? Yes
#3953 Add ImageNet Sketch
issue | closed | 2 comments | by NielsRogge | labels ["dataset request", "vision"] | id 1,172,123,736
created 2022-03-17T09:20:31 | updated 2022-05-23T18:05:29 | closed 2022-05-23T18:05:29
https://github.com/huggingface/datasets/issues/3953 | https://api.github.com/repos/huggingface/datasets/issues/3953
body: ## Adding a Dataset - **Name:** ImageNet Sketch - **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images, that matches the ImageNet classification validation set in categories and scale. - **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.o...

#3952 Checksum error for glue sst2, stsb, rte etc datasets
issue | closed | 1 comment | by ravindra-ut | labels ["bug"] | id 1,171,895,531
created 2022-03-17T03:45:47 | updated 2022-03-17T07:10:15 | closed 2022-03-17T07:10:14
https://github.com/huggingface/datasets/issues/3952 | https://api.github.com/repos/huggingface/datasets/issues/3952
body: ## Describe the bug Checksum error for glue sst2, stsb, rte etc datasets ## Steps to reproduce the bug ```python >>> nlp.load_dataset('glue', 'sst2') Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown sizetotal: 11.90 MiB) to Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ...

#3951 Forked streaming datasets try to `open` data urls rather than use network
issue | closed | 1 comment | by dlwh | labels ["bug"] | id 1,171,568,814
created 2022-03-16T21:21:02 | updated 2022-06-10T20:47:26 | closed 2022-06-10T20:47:26
https://github.com/huggingface/datasets/issues/3951 | https://api.github.com/repos/huggingface/datasets/issues/3951
body: ## Describe the bug Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else. ## Steps to reproduce the bug ```python from multiprocessing import freeze_support import transformer...

#3950 Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
issue | closed | 1 comment | by dlwh | labels ["bug", "good first issue"] | id 1,171,560,585
created 2022-03-16T21:14:11 | updated 2022-06-10T20:47:26 | closed 2022-06-10T20:47:26
https://github.com/huggingface/datasets/issues/3950 | https://api.github.com/repos/huggingface/datasets/issues/3950
body: ## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('os...
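The root cause reported here is picklability: DataLoader/Trainer worker processes receive the dataset by pickling it. A quick, generic way to check whether an object will survive that hand-off, using only the stdlib (nothing datasets-specific):

```python
import pickle


def survives_worker_handoff(obj):
    """Return True if obj can be pickled, i.e. handed to a
    multiprocessing worker the way DataLoader workers receive it."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False


print(survives_worker_handoff([1, 2, 3]))            # plain data: True
print(survives_worker_handoff(i for i in range(3)))  # generators: False
```

Streaming datasets that hold live generators or open network handles fail this check, which is why `dataloader_num_workers > 1` crashes while a single-process loader works.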
#3949 Remove GLEU metric
pull request | closed | 1 comment | by emibaylor | labels [] | id 1,171,467,981
created 2022-03-16T19:35:31 | updated 2022-04-12T20:43:26 | closed 2022-04-12T20:37:09
https://github.com/huggingface/datasets/pull/3949 | https://api.github.com/repos/huggingface/datasets/issues/3949
body: Remove the GLEU metric as it is not actually implemented.

#3948 Google BLEU Metric Card
pull request | closed | 1 comment | by emibaylor | labels [] | id 1,171,460,560
created 2022-03-16T19:27:17 | updated 2022-03-21T16:04:26 | closed 2022-03-21T16:04:25
https://github.com/huggingface/datasets/pull/3948 | https://api.github.com/repos/huggingface/datasets/issues/3948
body: Add metric card for Google BLEU (GLEU) metric One thing I noticed while writing this up is that, while this metric was made specifically to be better than BLEU at the sentence level instead of the corpus level, the current implementation only allows the calculation of the corpus-level statistic. I think changing thi...

#3947 BLEU metric card
pull request | closed | 2 comments | by emibaylor | labels [] | id 1,171,452,854
created 2022-03-16T19:20:07 | updated 2022-03-29T14:59:50 | closed 2022-03-29T14:54:14
https://github.com/huggingface/datasets/pull/3947 | https://api.github.com/repos/huggingface/datasets/issues/3947
body: Add BLEU metric card

#3946 Add newline to text dataset builder for controlling universal newlines mode
pull request | closed | 3 comments | by albertvillanova | labels [] | id 1,171,239,287
created 2022-03-16T16:11:11 | updated 2023-09-24T10:10:50 | closed 2023-09-24T10:10:47
https://github.com/huggingface/datasets/pull/3946 | https://api.github.com/repos/huggingface/datasets/issues/3946
body: Fix #3804.

#3945 Fix comet metric
pull request | closed | 4 comments | by lhoestq | labels [] | id 1,171,222,257
created 2022-03-16T15:56:47 | updated 2022-03-22T15:10:12 | closed 2022-03-22T15:05:30
https://github.com/huggingface/datasets/pull/3945 | https://api.github.com/repos/huggingface/datasets/issues/3945
body: The COMET metric has been broken for a while since big breaking changes happened. We did not catch them in the CI because the slow test mocks the download_model function that was changed. This PR fixes the metric, updates the download_model mock and updates the doctest.
1,171,209,510
https://api.github.com/repos/huggingface/datasets/issues/3944
https://github.com/huggingface/datasets/pull/3944
3,944
Create README.md
closed
1
2022-03-16T15:46:26
2022-03-17T17:50:54
2022-03-17T17:47:05
sashavor
[]
Proposing COMET metric card
true
1,171,185,070
https://api.github.com/repos/huggingface/datasets/issues/3943
https://github.com/huggingface/datasets/pull/3943
3,943
[Doc] Don't use v for version tags on GitHub
closed
1
2022-03-16T15:28:30
2022-03-17T11:46:26
2022-03-17T11:46:25
sgugger
[]
This removes the `v` automatically used by `doc-builder` for versions.
true
1,171,177,122
https://api.github.com/repos/huggingface/datasets/issues/3942
https://github.com/huggingface/datasets/issues/3942
3,942
reddit_tifu dataset: Checksums didn't match for dataset source files
closed
3
2022-03-16T15:23:30
2022-03-16T15:57:43
2022-03-16T15:39:25
XingxingZhang
[ "bug", "duplicate" ]
## Describe the bug When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files" ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) # load_dataset('billsum') load_dataset('reddit_tifu'...
false
1,171,132,709
https://api.github.com/repos/huggingface/datasets/issues/3941
https://github.com/huggingface/datasets/issues/3941
3,941
billsum dataset: Checksums didn't match for dataset source files:
closed
3
2022-03-16T14:52:08
2024-03-13T12:11:35
2022-03-16T15:46:44
XingxingZhang
[ "bug" ]
## Describe the bug When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files" ``` File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_u...
false
1,171,106,853
https://api.github.com/repos/huggingface/datasets/issues/3940
https://github.com/huggingface/datasets/pull/3940
3,940
Create CoVAL metric card
closed
1
2022-03-16T14:31:49
2022-03-18T17:37:59
2022-03-18T17:35:14
sashavor
[]
Initial CoVAL metric card
true
1,170,882,331
https://api.github.com/repos/huggingface/datasets/issues/3939
https://github.com/huggingface/datasets/issues/3939
3,939
Source links broken
closed
8
2022-03-16T11:17:47
2022-03-19T04:41:32
2022-03-19T04:41:32
qqaatw
[ "bug" ]
## Describe the bug The source links of v2.0.0 docs are broken: For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features...
false
1,170,875,417
https://api.github.com/repos/huggingface/datasets/issues/3938
https://github.com/huggingface/datasets/pull/3938
3,938
Avoid info log messages from transformers in FrugalScore metric
closed
1
2022-03-16T11:11:29
2022-03-17T08:37:25
2022-03-17T08:37:24
albertvillanova
[]
Fix #3928.
true
1,170,832,006
https://api.github.com/repos/huggingface/datasets/issues/3937
https://github.com/huggingface/datasets/issues/3937
3,937
Missing languages in lvwerra/github-code dataset
closed
5
2022-03-16T10:32:03
2022-03-22T07:09:23
2022-03-21T14:50:47
Eytan-S
[ "Dataset discussion" ]
Hi, I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset! I've noticed that two languages are missing from the dataset: TypeScript and Scala. Looks like they're also omitted from the query you used to get the original code. Are there any plans to add them in the fut...
false
1,170,713,473
https://api.github.com/repos/huggingface/datasets/issues/3936
https://github.com/huggingface/datasets/pull/3936
3,936
Fix Wikipedia version and re-add tests
closed
1
2022-03-16T08:48:04
2022-03-16T17:04:07
2022-03-16T17:04:05
albertvillanova
[]
To keep backward compatibility when loading using "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with updated date "20220301": - de - en - fr - frr - it - simple These pre-processed data can be acces...
true
1,170,292,492
https://api.github.com/repos/huggingface/datasets/issues/3934
https://github.com/huggingface/datasets/pull/3934
3,934
Create MAUVE metric card
closed
1
2022-03-15T21:36:07
2022-03-18T17:38:14
2022-03-18T17:34:13
sashavor
[]
Proposing a MAUVE metric card
true
1,170,253,605
https://api.github.com/repos/huggingface/datasets/issues/3933
https://github.com/huggingface/datasets/pull/3933
3,933
Update README.md
closed
1
2022-03-15T20:52:05
2022-03-17T17:51:24
2022-03-17T17:47:37
sashavor
[]
Fixing missing triple quote
true
1,170,221,773
https://api.github.com/repos/huggingface/datasets/issues/3932
https://github.com/huggingface/datasets/pull/3932
3,932
Create SARI metric card
closed
1
2022-03-15T20:37:23
2022-03-18T17:37:01
2022-03-18T17:32:55
sashavor
[]
SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: )
true
1,170,097,208
https://api.github.com/repos/huggingface/datasets/issues/3931
https://github.com/huggingface/datasets/pull/3931
3,931
Add align_labels_with_mapping docs
closed
1
2022-03-15T19:24:57
2022-03-18T16:28:31
2022-03-18T16:24:33
stevhliu
[ "documentation" ]
This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko πŸŽ‰ ). For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface....
true
1,170,087,793
https://api.github.com/repos/huggingface/datasets/issues/3930
https://github.com/huggingface/datasets/pull/3930
3,930
Create README.md
closed
1
2022-03-15T19:16:59
2022-04-04T15:23:15
2022-04-04T15:17:28
sashavor
[]
Creating a README for IndicGLUE cc @mcmillanmajora for fact checking in terms of languages (also, are there any limitations of the dataset or eval metric that I'm not aware of?)
true
1,170,066,235
https://api.github.com/repos/huggingface/datasets/issues/3929
https://github.com/huggingface/datasets/issues/3929
3,929
Load a local dataset twice
closed
1
2022-03-15T18:59:26
2022-03-16T09:55:09
2022-03-16T09:54:06
caush
[ "bug" ]
## Describe the bug Load a local "dataset" composed of two csv files twice. ## Steps to reproduce the bug Put the two joined files in a repository named "Data". Then in python: import datasets as ds ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'}) ## Expected results Should give something ...
false
1,170,017,132
https://api.github.com/repos/huggingface/datasets/issues/3928
https://github.com/huggingface/datasets/issues/3928
3,928
Frugal score deprecations
closed
1
2022-03-15T18:10:42
2022-03-17T08:37:24
2022-03-17T08:37:24
ierezell
[ "bug" ]
## Describe the bug The frugal score returns a really verbose output with warnings that can be easily changed. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets.load import load_metric frugal = load_metric("frugalscore") frugal.compute(predictions=["Do you like spinach...
false
1,170,016,465
https://api.github.com/repos/huggingface/datasets/issues/3927
https://github.com/huggingface/datasets/pull/3927
3,927
Update main readme
closed
2
2022-03-15T18:09:59
2022-03-29T10:13:47
2022-03-29T10:08:20
lhoestq
[]
The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets
true
1,169,945,052
https://api.github.com/repos/huggingface/datasets/issues/3926
https://github.com/huggingface/datasets/pull/3926
3,926
Doc maintenance
closed
1
2022-03-15T17:00:46
2022-03-15T19:27:15
2022-03-15T19:27:12
stevhliu
[ "documentation" ]
This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page.
true
1,169,913,769
https://api.github.com/repos/huggingface/datasets/issues/3925
https://github.com/huggingface/datasets/pull/3925
3,925
Fix main_classes docs index
closed
3
2022-03-15T16:33:46
2022-03-22T13:49:11
2022-03-22T13:44:04
lhoestq
[]
Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types ![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
true
1,169,805,813
https://api.github.com/repos/huggingface/datasets/issues/3924
https://github.com/huggingface/datasets/pull/3924
3,924
Document cases for github datasets
closed
2
2022-03-15T15:10:10
2022-04-05T18:33:15
2022-03-15T15:41:23
lhoestq
[]
In general we recommend adding the new dataset under a username or organization in the Hugging Face Hub at [hf.co/datasets](hf.co/datasets), but users can still add a dataset on github in some cases. I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on github: ...
true
1,169,773,869
https://api.github.com/repos/huggingface/datasets/issues/3923
https://github.com/huggingface/datasets/pull/3923
3,923
Add methods to IterableDatasetDict
closed
5
2022-03-15T14:46:03
2022-07-06T15:40:20
2022-03-15T16:45:06
lhoestq
[]
Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862 I added several methods to IterableDatasetDict: - map - filter - shuffle - with_format - cast - cast_column - remove_columns - rename_column - rename_columns
true
1,169,761,293
https://api.github.com/repos/huggingface/datasets/issues/3922
https://github.com/huggingface/datasets/pull/3922
3,922
Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset
closed
2
2022-03-15T14:36:28
2022-03-15T16:07:04
2022-03-15T16:07:03
albertvillanova
[]
Fix #2957
true
1,169,749,338
https://api.github.com/repos/huggingface/datasets/issues/3921
https://github.com/huggingface/datasets/pull/3921
3,921
Fix NonMatchingChecksumError in CRD3 dataset
closed
2
2022-03-15T14:27:14
2022-03-15T15:54:27
2022-03-15T15:54:26
albertvillanova
[]
Fix #3051
true
1,169,532,807
https://api.github.com/repos/huggingface/datasets/issues/3920
https://github.com/huggingface/datasets/issues/3920
3,920
'datasets.features' is not a package
closed
2
2022-03-15T11:14:23
2022-03-16T09:17:12
2022-03-16T09:17:12
Arij-Aladel
[]
@albertvillanova python 3.9 os: ubuntu 20.04 In conda environment torch installed by ```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html``` datasets package is installed by ``` /env/bin/pip install datasets==1.8....
false
1,169,497,210
https://api.github.com/repos/huggingface/datasets/issues/3919
https://github.com/huggingface/datasets/issues/3919
3,919
AttributeError: 'DatasetDict' object has no attribute 'features'
closed
2
2022-03-15T10:46:59
2022-03-17T04:16:14
2022-03-17T04:16:14
jswapnil10
[ "bug" ]
## Describe the bug Receiving the error when trying to check for Dataset features ## Steps to reproduce the bug from datasets import Dataset dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']]) dataset.features ## Expected results A clear and concise description of the exp...
false
1,169,366,117
https://api.github.com/repos/huggingface/datasets/issues/3918
https://github.com/huggingface/datasets/issues/3918
3,918
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
closed
3
2022-03-15T08:53:45
2022-03-16T15:36:58
2022-03-15T14:01:25
willowdong
[ "bug", "duplicate" ]
## Describe the bug Can't load the dataset ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('multi_news') dataset_2=load_dataset("reddit_tifu", "long") ## Actual results raise NonMatchingChecksumError(error_msg + s...
false
1,168,906,154
https://api.github.com/repos/huggingface/datasets/issues/3917
https://github.com/huggingface/datasets/pull/3917
3,917
Create README.md
closed
1
2022-03-14T21:08:10
2022-03-17T17:45:39
2022-03-17T17:45:39
sashavor
[]
This follows the same structure as the GLUE metric card, hope that works for everyone :)
true
1,168,869,191
https://api.github.com/repos/huggingface/datasets/issues/3916
https://github.com/huggingface/datasets/pull/3916
3,916
Create README.md for GLUE
closed
1
2022-03-14T20:27:22
2022-03-15T17:06:57
2022-03-15T17:06:56
sashavor
[]
I still have a hesitation regarding the format of inputs -- whether it's a list or a list of lists? -- hopefully @lhoestq will be able to clarify. Also tagging @yjernite for the Limitations section. Happy to hear your thoughts!
true
1,168,848,101
https://api.github.com/repos/huggingface/datasets/issues/3915
https://github.com/huggingface/datasets/pull/3915
3,915
Metric card template
closed
6
2022-03-14T20:07:08
2022-05-04T10:44:09
2022-05-04T10:37:06
emibaylor
[]
Adding a metric card template, based on ideas and edits from @sashavor and I, as well as from comments from @lhoestq and others (thank you!). All feedback is welcome, but am especially curious about feedback in terms of: - things that should be included but aren't - things that are included but should be changed o...
true
1,168,777,880
https://api.github.com/repos/huggingface/datasets/issues/3914
https://github.com/huggingface/datasets/pull/3914
3,914
Use templates for doc-builidng jobs
closed
2
2022-03-14T18:53:06
2022-03-17T15:02:59
2022-03-17T15:02:58
sgugger
[]
This PR updates the jobs for all doc-building related things by using the templates introduced on `doc-builder`. By putting those once there, we make sure every repo gets the latest fixes on the doc-building github actions :-) Note: all libraries must share the same docker image for those doc-building jobs. For now,...
true
1,168,723,950
https://api.github.com/repos/huggingface/datasets/issues/3913
https://github.com/huggingface/datasets/pull/3913
3,913
Deterministic split order in DatasetDict.map
closed
3
2022-03-14T17:58:37
2023-09-24T09:55:10
2022-03-15T10:45:15
lhoestq
[]
The order in which the splits are processed by `map` is not deterministic in `DatasetDict.map`. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed Close https://github.com/huggingface/datasets/issues/3847
true
1,168,720,098
https://api.github.com/repos/huggingface/datasets/issues/3912
https://github.com/huggingface/datasets/pull/3912
3,912
add draft of registering function for pandas
closed
3
2022-03-14T17:54:29
2023-09-24T09:55:01
2023-01-24T12:57:10
lvwerra
[]
This PR adds a register function for `pandas`. It allows to directly push `DataFrame` objects to the hub and in return loading datasets on the hub from `DataFrame`. The motivation for this integration is to enable the vast number of `pandas` users to be able to easily push `DataFrames` to the hub. Here is an exampl...
true
1,168,652,374
https://api.github.com/repos/huggingface/datasets/issues/3911
https://github.com/huggingface/datasets/pull/3911
3,911
Create README.md for CER metric
closed
1
2022-03-14T16:54:51
2022-03-17T17:49:40
2022-03-17T17:45:54
sashavor
[]
Initial proposal for a CER metric card cc @patrickvonplaten - wdyt this time around? :smile:
true
1,168,579,694
https://api.github.com/repos/huggingface/datasets/issues/3910
https://github.com/huggingface/datasets/pull/3910
3,910
Fix text loader to split only on universal newlines
closed
6
2022-03-14T15:54:58
2022-03-15T16:16:11
2022-03-15T16:16:09
albertvillanova
[]
Currently, `text` loader breaks on a superset of universal newlines, which also contains Unicode line boundaries. See: https://docs.python.org/3/library/stdtypes.html#str.splitlines However, the expected behavior is to get the lines split only on universal newlines: "\n", "\r\n" and "\r". See: oscar-corpus/cor...
true
1,168,578,058
https://api.github.com/repos/huggingface/datasets/issues/3909
https://github.com/huggingface/datasets/issues/3909
3,909
Error loading file audio when downloading the Common Voice dataset directly from the Hub
closed
8
2022-03-14T15:53:50
2023-03-02T15:31:27
2023-03-02T15:31:26
aliceinland
[ "bug" ]
## Describe the bug When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened. ## Steps to reproduce the bug ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2...
false
1,168,576,963
https://api.github.com/repos/huggingface/datasets/issues/3908
https://github.com/huggingface/datasets/pull/3908
3,908
Update README.md for SQuAD v2 metric
closed
1
2022-03-14T15:53:10
2022-03-15T17:04:11
2022-03-15T17:04:11
sashavor
[]
Putting "Values from popular papers" as a subsection of "Output values"
true
1,168,575,998
https://api.github.com/repos/huggingface/datasets/issues/3907
https://github.com/huggingface/datasets/pull/3907
3,907
Update README.md for SQuAD metric
closed
1
2022-03-14T15:52:31
2022-03-15T17:04:20
2022-03-15T17:04:19
sashavor
[]
Putting "Values from popular papers" as a subsection of "Output values"
true