Schema of the records below:

| Column | Type | Min | Max | Nullable |
|---|---|---|---|---|
| id | int64 | 599M | 3.48B | no |
| number | int64 | 1 | 7.8k | no |
| title | string (length) | 1 | 290 | no |
| state | string (2 values) | | | no |
| comments | list (length) | 0 | 30 | no |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 | no |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 | no |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 | yes |
| body | string (length) | 0 | 228k | yes |
| user | string (length) | 3 | 26 | no |
| html_url | string (length) | 46 | 51 | no |
| pull_request | dict | | | |
| is_pull_request | bool (2 classes) | | | |
#2661 · Add SD task for SUPERB (pull request, merged)
id 946446967 · closed · 0 comments · author albertvillanova
created 2021-07-16T16:43:21 · updated 2021-08-04T17:03:53 · closed 2021-08-04T17:03:53
https://github.com/huggingface/datasets/pull/2661
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). TODO: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upl...

#2660 · Move checks from _map_single to map (pull request, merged)
id 946316180 · closed · 0 comments · author mariosasko
created 2021-07-16T13:53:33 · updated 2021-09-06T14:12:23 · closed 2021-09-06T14:12:23
https://github.com/huggingface/datasets/pull/2660
The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is the...

#2659 · Allow dataset config kwargs to be None (pull request, merged)
id 946155407 · closed · 0 comments · author lhoestq
created 2021-07-16T10:25:38 · updated 2021-07-16T12:46:07 · closed 2021-07-16T12:46:07
https://github.com/huggingface/datasets/pull/2659
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None we simply ignored. This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder that allows to infer to separator. cc @SBrandeis

#2658 · Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv (issue)
id 946139532 · closed · 0 comments · author lhoestq
created 2021-07-16T10:05:44 · updated 2021-07-16T12:46:06 · closed 2021-07-16T12:46:06
https://github.com/huggingface/datasets/issues/2658
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator. Related to https://github.com/huggingface/datasets/pull/2656 cc @SBrandeis
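For context on the separator inference discussed in #2658, #2659 and #2656: with `sep=None` and the Python engine, `pandas.read_csv` sniffs the delimiter from a sample of the file. A stdlib-only sketch of the same idea, using `csv.Sniffer` on hypothetical file contents (not taken from the issue):

```python
import csv
import io

def read_rows_with_inferred_sep(text: str) -> list:
    """Infer the delimiter from the text, then parse with it; this mirrors
    what pandas.read_csv(sep=None, engine="python") does internally."""
    dialect = csv.Sniffer().sniff(text, delimiters=";,\t")
    return list(csv.reader(io.StringIO(text), dialect))

# Semicolon-separated input: the separator is detected, not hard-coded.
rows = read_rows_with_inferred_sep("a;b;c\n1;2;3\n")
```

The bug in #2658 was precisely that the `None` was dropped before reaching pandas, so the default `,` was always used.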
#2657 · `to_json` reporting enhancements (issue)
id 945822829 · open · 0 comments · author stas00
created 2021-07-15T23:32:18 · updated 2021-07-15T23:33:53
https://github.com/huggingface/datasets/issues/2657
While using `to_json` 2 things came to mind that would have made the experience easier on the user: 1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps...

#2656 · Change `from_csv` default arguments (pull request, closed without merging)
id 945421790 · closed · 0 comments · author SBrandeis
created 2021-07-15T14:09:06 · updated 2023-09-24T09:56:44 · closed 2021-07-16T10:23:26
https://github.com/huggingface/datasets/pull/2656
Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator This PR allows users to use this pandas's feature by passing `sep=None` to `Dataset.from_csv`: ```python Dataset.from_csv( ..., sep=None ) ```

#2655 · Allow the selection of multiple columns at once (issue)
id 945382723 · closed · 0 comments · author Dref360
created 2021-07-15T13:30:45 · updated 2024-01-09T15:11:27 · closed 2024-01-09T07:46:28
https://github.com/huggingface/datasets/issues/2655
**Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] idx, label = my_dataset[['idx', 'label']] ``` **...
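The multi-column selection requested in #2655 amounts to a projection over a column store. A minimal stdlib sketch with toy data and a hypothetical helper name (not the API the library eventually shipped):

```python
def select_columns(batch: dict, columns: list) -> dict:
    """Column-store projection: keep only the requested columns,
    preserving the requested order."""
    missing = [c for c in columns if c not in batch]
    if missing:
        raise KeyError(f"columns not in dataset: {missing}")
    return {c: batch[c] for c in columns}

data = {"idx": [0, 1], "sentence": ["a", "b"], "label": [1, 0]}
subset = select_columns(data, ["idx", "label"])
```

Because the data is stored column-wise, the projection is just a dictionary comprehension; no row is copied.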
#2654 · Give a user feedback if the dataset he loads is streamable or not (issue)
id 945167231 · open · 0 comments · author philschmid
created 2021-07-15T09:07:27 · updated 2021-08-02T11:03:21
https://github.com/huggingface/datasets/issues/2654
**Is your feature request related to a problem? Please describe.** I would love to know if a `dataset` is with the current implementation streamable or not. **Describe the solution you'd like** We could show a warning when a dataset is loaded with `load_dataset('...',streaming=True)` when its lot streamable, e.g....

#2653 · Add SD task for SUPERB (issue)
id 945102321 · closed · 0 comments · author albertvillanova
created 2021-07-15T07:51:40 · updated 2021-08-04T17:03:52 · closed 2021-08-04T17:03:52
https://github.com/huggingface/datasets/issues/2653
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Up...

#2652 · Fix logging docstring (pull request, merged)
id 944865924 · closed · 0 comments · author mariosasko
created 2021-07-14T23:19:58 · updated 2021-07-18T11:41:06 · closed 2021-07-15T09:57:31
https://github.com/huggingface/datasets/pull/2652
Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534.

#2651 · Setting log level higher than warning does not suppress progress bar (issue)
id 944796961 · closed · 0 comments · author Isa-rentacs
created 2021-07-14T21:06:51 · updated 2022-07-08T14:51:57 · closed 2021-07-15T03:41:35
https://github.com/huggingface/datasets/issues/2651
## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0. I also tried to set `DATASETS_VERBOS...

#2650 · [load_dataset] shard and parallelize the process (issue)
id 944672565 · closed · 0 comments · author stas00
created 2021-07-14T18:04:58 · updated 2023-11-28T19:11:41 · closed 2023-11-28T19:11:40
https://github.com/huggingface/datasets/issues/2650
- Some huge datasets take forever to build the first time. (e.g. oscar/en) as it's done in a single cpu core. - If the build crashes, everything done up to that point gets lost Request: Shard the build over multiple arrow files, which would enable: - much faster build by parallelizing the build process - if the p...
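The sharded, resumable build that #2650 asks for can be sketched with the standard library: split the examples into shards, build each shard independently, and record finished shards so a crash only loses the shard in flight. Toy preprocessing, and threads standing in for processes; the library's real implementation writes per-shard Arrow files.

```python
from concurrent.futures import ThreadPoolExecutor

def build_shard(examples: list) -> list:
    # Stand-in for an expensive per-example preprocessing step.
    return [x * x for x in examples]

def sharded_build(examples: list, num_shards: int, done: dict) -> dict:
    """Build shard-by-shard in parallel; shards already present in `done`
    are skipped, so a crashed build can resume where it left off."""
    shards = [examples[i::num_shards] for i in range(num_shards)]
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        futures = {
            i: pool.submit(build_shard, shard)
            for i, shard in enumerate(shards)
            if i not in done
        }
        for i, fut in futures.items():
            done[i] = fut.result()
    return done

done = {}
sharded_build(list(range(8)), num_shards=4, done=done)
result = sorted(x for shard in done.values() for x in shard)
```

Calling `sharded_build` again with the same `done` dict is a no-op for completed shards, which is the crash-recovery property the issue requests.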
#2649 · adding progress bar / ETA for `load_dataset` (issue)
id 944651229 · open · 0 comments · author stas00
created 2021-07-14T17:34:39 · updated 2023-03-27T10:32:49
https://github.com/huggingface/datasets/issues/2649
Please consider: ``` Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2... HF google storage unre...

#2648 · Add web_split dataset for Paraphase and Rephrase benchmark (issue)
id 944484522 · open · 0 comments · author bhadreshpsavani
created 2021-07-14T14:24:36 · updated 2021-07-14T14:26:12
https://github.com/huggingface/datasets/issues/2648
## Describe: For getting simple sentences from complex sentence there are dataset and task like wiki_split that is available in hugging face datasets. This web_split is a very similar dataset. There some research paper which states that by combining these two datasets we if we train the model it will yield better resu...

#2647 · Fix anchor in README (pull request, merged)
id 944424941 · closed · 0 comments · author mariosasko
created 2021-07-14T13:22:44 · updated 2021-07-18T11:41:18 · closed 2021-07-15T06:50:47
https://github.com/huggingface/datasets/pull/2647
I forgot to push this fix in #2611, so I'm sending it now.

#2646 · downloading of yahoo_answers_topics dataset failed (issue)
id 944379954 · closed · 0 comments · author vikrant7k
created 2021-07-14T12:31:05 · updated 2022-08-04T08:28:24 · closed 2022-08-04T08:28:24
https://github.com/huggingface/datasets/issues/2646
## Describe the bug I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset ## Steps to reproduce the bug self.dataset = load_dataset( 'yahoo_answers_topics', cache_dir=self.config...

#2645 · load_dataset processing failed with OS error after downloading a dataset (issue)
id 944374284 · closed · 0 comments · author fake-warrior8
created 2021-07-14T12:23:53 · updated 2021-07-15T09:34:02 · closed 2021-07-15T09:34:02
https://github.com/huggingface/datasets/issues/2645
## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets import load_dataset this_dataset = load_dataset('opus100', 'af-en') ``` ...

#2644 · Batched `map` not allowed to return 0 items (issue)
id 944254748 · closed · 0 comments · author pcuenca
created 2021-07-14T09:58:19 · updated 2021-07-26T14:55:15 · closed 2021-07-26T14:55:15
https://github.com/huggingface/datasets/issues/2644
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
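The semantics at stake in #2644: a batched `map` may return more or fewer items than it received, and "fewer" includes zero when an entire batch fails the filter condition. A stdlib sketch of those semantics over a toy dataset (hypothetical helper, not the library's implementation):

```python
def batched_map(rows: list, fn, batch_size: int = 3) -> list:
    """Apply fn to successive batches and concatenate the outputs.
    Output batches may have any length, including zero, so a batched
    map can also drop rows (the filtering use case in #2644)."""
    out = []
    for start in range(0, len(rows), batch_size):
        out.extend(fn(rows[start:start + batch_size]))
    return out

def keep_even(batch: list) -> list:
    # Stand-in for an expensive condition; an all-odd batch returns [].
    return [x for x in batch if x % 2 == 0]

result = batched_map(list(range(10)), keep_even)
```

Rejecting an empty output batch, as the buggy version did, would make this filtering pattern fail whenever one batch happens to match nothing.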
#2643 · Enum used in map functions will raise a RecursionError with dill. (issue)
id 944220273 · open · 0 comments · author jorgeecardona
created 2021-07-14T09:16:08 · updated 2021-11-02T09:51:11
https://github.com/huggingface/datasets/issues/2643
## Describe the bug Enums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to define an argument with fixed options using the `TraininigArguments` ...

#2642 · Support multi-worker with streaming dataset (IterableDataset). (issue)
id 944175697 · open · 0 comments · author changjonathanc
created 2021-07-14T08:22:58 · updated 2024-05-03T10:11:04
https://github.com/huggingface/datasets/issues/2642
**Is your feature request related to a problem? Please describe.** The current `.map` does not support multi-process, CPU can become bottleneck if the pre-processing is complex (e.g. t5 span masking). **Describe the solution you'd like** Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`. **D...
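One common way to split a streaming dataset across workers, as requested in #2642, is round-robin assignment: each worker consumes every n-th example, so the union of all workers covers the stream exactly once with no coordination. A plain-iterator sketch under that assumption (the library's eventual solution may differ):

```python
def worker_stream(examples, worker_id: int, num_workers: int):
    """Yield the examples assigned to this worker: every
    num_workers-th example, offset by worker_id."""
    for i, ex in enumerate(examples):
        if i % num_workers == worker_id:
            yield ex

stream = list(range(10))
parts = [list(worker_stream(stream, w, 3)) for w in range(3)]
```

The trade-off of round-robin sharding is that every worker still scans (and skips) the whole stream; file- or shard-level assignment avoids that when the source is already split into files.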
#2641 · load_dataset("financial_phrasebank") NonMatchingChecksumError (issue)
id 943838085 · closed · 0 comments · author courtmckay
created 2021-07-13T21:21:49 · updated 2022-08-04T08:30:08 · closed 2022-08-04T08:30:08
https://github.com/huggingface/datasets/issues/2641
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financi...
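Checksum failures like #2641 and #2646 typically mean the upstream host changed the file after the digest was recorded in the dataset's metadata. The check itself is simple; a stdlib sketch of the comparison behind a NonMatchingChecksumError (hypothetical helper name, toy payloads):

```python
import hashlib

def verify_checksum(payload: bytes, expected_sha256: str) -> bool:
    """Compare the digest of the downloaded bytes against the digest
    recorded when the dataset script was written."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

recorded = hashlib.sha256(b"dataset bytes").hexdigest()
ok = verify_checksum(b"dataset bytes", recorded)
stale = verify_checksum(b"dataset bytes changed upstream", recorded)
```

When the mismatch is expected (the host really did republish the file), the recorded checksums have to be regenerated; that is why such issues are usually fixed in the dataset script rather than in user code.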
#2640 · Fix docstrings (pull request, merged)
id 943591055 · closed · 0 comments · author albertvillanova
created 2021-07-13T16:09:14 · updated 2021-07-15T06:51:01 · closed 2021-07-15T06:06:12
https://github.com/huggingface/datasets/pull/2640
Fix rendering of some docstrings.

#2639 · Refactor patching to specific submodule (pull request, merged)
id 943527463 · closed · 0 comments · author albertvillanova
created 2021-07-13T15:08:45 · updated 2021-07-13T16:52:49 · closed 2021-07-13T16:52:49
https://github.com/huggingface/datasets/pull/2639
Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created. In relation with the initial approach followed in #2631.

#2638 · Streaming for the Json loader (pull request, merged)
id 943484913 · closed · 0 comments · author lhoestq
created 2021-07-13T14:37:06 · updated 2021-07-16T15:59:32 · closed 2021-07-16T15:59:31
https://github.com/huggingface/datasets/pull/2638
It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows. Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related...

#2636 · Streaming for the Pandas loader (pull request, merged)
id 943044514 · closed · 0 comments · author lhoestq
created 2021-07-13T09:18:21 · updated 2021-07-13T14:37:24 · closed 2021-07-13T14:37:23
https://github.com/huggingface/datasets/pull/2636
It was not using open in the builder. Therefore pd.read_pickle could fail when streaming from a private repo for example. Indeed, when streaming, open is extended to support reading from remote files and handles authentication to the HF Hub

#2635 · Streaming for the CSV loader (pull request, merged)
id 943030999 · closed · 0 comments · author lhoestq
created 2021-07-13T09:08:58 · updated 2021-07-13T15:19:38 · closed 2021-07-13T15:19:37
https://github.com/huggingface/datasets/pull/2635
It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows. Indeed, when streaming, `open` is extended to support reading from remote file progressively.

#2634 · Inject ASR template for lj_speech dataset (pull request, merged)
id 942805621 · closed · 0 comments · author albertvillanova
created 2021-07-13T06:04:54 · updated 2021-07-13T09:05:09 · closed 2021-07-13T09:05:09
https://github.com/huggingface/datasets/pull/2634
Related to: #2565, #2633. cc: @lewtun

#2633 · Update ASR tags (pull request, merged)
id 942396414 · closed · 0 comments · author lewtun
created 2021-07-12T19:58:31 · updated 2021-07-13T05:45:26 · closed 2021-07-13T05:45:13
https://github.com/huggingface/datasets/pull/2633
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620

#2632 · add image-classification task template (pull request, merged)
id 942293727 · closed · 0 comments · author nateraw
created 2021-07-12T17:41:03 · updated 2021-07-13T15:44:28 · closed 2021-07-13T15:28:16
https://github.com/huggingface/datasets/pull/2632
Snippet below is the tl;dr, but you can try it out directly here: [![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb) ```python from datasets import load_datase...

#2631 · Delete extracted files when loading dataset (pull request, merged)
id 942242271 · closed · 0 comments · author albertvillanova
created 2021-07-12T16:39:33 · updated 2021-07-19T09:08:19 · closed 2021-07-19T09:08:19
https://github.com/huggingface/datasets/pull/2631
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell

#2630 · Progress bars are not properly rendered in Jupyter notebook (issue)
id 942102956 · closed · 0 comments · author albertvillanova
created 2021-07-12T14:07:13 · updated 2022-02-03T15:55:33 · closed 2022-02-03T15:55:33
https://github.com/huggingface/datasets/issues/2630
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plane progress bars. cc...
#2629 · Load datasets from the Hub without requiring a dataset script (issue)
id 941819205 · closed · 0 comments · author lhoestq
created 2021-07-12T08:45:17 · updated 2021-08-25T14:18:08 · closed 2021-08-25T14:18:08
https://github.com/huggingface/datasets/issues/2629
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `da...

#2628 · Use ETag of remote data files (pull request, merged)
id 941676404 · closed · 0 comments · author albertvillanova
created 2021-07-12T05:10:10 · updated 2021-07-12T14:08:34 · closed 2021-07-12T08:40:07
https://github.com/huggingface/datasets/pull/2628
Use ETag of remote data files to create config ID. Related to #2616.

#2627 · Minor fix tests with Windows paths (pull request, merged)
id 941503349 · closed · 0 comments · author albertvillanova
created 2021-07-11T17:55:48 · updated 2021-07-12T14:08:47 · closed 2021-07-12T08:34:50
https://github.com/huggingface/datasets/pull/2627
Minor fix tests with Windows paths.

#2626 · Use correct logger in metrics.py (pull request, merged)
id 941497830 · closed · 0 comments · author mariosasko
created 2021-07-11T17:22:30 · updated 2021-07-12T14:08:54 · closed 2021-07-12T05:54:29
https://github.com/huggingface/datasets/pull/2626
Fixes #2624

#2625 · ⚛️😇⚙️🔑 (issue)
id 941439922 · closed · 0 comments · author hustlen0mics
created 2021-07-11T12:14:34 · updated 2021-07-12T05:55:59 · closed 2021-07-12T05:55:59
https://github.com/huggingface/datasets/issues/2625
(no body)

#2624 · can't set verbosity for `metric.py` (issue)
id 941318247 · closed · 0 comments · author thomas-happify
created 2021-07-10T20:23:45 · updated 2021-07-12T05:54:29 · closed 2021-07-12T05:54:29
https://github.com/huggingface/datasets/issues/2624
## Describe the bug ``` [2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock [2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingfa...

#2623 · [Metrics] added wiki_split metrics (pull request, merged)
id 941265342 · closed · 0 comments · author bhadreshpsavani
created 2021-07-10T14:51:50 · updated 2021-07-14T14:28:13 · closed 2021-07-12T22:34:31
https://github.com/huggingface/datasets/pull/2623
Fixes: #2606 This pull request adds combine metrics for the wikisplit or English sentence split task Reviewer: @patrickvonplaten

#2622 · Integration with AugLy (issue)
id 941127785 · closed · 0 comments · author Darktex
created 2021-07-10T00:03:09 · updated 2023-07-20T13:18:48 · closed 2023-07-20T13:18:47
https://github.com/huggingface/datasets/issues/2622
**Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP m...

#2621 · Use prefix to allow exceed Windows MAX_PATH (pull request, merged)
id 940916446 · closed · 0 comments · author albertvillanova
created 2021-07-09T16:39:53 · updated 2021-07-16T15:28:12 · closed 2021-07-16T15:28:11
https://github.com/huggingface/datasets/pull/2621
By using this prefix, you can exceed the Windows MAX_PATH limit. See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces Related to #2524, #2220.
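The prefix referred to in #2621 is the Win32 extended-length path prefix `\\?\`, which lifts the 260-character MAX_PATH limit for absolute drive paths. A minimal sketch of the idea with a hypothetical helper (drive paths only; UNC shares need the `\\?\UNC\` form, which is omitted here):

```python
def extended_length_path(path: str) -> str:
    r"""Prefix an absolute Windows drive path with \\?\ so Win32 file
    APIs accept paths longer than MAX_PATH (260 characters).
    No-op if the path is already prefixed."""
    if path.startswith("\\\\?\\"):
        return path
    return "\\\\?\\" + path

p = extended_length_path("C:\\very\\deep\\cache\\file.arrow")
```

Applying the prefix twice must be harmless, since cached paths may already carry it.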
#2620 · Add speech processing tasks (pull request, merged)
id 940893389 · closed · 0 comments · author lewtun
created 2021-07-09T16:07:29 · updated 2021-07-12T18:32:59 · closed 2021-07-12T17:32:02
https://github.com/huggingface/datasets/pull/2620
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.

#2619 · Add ASR task for SUPERB (pull request, merged)
id 940858236 · closed · 0 comments · author lewtun
created 2021-07-09T15:19:45 · updated 2021-07-15T08:55:58 · closed 2021-07-13T12:40:18
https://github.com/huggingface/datasets/pull/2619
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition). Usage: ```python from datasets import load_dataset ...

#2618 · `filelock.py` Error (issue)
id 940852640 · closed · 0 comments · author liyucheng09
created 2021-07-09T15:12:49 · updated 2024-06-21T06:14:07 · closed 2023-11-23T19:06:19
https://github.com/huggingface/datasets/issues/2618
## Describe the bug It seems that the `filelock.py` went error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) ...

#2617 · Fix missing EOL issue in to_json for old versions of pandas (pull request, merged)
id 940846847 · closed · 0 comments · author lhoestq
created 2021-07-09T15:05:45 · updated 2021-07-12T14:09:00 · closed 2021-07-09T15:28:33
https://github.com/huggingface/datasets/pull/2617
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore users could end up having two samples in the same line Close https://github.com/huggingface/datasets/issues/2615

#2616 · Support remote data files (pull request, merged)
id 940799038 · closed · 0 comments · author albertvillanova
created 2021-07-09T14:07:38 · updated 2021-07-09T16:13:41 · closed 2021-07-09T16:13:41
https://github.com/huggingface/datasets/pull/2616
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf

#2615 · Jsonlines export error (issue)
id 940794339 · closed · 0 comments · author TevenLeScao
created 2021-07-09T14:02:05 · updated 2021-07-09T15:29:07 · closed 2021-07-09T15:28:33
https://github.com/huggingface/datasets/issues/2615
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
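The bug in #2615, fixed by #2617, comes from writing JSON lines in batches: if a batch's serialization lacks a trailing newline, the last record of one batch is glued to the first record of the next. A stdlib sketch of the fix, guaranteeing an EOL after every batch (hypothetical helper, toy records):

```python
import json

def write_jsonl(records: list, batch_size: int = 3) -> str:
    """Serialize records batch by batch, ensuring every batch ends with
    a newline; older pandas to_json omitted the final EOL, concatenating
    the last row of a batch with the first row of the next one."""
    chunks = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        text = "\n".join(json.dumps(r) for r in batch)
        if not text.endswith("\n"):
            text += "\n"
        chunks.append(text)
    return "".join(chunks)

out = write_jsonl([{"i": i} for i in range(7)])
lines = out.splitlines()
```

With the guard in place, record N and record N+1 land on separate lines regardless of where the batch boundary falls.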
#2614 · Convert numpy scalar to python float in Pearsonr output (pull request, merged)
id 940762427 · closed · 0 comments · author lhoestq
created 2021-07-09T13:22:55 · updated 2021-07-12T14:13:02 · closed 2021-07-09T14:04:38
https://github.com/huggingface/datasets/pull/2614
Following of https://github.com/huggingface/datasets/pull/2612

#2613 · Use ndarray.item instead of ndarray.tolist (pull request, merged)
id 940759852 · closed · 0 comments · author lewtun
created 2021-07-09T13:19:35 · updated 2021-07-12T14:12:57 · closed 2021-07-09T13:50:05
https://github.com/huggingface/datasets/pull/2613
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#nump...

#2612 · Return Python float instead of numpy.float64 in sklearn metrics (pull request, merged)
id 940604512 · closed · 0 comments · author lewtun
created 2021-07-09T09:48:09 · updated 2021-07-12T14:12:53 · closed 2021-07-09T13:03:54
https://github.com/huggingface/datasets/pull/2612
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`. The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-...

#2611 · More consistent naming (pull request, merged)
id 940307053 · closed · 0 comments · author mariosasko
created 2021-07-09T00:09:17 · updated 2021-07-13T17:13:19 · closed 2021-07-13T16:08:30
https://github.com/huggingface/datasets/pull/2611
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.

#2610 · Add missing WikiANN language tags (pull request, merged)
id 939899829 · closed · 0 comments · author albertvillanova
created 2021-07-08T14:08:01 · updated 2021-07-12T14:12:16 · closed 2021-07-08T15:44:04
https://github.com/huggingface/datasets/pull/2610
Add missing language tags for WikiANN datasets.

#2609 · Fix potential DuplicatedKeysError (pull request, merged)
id 939616682 · closed · 0 comments · author albertvillanova
created 2021-07-08T08:38:04 · updated 2021-07-12T14:13:16 · closed 2021-07-09T16:42:08
https://github.com/huggingface/datasets/pull/2609
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote as a good practice, that the keys should be programmatically generated as unique, instead of read from data (which might be not unique).
938,897,626
2,608
Support streaming JSON files
closed
[]
2021-07-07T13:30:22
2021-07-12T14:12:31
2021-07-08T16:08:41
Use open in JSON dataset builder, so that it can be patched with xopen for streaming. Close #2607.
albertvillanova
https://github.com/huggingface/datasets/pull/2608
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2608", "html_url": "https://github.com/huggingface/datasets/pull/2608", "diff_url": "https://github.com/huggingface/datasets/pull/2608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2608.patch", "merged_at": "2021-07-08T16:08...
true
938,796,902
2,607
Streaming local gzip compressed JSON line files is not working
closed
[]
2021-07-07T11:36:33
2021-07-20T09:50:19
2021-07-08T16:08:41
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raises a file-not-found error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset))...
thomwolf
https://github.com/huggingface/datasets/issues/2607
null
false
938,763,684
2,606
[Metrics] addition of wiki_split metrics
closed
[]
2021-07-07T10:56:04
2021-07-12T22:34:31
2021-07-12T22:34:31
**Is your feature request related to a problem? Please describe.** While training a model on the sentence-split task in English, we need to evaluate the trained model on the `Exact Match`, `SARI` and `BLEU` scores like this ![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01...
bhadreshpsavani
https://github.com/huggingface/datasets/issues/2606
null
false
938,648,164
2,605
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
closed
[]
2021-07-07T08:47:23
2021-07-12T14:10:27
2021-07-07T08:59:13
During the FLAX sprint some users have this error when streaming datasets: ```python aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer ``` This error must trigger a retry instead of directly crashing. Therefore I extended the error type that triggers the retry to be the base aiohttp er...
lhoestq
https://github.com/huggingface/datasets/pull/2605
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2605", "html_url": "https://github.com/huggingface/datasets/pull/2605", "diff_url": "https://github.com/huggingface/datasets/pull/2605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2605.patch", "merged_at": "2021-07-07T08:59...
true
938,602,237
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
closed
[]
2021-07-07T07:56:16
2021-07-19T09:08:18
2021-07-19T09:08:18
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of Arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
thomwolf
https://github.com/huggingface/datasets/issues/2604
null
false
938,588,149
2,603
Fix DuplicatedKeysError in omp
closed
[]
2021-07-07T07:38:32
2021-07-12T14:10:41
2021-07-07T12:56:35
Close #2598.
albertvillanova
https://github.com/huggingface/datasets/pull/2603
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2603", "html_url": "https://github.com/huggingface/datasets/pull/2603", "diff_url": "https://github.com/huggingface/datasets/pull/2603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2603.patch", "merged_at": "2021-07-07T12:56...
true
938,555,712
2,602
Remove import of transformers
closed
[]
2021-07-07T06:58:18
2021-07-12T14:10:22
2021-07-07T08:28:51
When pickling a tokenizer within multiprocessing, check that is instance of transformers PreTrainedTokenizerBase without importing transformers. Related to huggingface/transformers#12549 and #502.
albertvillanova
https://github.com/huggingface/datasets/pull/2602
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2602", "html_url": "https://github.com/huggingface/datasets/pull/2602", "diff_url": "https://github.com/huggingface/datasets/pull/2602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2602.patch", "merged_at": "2021-07-07T08:28...
true
938,096,396
2,601
Fix `filter` with multiprocessing in case all samples are discarded
closed
[]
2021-07-06T17:06:28
2021-07-12T14:10:35
2021-07-07T12:50:31
Fixes #2600 Also I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process.
mxschmdt
https://github.com/huggingface/datasets/pull/2601
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2601", "html_url": "https://github.com/huggingface/datasets/pull/2601", "diff_url": "https://github.com/huggingface/datasets/pull/2601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2601.patch", "merged_at": "2021-07-07T12:50...
true
938,086,745
2,600
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
closed
[]
2021-07-06T16:53:25
2021-07-07T12:50:31
2021-07-07T12:50:31
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) dat...
mxschmdt
https://github.com/huggingface/datasets/issues/2600
null
false
937,980,229
2,599
Update processing.rst with other export formats
closed
[]
2021-07-06T14:50:38
2021-07-12T14:10:16
2021-07-07T08:05:48
Add other supported export formats than CSV in the docs.
TevenLeScao
https://github.com/huggingface/datasets/pull/2599
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2599", "html_url": "https://github.com/huggingface/datasets/pull/2599", "diff_url": "https://github.com/huggingface/datasets/pull/2599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2599.patch", "merged_at": "2021-07-07T08:05...
true
937,930,632
2,598
Unable to download omp dataset
closed
[]
2021-07-06T14:00:52
2021-07-07T12:56:35
2021-07-07T12:56:35
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual r...
erikadistefano
https://github.com/huggingface/datasets/issues/2598
null
false
937,917,770
2,597
Remove redundant prepare_module
closed
[]
2021-07-06T13:47:45
2021-07-12T14:10:52
2021-07-07T13:01:46
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
albertvillanova
https://github.com/huggingface/datasets/pull/2597
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2597", "html_url": "https://github.com/huggingface/datasets/pull/2597", "diff_url": "https://github.com/huggingface/datasets/pull/2597.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2597.patch", "merged_at": "2021-07-07T13:01...
true
937,598,914
2,596
Transformer Class on dataset
closed
[]
2021-07-06T07:27:15
2022-11-02T14:26:09
2022-11-02T14:26:09
Just wondering if you have the intention to create a TransformerClass: dataset --> dataset that makes deterministic transformations (i.e. not fit).
arita37
https://github.com/huggingface/datasets/issues/2596
null
false
937,483,120
2,595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
closed
[]
2021-07-06T03:20:55
2021-07-06T05:59:49
2021-07-06T05:59:49
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_da...
profsatwinder
https://github.com/huggingface/datasets/issues/2595
null
false
937,294,772
2,594
Fix BibTeX entry
closed
[]
2021-07-05T18:24:10
2021-07-06T04:59:38
2021-07-06T04:59:38
Fix BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/2594
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2594", "html_url": "https://github.com/huggingface/datasets/pull/2594", "diff_url": "https://github.com/huggingface/datasets/pull/2594.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2594.patch", "merged_at": "2021-07-06T04:59...
true
937,242,137
2,593
Support pandas 1.3.0 read_csv
closed
[]
2021-07-05T16:40:04
2021-07-05T17:14:14
2021-07-05T17:14:14
Workaround for this issue in pandas 1.3.0: https://github.com/pandas-dev/pandas/issues/42387 The csv reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on...
lhoestq
https://github.com/huggingface/datasets/pull/2593
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2593", "html_url": "https://github.com/huggingface/datasets/pull/2593", "diff_url": "https://github.com/huggingface/datasets/pull/2593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2593.patch", "merged_at": "2021-07-05T17:14...
true
937,060,559
2,592
Add c4.noclean infos
closed
[]
2021-07-05T12:51:40
2021-07-05T13:15:53
2021-07-05T13:15:52
Adding the data files checksums and the dataset size of the c4.noclean configuration of the C4 dataset
lhoestq
https://github.com/huggingface/datasets/pull/2592
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2592", "html_url": "https://github.com/huggingface/datasets/pull/2592", "diff_url": "https://github.com/huggingface/datasets/pull/2592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2592.patch", "merged_at": "2021-07-05T13:15...
true
936,957,975
2,591
Cached dataset overflowing disk space
closed
[]
2021-07-05T10:43:19
2021-07-19T09:08:19
2021-07-19T09:08:19
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB (and now my disk space is full). Is there a way to toggle caching or set the caching to b...
BirgerMoell
https://github.com/huggingface/datasets/issues/2591
null
false
936,954,348
2,590
Add language tags
closed
[]
2021-07-05T10:39:57
2021-07-05T10:58:48
2021-07-05T10:58:48
This PR adds some missing language tags needed for ASR datasets in #2565
lewtun
https://github.com/huggingface/datasets/pull/2590
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2590", "html_url": "https://github.com/huggingface/datasets/pull/2590", "diff_url": "https://github.com/huggingface/datasets/pull/2590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2590.patch", "merged_at": "2021-07-05T10:58...
true
936,825,060
2,589
Support multilabel metrics
closed
[]
2021-07-05T08:19:25
2022-07-29T10:56:25
2021-07-08T08:40:15
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
albertvillanova
https://github.com/huggingface/datasets/pull/2589
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2589", "html_url": "https://github.com/huggingface/datasets/pull/2589", "diff_url": "https://github.com/huggingface/datasets/pull/2589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2589.patch", "merged_at": "2021-07-08T08:40...
true
936,795,541
2,588
Fix test_is_small_dataset
closed
[]
2021-07-05T07:46:26
2021-07-12T14:10:11
2021-07-06T17:09:30
Remove the environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the env variable is read in datasets.config when datasets is first loaded, and it is never re-read during tests.
albertvillanova
https://github.com/huggingface/datasets/pull/2588
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2588", "html_url": "https://github.com/huggingface/datasets/pull/2588", "diff_url": "https://github.com/huggingface/datasets/pull/2588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2588.patch", "merged_at": "2021-07-06T17:09...
true
936,771,339
2,587
Add aiohttp to tests extras require
closed
[]
2021-07-05T07:14:01
2021-07-05T09:04:38
2021-07-05T09:04:38
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp and this is missing from our tests extras require dependencies. Our CI test suite should be exhaustive and test all the library functionalities.
albertvillanova
https://github.com/huggingface/datasets/pull/2587
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2587", "html_url": "https://github.com/huggingface/datasets/pull/2587", "diff_url": "https://github.com/huggingface/datasets/pull/2587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2587.patch", "merged_at": "2021-07-05T09:04...
true
936,747,588
2,586
Fix misalignment in SQuAD
closed
[]
2021-07-05T06:42:20
2021-07-12T14:11:10
2021-07-07T13:18:51
Fix misalignment between: - the answer text and - the answer_start within the context by keeping original leading blank spaces in the context. Fix #2585.
albertvillanova
https://github.com/huggingface/datasets/pull/2586
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2586", "html_url": "https://github.com/huggingface/datasets/pull/2586", "diff_url": "https://github.com/huggingface/datasets/pull/2586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2586.patch", "merged_at": "2021-07-07T13:18...
true
936,484,419
2,585
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index
closed
[]
2021-07-04T15:39:49
2021-07-07T13:18:51
2021-07-07T13:18:51
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['P...
mmajurski
https://github.com/huggingface/datasets/issues/2585
null
false
936,049,736
2,584
wi_locness: reference latest leaderboard on codalab
closed
[]
2021-07-02T20:26:22
2021-07-05T09:06:14
2021-07-05T09:06:14
The dataset's author asked me to put this codalab link into the dataset's README.
aseifert
https://github.com/huggingface/datasets/pull/2584
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2584", "html_url": "https://github.com/huggingface/datasets/pull/2584", "diff_url": "https://github.com/huggingface/datasets/pull/2584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2584.patch", "merged_at": "2021-07-05T09:06...
true
936,034,976
2,583
Error iteration over IterableDataset using Torch DataLoader
closed
[]
2021-07-02T19:55:58
2021-07-20T09:04:45
2021-07-05T23:48:23
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case wh...
LeenaShekhar
https://github.com/huggingface/datasets/issues/2583
null
false
935,859,104
2,582
Add skip and take
closed
[]
2021-07-02T15:10:19
2021-07-05T16:06:40
2021-07-05T16:06:39
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets. You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a...
lhoestq
https://github.com/huggingface/datasets/pull/2582
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2582", "html_url": "https://github.com/huggingface/datasets/pull/2582", "diff_url": "https://github.com/huggingface/datasets/pull/2582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2582.patch", "merged_at": "2021-07-05T16:06...
true
935,783,588
2,581
Faster search_batch for ElasticsearchIndex due to threading
closed
[]
2021-07-02T13:42:07
2021-07-12T14:13:46
2021-07-12T09:52:51
Hey, I think it makes sense to perform search_batch threaded, so ES can perform search in parallel. Cheers!
mwrzalik
https://github.com/huggingface/datasets/pull/2581
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2581", "html_url": "https://github.com/huggingface/datasets/pull/2581", "diff_url": "https://github.com/huggingface/datasets/pull/2581.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2581.patch", "merged_at": "2021-07-12T09:52...
true
935,767,421
2,580
Fix Counter import
closed
[]
2021-07-02T13:21:48
2021-07-02T14:37:47
2021-07-02T14:37:46
Import from `collections` instead of `typing`.
albertvillanova
https://github.com/huggingface/datasets/pull/2580
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2580", "html_url": "https://github.com/huggingface/datasets/pull/2580", "diff_url": "https://github.com/huggingface/datasets/pull/2580.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2580.patch", "merged_at": "2021-07-02T14:37...
true
935,486,894
2,579
Fix BibTeX entry
closed
[]
2021-07-02T07:10:40
2021-07-02T07:33:44
2021-07-02T07:33:44
Add missing contributor to BibTeX entry. cc: @abhishekkrthakur @thomwolf
albertvillanova
https://github.com/huggingface/datasets/pull/2579
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2579", "html_url": "https://github.com/huggingface/datasets/pull/2579", "diff_url": "https://github.com/huggingface/datasets/pull/2579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2579.patch", "merged_at": "2021-07-02T07:33...
true
935,187,497
2,578
Support Zstandard compressed files
closed
[]
2021-07-01T20:22:34
2021-08-11T14:46:24
2021-07-05T10:50:27
Close #2572. cc: @thomwolf
albertvillanova
https://github.com/huggingface/datasets/pull/2578
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2578", "html_url": "https://github.com/huggingface/datasets/pull/2578", "diff_url": "https://github.com/huggingface/datasets/pull/2578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2578.patch", "merged_at": "2021-07-05T10:50...
true
934,986,761
2,576
Add mC4
closed
[]
2021-07-01T15:51:25
2021-07-02T14:50:56
2021-07-02T14:50:55
AllenAI is now hosting the processed C4 and mC4 datasets in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them ! In this PR I added the mC4 dataset builder. It supports 108 languages. You can load it with ```python from datasets import load_dataset en_mc4 = load_dataset("mc4", "en") f...
lhoestq
https://github.com/huggingface/datasets/pull/2576
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2576", "html_url": "https://github.com/huggingface/datasets/pull/2576", "diff_url": "https://github.com/huggingface/datasets/pull/2576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2576.patch", "merged_at": "2021-07-02T14:50...
true
934,876,496
2,575
Add C4
closed
[]
2021-07-01T13:58:08
2021-07-02T14:50:23
2021-07-02T14:50:23
The old code for the C4 dataset was to generate the C4 with Apache Beam, as in Tensorflow Datasets. However AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them for their amazing work ! In this PR I changed the script to download and prepare ...
lhoestq
https://github.com/huggingface/datasets/pull/2575
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2575", "html_url": "https://github.com/huggingface/datasets/pull/2575", "diff_url": "https://github.com/huggingface/datasets/pull/2575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2575.patch", "merged_at": "2021-07-02T14:50...
true
934,632,378
2,574
Add streaming in load a dataset docs
closed
[]
2021-07-01T09:32:53
2021-07-01T14:12:22
2021-07-01T14:12:21
Mention dataset streaming on the "loading a dataset" page of the documentation
lhoestq
https://github.com/huggingface/datasets/pull/2574
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2574", "html_url": "https://github.com/huggingface/datasets/pull/2574", "diff_url": "https://github.com/huggingface/datasets/pull/2574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2574.patch", "merged_at": "2021-07-01T14:12...
true
934,584,745
2,573
Finding right block-size with JSON loading difficult for user
open
[]
2021-07-01T08:48:35
2021-07-01T19:10:53
null
As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
albertvillanova
https://github.com/huggingface/datasets/issues/2573
null
false
934,573,767
2,572
Support Zstandard compressed files
closed
[]
2021-07-01T08:37:04
2023-01-03T15:34:01
2021-07-05T10:50:27
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
albertvillanova
https://github.com/huggingface/datasets/issues/2572
null
false
933,791,018
2,571
Filter expected warning log from transformers
closed
[]
2021-06-30T14:48:19
2021-07-02T04:08:17
2021-07-02T04:08:17
Close #2569.
albertvillanova
https://github.com/huggingface/datasets/pull/2571
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2571", "html_url": "https://github.com/huggingface/datasets/pull/2571", "diff_url": "https://github.com/huggingface/datasets/pull/2571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2571.patch", "merged_at": "2021-07-02T04:08...
true
933,402,521
2,570
Minor fix docs format for bertscore
closed
[]
2021-06-30T07:42:12
2021-06-30T15:31:01
2021-06-30T15:31:01
Minor fix docs format for bertscore: - link to README - format of KWARGS_DESCRIPTION
albertvillanova
https://github.com/huggingface/datasets/pull/2570
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2570", "html_url": "https://github.com/huggingface/datasets/pull/2570", "diff_url": "https://github.com/huggingface/datasets/pull/2570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2570.patch", "merged_at": "2021-06-30T15:31...
true
933,015,797
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
closed
[]
2021-06-29T18:55:23
2021-07-01T07:08:59
2021-06-30T07:35:49
When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` Following the typical ...
suzyahyah
https://github.com/huggingface/datasets/issues/2569
null
false
932,934,795
2,568
Add interleave_datasets for map-style datasets
closed
[]
2021-06-29T17:19:24
2021-07-01T09:33:34
2021-07-01T09:33:33
### Add interleave_datasets for map-style datasets Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`. It was only supporting iterable datasets (i.e. `IterableDataset` objects). ### Implementation details It works by concatenating the datasets and then re-ordering the indices to...
lhoestq
https://github.com/huggingface/datasets/pull/2568
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2568", "html_url": "https://github.com/huggingface/datasets/pull/2568", "diff_url": "https://github.com/huggingface/datasets/pull/2568.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2568.patch", "merged_at": "2021-07-01T09:33...
true
932,933,536
2,567
Add ASR task and new languages to resources
closed
[]
2021-06-29T17:18:01
2021-07-01T09:42:23
2021-07-01T09:42:09
This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`. Note: I used the [Papers with Code list](https://www.paperswithcode.com/area/speech/speech-recognition) as inspiration for the ASR subtasks
lewtun
https://github.com/huggingface/datasets/pull/2567
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2567", "html_url": "https://github.com/huggingface/datasets/pull/2567", "diff_url": "https://github.com/huggingface/datasets/pull/2567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2567.patch", "merged_at": "2021-07-01T09:42...
true
932,804,725
2,566
fix Dataset.map when num_procs > num rows
closed
[]
2021-06-29T15:07:07
2021-07-01T09:11:13
2021-07-01T09:11:13
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map...
connor-mccarthy
https://github.com/huggingface/datasets/pull/2566
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2566", "html_url": "https://github.com/huggingface/datasets/pull/2566", "diff_url": "https://github.com/huggingface/datasets/pull/2566.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2566.patch", "merged_at": "2021-07-01T09:11...
true
932,445,439
2,565
Inject templates for ASR datasets
closed
[]
2021-06-29T10:02:01
2021-07-05T14:26:26
2021-07-05T14:26:26
This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them. I also fixed a bunch of the tags in the READMEs 😎
lewtun
https://github.com/huggingface/datasets/pull/2565
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2565", "html_url": "https://github.com/huggingface/datasets/pull/2565", "diff_url": "https://github.com/huggingface/datasets/pull/2565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2565.patch", "merged_at": "2021-07-05T14:26...
true
932,389,639
2,564
concatenate_datasets for iterable datasets
closed
[]
2021-06-29T08:59:41
2022-06-28T21:15:04
2022-06-28T21:15:04
Currently `concatenate_datasets` only works for map-style `Dataset`. It would be nice to have it work for `IterableDataset` objects as well. It would simply chain the iterables of the iterable datasets.
lhoestq
https://github.com/huggingface/datasets/issues/2564
null
false
932,387,639
2,563
interleave_datasets for map-style datasets
closed
[]
2021-06-29T08:57:24
2021-07-01T09:33:33
2021-07-01T09:33:33
Currently the `interleave_datasets` functions only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order or randomly given probabilities specified by the user.
lhoestq
https://github.com/huggingface/datasets/issues/2563
null
false
932,333,436
2,562
Minor fix in loading metrics docs
closed
[]
2021-06-29T07:55:11
2021-06-29T17:21:22
2021-06-29T17:21:22
Make some minor fixes in "Loading metrics" docs.
albertvillanova
https://github.com/huggingface/datasets/pull/2562
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2562", "html_url": "https://github.com/huggingface/datasets/pull/2562", "diff_url": "https://github.com/huggingface/datasets/pull/2562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2562.patch", "merged_at": "2021-06-29T17:21...
true
932,321,725
2,561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
closed
[]
2021-06-29T07:43:03
2022-08-04T11:58:36
2022-08-04T11:58:36
## Describe the bug If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets. ## Steps to reproduce th...
apsdehal
https://github.com/huggingface/datasets/issues/2561
null
false
932,143,634
2,560
fix Dataset.map when num_procs > num rows
closed
[]
2021-06-29T02:24:11
2021-06-29T15:00:18
2021-06-29T14:53:31
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map...
connor-mccarthy
https://github.com/huggingface/datasets/pull/2560
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2560", "html_url": "https://github.com/huggingface/datasets/pull/2560", "diff_url": "https://github.com/huggingface/datasets/pull/2560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2560.patch", "merged_at": null }
true