Dataset preview: each record has the following columns (min/max statistics as shown by the dataset viewer; for string and list columns the statistics are lengths):

| column | type | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | string (length) | 58 | 61 |
| html_url | string (length) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (length) | 3 | 26 |
| labels | list (length) | 0 | 4 |
| body | string (length) | 0 | 228k |
| is_pull_request | bool (2 classes) | | |
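To make the record layout concrete, here is a minimal, standard-library-only sketch that models two rows copied from the dump below (a subset of the columns) and filters for open issues. The row structure mirrors the schema; nothing here uses the actual `datasets` library.

```python
from datetime import datetime

# Two sample rows from the records below (subset of columns, values copied verbatim).
rows = [
    {"number": 5134, "state": "closed", "is_pull_request": False,
     "created_at": "2022-10-18T17:53:46",
     "labels": ["enhancement", "good first issue", "hacktoberfest"]},
    {"number": 5123, "state": "open", "is_pull_request": False,
     "created_at": "2022-10-17T03:28:16", "labels": ["bug"]},
]

def parse_ts(value: str) -> datetime:
    # created_at/updated_at/closed_at are second-resolution timestamps (timestamp[s]).
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")

# Issues (not pull requests) that are still open.
open_issues = [r["number"] for r in rows
               if r["state"] == "open" and not r["is_pull_request"]]
print(open_issues)  # [5123]
```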
#5134 (issue, closed): Raise ImportError instead of OSError if required extraction library is not installed
mariosasko · 2 comments · labels: enhancement, good first issue, hacktoberfest
created 2022-10-18T17:53:46 · updated 2022-10-25T15:56:59 · closed 2022-10-25T15:56:59
https://github.com/huggingface/datasets/issues/5134 · API: https://api.github.com/repos/huggingface/datasets/issues/5134 · id 1,413,623,687
> According to the official Python docs, `OSError` should be thrown in the following situations: "This exception is raised when a system function returns a system-related error, including I/O failures such as 'file not found' or 'disk full' (not for illegal argument types or other incidental errors)." Hence, it makes...
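Issue #5134 above asks for `ImportError` rather than `OSError` when an optional extraction dependency is missing. A minimal, generic sketch of that pattern follows; it is not the actual `datasets` implementation, and the helper name is hypothetical.

```python
import importlib

def require_extraction_library(name: str, archive: str):
    """Import an optional extraction dependency, raising ImportError (not OSError) if missing."""
    try:
        return importlib.import_module(name)
    except ImportError:
        # Re-raise as a clean ImportError with an actionable message.
        raise ImportError(
            f"Extracting {archive!r} requires the {name!r} library. "
            f"Install it, e.g. with: pip install {name}"
        ) from None

# A deliberately missing library name triggers the descriptive ImportError.
try:
    require_extraction_library("some_missing_rar_backend_xyz", "archive.rar")
except ImportError as err:
    print(type(err).__name__)  # ImportError
```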
#5133 (issue, closed): Tensor operation not functioning in dataset mapping
xinghaow99 · 2 comments · labels: bug
created 2022-10-18T17:53:35 · updated 2022-10-19T04:15:45 · closed 2022-10-19T04:15:44
https://github.com/huggingface/datasets/issues/5133 · API: https://api.github.com/repos/huggingface/datasets/issues/5133 · id 1,413,623,462
> ## Describe the bug I'm doing a torch.mean() operation in data preprocessing, and it's not working. ## Steps to reproduce the bug ``` from transformers import pipeline import torch import numpy as np from datasets import load_dataset device = 'cuda:0' raw_dataset = load_dataset("glue", "sst2") feature_extra...

#5132 (issue, closed): Deprecate `num_proc` parameter in `DownloadManager.extract`
mariosasko · 5 comments · labels: enhancement, good first issue, hacktoberfest
created 2022-10-18T17:41:05 · updated 2022-10-25T15:56:46 · closed 2022-10-25T15:56:46
https://github.com/huggingface/datasets/issues/5132 · API: https://api.github.com/repos/huggingface/datasets/issues/5132 · id 1,413,607,306
> The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `Download...

#5131 (issue, closed): WikiText 103 tokenizer hangs
TrentBrick · 1 comment · labels: bug
created 2022-10-18T16:44:00 · updated 2023-08-08T08:42:40 · closed 2023-07-21T14:41:51
https://github.com/huggingface/datasets/issues/5131 · API: https://api.github.com/repos/huggingface/datasets/issues/5131 · id 1,413,534,863
> See issue here: https://github.com/huggingface/transformers/issues/19702
#5130 (pull request, closed): Avoid extra cast in `class_encode_column`
mariosasko · 1 comment · no labels
created 2022-10-18T15:31:24 · updated 2022-10-19T11:53:02 · closed 2022-10-19T11:50:46
https://github.com/huggingface/datasets/pull/5130 · API: https://api.github.com/repos/huggingface/datasets/issues/5130 · id 1,413,435,000
> Pass the updated features to `map` to avoid the `cast` in `class_encode_column`.

#5129 (issue, closed): unexpected `cast` or `class_encode_column` result after `rename_column`
quaeast · 4 comments · labels: bug
created 2022-10-18T11:15:24 · updated 2022-10-19T03:02:26 · closed 2022-10-19T03:02:26
https://github.com/huggingface/datasets/issues/5129 · API: https://api.github.com/repos/huggingface/datasets/issues/5129 · id 1,413,031,664
> ## Describe the bug When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it will convert all the variables in this column into one variable. I also ran this script in version 2.5.2; this bug does not appear there, so I switched to the older version. ## Steps to reproduce the bug ```python...

#5128 (pull request, closed): Make filename matching more robust
riccardobucco · 3 comments · no labels
created 2022-10-18T08:22:48 · updated 2022-10-28T13:07:38 · closed 2022-10-28T13:05:06
https://github.com/huggingface/datasets/pull/5128 · API: https://api.github.com/repos/huggingface/datasets/issues/5128 · id 1,412,783,855
> Fix #5046
#5127 (pull request, closed): [WIP] WebDataset export
lhoestq · 2 comments · no labels
created 2022-10-17T16:50:22 · updated 2024-01-11T06:27:04 · closed 2024-01-08T14:25:43
https://github.com/huggingface/datasets/pull/5127 · API: https://api.github.com/repos/huggingface/datasets/issues/5127 · id 1,411,897,544
> I added a first draft of the `IterableDataset.to_wds` method. You can use it to save a dataset loaded in streaming mode as a WebDataset locally. The API can be further improved to allow exporting to a cloud storage like the HF Hub. I also included sharding with a default max shard size of 500MB (uncompressed), an...

#5126 (pull request, closed): Fix class name of symbolic link
riccardobucco · 4 comments · no labels
created 2022-10-17T15:11:02 · updated 2022-11-14T14:40:18 · closed 2022-11-14T14:40:18
https://github.com/huggingface/datasets/pull/5126 · API: https://api.github.com/repos/huggingface/datasets/issues/5126 · id 1,411,757,124
> Fix #5098

#5125 (pull request, closed): Add `pyproject.toml` for `black`
mariosasko · 1 comment · no labels
created 2022-10-17T13:38:47 · updated 2024-11-20T13:36:11 · closed 2022-10-17T14:21:09
https://github.com/huggingface/datasets/pull/5125 · API: https://api.github.com/repos/huggingface/datasets/issues/5125 · id 1,411,602,813
> Add `pyproject.toml` as a config file for the `black` tool to support VS Code's auto-formatting on save (and to be more consistent with the other HF projects).
#5124 (pull request, closed): Install tensorflow-macos dependency conditionally
albertvillanova · 1 comment · no labels
created 2022-10-17T08:45:08 · updated 2022-10-19T09:12:17 · closed 2022-10-19T09:10:06
https://github.com/huggingface/datasets/pull/5124 · API: https://api.github.com/repos/huggingface/datasets/issues/5124 · id 1,411,159,725
> Fix #5118.

#5123 (issue, open): datasets freezes with streaming mode in multiple-gpu
jackfeinmann5 · 11 comments · labels: bug
created 2022-10-17T03:28:16 · updated 2023-05-14T06:55:20 · not closed
https://github.com/huggingface/datasets/issues/5123 · API: https://api.github.com/repos/huggingface/datasets/issues/5123 · id 1,410,828,756
> ## Describe the bug Hi. I am using this dataloader, which is for processing large datasets in streaming mode, mentioned in one of the huggingface examples. I am using it to read c4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/cod...

#5122 (pull request, closed): Add warning
Salehbigdeli · 1 comment · no labels
created 2022-10-17T01:30:37 · updated 2022-11-05T12:23:53 · closed 2022-11-05T12:23:53
https://github.com/huggingface/datasets/pull/5122 · API: https://api.github.com/repos/huggingface/datasets/issues/5122 · id 1,410,732,403
> Fixes: #5105 I think removing the directory with a warning is a better solution for this issue, because if we decide to keep existing files in the directory, then we would have to deal with the case of providing the same directory for several datasets, which we know is not possible since `dataset_info.json` exists in that directory.
#5121 (pull request, closed): Bugfix ignore function when creating new_fingerprint for caching
Salehbigdeli · 1 comment · no labels
created 2022-10-17T00:03:43 · updated 2022-10-17T12:39:36 · closed 2022-10-17T12:39:36
https://github.com/huggingface/datasets/pull/5121 · API: https://api.github.com/repos/huggingface/datasets/issues/5121 · id 1,410,681,067
> maybe fixes: #5109

#5120 (pull request, closed): Fix `tqdm` zip bug
david1542 · 11 comments · no labels
created 2022-10-16T22:19:18 · updated 2022-10-23T10:27:53 · closed 2022-10-19T08:53:17
https://github.com/huggingface/datasets/pull/5120 · API: https://api.github.com/repos/huggingface/datasets/issues/5120 · id 1,410,641,221
> This PR solves #5117 by wrapping the entire `zip` clause in tqdm. For more information, please check out this Stack Overflow thread: https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together

#5119 (pull request, closed): [TYPO] Update new_dataset_script.py
cakiki · 1 comment · no labels
created 2022-10-16T17:36:49 · updated 2022-10-19T09:48:19 · closed 2022-10-19T09:45:59
https://github.com/huggingface/datasets/pull/5119 · API: https://api.github.com/repos/huggingface/datasets/issues/5119 · id 1,410,561,363
> (no description)
#5118 (issue, closed): Installing `datasets` on M1 computers
david1542 · 1 comment · labels: bug
created 2022-10-16T16:50:08 · updated 2022-10-19T09:10:08 · closed 2022-10-19T09:10:08
https://github.com/huggingface/datasets/issues/5118 · API: https://api.github.com/repos/huggingface/datasets/issues/5118 · id 1,410,547,373
> ## Describe the bug I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`. On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1...

#5117 (issue, closed): Progress bars have color red and never complete to 100%
echatzikyriakidis · 5 comments · labels: bug
created 2022-10-14T16:12:30 · updated 2024-06-19T19:03:42 · closed 2022-10-23T12:58:41
https://github.com/huggingface/datasets/issues/5117 · API: https://api.github.com/repos/huggingface/datasets/issues/5117 · id 1,409,571,346
> ## Describe the bug Progress bars after transformative operations turn red and never complete to 100%. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('rotten_tomatoes', split='test').filter(lambda o: True) ``` ## Expected results Progress bar should be 100% an...

#5116 (pull request, closed): Use yaml for issue templates + revamp
mariosasko · 1 comment · no labels
created 2022-10-14T15:53:13 · updated 2022-10-19T13:05:49 · closed 2022-10-19T13:03:22
https://github.com/huggingface/datasets/pull/5116 · API: https://api.github.com/repos/huggingface/datasets/issues/5116 · id 1,409,549,471
> Use YAML instead of markdown (more expressive) for the issue templates. In addition, update their structure/fields to be more aligned with Transformers. PS: also removes the "add_dataset" PR template, as we no longer accept such PRs.
#5115 (pull request, closed): Fix iter_batches
lhoestq · 3 comments · no labels
created 2022-10-14T12:06:14 · updated 2022-10-14T15:02:15 · closed 2022-10-14T14:59:58
https://github.com/huggingface/datasets/pull/5115 · API: https://api.github.com/repos/huggingface/datasets/issues/5115 · id 1,409,250,020
> The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size < `max_chunksize`, therefore `iter_batches` can return batches smaller than the `batch_size` specified by the user. Therefore batched `map` couldn't always use batches of the right size, e.g. this fails because it runs only on o...

#5114 (issue, open): load_from_disk with remote filesystem fails due to a wrong temporary local folder path
bruno-hays · 2 comments · labels: bug
created 2022-10-14T11:54:53 · updated 2022-11-19T07:13:10 · not closed
https://github.com/huggingface/datasets/issues/5114 · API: https://api.github.com/repos/huggingface/datasets/issues/5114 · id 1,409,236,738
> ## Describe the bug The function load_from_disk fails when using a remote filesystem because of a wrong temporary path generation in the load_from_disk method of arrow_dataset.py: ```python if is_remote_filesystem(fs): src_dataset_path = extract_path_from_uri(dataset_path) dataset_path = Dataset._build...

#5113 (pull request, closed): Fix filter indices when batched
albertvillanova · 3 comments · no labels
created 2022-10-14T11:30:03 · updated 2022-10-24T06:21:09 · closed 2022-10-14T12:11:44
https://github.com/huggingface/datasets/pull/5113 · API: https://api.github.com/repos/huggingface/datasets/issues/5113 · id 1,409,207,607
> This PR fixes a bug introduced by: - #5030 Fix #5112.
#5112 (issue, closed): Bug with filtered indices
albertvillanova · 3 comments · labels: bug
created 2022-10-14T10:35:47 · updated 2022-10-14T13:55:03 · closed 2022-10-14T12:11:45
https://github.com/huggingface/datasets/issues/5112 · API: https://api.github.com/repos/huggingface/datasets/issues/5112 · id 1,409,143,409
> ## Describe the bug As reported by @PartiallyTyped (and by @Muennighoff): - https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524 There is an issue with the indices of a filtered dataset. ## Steps to reproduce the bug ```python ds = Dataset.from_dict({"num": [0, 1, 2, 3]}) ds = ds.filte...

#5111 (issue, closed): map and filter not working properly in multiprocessing with the new release 2.6.0
loubnabnl · 14 comments · labels: bug
created 2022-10-13T17:00:55 · updated 2022-10-17T08:26:59 · closed 2022-10-14T14:59:59
https://github.com/huggingface/datasets/issues/5111 · API: https://api.github.com/repos/huggingface/datasets/issues/5111 · id 1,408,143,170
> ## Describe the bug When mapping is used on a dataset with more than one process, there is a weird behavior when trying to use `filter`: it's like only the samples from one worker are retrieved; one needs to specify the same `num_proc` in filter for it to work properly. This doesn't happen with `datasets` version 2.5...

#5109 (issue, closed): Map caching not working for some class methods
Mouhanedg56 · 2 comments · labels: bug
created 2022-10-13T09:12:58 · updated 2022-10-17T10:38:45 · closed 2022-10-17T10:38:45
https://github.com/huggingface/datasets/issues/5109 · API: https://api.github.com/repos/huggingface/datasets/issues/5109 · id 1,407,434,706
> ## Describe the bug The cache loading is not working as expected for some class methods with a model stored in an attribute. The new fingerprint for `_map_single` is not the same at each run. The hasher generates a different hash for the class method. This comes from the `dumps` function in `datasets.utils.py_utils`, whic...
#5108 (pull request, closed): Fix a typo in arrow_dataset.py
yangky11 · 0 comments · no labels
created 2022-10-13T02:33:55 · updated 2022-10-14T09:47:28 · closed 2022-10-14T09:47:27
https://github.com/huggingface/datasets/pull/5108 · API: https://api.github.com/repos/huggingface/datasets/issues/5108 · id 1,407,044,107
> (no description)

#5107 (pull request, closed): Multiprocessed dataset builder
TevenLeScao · 17 comments · no labels
created 2022-10-12T19:59:17 · updated 2022-12-01T15:37:09 · closed 2022-11-09T17:11:43
https://github.com/huggingface/datasets/pull/5107 · API: https://api.github.com/repos/huggingface/datasets/issues/5107 · id 1,406,736,710
> This PR adds the multiprocessing part of #2650 (but not the caching of already-computed arrow files). On the other side, loading of sharded arrow files still needs to be implemented (sharded parquet files can already be loaded).

#5106 (pull request, closed): Fix task template reload from dict
lhoestq · 2 comments · no labels
created 2022-10-12T18:33:49 · updated 2022-10-13T09:59:07 · closed 2022-10-13T09:56:51
https://github.com/huggingface/datasets/pull/5106 · API: https://api.github.com/repos/huggingface/datasets/issues/5106 · id 1,406,635,758
> Since #4926 the JSON dumps are simplified, which made task template dicts empty by default. I fixed this by always including the task name, which is needed to reload a task from a dict.
#5105 (issue, open): Specifying an existing folder in download_and_prepare deletes everything in it
cakiki · 5 comments · labels: bug
created 2022-10-12T11:53:33 · updated 2022-10-20T11:53:59 · not closed
https://github.com/huggingface/datasets/issues/5105 · API: https://api.github.com/repos/huggingface/datasets/issues/5105 · id 1,406,078,357
> ## Describe the bug The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current dir but also leads to **another bug** whose traceback is the following: ``` ...

#5104 (pull request, closed): Fix loading how to guide (#5102)
riccardobucco · 1 comment · no labels
created 2022-10-12T10:34:42 · updated 2022-10-12T11:34:07 · closed 2022-10-12T11:31:55
https://github.com/huggingface/datasets/pull/5104 · API: https://api.github.com/repos/huggingface/datasets/issues/5104 · id 1,405,973,102
> (no description)

#5103 (pull request, closed): url encode hub url (#5099)
riccardobucco · 1 comment · no labels
created 2022-10-12T10:22:12 · updated 2022-10-12T15:27:24 · closed 2022-10-12T15:24:47
https://github.com/huggingface/datasets/pull/5103 · API: https://api.github.com/repos/huggingface/datasets/issues/5103 · id 1,405,956,311
> (no description)

#5102 (issue, closed): Error in creating a dataset from a Python generator
yangxuhui · 2 comments · labels: bug, good first issue, hacktoberfest
created 2022-10-11T14:28:58 · updated 2022-10-12T11:31:56 · closed 2022-10-12T11:31:56
https://github.com/huggingface/datasets/issues/5102 · API: https://api.github.com/repos/huggingface/datasets/issues/5102 · id 1,404,746,554
> ## Describe the bug In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in. ```Python >>> from datasets import Dataset >>> def my_gen...
#5101 (pull request, closed): Free the "hf" filesystem protocol for `hffs`
lhoestq · 1 comment · no labels
created 2022-10-11T11:57:21 · updated 2022-10-12T15:32:59 · closed 2022-10-12T15:30:38
https://github.com/huggingface/datasets/pull/5101 · API: https://api.github.com/repos/huggingface/datasets/issues/5101 · id 1,404,513,085
> (no description)

#5100 (issue, closed): datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method
jagochi · 0 comments · no labels
created 2022-10-11T11:16:31 · updated 2022-10-11T13:48:26 · closed 2022-10-11T13:48:26
https://github.com/huggingface/datasets/issues/5100 · API: https://api.github.com/repos/huggingface/datasets/issues/5100 · id 1,404,458,586
> (no description)

#5099 (issue, closed): datasets doesn't support # in data paths
loubnabnl · 9 comments · labels: bug, good first issue, hacktoberfest
created 2022-10-11T10:05:32 · updated 2022-10-13T13:14:20 · closed 2022-10-13T13:14:20
https://github.com/huggingface/datasets/issues/5099 · API: https://api.github.com/repos/huggingface/datasets/issues/5099 · id 1,404,370,191
> ## Describe the bug Dataset files with a `#` symbol in their paths aren't read correctly. ## Steps to reproduce the bug The data in the folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data is loaded properly. ```python ds = lo...
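Issue #5099 above comes down to URL escaping: `#` starts a URL fragment, so a raw `c#` directory name truncates the request path when embedded in a Hub URL. A small illustration with the standard library (the path is made up; this is not the actual fix merged in `datasets`):

```python
from urllib.parse import quote

# '#' begins a URL fragment, so "data/c#/train.jsonl" would be cut at "data/c"
# when placed in a URL. Percent-encoding turns '#' into '%23' and keeps the
# path intact; '/' is left alone by default (safe="/").
path = "data/c#/train.jsonl"
encoded = quote(path)
print(encoded)  # data/c%23/train.jsonl
```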
#5098 (issue, closed): Classes label error when loading symbolic links using imagefolder
horizon86 · 3 comments · labels: enhancement, good first issue, hacktoberfest
created 2022-10-11T06:10:58 · updated 2022-11-14T14:40:20 · closed 2022-11-14T14:40:20
https://github.com/huggingface/datasets/issues/5098 · API: https://api.github.com/repos/huggingface/datasets/issues/5098 · id 1,404,058,518
> **Is your feature request related to a problem? Please describe.** Like this: #4015 When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Can you give an option to decide wh...

#5097 (issue, closed): Fatal error with pyarrow/libarrow.so
catalys1 · 1 comment · labels: bug
created 2022-10-10T20:29:04 · updated 2022-10-11T06:56:01 · closed 2022-10-11T06:56:00
https://github.com/huggingface/datasets/issues/5097 · API: https://api.github.com/repos/huggingface/datasets/issues/5097 · id 1,403,679,353
> ## Describe the bug When using datasets, at the very end of my jobs the program crashes (see trace below). It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error. ## Steps to reproduce the bug This is sufficient to reprodu...

#5096 (issue, closed): Transfer some canonical datasets under an organization namespace
albertvillanova · 11 comments · labels: dataset contribution
created 2022-10-10T15:44:31 · updated 2024-06-24T06:06:28 · closed 2024-06-24T06:02:45
https://github.com/huggingface/datasets/issues/5096 · API: https://api.github.com/repos/huggingface/datasets/issues/5096 · id 1,403,379,816
> As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist). On the contrary, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and eventua...
#5095 (pull request, closed): Fix tutorial (#5093)
riccardobucco · 2 comments · no labels
created 2022-10-10T13:55:15 · updated 2022-10-10T17:50:52 · closed 2022-10-10T15:32:20
https://github.com/huggingface/datasets/pull/5095 · API: https://api.github.com/repos/huggingface/datasets/issues/5095 · id 1,403,221,408
> Close #5093

#5094 (issue, closed): Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock
RR-28023 · 11 comments · labels: bug
created 2022-10-10T13:50:56 · updated 2023-07-24T15:29:13 · closed 2023-07-24T15:29:13
https://github.com/huggingface/datasets/issues/5094 · API: https://api.github.com/repos/huggingface/datasets/issues/5094 · id 1,403,214,950
> ## Describe the bug There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datase...

#5093 (issue, closed): Mismatch between tutorial and doc
clefourrier · 3 comments · labels: bug, good first issue, hacktoberfest
created 2022-10-10T10:23:53 · updated 2022-10-10T17:51:15 · closed 2022-10-10T17:51:14
https://github.com/huggingface/datasets/issues/5093 · API: https://api.github.com/repos/huggingface/datasets/issues/5093 · id 1,402,939,660
> ## Describe the bug In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor...
#5092 (pull request, closed): Use HTML relative paths for tiles in the docs
lewtun · 3 comments · no labels
created 2022-10-10T07:24:27 · updated 2022-10-11T13:25:45 · closed 2022-10-11T13:23:23
https://github.com/huggingface/datasets/pull/5092 · API: https://api.github.com/repos/huggingface/datasets/issues/5092 · id 1,402,713,517
> This PR replaces the absolute paths in the landing page tiles with relative ones so that one can test navigation both locally and in future PRs (see [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084/en/index) for an example PR where the links don't work). I encountered this while working on the `op...

#5091 (pull request, closed): Allow connection objects in `from_sql` + small doc improvement
mariosasko · 1 comment · no labels
created 2022-10-07T12:39:44 · updated 2022-10-09T13:19:15 · closed 2022-10-09T13:16:57
https://github.com/huggingface/datasets/pull/5091 · API: https://api.github.com/repos/huggingface/datasets/issues/5091 · id 1,401,112,552
> Allow connection objects in `from_sql` (emit a warning that they are cacheable) and add a tip that explains the format of the `con` parameter when provided as a URI string. PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~...

#5090 (issue, closed): Review sync issues from GitHub to Hub
albertvillanova · 1 comment · labels: bug
created 2022-10-07T12:31:56 · updated 2022-10-08T07:07:36 · closed 2022-10-08T07:07:36
https://github.com/huggingface/datasets/issues/5090 · API: https://api.github.com/repos/huggingface/datasets/issues/5090 · id 1,401,102,407
> ## Describe the bug We have discovered that sometimes there were sync issues between GitHub and Hub datasets, after a merge commit to main branch. For example: - this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b - was not properly synced with the Hub: https...
#5089 (issue, open): Resume failed process
felix-schneider · 0 comments · labels: enhancement
created 2022-10-07T08:07:03 · updated 2022-10-07T08:07:03 · not closed
https://github.com/huggingface/datasets/issues/5089 · API: https://api.github.com/repos/huggingface/datasets/issues/5089 · id 1,400,788,486
> **Is your feature request related to a problem? Please describe.** When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress. **Describe the solution you'd like** It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart ...

#5088 (issue, open): load_datasets("json", ...) doesn't read local .json.gz properly
junwang-wish · 2 comments · labels: bug
created 2022-10-07T02:16:58 · updated 2022-10-07T14:43:16 · not closed
https://github.com/huggingface/datasets/issues/5088 · API: https://api.github.com/repos/huggingface/datasets/issues/5088 · id 1,400,530,412
> ## Describe the bug I have a local file `*.json.gz` that can be read by `pandas.read_json(lines=True)`, but cannot be read by `load_datasets("json")` (resulting in 0 lines). ## Steps to reproduce the bug ```python fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz' ds_panda = Da...
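Issue #5088 above concerns gzipped JSON-lines files. For reference, a self-contained sketch of writing and re-reading such a `.json.gz` with only the standard library (the file name and records are made up, and this does not reproduce the `load_dataset` behavior itself):

```python
import gzip
import json
import os
import tempfile

# Write a small JSON-lines file compressed with gzip ...
records = [{"text": "hello"}, {"text": "world"}]
path = os.path.join(tempfile.mkdtemp(), "test.json.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# ... then read it back line by line, one JSON object per line.
with gzip.open(path, "rt", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(loaded)  # [{'text': 'hello'}, {'text': 'world'}]
```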
#5087 (pull request, closed): Fix filter with empty indices
Mouhanedg56 · 1 comment · no labels
created 2022-10-07T01:07:00 · updated 2022-10-07T18:43:03 · closed 2022-10-07T18:40:26
https://github.com/huggingface/datasets/pull/5087 · API: https://api.github.com/repos/huggingface/datasets/issues/5087 · id 1,400,487,967
> Fix #5085

#5086 (issue, closed): HTTPError: 404 Client Error: Not Found for url
keyuchen21 · 3 comments · labels: bug
created 2022-10-06T19:48:58 · updated 2022-10-07T15:12:01 · closed 2022-10-07T15:12:01
https://github.com/huggingface/datasets/issues/5086 · API: https://api.github.com/repos/huggingface/datasets/issues/5086 · id 1,400,216,975
> ## Describe the bug I was following chapter 5 of the Hugging Face course: https://huggingface.co/course/chapter5/6?fw=tf However, I'm not able to download the datasets, getting a 404 error <img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-...

#5085 (issue, closed): Filtering on an empty dataset returns a corrupted dataset.
gabegma · 3 comments · labels: bug, hacktoberfest
created 2022-10-06T18:18:49 · updated 2022-10-07T19:06:02 · closed 2022-10-07T18:40:26
https://github.com/huggingface/datasets/issues/5085 · API: https://api.github.com/repos/huggingface/datasets/issues/5085 · id 1,400,113,569
> ## Describe the bug When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted. ## Steps to reproduce the bug ```python datasets = load_dataset("glue", "sst2") dataset_split = datasets['validation'] ds_filter_1 = dataset_split.filter(lambda x: False) # ...
#5084 (pull request, closed): IterableDataset formatting in numpy/torch/tf/jax
lhoestq · 3 comments · no labels
created 2022-10-06T16:53:38 · updated 2023-09-24T10:06:51 · closed 2022-12-20T17:19:52
https://github.com/huggingface/datasets/pull/5084 · API: https://api.github.com/repos/huggingface/datasets/issues/5084 · id 1,400,016,229
> This code now returns a numpy array: ```python from datasets import load_dataset ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np") print(next(iter(ds))["image"]) ``` It also works with "arrow", "pandas", "torch", "tf" and "jax" ### Implementation details: I'm using the ex...

#5083 (issue, closed): Support numpy/torch/tf/jax formatting for IterableDataset
lhoestq · 2 comments · labels: enhancement, streaming, good second issue
created 2022-10-06T15:14:58 · updated 2023-10-09T12:42:15 · closed 2023-10-09T12:42:15
https://github.com/huggingface/datasets/issues/5083 · API: https://api.github.com/repos/huggingface/datasets/issues/5083 · id 1,399,842,514
> Right now `IterableDataset` doesn't do any formatting. In particular this code should return a numpy array: ```python from datasets import load_dataset ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np") print(next(iter(ds))["image"]) ``` Right now it returns a PIL.Image. S...

#5082 (pull request, closed): adding keep in memory
Mustapha-AJEGHRIR · 2 comments · no labels
created 2022-10-06T11:10:46 · updated 2022-10-07T14:35:34 · closed 2022-10-07T14:32:54
https://github.com/huggingface/datasets/pull/5082 · API: https://api.github.com/repos/huggingface/datasets/issues/5082 · id 1,399,379,777
> Fixing #514. Hello @mariosasko 👋, I have implemented what you recommended to fix the keep-in-memory problem for shuffle in issue #514.
#5081 (issue, open): Bug loading `sentence-transformers/parallel-sentences`
PhilipMay · 8 comments · labels: bug
created 2022-10-06T10:47:51 · updated 2022-10-11T10:00:48 · not closed
https://github.com/huggingface/datasets/issues/5081 · API: https://api.github.com/repos/huggingface/datasets/issues/5081 · id 1,399,340,050
> ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sentence-transformers/parallel-sentences") ``` raises this: ``` /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the '...

#5080 (issue, open): Use hfh for caching
albertvillanova · 1 comment · labels: enhancement
created 2022-10-06T05:51:58 · updated 2022-10-06T14:26:05 · not closed
https://github.com/huggingface/datasets/issues/5080 · API: https://api.github.com/repos/huggingface/datasets/issues/5080 · id 1,398,849,565
> ## Is your feature request related to a problem? As previously discussed in our meeting with @Wauplin and agreed on our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching. ## Describe the solution you'd like Due to the peculiarities of the `datasets` cache, I would prop...

#5079 (pull request, closed): refactor: replace AssertionError with more meaningful exceptions (#5074)
galbwe · 1 comment · no labels
created 2022-10-06T01:39:35 · updated 2022-10-07T14:35:43 · closed 2022-10-07T14:33:10
https://github.com/huggingface/datasets/pull/5079 · API: https://api.github.com/repos/huggingface/datasets/issues/5079 · id 1,398,609,305
> Closes #5074 Replaces `AssertionError` in the following files with more descriptive exceptions: - `src/datasets/arrow_reader.py` - `src/datasets/builder.py` - `src/datasets/utils/version.py` The issue listed more files that needed to be fixed, but the rest of them were contained in the top-level `datasets` d...
#5078 (pull request, closed): Fix header level in Audio docs
stevhliu · 1 comment · no labels
created 2022-10-05T20:22:44 · updated 2022-10-06T08:12:23 · closed 2022-10-06T08:09:41
https://github.com/huggingface/datasets/pull/5078 · API: https://api.github.com/repos/huggingface/datasets/issues/5078 · id 1,398,335,148
> Fixes header level so `Dataset features` is the doc title instead of `The Audio type`: ![Screen Shot 2022-10-05 at 1 22 02 PM](https://user-images.githubusercontent.com/59462357/194155840-eeb5d62f-f4eb-411e-b281-8494c5fffdce.png)

#5077 (pull request, closed): Fix passed download_config in HubDatasetModuleFactoryWithoutScript
albertvillanova · 1 comment · no labels
created 2022-10-05T16:42:36 · updated 2022-10-06T05:31:22 · closed 2022-10-06T05:29:06
https://github.com/huggingface/datasets/pull/5077 · API: https://api.github.com/repos/huggingface/datasets/issues/5077 · id 1,398,080,859
> Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`.

#5076 (pull request, closed): fix: update exception throw from OSError to EnvironmentError in `push…
rahulXs · 1 comment · no labels
created 2022-10-05T14:46:29 · updated 2022-10-07T14:35:57 · closed 2022-10-07T14:33:27
https://github.com/huggingface/datasets/pull/5076 · API: https://api.github.com/repos/huggingface/datasets/issues/5076 · id 1,397,918,092
> Status: Ready for review. Description of Changes: Fixes #5075. Changes proposed in this pull request: - Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present.
#5075 (issue, closed): Throw EnvironmentError when token is not present
mariosasko · 1 comment · labels: good first issue, hacktoberfest
created 2022-10-05T14:14:18 · updated 2022-10-07T14:33:28 · closed 2022-10-07T14:33:28
https://github.com/huggingface/datasets/issues/5075 · API: https://api.github.com/repos/huggingface/datasets/issues/5075 · id 1,397,865,501
> Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present.

#5074 (issue, closed): Replace AssertionErrors with more meaningful errors
mariosasko · 3 comments · labels: good first issue, hacktoberfest
created 2022-10-05T14:03:55 · updated 2022-10-07T14:33:11 · closed 2022-10-07T14:33:11
https://github.com/huggingface/datasets/issues/5074 · API: https://api.github.com/repos/huggingface/datasets/issues/5074 · id 1,397,850,352
> Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc. The files with AssertionErrors that need to be replaced: ``` src/datasets/arrow_reader.py src/datasets/builder.py src/datasets/utils/version.py ```
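Issue #5074 above asks for replacing bare `assert` statements with descriptive, catchable exceptions. A minimal sketch of the before/after pattern (the function and split names are illustrative, not from the `datasets` codebase):

```python
def set_split(name, allowed=("train", "validation", "test")):
    # Before: `assert name in allowed` raised a bare AssertionError with no message,
    # and asserts are stripped entirely under `python -O`.
    # After: raise a descriptive exception of the appropriate type instead.
    if not isinstance(name, str):
        raise TypeError(f"split name must be a str, got {type(name).__name__}")
    if name not in allowed:
        raise ValueError(f"unknown split {name!r}; expected one of {allowed}")
    return name

print(set_split("train"))  # train
```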
#5073 (pull request, closed): Restore saved format state in `load_from_disk`
asofiaoliveira · 1 comment · no labels
created 2022-10-05T13:51:47 · updated 2022-10-11T16:55:07 · closed 2022-10-11T16:49:23
https://github.com/huggingface/datasets/pull/5073 · API: https://api.github.com/repos/huggingface/datasets/issues/5073 · id 1,397,832,183
> Hello! @mariosasko This pull request relates to issue #5050 and intends to add the format to datasets loaded from disk. All I did was add a `set_format` in `Dataset.load_from_disk`, as `DatasetDict.load_from_disk` relies on the first. I don't know if I should add a test and where, so let me know if I should and ...

#5072 (pull request, closed): Image & Audio formatting for numpy/torch/tf/jax
lhoestq · 3 comments · no labels
created 2022-10-05T13:07:03 · updated 2022-10-10T13:24:10 · closed 2022-10-10T13:21:32
https://github.com/huggingface/datasets/pull/5072 · API: https://api.github.com/repos/huggingface/datasets/issues/5072 · id 1,397,765,531
> Added support for image and audio formatting for numpy, torch, tf and jax. For images, the dtype used is the one of the image (the one returned by PIL.Image), e.g. uint8. I also added support for string, binary and None types. In particular for torch and jax, strings are kept unchanged (previously it was returning...

#5071 (pull request, closed): Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS
albertvillanova · 2 comments · no labels
created 2022-10-05T06:28:39 · updated 2022-10-06T14:43:12 · closed 2022-10-06T14:40:26
https://github.com/huggingface/datasets/pull/5071 · API: https://api.github.com/repos/huggingface/datasets/issues/5071 · id 1,397,301,270
> This PR supports defining a default config name, even if no predefined allowed config names are set. Fix #5070. CC: @stas00
1,396,765,647
https://api.github.com/repos/huggingface/datasets/issues/5070
https://github.com/huggingface/datasets/issues/5070
5,070
Support default config name when no builder configs
closed
1
2022-10-04T19:49:35
2022-10-06T14:40:26
2022-10-06T14:40:26
albertvillanova
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined. **Additional context** In order to ...
false
1,396,361,768
https://api.github.com/repos/huggingface/datasets/issues/5067
https://github.com/huggingface/datasets/pull/5067
5,067
Fix CONTRIBUTING once dataset scripts transferred to Hub
closed
1
2022-10-04T14:16:05
2022-10-06T06:14:43
2022-10-06T06:12:12
albertvillanova
[]
This PR updates the `CONTRIBUTING.md` guide, once all the dataset scripts have been removed from the GitHub repo and transferred to the HF Hub: - #4974 See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b Additionally, this PR fixes the line separator that by som...
true
1,396,086,745
https://api.github.com/repos/huggingface/datasets/issues/5066
https://github.com/huggingface/datasets/pull/5066
5,066
Support streaming gzip.open
closed
1
2022-10-04T11:20:05
2022-10-06T15:13:51
2022-10-06T15:11:29
albertvillanova
[]
This PR implements support for streaming out-of-the-box dataset scripts containing `gzip.open`. This has been a recurring issue. See, e.g.: - #5060 - #3191
true
1,396,003,362
https://api.github.com/repos/huggingface/datasets/issues/5065
https://github.com/huggingface/datasets/pull/5065
5,065
Ci py3.10
closed
2
2022-10-04T10:13:51
2022-11-29T15:28:05
2022-11-29T15:25:26
lhoestq
[]
Added a CI job for python 3.10 Some dependencies don't work on 3.10 like apache beam, so I remove them from the extras in this case. I also removed some s3 fixtures that we don't use anymore (and that don't work on 3.10 anyway)
true
1,395,978,143
https://api.github.com/repos/huggingface/datasets/issues/5064
https://github.com/huggingface/datasets/pull/5064
5,064
Align signature of create/delete_repo with latest hfh
closed
1
2022-10-04T09:54:53
2022-10-07T17:02:11
2022-10-07T16:59:30
albertvillanova
[]
This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead. Related to: - #5063 CC: @lhoestq
true
1,395,895,463
https://api.github.com/repos/huggingface/datasets/issues/5063
https://github.com/huggingface/datasets/pull/5063
5,063
Align signature of list_repo_files with latest hfh
closed
1
2022-10-04T08:51:46
2022-10-07T16:42:57
2022-10-07T16:40:16
albertvillanova
[]
This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`. This is already the case for `dataset_info`. CC: @lhoestq
true
1,395,739,417
https://api.github.com/repos/huggingface/datasets/issues/5062
https://github.com/huggingface/datasets/pull/5062
5,062
Fix CI hfh token warning
closed
2
2022-10-04T06:36:54
2022-10-04T08:58:15
2022-10-04T08:42:31
albertvillanova
[]
In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431 ``` tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub tests/te...
true
1,395,476,770
https://api.github.com/repos/huggingface/datasets/issues/5061
https://github.com/huggingface/datasets/issues/5061
5,061
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`
closed
6
2022-10-03T23:51:38
2023-07-21T14:43:35
2023-07-21T14:43:34
ZhaofengWu
[ "bug" ]
## Describe the bug When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`. ``` File "~/project/dataset.py", line 204, in <dictcomp> split: dataset.map( File ".../site-packages/datasets/arrow_dataset.py", line 24...
false
1,395,382,940
https://api.github.com/repos/huggingface/datasets/issues/5060
https://github.com/huggingface/datasets/issues/5060
5,060
Unable to Use Custom Dataset Locally
closed
4
2022-10-03T21:55:16
2022-10-06T14:29:18
2022-10-06T14:29:17
zanussbaum
[ "bug" ]
## Describe the bug I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says ``` If the data files live in ...
false
1,395,050,876
https://api.github.com/repos/huggingface/datasets/issues/5059
https://github.com/huggingface/datasets/pull/5059
5,059
Fix typo
closed
1
2022-10-03T17:05:25
2022-10-03T17:34:40
2022-10-03T17:32:27
stevhliu
[]
Fixes a small typo :)
true
1,394,962,424
https://api.github.com/repos/huggingface/datasets/issues/5058
https://github.com/huggingface/datasets/pull/5058
5,058
Mark CI tests as xfail when 502 error
closed
1
2022-10-03T15:53:55
2022-10-04T10:03:23
2022-10-04T10:01:23
albertvillanova
[]
To make CI more robust, we could mark as xfail when the Hub raises a 502 error (besides 500 error): - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files - https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431 ``` > raise HTTPEr...
true
1,394,827,216
https://api.github.com/repos/huggingface/datasets/issues/5057
https://github.com/huggingface/datasets/pull/5057
5,057
Support `converters` in `CsvBuilder`
closed
1
2022-10-03T14:23:21
2022-10-04T11:19:28
2022-10-04T11:17:32
mariosasko
[]
Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
true
1,394,713,173
https://api.github.com/repos/huggingface/datasets/issues/5056
https://github.com/huggingface/datasets/pull/5056
5,056
Fix broken URL's (GEM)
closed
2
2022-10-03T13:13:22
2022-10-04T13:49:00
2022-10-04T13:48:59
manandey
[]
This PR fixes the broken URL's in GEM. cc. @lhoestq, @albertvillanova
true
1,394,503,844
https://api.github.com/repos/huggingface/datasets/issues/5055
https://github.com/huggingface/datasets/pull/5055
5,055
Fix backward compatibility for dataset_infos.json
closed
1
2022-10-03T10:30:14
2022-10-03T13:43:55
2022-10-03T13:41:32
lhoestq
[]
While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored o...
true
1,394,152,728
https://api.github.com/repos/huggingface/datasets/issues/5054
https://github.com/huggingface/datasets/pull/5054
5,054
Fix license/citation information of squadshifts dataset card
closed
1
2022-10-03T05:19:13
2022-10-03T09:26:49
2022-10-03T09:24:30
albertvillanova
[ "dataset contribution" ]
This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information: - https://github.com/modestyachts/squadshifts-website/issues/1 Additionally, we have updated the mention in their website to our `datasets` library (they were referring ...
true
1,393,739,882
https://api.github.com/repos/huggingface/datasets/issues/5053
https://github.com/huggingface/datasets/issues/5053
5,053
Intermittent JSON parse error when streaming the Pile
open
3
2022-10-02T11:56:46
2022-10-04T17:59:03
null
neelnanda-io
[ "bug" ]
## Describe the bug I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash. This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tok...
false
1,393,076,765
https://api.github.com/repos/huggingface/datasets/issues/5052
https://github.com/huggingface/datasets/pull/5052
5,052
added from_generator method to IterableDataset class.
closed
3
2022-09-30T22:14:05
2022-10-05T12:51:48
2022-10-05T12:10:48
hamid-vakilzadeh
[]
Hello, This resolves issue #4988. I added a method `from_generator` to class `IterableDataset`. I modified the `read` method of the input stream generator to also return Iterable_dataset.
true
1,392,559,503
https://api.github.com/repos/huggingface/datasets/issues/5051
https://github.com/huggingface/datasets/pull/5051
5,051
Revert task removal in folder-based builders
closed
1
2022-09-30T14:50:03
2022-10-03T12:23:35
2022-10-03T12:21:31
mariosasko
[]
Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassification` task for consistency. This is needed to fix https://github.com/huggingface/transformers/issues/19177. I think we should soon deprecate and remove the current task API (and investigate if it's possible to in...
true
1,392,381,882
https://api.github.com/repos/huggingface/datasets/issues/5050
https://github.com/huggingface/datasets/issues/5050
5,050
Restore saved format state in `load_from_disk`
closed
2
2022-09-30T12:40:07
2022-10-11T16:49:24
2022-10-11T16:49:24
mariosasko
[ "bug", "good first issue" ]
Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that. Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815
false
1,392,361,381
https://api.github.com/repos/huggingface/datasets/issues/5049
https://github.com/huggingface/datasets/pull/5049
5,049
Add `kwargs` to `Dataset.from_generator`
closed
1
2022-09-30T12:24:27
2022-10-03T11:00:11
2022-10-03T10:58:15
mariosasko
[]
Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance).
true
1,392,170,680
https://api.github.com/repos/huggingface/datasets/issues/5048
https://github.com/huggingface/datasets/pull/5048
5,048
Fix bug with labels of eurlex config of lex_glue dataset
closed
4
2022-09-30T09:47:12
2022-09-30T16:30:25
2022-09-30T16:21:41
iliaschalkidis
[ "dataset contribution" ]
Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable. In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequ...
true
1,392,088,398
https://api.github.com/repos/huggingface/datasets/issues/5047
https://github.com/huggingface/datasets/pull/5047
5,047
Fix cats_vs_dogs
closed
1
2022-09-30T08:47:29
2022-09-30T10:23:22
2022-09-30T09:34:28
lhoestq
[ "dataset contribution" ]
Reported in https://github.com/huggingface/datasets/pull/3878 I updated the number of examples
true
1,391,372,519
https://api.github.com/repos/huggingface/datasets/issues/5046
https://github.com/huggingface/datasets/issues/5046
5,046
Audiofolder creates empty Dataset if files same level as metadata
closed
5
2022-09-29T19:17:23
2022-10-28T13:05:07
2022-10-28T13:05:07
msis
[ "bug", "good first issue", "hacktoberfest" ]
## Describe the bug When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl` ), the `load_dataset` returns a `DatasetDict` with no rows but the correct columns. https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain...
false
1,391,287,609
https://api.github.com/repos/huggingface/datasets/issues/5045
https://github.com/huggingface/datasets/issues/5045
5,045
Automatically revert to last successful commit to hub when a push_to_hub is interrupted
closed
5
2022-09-29T18:08:12
2023-10-16T13:30:49
2023-10-16T13:30:49
jorahn
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I pushed a modification of a large dataset (remove a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset to raise an error on load_dataset() (ValueError couldn’t cast … because column names do...
false
1,391,242,908
https://api.github.com/repos/huggingface/datasets/issues/5044
https://github.com/huggingface/datasets/issues/5044
5,044
integrate `load_from_disk` into `load_dataset`
open
15
2022-09-29T17:37:12
2025-06-28T09:00:44
null
stas00
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Is it possible to make `load_dataset` more universal similar to `from_pretrained` in `transformers` so that it can handle the hub, and the local path datasets of all supported types? Currently one has to choose a different loader depending on how ...
false
1,391,141,773
https://api.github.com/repos/huggingface/datasets/issues/5043
https://github.com/huggingface/datasets/pull/5043
5,043
Fix `flatten_indices` with empty indices mapping
closed
1
2022-09-29T16:17:28
2022-09-30T15:46:39
2022-09-30T15:44:25
mariosasko
[]
Fix #5038
true
1,390,762,877
https://api.github.com/repos/huggingface/datasets/issues/5042
https://github.com/huggingface/datasets/pull/5042
5,042
Update swiss judgment prediction
closed
1
2022-09-29T12:10:02
2022-09-30T07:14:00
2022-09-29T14:32:02
JoelNiklaus
[ "dataset contribution" ]
I forgot to add the new citation.
true
1,390,722,230
https://api.github.com/repos/huggingface/datasets/issues/5041
https://github.com/huggingface/datasets/pull/5041
5,041
Support streaming hendrycks_test dataset.
closed
1
2022-09-29T11:37:58
2022-09-30T07:13:38
2022-09-29T12:07:29
albertvillanova
[ "dataset contribution" ]
This PR: - supports streaming - fixes the description section of the dataset card
true
1,390,566,428
https://api.github.com/repos/huggingface/datasets/issues/5040
https://github.com/huggingface/datasets/pull/5040
5,040
Fix NonMatchingChecksumError in hendrycks_test dataset
closed
1
2022-09-29T09:37:43
2022-09-29T10:06:22
2022-09-29T10:04:19
albertvillanova
[ "dataset contribution" ]
Update metadata JSON. Fix #5039.
true
1,390,353,315
https://api.github.com/repos/huggingface/datasets/issues/5039
https://github.com/huggingface/datasets/issues/5039
5,039
Hendrycks Checksum
closed
3
2022-09-29T06:56:20
2022-09-29T10:23:30
2022-09-29T10:04:20
DanielHesslow
[ "dataset bug" ]
Hi, The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not compare correctly, I guess it has been updated on the remote. ``` datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://people.eecs.berkeley.edu/~hendrycks/data....
false
1,389,631,122
https://api.github.com/repos/huggingface/datasets/issues/5038
https://github.com/huggingface/datasets/issues/5038
5,038
`Dataset.unique` showing wrong output after filtering
closed
2
2022-09-28T16:20:35
2022-09-30T15:44:25
2022-09-30T15:44:25
mxschmdt
[ "bug" ]
## Describe the bug After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset. ## Steps to reproduce the bug ```python from datasets import Dataset dataset = Dataset.from_dict({'id': [0]}) dataset = dataset.filter(lambda _: False) print(data...
false
1,389,244,722
https://api.github.com/repos/huggingface/datasets/issues/5037
https://github.com/huggingface/datasets/pull/5037
5,037
Improve CI performance speed of PackagedDatasetTest
closed
2
2022-09-28T12:08:16
2022-09-30T16:05:42
2022-09-30T16:03:24
albertvillanova
[]
This PR improves PackagedDatasetTest CI performance speed. For Ubuntu (latest): - Duration (without parallelism) before: 334.78s (5.58m) - Duration (without parallelism) afterwards: 0.48s The approach is passing a dummy `data_files` argument to load the builder, so that it avoids the slow inferring of it over the ...
true
1,389,094,075
https://api.github.com/repos/huggingface/datasets/issues/5036
https://github.com/huggingface/datasets/pull/5036
5,036
Add oversampling strategy iterable datasets interleave
closed
1
2022-09-28T10:10:23
2022-09-30T12:30:48
2022-09-30T12:28:23
ylacombe
[]
Hello everyone, Following the issue #4893 and the PR #4831, I propose here an oversampling strategy for a `IterableDataset` list. The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once. It follows roughly the same logic behind #4831, namely...
true
1,388,914,476
https://api.github.com/repos/huggingface/datasets/issues/5035
https://github.com/huggingface/datasets/pull/5035
5,035
Fix typos in load docstrings and comments
closed
1
2022-09-28T08:05:07
2022-09-28T17:28:40
2022-09-28T17:26:15
albertvillanova
[]
Minor fix of typos in load docstrings and comments
true
1,388,855,136
https://api.github.com/repos/huggingface/datasets/issues/5034
https://github.com/huggingface/datasets/pull/5034
5,034
Update README.md of yahoo_answers_topics dataset
closed
4
2022-09-28T07:17:33
2022-10-06T15:56:05
2022-10-04T13:49:25
borgr
[]
null
true
1,388,842,236
https://api.github.com/repos/huggingface/datasets/issues/5033
https://github.com/huggingface/datasets/pull/5033
5,033
Remove redundant code from some dataset module factories
closed
1
2022-09-28T07:06:26
2022-09-28T16:57:51
2022-09-28T16:55:12
albertvillanova
[]
This PR removes some redundant code introduced by mistake after a refactoring in: - #4576
true
1,388,270,935
https://api.github.com/repos/huggingface/datasets/issues/5032
https://github.com/huggingface/datasets/issues/5032
5,032
new dataset type: single-label and multi-label video classification
open
6
2022-09-27T19:40:11
2022-11-02T19:10:13
null
fcakyon
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset. **Describe the solution you'd like** Assume I h...
false