Each row of the dataset has the following columns (per-column statistics as reported by the dataset viewer):

| Column | Type | Range / stats |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | length 58 to 61 |
| html_url | string | length 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | length 1 to 290 |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | length 3 to 26 |
| labels | list | length 0 to 4 |
| body | string | length 0 to 228k |
| is_pull_request | bool | 2 classes |
#### 5543: the pile datasets url seems to change back
Issue · closed · 2 comments · opened by wjfwzzc · created 2023-02-17T08:40:11 · updated 2023-02-21T06:37:00 · closed 2023-02-20T08:41:33 · id 1,588,951,379
https://github.com/huggingface/datasets/issues/5543 · API: https://api.github.com/repos/huggingface/datasets/issues/5543
> ### Describe the bug in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again. ### Steps to reproduce the bug ```python3 from datasets import load_dataset dataset = load_dataset("bookcorpusopen") ``` shows ```python3 ...

#### 5542: Avoid saving sparse ChunkedArrays in pyarrow tables
Pull request · closed · 2 comments · opened by marioga · created 2023-02-17T01:52:38 · updated 2023-02-17T19:20:49 · closed 2023-02-17T11:12:32 · id 1,588,633,724
https://github.com/huggingface/datasets/pull/5542 · API: https://api.github.com/repos/huggingface/datasets/issues/5542
> Fixes https://github.com/huggingface/datasets/issues/5541

#### 5541: Flattening indices in selected datasets is extremely inefficient
Issue · closed · 3 comments · opened by marioga · created 2023-02-17T01:52:24 · updated 2023-02-22T13:15:20 · closed 2023-02-17T11:12:33 · id 1,588,633,555
https://github.com/huggingface/datasets/issues/5541 · API: https://api.github.com/repos/huggingface/datasets/issues/5541
> ### Describe the bug If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. Thi...
#### 5540: Tutorial for creating a dataset
Pull request · closed · 2 comments · opened by stevhliu · created 2023-02-16T22:09:35 · updated 2023-02-17T18:50:46 · closed 2023-02-17T18:41:28 · id 1,588,438,344
https://github.com/huggingface/datasets/pull/5540 · API: https://api.github.com/repos/huggingface/datasets/issues/5540
> A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂

#### 5539: IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
Issue · closed · 4 comments · opened by aalbersk · created 2023-02-16T16:08:51 · updated 2023-02-22T10:30:30 · closed 2023-02-21T13:03:57 · labels: good first issue · id 1,587,970,083
https://github.com/huggingface/datasets/issues/5539 · API: https://api.github.com/repos/huggingface/datasets/issues/5539
> ### Describe the bug When dataset contains a 0-dim tensor, formatting.py raises a following error and fails. ```bash Traceback (most recent call last): File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row return _unnest(formatted_batch) File "<path>/lib/py...

#### 5538: load_dataset in seaborn is not working for me. getting this error.
Issue · closed · 1 comment · opened by reemaranibarik · created 2023-02-16T14:01:58 · updated 2023-02-16T14:44:36 · closed 2023-02-16T14:44:36 · id 1,587,732,596
https://github.com/huggingface/datasets/issues/5538 · API: https://api.github.com/repos/huggingface/datasets/issues/5538
> TimeoutError Traceback (most recent call last) ~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1345 try: -> 1346 h.request(req.get_method(), req.selector, req.data, headers, 1347 encode_chu...

#### 5537: Increase speed of data files resolution
Issue · closed · 5 comments · opened by lhoestq · created 2023-02-16T12:11:45 · updated 2023-12-15T13:12:31 · closed 2023-12-15T13:12:31 · labels: enhancement, good second issue · id 1,587,567,464
https://github.com/huggingface/datasets/issues/5537 · API: https://api.github.com/repos/huggingface/datasets/issues/5537
> Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step. `datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files. This comes from `res...

#### 5536: Failure to hash function when using .map()
Issue · closed · 14 comments · opened by venzen · created 2023-02-16T03:12:07 · updated 2023-09-08T21:06:01 · closed 2023-02-16T14:56:41 · id 1,586,930,643
https://github.com/huggingface/datasets/issues/5536 · API: https://api.github.com/repos/huggingface/datasets/issues/5536
> ### Describe the bug _Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca...

#### 5535: Add JAX-formatting documentation
Pull request · closed · 9 comments · opened by alvarobartt · created 2023-02-15T20:35:11 · updated 2023-02-20T10:39:42 · closed 2023-02-20T10:32:39 · id 1,586,520,369
https://github.com/huggingface/datasets/pull/5535 · API: https://api.github.com/repos/huggingface/datasets/issues/5535
> ## What's in this PR? As a follow-up of #5522, I've created this entry in the documentation to explain how to use `.with_format("jax")` and why is it useful. @lhoestq Feel free to drop any feedback and/or suggestion, as probably more useful features can be included there!

#### 5534: map() breaks at certain dataset size when using Array3D
Issue · open · 2 comments · opened by ArneBinder · created 2023-02-15T16:34:25 · updated 2023-03-03T16:31:33 · id 1,586,177,862
https://github.com/huggingface/datasets/issues/5534 · API: https://api.github.com/repos/huggingface/datasets/issues/5534
> ### Describe the bug `map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception: ``` Traceback (most recent cal...

#### 5533: Add reduce function
Pull request · closed · 21 comments · opened by AJDERS · created 2023-02-15T13:44:01 · updated 2024-11-25T14:33:27 · closed 2023-02-28T14:46:12 · id 1,585,885,871
https://github.com/huggingface/datasets/pull/5533 · API: https://api.github.com/repos/huggingface/datasets/issues/5533
> This PR closes #5496 . I tried to imitate the `reduce`-method from `functools`, i.e. the function input must be a binary operation. I assume that the input type has an empty element, i.e. `input_type()` is defined, as the acumulant is instantiated as this object - im not sure that is this a reasonable assumption? ...

#### 5532: train_test_split in arrow_dataset does not ensure to keep single classes in test set
Issue · closed · 1 comment · opened by Ulipenitz · created 2023-02-14T16:52:29 · updated 2023-02-15T16:09:19 · closed 2023-02-15T16:09:19 · id 1,584,505,128
https://github.com/huggingface/datasets/issues/5532 · API: https://api.github.com/repos/huggingface/datasets/issues/5532
> ### Describe the bug When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set. thus will never be considered for training. ### Steps to reproduce the bug ``` import numpy as np from datasets import Dataset ...
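The failure mode in #5532 is the classic argument for a stratified split. Below is a minimal pure-Python sketch of the idea, not the `datasets` implementation (recent `datasets` releases also expose a `stratify_by_column` argument on `train_test_split` for `ClassLabel` columns); the function name and toy labels are illustrative.

```python
import random
from collections import defaultdict

def stratified_indices(labels, test_size=0.25, seed=0):
    """Split indices per class so singleton classes always stay in train."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    train, test = [], []
    for members in by_class.values():
        rng.shuffle(members)
        if len(members) == 1:        # a single example: keep it trainable
            train.extend(members)
            continue
        n_test = max(1, int(len(members) * test_size))
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test

labels = ["a", "a", "a", "b", "b", "c"]   # class "c" has one example
train, test = stratified_indices(labels)
assert "c" in {labels[i] for i in train}  # the singleton never lands in test
```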
#### 5531: Invalid Arrow data from JSONL
Issue · open · 0 comments · opened by lhoestq · created 2023-02-14T15:39:49 · updated 2023-02-14T15:46:09 · labels: bug · id 1,584,387,276
https://github.com/huggingface/datasets/issues/5531 · API: https://api.github.com/repos/huggingface/datasets/issues/5531
> This code fails: ```python from datasets import Dataset ds = Dataset.from_json(path_to_file) ds.data.validate() ``` raises ```python ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063) ``` This ...

#### 5530: Add missing license in `NumpyFormatter`
Pull request · closed · 2 comments · opened by alvarobartt · created 2023-02-13T19:33:23 · updated 2023-02-14T14:40:41 · closed 2023-02-14T12:23:58 · id 1,582,938,241
https://github.com/huggingface/datasets/pull/5530 · API: https://api.github.com/repos/huggingface/datasets/issues/5530
> ## What's in this PR? As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, but present on the rest of the `formatting/*.py` files. So this PR is basically to include it there.

#### 5529: Fix `datasets.load_from_disk`, `DatasetDict.load_from_disk` and `Dataset.load_from_disk`
Pull request · closed · 12 comments · opened by alvarobartt · created 2023-02-13T14:54:55 · updated 2023-02-23T18:14:32 · closed 2023-02-23T18:05:26 · id 1,582,501,233
https://github.com/huggingface/datasets/pull/5529 · API: https://api.github.com/repos/huggingface/datasets/issues/5529
> ## What's in this PR? After playing around a little bit with 🤗`datasets` in Google Cloud Storage (GCS), I found out some things that should be fixed IMO in the code: * `datasets.load_from_disk` is not checking whether `state.json` is there too when trying to load a `Dataset`, just `dataset_info.json` is checked ...

#### 5528: Push to hub in a pull request
Pull request · open · 11 comments · opened by AJDERS · created 2023-02-13T11:43:47 · updated 2023-10-06T21:58:02 · id 1,582,195,085
https://github.com/huggingface/datasets/pull/5528 · API: https://api.github.com/repos/huggingface/datasets/issues/5528
> Fixes #5492. Introduce new kwarg `create_pr` in `push_to_hub`, which is passed to `HFapi.upload_file`.

#### 5527: Fix benchmarks CI - pin protobuf
Pull request · closed · 5 comments · opened by lhoestq · created 2023-02-12T11:51:25 · updated 2023-02-13T10:29:03 · closed 2023-02-13T09:24:16 · id 1,581,228,531
https://github.com/huggingface/datasets/pull/5527 · API: https://api.github.com/repos/huggingface/datasets/issues/5527
> fix https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331

#### 5526: Allow loading/saving of FAISS index using fsspec
Pull request · closed · 4 comments · opened by Dref360 · created 2023-02-10T23:37:14 · updated 2023-03-27T15:26:46 · closed 2023-03-27T15:18:20 · id 1,580,488,133
https://github.com/huggingface/datasets/pull/5526 · API: https://api.github.com/repos/huggingface/datasets/issues/5526
> Fixes #5428 Allow loading/saving of FAISS index using fsspec: 1. Simply use BufferedIOWriter/Reader to Read/Write indices on fsspec stream. 2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense. I can work on the documentation once the code changes are approved.

#### 5525: TypeError: Couldn't cast array of type string to null
Issue · closed · 6 comments · opened by TJ-Solergibert · created 2023-02-10T21:12:36 · updated 2023-02-14T17:41:08 · closed 2023-02-14T09:35:49 · id 1,580,342,729
https://github.com/huggingface/datasets/issues/5525 · API: https://api.github.com/repos/huggingface/datasets/issues/5525
> ### Describe the bug Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error. I alredy tried reseting the shorter strings...

#### 5524: [INVALID PR]
Pull request · closed · 1 comment · opened by alvarobartt · created 2023-02-10T19:35:50 · updated 2023-02-10T19:51:45 · closed 2023-02-10T19:49:12 · id 1,580,219,454
https://github.com/huggingface/datasets/pull/5524 · API: https://api.github.com/repos/huggingface/datasets/issues/5524
> Hi to whoever is reading this! 🤗 ## What's in this PR? ~~Basically, I've removed the 🤗`datasets` installation as `python -m pip install ".[quality]" in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose e.g. to ...

#### 5523: Checking that split name is correct happens only after the data is downloaded
Issue · open · 0 comments · opened by polinaeterna · created 2023-02-10T19:13:03 · updated 2023-02-10T19:14:50 · labels: bug · id 1,580,193,015
https://github.com/huggingface/datasets/issues/5523 · API: https://api.github.com/repos/huggingface/datasets/issues/5523
> ### Describe the bug Verification of split names (=indexing data by split) happens after downloading the data. So when the split name is incorrect, users learn about that only after the data is fully downloaded, for large datasets it might take a lot of time. ### Steps to reproduce the bug Load any dataset with rand...

#### 5522: Minor changes in JAX-formatting docstrings & type-hints
Pull request · closed · 16 comments · opened by alvarobartt · created 2023-02-10T19:05:00 · updated 2023-02-15T14:48:27 · closed 2023-02-15T13:19:06 · id 1,580,183,124
https://github.com/huggingface/datasets/pull/5522 · API: https://api.github.com/repos/huggingface/datasets/issues/5522
> Hi to whoever is reading this! 🤗 ## What's in this PR? I was exploring the code regarding the `JaxFormatter` implemented in 🤗`datasets`, and found some things that IMO could be changed. Those are mainly regarding the docstrings and the type-hints based on `jax`'s 0.4.1 release where `jax.Array` was introduced a...

#### 5521: Fix bug when casting empty array to class labels
Pull request · closed · 1 comment · opened by marioga · created 2023-02-09T18:47:59 · updated 2023-02-13T20:40:48 · closed 2023-02-12T11:17:17 · id 1,578,418,289
https://github.com/huggingface/datasets/pull/5521 · API: https://api.github.com/repos/huggingface/datasets/issues/5521
> Fix https://github.com/huggingface/datasets/issues/5520.

#### 5520: ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray
Issue · closed · 0 comments · opened by marioga · created 2023-02-09T18:46:52 · updated 2023-02-12T11:17:18 · closed 2023-02-12T11:17:18 · id 1,578,417,074
https://github.com/huggingface/datasets/issues/5520 · API: https://api.github.com/repos/huggingface/datasets/issues/5520
> ### Describe the bug `ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`. ### Steps to reproduce the bug Minimal steps: ```python import pyarrow as pa from datasets import ClassLabel ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64())) ``` In practice, thi...

#### 5519: Lint code with `ruff`
Pull request · closed · 6 comments · opened by mariosasko · created 2023-02-09T17:50:21 · updated 2024-06-01T15:35:02 · closed 2023-02-14T16:18:38 · id 1,578,341,785
https://github.com/huggingface/datasets/pull/5519 · API: https://api.github.com/repos/huggingface/datasets/issues/5519
> EDIT: Use `ruff` for linting instead of `isort` and `flake8` ~~`black`~~ to be consistent with [`transformers`](https://github.com/huggingface/transformers/pull/21480) and [`hfh`](https://github.com/huggingface/huggingface_hub/pull/1323). TODO: - [x] ~Merge the community contributors' PR to avoid having to run `ma...

#### 5518: Remove py.typed
Pull request · closed · 3 comments · opened by mariosasko · created 2023-02-09T16:22:29 · updated 2023-02-13T13:55:49 · closed 2023-02-13T13:48:40 · id 1,578,203,962
https://github.com/huggingface/datasets/pull/5518 · API: https://api.github.com/repos/huggingface/datasets/issues/5518
> Fix https://github.com/huggingface/datasets/issues/3841

#### 5517: `with_format("numpy")` silently downcasts float64 to float32 features
Issue · open · 13 comments · opened by ernestum · created 2023-02-09T14:18:00 · updated 2024-01-18T08:42:17 · id 1,577,976,608
https://github.com/huggingface/datasets/issues/5517 · API: https://api.github.com/repos/huggingface/datasets/issues/5517
> ### Describe the bug When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy") print(...
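Why the silent downcast in #5517 matters can be shown with numpy alone; this sketch is independent of `datasets` and just demonstrates the precision lost when a float64 value is squeezed into float32.

```python
import numpy as np

# A float64 value with more significant digits than float32 can represent.
x64 = np.float64(1.0) / np.float64(3.0)
x32 = np.float32(x64)                # the silent downcast

# The round trip back to float64 does not recover the original value.
assert np.float64(x32) != x64
print(abs(np.float64(x32) - x64))    # error on the order of 1e-8
```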
#### 5516: Reload features from Parquet metadata
Pull request · closed · 4 comments · opened by MFreidank · created 2023-02-09T10:52:15 · updated 2023-02-12T16:00:00 · closed 2023-02-12T15:57:01 · id 1,577,661,640
https://github.com/huggingface/datasets/pull/5516 · API: https://api.github.com/repos/huggingface/datasets/issues/5516
> Resolves #5482. Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`. This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482). @lhoest...

#### 5515: Unify `load_from_cache_file` type and logic
Pull request · closed · 4 comments · opened by HallerPatrick · created 2023-02-09T10:04:46 · updated 2023-02-14T15:38:13 · closed 2023-02-14T14:26:42 · id 1,577,590,611
https://github.com/huggingface/datasets/pull/5515 · API: https://api.github.com/repos/huggingface/datasets/issues/5515
> * Updating type annotations for #`load_from_cache_file` * Added logic for cache checking if needed * Updated documentation following the wording of `Dataset.map`

#### 5514: Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
Issue · closed · 4 comments · opened by HallerPatrick · created 2023-02-08T16:40:44 · updated 2023-02-14T14:26:44 · closed 2023-02-14T14:26:44 · labels: enhancement · id 1,576,453,837
https://github.com/huggingface/datasets/issues/5514 · API: https://api.github.com/repos/huggingface/datasets/issues/5514
> ### Feature request 1. Replace the `load_from_cache_file` default value to `True`. 2. Remove or alter checks from `is_caching_enabled` logic. ### Motivation I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`: ``` load_...

#### 5513: Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name?
Issue · closed · 4 comments · opened by alvarobartt · created 2023-02-08T15:13:46 · updated 2023-07-24T16:02:18 · closed 2023-07-24T14:27:59 · id 1,576,300,803
https://github.com/huggingface/datasets/issues/5513 · API: https://api.github.com/repos/huggingface/datasets/issues/5513
> Hi @mariosasko, @lhoestq, or whoever reads this! :) After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released? Just wanted to get your inp...

#### 5512: Speed up batched PyTorch DataLoader
Pull request · closed · 9 comments · opened by lhoestq · created 2023-02-08T13:38:59 · updated 2023-02-19T18:35:09 · closed 2023-02-19T18:27:29 · id 1,576,142,432
https://github.com/huggingface/datasets/pull/5512 · API: https://api.github.com/repos/huggingface/datasets/issues/5512
> I implemented `__getitems__` to speed up batched data loading in PyTorch close https://github.com/huggingface/datasets/issues/5505

#### 5511: Creating a dummy dataset from a bigger one
Issue · closed · 8 comments · opened by patrickvonplaten · created 2023-02-08T10:18:41 · updated 2023-12-28T18:21:01 · closed 2023-02-08T10:35:48 · id 1,575,851,768
https://github.com/huggingface/datasets/issues/5511 · API: https://api.github.com/repos/huggingface/datasets/issues/5511
> ### Describe the bug I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset...

#### 5510: Milvus integration for search
Pull request · open · 5 comments · opened by filip-halt · created 2023-02-07T23:30:26 · updated 2023-02-24T16:45:09 · id 1,575,191,549
https://github.com/huggingface/datasets/pull/5510 · API: https://api.github.com/repos/huggingface/datasets/issues/5510
> Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>

#### 5509: Add a static `__all__` to `__init__.py` for typecheckers
Pull request · open · 2 comments · opened by LoicGrobol · created 2023-02-07T11:42:40 · updated 2023-02-08T17:48:24 · id 1,574,177,320
https://github.com/huggingface/datasets/pull/5509 · API: https://api.github.com/repos/huggingface/datasets/issues/5509
> This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) t...

#### 5508: Saving a dataset after setting format to torch doesn't work, but only if filtering
Issue · closed · 2 comments · opened by joebhakim · created 2023-02-06T21:08:58 · updated 2023-02-09T14:55:26 · closed 2023-02-09T14:55:26 · id 1,573,290,359
https://github.com/huggingface/datasets/issues/5508 · API: https://api.github.com/repos/huggingface/datasets/issues/5508
> ### Describe the bug Saving a dataset after setting format to torch doesn't work, but only if filtering ### Steps to reproduce the bug ``` a = Dataset.from_dict({"b": [1, 2]}) a.set_format('torch') a.save_to_disk("test_save") # saves successfully a.filter(None).save_to_disk("test_save_filter") # does not >> [.....

#### 5507: Optimise behaviour in respect to indices mapping
Issue · open · 0 comments · opened by mariosasko · created 2023-02-06T14:25:55 · updated 2023-02-28T18:19:18 · labels: enhancement · id 1,572,667,036
https://github.com/huggingface/datasets/issues/5507 · API: https://api.github.com/repos/huggingface/datasets/issues/5507
> _Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_ Considering all this, perhaps for Datasets 3.0, we can do the following: * [ ] have `continuous=True` by default in `.shard` (requested in the survey and makes more sense...

#### 5506: IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
Issue · closed · 4 comments · opened by kheyer · created 2023-02-06T03:26:03 · updated 2023-02-08T18:30:08 · closed 2023-02-08T18:30:07 · id 1,571,838,641
https://github.com/huggingface/datasets/issues/5506 · API: https://api.github.com/repos/huggingface/datasets/issues/5506
> ### Describe the bug I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256. Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous...

#### 5505: PyTorch BatchSampler still loads from Dataset one-by-one
Issue · closed · 2 comments · opened by davidgilbertson · created 2023-02-06T01:14:55 · updated 2023-02-19T18:27:30 · closed 2023-02-19T18:27:30 · id 1,571,720,814
https://github.com/huggingface/datasets/issues/5505 · API: https://api.github.com/repos/huggingface/datasets/issues/5505
> ### Describe the bug In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue. I'm not sure if this is a mistake in the docs or the code, but it seems that the on...
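The fix that closed #5505 (implemented in #5512) relies on the `__getitems__` batched-fetch hook, which newer PyTorch fetchers probe for before falling back to row-by-row `__getitem__`. A stdlib-only sketch of the protocol, with a toy fetcher standing in for the DataLoader (class and function names are illustrative):

```python
class Records:
    """Toy dataset that records how it is accessed."""

    def __init__(self, data):
        self.data = data
        self.calls = []

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):          # one row per call
        self.calls.append(("single", i))
        return self.data[i]

    def __getitems__(self, indices):   # one call per batch
        self.calls.append(("batch", tuple(indices)))
        return [self.data[i] for i in indices]

def fetch(dataset, indices):
    """Minimal stand-in for a DataLoader fetcher: prefer batched access."""
    if hasattr(dataset, "__getitems__"):
        return dataset.__getitems__(indices)
    return [dataset[i] for i in indices]

ds = Records(list(range(10)))
batch = fetch(ds, [0, 1, 2, 3])
assert batch == [0, 1, 2, 3]
assert ds.calls == [("batch", (0, 1, 2, 3))]  # one call, not four
```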
#### 5504: don't zero copy timestamps
Pull request · closed · 3 comments · opened by dwyatte · created 2023-02-03T23:39:04 · updated 2023-02-08T17:28:50 · closed 2023-02-08T14:33:17 · id 1,570,621,242
https://github.com/huggingface/datasets/pull/5504 · API: https://api.github.com/repos/huggingface/datasets/issues/5504
> Fixes https://github.com/huggingface/datasets/issues/5495 I'm not sure whether we prefer a test here or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug

#### 5502: Added functionality: sort datasets by multiple keys
Pull request · closed · 5 comments · opened by MichlF · created 2023-02-03T16:17:00 · updated 2023-02-21T14:46:49 · closed 2023-02-21T14:39:23 · id 1,570,091,225
https://github.com/huggingface/datasets/pull/5502 · API: https://api.github.com/repos/huggingface/datasets/issues/5502
> Added functionality implementation: sort datasets by multiple keys/columns as discussed in https://github.com/huggingface/datasets/issues/5425.

#### 5501: Increase chunk size for speeding up file downloads
Pull request · open · 4 comments · opened by Narsil · created 2023-02-03T10:50:10 · updated 2023-02-09T11:04:11 · id 1,569,644,159
https://github.com/huggingface/datasets/pull/5501 · API: https://api.github.com/repos/huggingface/datasets/issues/5501
> Original fix: https://github.com/huggingface/huggingface_hub/pull/1267 Not sure this function is actually still called though. I haven't done benches on this. Is there a dataset where files are hosted on the hub through cloudfront so we can have the same setup as in `hf_hub` ?

#### 5500: WMT19 custom download checksum error
Issue · closed · 1 comment · opened by Hannibal046 · created 2023-02-03T05:45:37 · updated 2023-02-03T05:52:56 · closed 2023-02-03T05:52:56 · id 1,569,257,240
https://github.com/huggingface/datasets/issues/5500 · API: https://api.github.com/repos/huggingface/datasets/issues/5500
> ### Describe the bug I use the following scripts to download data from WMT19: ```python import datasets from datasets import inspect_dataset, load_dataset_builder from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS ## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-fi...

#### 5499: `load_dataset` has ~4 seconds of overhead for cached data
Issue · open · 2 comments · opened by davidgilbertson · created 2023-02-02T23:34:50 · updated 2023-02-07T19:35:11 · labels: enhancement · id 1,568,937,026
https://github.com/huggingface/datasets/issues/5499 · API: https://api.github.com/repos/huggingface/datasets/issues/5499
> ### Feature request When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should take to fetch the dataset from disk (or memory). This is particularly noticeable for smaller datasets. For example, wikitext-2, comparing `load_data` (once cached) and `load_from_disk...

#### 5498: TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset
Issue · closed · 3 comments · opened by vmuel · created 2023-02-02T14:46:49 · updated 2023-10-08T06:12:47 · closed 2023-02-04T17:19:36 · id 1,568,190,529
https://github.com/huggingface/datasets/issues/5498 · API: https://api.github.com/repos/huggingface/datasets/issues/5498
> ### Describe the bug Hi, Thanks for the amazing work on the library! **Describe the bug** I think I might have noticed a small bug in the filter method. Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError. ### Steps to reproduce the ...

#### 5497: Improved error message for gated/private repos
Pull request · closed · 3 comments · opened by osanseviero · created 2023-02-02T08:56:15 · updated 2023-02-02T11:26:08 · closed 2023-02-02T11:17:15 · id 1,567,601,264
https://github.com/huggingface/datasets/pull/5497 · API: https://api.github.com/repos/huggingface/datasets/issues/5497
> Using `use_auth_token=True` is not needed anymore. If a user logged in, the token will be automatically retrieved. Also include a mention for gated repos See https://github.com/huggingface/huggingface_hub/pull/1064

#### 5496: Add a `reduce` method
Issue · closed · 4 comments · opened by zhangir-azerbayev · created 2023-02-02T04:30:22 · updated 2024-11-12T05:58:14 · closed 2023-07-21T14:24:32 · labels: enhancement · id 1,567,301,765
https://github.com/huggingface/datasets/issues/5496 · API: https://api.github.com/repos/huggingface/datasets/issues/5496
> ### Feature request Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`. ### Motivation A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average...
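The idiom requested in #5496 mirrors `functools.reduce`: fold an accumulator over the rows with a binary operation. A stdlib-only sketch of the kind of statistic the issue mentions (the toy rows are illustrative, and this is not the `datasets` API):

```python
from functools import reduce

# Rows as they might come out of a text dataset (toy data).
rows = [{"text": "hello world"}, {"text": "foo bar baz"}, {"text": "qux"}]

# Binary operation: fold the accumulator with one row at a time,
# here counting whitespace-separated tokens across the dataset.
def add_tokens(acc, row):
    return acc + len(row["text"].split())

total_tokens = reduce(add_tokens, rows, 0)
assert total_tokens == 6
```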
1,566,803,452
https://api.github.com/repos/huggingface/datasets/issues/5495
https://github.com/huggingface/datasets/issues/5495
5,495
to_tf_dataset fails with datetime UTC columns even if not included in columns argument
closed
2
2023-02-01T20:47:33
2023-02-08T14:33:19
2023-02-08T14:33:19
dwyatte
[ "bug", "good first issue" ]
### Describe the bug There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column...
false
1,566,655,348
https://api.github.com/repos/huggingface/datasets/issues/5494
https://github.com/huggingface/datasets/issues/5494
5,494
Update audio installation doc page
closed
4
2023-02-01T19:07:50
2023-03-02T16:08:17
2023-03-02T16:08:17
polinaeterna
[ "documentation" ]
Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too but requires a specific version of ffmpeg which is not easily installed on all linux versions but there is a cust...
false
1,566,637,806
https://api.github.com/repos/huggingface/datasets/issues/5493
https://github.com/huggingface/datasets/pull/5493
5,493
Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring
closed
3
2023-02-01T18:57:48
2023-02-08T15:10:46
2023-02-08T15:03:50
polinaeterna
[]
null
true
1,566,604,216
https://api.github.com/repos/huggingface/datasets/issues/5492
https://github.com/huggingface/datasets/issues/5492
5,492
Push_to_hub in a pull request
closed
2
2023-02-01T18:32:14
2023-10-16T13:30:48
2023-10-16T13:30:48
lhoestq
[ "enhancement", "good first issue" ]
Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name cc @nateraw It should be possible to tweak the use of `huggingface_hub` in `pus...
false
1,566,235,012
https://api.github.com/repos/huggingface/datasets/issues/5491
https://github.com/huggingface/datasets/pull/5491
5,491
[MINOR] Typo
closed
2
2023-02-01T14:39:39
2023-02-02T07:42:28
2023-02-02T07:35:14
cakiki
[]
null
true
1,565,842,327
https://api.github.com/repos/huggingface/datasets/issues/5490
https://github.com/huggingface/datasets/pull/5490
5,490
Do not add index column by default when exporting to CSV
closed
2
2023-02-01T10:20:55
2023-02-09T09:29:08
2023-02-09T09:22:23
albertvillanova
[]
As pointed out by @merveenoyan, default behavior of `Dataset.to_csv` adds the index as an additional column without name. This PR changes the default behavior, so that now the index column is not written. To add the index column, now you need to pass `index=True` and also `index_label=<name of the index colum>` t...
true
1,565,761,705
https://api.github.com/repos/huggingface/datasets/issues/5489
https://github.com/huggingface/datasets/pull/5489
5,489
Pin dill lower version
closed
2
2023-02-01T09:33:42
2023-02-02T07:48:09
2023-02-02T07:40:43
albertvillanova
[]
Pin `dill` lower version compatible with `datasets`. Related to: - #5487 - #288 Note that the required `dill._dill` module was introduced in dill-2.8.0, however we have heuristically tested that datasets can only be installed with dill>=3.0.0 (otherwise pip hangs indefinitely while preparing metadata for multip...
true
1,565,025,262
https://api.github.com/repos/huggingface/datasets/issues/5488
https://github.com/huggingface/datasets/issues/5488
5,488
Error loading MP3 files from CommonVoice
closed
4
2023-01-31T21:25:33
2023-03-02T16:25:14
2023-03-02T16:25:13
kradonneoh
[]
### Describe the bug When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays: ```python --------------------------------------------------------------------------- LibsndfileError Traceback (most recent call last) ~/.l...
false
1,564,480,121
https://api.github.com/repos/huggingface/datasets/issues/5487
https://github.com/huggingface/datasets/issues/5487
5,487
Incorrect filepath for dill module
closed
5
2023-01-31T15:01:08
2023-02-24T16:18:36
2023-02-24T16:18:36
avivbrokman
[]
### Describe the bug I installed the `datasets` package and when I try to `import` it, I get the following error: ``` Traceback (most recent call last): File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module> import datasets File "/Users/avivbrokman/...
false
1,564,059,749
https://api.github.com/repos/huggingface/datasets/issues/5486
https://github.com/huggingface/datasets/issues/5486
5,486
Adding `sep` to TextConfig
open
2
2023-01-31T10:39:53
2023-01-31T14:50:18
null
omar-araboghli
[]
I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` ...
false
1,563,002,829
https://api.github.com/repos/huggingface/datasets/issues/5485
https://github.com/huggingface/datasets/pull/5485
5,485
Add section in tutorial for IterableDataset
closed
2
2023-01-30T18:43:04
2023-02-01T18:15:38
2023-02-01T18:08:46
stevhliu
[]
Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new d...
true
1,562,877,070
https://api.github.com/repos/huggingface/datasets/issues/5484
https://github.com/huggingface/datasets/pull/5484
5,484
Update docs for `nyu_depth_v2` dataset
closed
6
2023-01-30T17:37:08
2023-09-29T06:43:11
2023-02-05T14:15:04
awsaf49
[]
This PR will fix the issue mentioned in #5461. Here is brief overview, ## Bug: Discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are diffe...
true
1,560,894,690
https://api.github.com/repos/huggingface/datasets/issues/5483
https://github.com/huggingface/datasets/issues/5483
5,483
Unable to upload dataset
closed
1
2023-01-28T15:18:26
2023-01-29T08:09:49
2023-01-29T08:09:49
yuvalkirstain
[]
### Describe the bug Uploading a simple dataset ends with an exception ### Steps to reproduce the bug I created a new conda env with python 3.10, pip installed datasets and: ```python >>> from datasets import load_dataset, load_from_disk, Dataset >>> d = Dataset.from_dict({"text": ["hello"] * 2}) >>> d.pus...
false
1,560,853,137
https://api.github.com/repos/huggingface/datasets/issues/5482
https://github.com/huggingface/datasets/issues/5482
5,482
Reload features from Parquet metadata
closed
3
2023-01-28T13:12:31
2023-02-12T15:57:02
2023-02-12T15:57:02
lhoestq
[ "enhancement", "good second issue" ]
The idea would be to allow this : ```python ds.to_parquet("my_dataset/ds.parquet") reloaded = load_dataset("my_dataset") assert ds.features == reloaded.features ``` And it should also work with Image and Audio types (right now they're reloaded as a dict type) This can be implemented by storing and reading th...
false
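The round-trip proposed in the record above can be sketched without Parquet itself: serialize the feature spec into file-level key-value metadata on write, parse it back on load. In Parquet this would live in the schema's key-value metadata (e.g. under a `"huggingface"` key); the dicts below are illustrative stand-ins, not the library's actual serialization format:

```python
import json

# Hedged sketch of storing and reloading a features spec via
# key-value metadata, so reloaded.features == original features.
features = {
    "image": {"_type": "Image"},
    "label": {"_type": "ClassLabel", "num_classes": 2},
}
metadata = {b"huggingface": json.dumps({"info": {"features": features}}).encode()}

reloaded = json.loads(metadata[b"huggingface"])["info"]["features"]
```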
1,560,468,195
https://api.github.com/repos/huggingface/datasets/issues/5481
https://github.com/huggingface/datasets/issues/5481
5,481
Load a cached dataset as iterable
open
22
2023-01-27T21:43:51
2025-06-19T19:30:52
null
lhoestq
[ "enhancement", "good second issue" ]
The idea would be to allow something like ```python ds = load_dataset("c4", "en", as_iterable=True) ``` To be used to train models. It would load an IterableDataset from the cached Arrow files. Cc @stas00 Edit : from the discussions we may load from cache when streaming=True
false
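The record above asks for iterating over already-cached data without loading it all; the core pattern is a generator over materialized shards. The shard contents below are illustrative stand-ins for cached Arrow files:

```python
# Minimal sketch of streaming over already-materialized shards,
# yielding examples one at a time instead of loading everything.
shards = [[{"id": 0}, {"id": 1}], [{"id": 2}], [{"id": 3}, {"id": 4}]]

def iter_shards(shards):
    for shard in shards:
        yield from shard

ids = [example["id"] for example in iter_shards(shards)]
```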
1,560,364,866
https://api.github.com/repos/huggingface/datasets/issues/5480
https://github.com/huggingface/datasets/pull/5480
5,480
Select columns of Dataset or DatasetDict
closed
2
2023-01-27T20:06:16
2023-02-13T11:10:13
2023-02-13T09:59:35
daskol
[]
Close #5474 and #5468.
true
1,560,357,590
https://api.github.com/repos/huggingface/datasets/issues/5479
https://github.com/huggingface/datasets/issues/5479
5,479
audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated
closed
0
2023-01-27T20:01:22
2023-01-29T05:23:14
2023-01-29T05:23:14
jcho19
[]
### Describe the bug I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers, what cou...
false
1,560,357,583
https://api.github.com/repos/huggingface/datasets/issues/5478
https://github.com/huggingface/datasets/pull/5478
5,478
Tip for recomputing metadata
closed
2
2023-01-27T20:01:22
2023-01-30T19:22:21
2023-01-30T19:15:26
stevhliu
[]
From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset.
true
1,559,909,892
https://api.github.com/repos/huggingface/datasets/issues/5477
https://github.com/huggingface/datasets/issues/5477
5,477
Unpin sqlalchemy once issue is fixed
closed
2
2023-01-27T15:01:55
2024-01-26T14:50:45
2024-01-26T14:50:45
albertvillanova
[]
Once the source issue is fixed: - pandas-dev/pandas#51015 we should revert the pin introduced in: - #5476
false
1,559,594,684
https://api.github.com/repos/huggingface/datasets/issues/5476
https://github.com/huggingface/datasets/pull/5476
5,476
Pin sqlalchemy
closed
3
2023-01-27T11:26:38
2023-01-27T12:06:51
2023-01-27T11:57:48
lhoestq
[]
since sqlalchemy update to 2.0.0 the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514 the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015
true
1,559,030,149
https://api.github.com/repos/huggingface/datasets/issues/5475
https://github.com/huggingface/datasets/issues/5475
5,475
Dataset scan time is much slower than using native arrow
closed
3
2023-01-27T01:32:25
2023-01-30T16:17:11
2023-01-30T16:17:11
jonny-cyberhaven
[]
### Describe the bug I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version. I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that exp...
false
1,558,827,155
https://api.github.com/repos/huggingface/datasets/issues/5474
https://github.com/huggingface/datasets/issues/5474
5,474
Column project operation on `datasets.Dataset`
closed
1
2023-01-26T21:47:53
2023-02-13T09:59:37
2023-02-13T09:59:37
daskol
[ "duplicate", "enhancement" ]
### Feature request There is no operation to select a subset of columns of original dataset. Expected API follows. ```python a = Dataset.from_dict({ 'int': [0, 1, 2] 'char': ['a', 'b', 'c'], 'none': [None] * 3, }) b = a.project('int', 'char') # usually, .select() print(a.column_names) # std...
false
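The projection requested in the record above can be illustrated in pure Python over the same dict-of-lists table from the issue body; the `project` helper is a hypothetical stand-in, not the API that was eventually added:

```python
# Illustrative column projection over a columnar dict, mirroring the
# Dataset.from_dict example quoted in the issue.
data = {
    "int": [0, 1, 2],
    "char": ["a", "b", "c"],
    "none": [None] * 3,
}

def project(table, *columns):
    return {name: table[name] for name in columns}

b = project(data, "int", "char")
```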
1,558,668,197
https://api.github.com/repos/huggingface/datasets/issues/5473
https://github.com/huggingface/datasets/pull/5473
5,473
Set dev version
closed
3
2023-01-26T19:34:44
2023-01-26T19:47:34
2023-01-26T19:38:30
lhoestq
[]
null
true
1,558,662,251
https://api.github.com/repos/huggingface/datasets/issues/5472
https://github.com/huggingface/datasets/pull/5472
5,472
Release: 2.9.0
closed
4
2023-01-26T19:29:42
2023-01-26T19:40:44
2023-01-26T19:33:00
lhoestq
[]
null
true
1,558,557,545
https://api.github.com/repos/huggingface/datasets/issues/5471
https://github.com/huggingface/datasets/pull/5471
5,471
Add num_test_batches option
closed
4
2023-01-26T18:09:40
2023-01-27T18:16:45
2023-01-27T18:08:36
amyeroberts
[]
`to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are draw in order to estimate the shapes when creating the tensorflow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same ac...
true
1,558,542,611
https://api.github.com/repos/huggingface/datasets/issues/5470
https://github.com/huggingface/datasets/pull/5470
5,470
Update dataset card creation
closed
4
2023-01-26T17:57:51
2023-01-27T16:27:00
2023-01-27T16:20:10
stevhliu
[]
Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one.
true
1,558,346,906
https://api.github.com/repos/huggingface/datasets/issues/5469
https://github.com/huggingface/datasets/pull/5469
5,469
Remove deprecated `shard_size` arg from `.push_to_hub()`
closed
2
2023-01-26T15:40:56
2023-01-26T17:37:51
2023-01-26T17:30:59
polinaeterna
[]
The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it?
true
1,558,066,625
https://api.github.com/repos/huggingface/datasets/issues/5468
https://github.com/huggingface/datasets/issues/5468
5,468
Allow opposite of remove_columns on Dataset and DatasetDict
closed
9
2023-01-26T12:28:09
2023-02-13T09:59:38
2023-02-13T09:59:38
hollance
[ "enhancement", "good first issue" ]
### Feature request In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code: ```python COLUMNS_TO_KEEP = ["text", "audio"] all_columns = gigaspeech["train"].column_names columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP) gigaspeech = gigaspeech.remove_columns(column...
false
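The workaround quoted in the record above, as runnable Python; `all_columns` stands in for `dataset.column_names`, and the extra column names are illustrative:

```python
# Deriving the columns to drop from the columns to keep — the
# set-difference pattern the issue asks to replace with a direct
# "keep these columns" operation.
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = ["text", "audio", "speaker_id", "chapter_id"]

columns_to_remove = sorted(set(all_columns) - set(COLUMNS_TO_KEEP))
```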
1,557,898,273
https://api.github.com/repos/huggingface/datasets/issues/5467
https://github.com/huggingface/datasets/pull/5467
5,467
Fix conda command in readme
closed
4
2023-01-26T10:03:01
2023-09-24T10:06:59
2023-01-26T18:29:37
lhoestq
[]
The [conda forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), we should recommend using the [Hugging face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining ``` conda install -c huggingface datasets ```
true
1,557,584,845
https://api.github.com/repos/huggingface/datasets/issues/5466
https://github.com/huggingface/datasets/pull/5466
5,466
remove pathlib.Path with URIs
closed
5
2023-01-26T03:25:45
2023-01-26T17:08:57
2023-01-26T16:59:11
jonny-cyberhaven
[]
Pathlib will convert "//" to "/" which causes retry errors when downloading from cloud storage
true
1,557,510,618
https://api.github.com/repos/huggingface/datasets/issues/5465
https://github.com/huggingface/datasets/issues/5465
5,465
audiofolder creates empty dataset even though the dataset passed in follows the correct structure
closed
0
2023-01-26T01:45:45
2023-01-26T08:48:45
2023-01-26T08:48:45
jcho19
[]
### Describe the bug The structure of my dataset folder called "my_dataset" is : data metadata.csv The data folder consists of all mp3 files and metadata.csv consist of file locations like 'data/...mp3 and transcriptions. There's 400+ mp3 files and corresponding transcriptions for my dataset. When I run the follo...
false
1,557,462,104
https://api.github.com/repos/huggingface/datasets/issues/5464
https://github.com/huggingface/datasets/issues/5464
5,464
NonMatchingChecksumError for hendrycks_test
closed
2
2023-01-26T00:43:23
2023-01-27T05:44:31
2023-01-26T07:41:58
sarahwie
[]
### Describe the bug The checksum of the file has likely changed on the remote host. ### Steps to reproduce the bug `dataset = nlp.load_dataset("hendrycks_test", "anatomy")` ### Expected behavior no error thrown ### Environment info - `datasets` version: 2.2.1 - Platform: macOS-13.1-arm64-arm-64bit - Pyt...
false
1,557,021,041
https://api.github.com/repos/huggingface/datasets/issues/5463
https://github.com/huggingface/datasets/pull/5463
5,463
Imagefolder docs: mention support of CSV and ZIP
closed
3
2023-01-25T17:24:01
2023-01-25T18:33:35
2023-01-25T18:26:15
lhoestq
[]
null
true
1,556,572,144
https://api.github.com/repos/huggingface/datasets/issues/5462
https://github.com/huggingface/datasets/pull/5462
5,462
Concatenate on axis=1 with misaligned blocks
closed
4
2023-01-25T12:33:22
2023-01-26T09:37:00
2023-01-26T09:27:19
lhoestq
[]
Allow to concatenate on axis 1 two tables made of misaligned blocks. For example if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks or 2 rows each. To do that, I slice the row blocks to re-align the blocks. Fix https://github.com/huggingface/datasets/issues/5413
true
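The re-alignment step described in the record above can be sketched as a boundary computation: take the union of the cumulative block boundaries of both tables, so each can be sliced into row blocks that line up one-to-one. The helper below is illustrative, not the PR's actual code:

```python
from itertools import accumulate

# Union of cumulative block boundaries of two tables with
# misaligned row blocks.
def aligned_boundaries(block_sizes_a, block_sizes_b):
    return sorted(set(accumulate(block_sizes_a)) | set(accumulate(block_sizes_b)))

# First table: 2 blocks of 3 rows; second table: 3 blocks of 2 rows
# (the example from the PR description).
boundaries = aligned_boundaries([3, 3], [2, 2, 2])
```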
1,555,532,719
https://api.github.com/repos/huggingface/datasets/issues/5461
https://github.com/huggingface/datasets/issues/5461
5,461
Discrepancy in `nyu_depth_v2` dataset
open
37
2023-01-24T19:15:46
2023-02-06T20:52:00
null
awsaf49
[]
### Describe the bug I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-sid...
false
1,555,387,532
https://api.github.com/repos/huggingface/datasets/issues/5460
https://github.com/huggingface/datasets/pull/5460
5,460
Document that removing all the columns returns an empty document and the num_row is lost
closed
4
2023-01-24T17:33:38
2023-01-25T16:11:10
2023-01-25T16:04:03
thomasw21
[]
null
true
1,555,367,504
https://api.github.com/repos/huggingface/datasets/issues/5459
https://github.com/huggingface/datasets/pull/5459
5,459
Disable aiohttp requoting of redirection URL
closed
7
2023-01-24T17:18:59
2024-09-01T18:08:31
2023-01-31T08:37:54
albertvillanova
[]
The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'` This is a problem for our Hugging Face Hub, which requires exact URL from location header. Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `re...
true
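The failure mode described in the record above can be demonstrated with the standard library (approximated here with a full `unquote`; aiohttp's requoting is more selective, and the query string is illustrative):

```python
from urllib.parse import unquote

# Decoding a signed redirect URL turns %27 back into a literal
# quote, so the URL sent on redirect no longer matches the exact
# string the server signed.
signed_query = "response-content-disposition=attachment%3Bfilename%3D%27train.json%27"
requoted = unquote(signed_query)
```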
1,555,054,737
https://api.github.com/repos/huggingface/datasets/issues/5458
https://github.com/huggingface/datasets/issues/5458
5,458
slice split while streaming
closed
2
2023-01-24T14:08:17
2023-01-24T15:11:47
2023-01-24T15:11:47
SvenDS9
[]
### Describe the bug When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported. Did I miss this in the documentation? ### Steps to reproduce the bug `load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")` causes ValueError: Bad split:...
false
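The limitation in the record above follows from what streaming is: a streamed split is an iterator, so `train[:3]` maps to "take the first n examples" (which `IterableDataset.take(n)` provides). The generator below is an illustrative stand-in for a streamed split:

```python
from itertools import islice

# Taking the first three examples of a stream — the streaming
# equivalent of the split slice "train[:3]".
def stream_split():
    for i in range(10):
        yield {"id": i}

first_three = list(islice(stream_split(), 3))
```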
1,554,171,264
https://api.github.com/repos/huggingface/datasets/issues/5457
https://github.com/huggingface/datasets/issues/5457
5,457
prebuilt dataset relies on `downloads/extracted`
open
3
2023-01-24T02:09:32
2024-11-18T07:43:51
null
stas00
[]
### Describe the bug I pre-built the dataset: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` and it can be used just fine. now I wipe out `downloads/extracted` and it no longer works. ``` rm -r ~/.cache/huggingface...
false
1,553,905,148
https://api.github.com/repos/huggingface/datasets/issues/5456
https://github.com/huggingface/datasets/pull/5456
5,456
feat: tqdm for `to_parquet`
closed
2
2023-01-23T22:05:38
2023-01-24T11:26:47
2023-01-24T11:17:12
zanussbaum
[]
As described in #5418 I noticed also that the `to_json` function supports multi-workers whereas `to_parquet`, is that not possible/not needed with Parquet or something that hasn't been implemented yet?
true
1,553,040,080
https://api.github.com/repos/huggingface/datasets/issues/5455
https://github.com/huggingface/datasets/pull/5455
5,455
Single TQDM bar in multi-proc map
closed
12
2023-01-23T12:49:40
2023-02-13T20:23:34
2023-02-13T20:16:38
mariosasko
[]
Use the "shard generator approach with periodic progress updates" (used in `save_to_disk` and multi-proc `load_dataset`) in `Dataset.map` to enable having a single TQDM progress bar in the multi-proc mode. Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issue...
true
1,552,890,419
https://api.github.com/repos/huggingface/datasets/issues/5454
https://github.com/huggingface/datasets/issues/5454
5,454
Save and resume the state of a DataLoader
open
21
2023-01-23T10:58:54
2024-11-27T01:19:21
null
lhoestq
[ "enhancement", "generic discussion" ]
It would be nice when using `datasets` with a PyTorch DataLoader to be able to resume a training from a DataLoader state (e.g. to resume a training that crashed) What I have in mind (but lmk if you have other ideas or comments): For map-style datasets, this requires to have a PyTorch Sampler state that can be sav...
false
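One shape the resumable-sampler idea from the record above could take, sketched under stated assumptions: the state is just the shuffle seed plus the count of indices already consumed, and on resume the same permutation is rebuilt and the consumed prefix skipped. The class name and fields are illustrative, not an actual `datasets` or PyTorch API:

```python
import random

# Hedged sketch: a sampler whose state (seed + consumed count) can
# be saved, and which skips the consumed prefix when restored.
class ResumableSampler:
    def __init__(self, num_rows, seed=0, consumed=0):
        self.num_rows = num_rows
        self.seed = seed
        self.consumed = consumed

    def state_dict(self):
        return {"seed": self.seed, "consumed": self.consumed}

    def __iter__(self):
        order = list(range(self.num_rows))
        random.Random(self.seed).shuffle(order)
        for index in order[self.consumed:]:
            self.consumed += 1
            yield index

sampler = ResumableSampler(num_rows=5, seed=42)
iterator = iter(sampler)
first_two = [next(iterator), next(iterator)]

# Simulate a crash, then resume from the saved state: the remaining
# indices complete the same permutation with no repeats.
resumed = ResumableSampler(num_rows=5, **sampler.state_dict())
rest = list(resumed)
```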
1,552,727,425
https://api.github.com/repos/huggingface/datasets/issues/5453
https://github.com/huggingface/datasets/pull/5453
5,453
Fix base directory while extracting insecure TAR files
closed
3
2023-01-23T08:57:40
2023-01-24T01:34:20
2023-01-23T10:10:42
albertvillanova
[]
This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared: - from: "." - to: `output_path` This PR also adds tests for extracting insecure TAR files. Related to: - #5441 - #5452 @stas00 please note this PR addresses just one of the issues you pointe...
true
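An illustrative version of the check behind the fix in the record above: a TAR member is safe only if its resolved target stays inside the extraction directory (`output_path`) rather than inside `"."`. The helper name and sample paths are illustrative, not the PR's exact code:

```python
import os

# Resolve the member's target path against the extraction directory
# and reject anything that escapes it (e.g. via "..").
def is_within_directory(output_path, member_name):
    base = os.path.realpath(output_path)
    target = os.path.realpath(os.path.join(output_path, member_name))
    return os.path.commonpath([base, target]) == base

safe = is_within_directory("/tmp/out", "data/file.txt")
unsafe = is_within_directory("/tmp/out", "../../etc/passwd")
```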
1,552,655,939
https://api.github.com/repos/huggingface/datasets/issues/5452
https://github.com/huggingface/datasets/pull/5452
5,452
Swap log messages for symbolic/hard links in tar extractor
closed
2
2023-01-23T07:53:38
2023-01-23T09:40:55
2023-01-23T08:31:17
albertvillanova
[]
The log messages do not match their if-condition. This PR swaps them. Found while investigating: - #5441 CC: @lhoestq
true
1,552,336,300
https://api.github.com/repos/huggingface/datasets/issues/5451
https://github.com/huggingface/datasets/issues/5451
5,451
ImageFolder BadZipFile: Bad offset for central directory
closed
3
2023-01-22T23:50:12
2023-05-23T10:35:48
2023-02-10T16:31:36
hmartiro
[]
### Describe the bug I'm getting the following exception: ``` lib/python3.10/zipfile.py:1353 in _RealGetContents │ │ │ │ 1350 │ │ # self.start_dir: Position of start of central directory ...
false
1,551,109,365
https://api.github.com/repos/huggingface/datasets/issues/5450
https://github.com/huggingface/datasets/issues/5450
5,450
to_tf_dataset with a TF collator causes bizarrely persistent slowdown
closed
7
2023-01-20T16:08:37
2023-02-13T14:13:34
2023-02-13T14:13:34
Rocketknight1
[]
### Describe the bug This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing) Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data colla...
false
1,550,801,453
https://api.github.com/repos/huggingface/datasets/issues/5449
https://github.com/huggingface/datasets/pull/5449
5,449
Support fsspec 2023.1.0 in CI
closed
2
2023-01-20T12:53:17
2023-01-20T13:32:50
2023-01-20T13:26:03
albertvillanova
[]
Support fsspec 2023.1.0 in CI. In the 2023.1.0 fsspec release, they replaced the type of `fsspec.registry`: - from `ReadOnlyRegistry`, with an attribute called `target` - to `MappingProxyType`, without that attribute Consequently, we need to change our `mock_fsspec` fixtures, that were using the `target` attrib...
true
1,550,618,514
https://api.github.com/repos/huggingface/datasets/issues/5448
https://github.com/huggingface/datasets/issues/5448
5,448
Support fsspec 2023.1.0 in CI
closed
0
2023-01-20T10:26:31
2023-01-20T13:26:05
2023-01-20T13:26:05
albertvillanova
[ "enhancement" ]
Once we find out the root cause of: - #5445 we should revert the temporary pin on fsspec introduced by: - #5447
false
1,550,599,193
https://api.github.com/repos/huggingface/datasets/issues/5447
https://github.com/huggingface/datasets/pull/5447
5,447
Fix CI by temporarily pinning fsspec < 2023.1.0
closed
2
2023-01-20T10:11:02
2023-01-20T10:38:13
2023-01-20T10:28:43
albertvillanova
[]
Temporarily pin fsspec < 2023.1.0 Fix #5445.
true
1,550,591,588
https://api.github.com/repos/huggingface/datasets/issues/5446
https://github.com/huggingface/datasets/pull/5446
5,446
test v0.12.0.rc0
closed
5
2023-01-20T10:05:19
2023-01-20T10:43:22
2023-01-20T10:13:48
Wauplin
[]
DO NOT MERGE. Only to test the CI. cc @lhoestq @albertvillanova
true
1,550,588,703
https://api.github.com/repos/huggingface/datasets/issues/5445
https://github.com/huggingface/datasets/issues/5445
5,445
CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target'
closed
0
2023-01-20T10:03:10
2023-01-20T10:28:44
2023-01-20T10:28:44
albertvillanova
[ "bug" ]
CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185 ``` ... ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_path...
false
1,550,185,071
https://api.github.com/repos/huggingface/datasets/issues/5444
https://github.com/huggingface/datasets/issues/5444
5,444
info messages logged as warnings
closed
7
2023-01-20T01:19:18
2023-07-12T17:19:31
2023-07-12T17:19:31
davidgilbertson
[]
### Describe the bug Code in `datasets` is using `logger.warning` when it should be using `logger.info`. Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading cached` clearly falls into the info category. Definitions from the Python docs for reference: * I...
false
1,550,178,914
https://api.github.com/repos/huggingface/datasets/issues/5443
https://github.com/huggingface/datasets/pull/5443
5,443
Update share tutorial
closed
2
2023-01-20T01:09:14
2023-01-20T15:44:45
2023-01-20T15:37:30
stevhliu
[]
Based on feedback from discussion #5423, this PR updates the sharing tutorial with a mention of writing your own dataset loading script to support more advanced dataset creation options like multiple configs. I'll open a separate PR to update the *Create a Dataset card* with the new Hub metadata UI update 😄
true