id int64 599M 3.29B | url stringlengths 58 61 | html_url stringlengths 46 51 | number int64 1 7.72k | title stringlengths 1 290 | state stringclasses 2 values | comments int64 0 70 | created_at timestamp[s]date 2020-04-14 10:18:02 2025-08-05 09:28:51 | updated_at timestamp[s]date 2020-04-27 16:04:17 2025-08-05 11:39:56 | closed_at timestamp[s]date 2020-04-14 12:01:40 2025-08-01 05:15:45 ⌀ | user_login stringlengths 3 26 | labels listlengths 0 4 | body stringlengths 0 228k ⌀ | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,628,225,544 | https://api.github.com/repos/huggingface/datasets/issues/5647 | https://github.com/huggingface/datasets/issues/5647 | 5,647 | Make all print statements optional | closed | 2 | 2023-03-16T20:30:07 | 2023-07-21T14:20:25 | 2023-07-21T14:20:24 | gagan3012 | [
"enhancement"
] | ### Feature request
Make all print statements optional to speed up the development
### Motivation
I'm loading multiple tiny datasets and all the print statements make the loading slower.
### Your contribution
I can help contribute | false |
1,627,838,762 | https://api.github.com/repos/huggingface/datasets/issues/5646 | https://github.com/huggingface/datasets/pull/5646 | 5,646 | Allow self as key in `Features` | closed | 3 | 2023-03-16T16:17:03 | 2023-03-16T17:21:58 | 2023-03-16T17:14:50 | mariosasko | [] | Fix #5641 | true |
1,627,108,278 | https://api.github.com/repos/huggingface/datasets/issues/5645 | https://github.com/huggingface/datasets/issues/5645 | 5,645 | Datasets map and select(range()) is giving dill error | closed | 2 | 2023-03-16T10:01:28 | 2023-03-17T04:24:51 | 2023-03-17T04:24:51 | Tanya-11 | [] | ### Describe the bug
I'm using Huggingface Datasets library to load the dataset in google colab
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns... | false |
1,626,204,046 | https://api.github.com/repos/huggingface/datasets/issues/5644 | https://github.com/huggingface/datasets/pull/5644 | 5,644 | Allow direct cast from binary to Audio/Image | closed | 3 | 2023-03-15T20:02:54 | 2023-03-16T14:20:44 | 2023-03-16T14:12:55 | mariosasko | [] | To address https://github.com/huggingface/datasets/discussions/5593.
| true |
1,626,160,220 | https://api.github.com/repos/huggingface/datasets/issues/5643 | https://github.com/huggingface/datasets/pull/5643 | 5,643 | Support PyArrow arrays as column values in `from_dict` | closed | 3 | 2023-03-15T19:32:40 | 2023-03-16T17:23:06 | 2023-03-16T17:15:40 | mariosasko | [] | For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values.
"Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417 | true |
1,626,043,177 | https://api.github.com/repos/huggingface/datasets/issues/5642 | https://github.com/huggingface/datasets/pull/5642 | 5,642 | Bump hfh to 0.11.0 | closed | 6 | 2023-03-15T18:26:07 | 2023-03-20T12:34:09 | 2023-03-20T12:26:58 | lhoestq | [] | to fix errors like
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/...
```
(e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997))
0.11.0 is the current mini... | true |
1,625,942,730 | https://api.github.com/repos/huggingface/datasets/issues/5641 | https://github.com/huggingface/datasets/issues/5641 | 5,641 | Features cannot be named "self" | closed | 0 | 2023-03-15T17:16:40 | 2023-03-16T17:14:51 | 2023-03-16T17:14:51 | alialamiidrissi | [] | ### Describe the bug
Hi,
I noticed that we cannot create a HuggingFace dataset from Pandas DataFrame with a column named `self`.
The error seems to be coming from arguments validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
dummy_pandas = pd.DataFrame([0... | false |
1,625,896,057 | https://api.github.com/repos/huggingface/datasets/issues/5640 | https://github.com/huggingface/datasets/pull/5640 | 5,640 | Less zip false positives | closed | 6 | 2023-03-15T16:48:59 | 2023-03-16T13:47:37 | 2023-03-16T13:40:12 | lhoestq | [] | `zipfile.is_zipfile` returns false positives for some Parquet files. It causes errors when loading certain parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`.
This is a known issue: https://github.com/python/cpython/issues/72680
At first I wanted to rely only on magic numbers, but t... | true |
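For context, a minimal sketch of a stricter magic-number check (not necessarily the exact logic merged here): ZIP archives start with the local-file-header signature `PK\x03\x04`, while Parquet files begin and end with the magic bytes `PAR1`.
```python
def looks_like_zip(path: str) -> bool:
    # Check the ZIP local-file-header signature instead of trusting
    # zipfile.is_zipfile, which scans for an end-of-central-directory record
    # and can therefore match unrelated binary files such as Parquet.
    with open(path, "rb") as f:
        return f.read(4) == b"PK\x03\x04"
```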
1,625,737,098 | https://api.github.com/repos/huggingface/datasets/issues/5639 | https://github.com/huggingface/datasets/issues/5639 | 5,639 | Parquet file wrongly recognized as zip prevents loading a dataset | closed | 0 | 2023-03-15T15:20:45 | 2023-03-16T13:40:14 | 2023-03-16T13:40:14 | clefourrier | [] | ### Describe the bug
When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data... | false |
1,625,564,471 | https://api.github.com/repos/huggingface/datasets/issues/5638 | https://github.com/huggingface/datasets/issues/5638 | 5,638 | xPath to implement all operations for Path | closed | 5 | 2023-03-15T13:47:11 | 2023-03-17T13:21:12 | 2023-03-17T13:21:12 | thomasw21 | [
"enhancement"
] | ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. It should rely on `fsspec` methods instead of defaulting to `Path` methods, which only work locally.
### Motivation
I'm using... | false |
1,625,295,691 | https://api.github.com/repos/huggingface/datasets/issues/5637 | https://github.com/huggingface/datasets/issues/5637 | 5,637 | IterableDataset with_format does not support 'device' keyword for jax | open | 3 | 2023-03-15T11:04:12 | 2025-01-07T06:59:33 | null | Lime-Cakes | [] | ### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi... | false |
1,623,721,577 | https://api.github.com/repos/huggingface/datasets/issues/5636 | https://github.com/huggingface/datasets/pull/5636 | 5,636 | Fix CI: ignore C901 ("some_func" is too complex) in `ruff` | closed | 2 | 2023-03-14T15:29:11 | 2023-03-14T16:37:06 | 2023-03-14T16:29:52 | polinaeterna | [] | idk if I should have added this ignore to `ruff` too, but I added :) | true |
1,623,682,558 | https://api.github.com/repos/huggingface/datasets/issues/5635 | https://github.com/huggingface/datasets/pull/5635 | 5,635 | Pass custom metadata filename to Image/Audio folders | open | 4 | 2023-03-14T15:08:16 | 2023-03-22T17:50:31 | null | polinaeterna | [] | This is a quick fix.
Now it requires passing data via the `data_files` parameter, including the required metadata file there, and passing its filename as the `metadata_filename` parameter.
For example, with the structure like:
```
data
images_dir/
im1.jpg
im2.jpg
...
metadata_dir/
meta_file... | true |
1,622,424,174 | https://api.github.com/repos/huggingface/datasets/issues/5634 | https://github.com/huggingface/datasets/issues/5634 | 5,634 | Not all progress bars are showing up when they should for downloading dataset | closed | 2 | 2023-03-13T23:04:18 | 2023-10-11T16:30:16 | 2023-10-11T16:30:16 | garlandz-db | [] | ### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear if that fix solves this issue too.
ipywidgets
<img width=... | false |
1,621,469,970 | https://api.github.com/repos/huggingface/datasets/issues/5633 | https://github.com/huggingface/datasets/issues/5633 | 5,633 | Cannot import datasets | closed | 1 | 2023-03-13T13:14:44 | 2023-03-13T17:54:19 | 2023-03-13T17:54:19 | ruplet | [] | ### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Pl... | false |
1,621,177,391 | https://api.github.com/repos/huggingface/datasets/issues/5632 | https://github.com/huggingface/datasets/issues/5632 | 5,632 | Dataset cannot convert too large dictionnary | open | 1 | 2023-03-13T10:14:40 | 2023-03-16T15:28:57 | null | MaraLac | [] | ### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB) and it seems that Dataset cannot handle this.
Indeed, I can create the dataset until a certain size of m... | false |
1,620,442,854 | https://api.github.com/repos/huggingface/datasets/issues/5631 | https://github.com/huggingface/datasets/issues/5631 | 5,631 | Custom split names | closed | 1 | 2023-03-12T17:21:43 | 2023-03-24T14:13:00 | 2023-03-24T14:13:00 | ErfanMoosaviMonazzah | [
"enhancement"
] | ### Feature request
Hi,
I participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the hub. (curren... | false |
1,620,327,510 | https://api.github.com/repos/huggingface/datasets/issues/5630 | https://github.com/huggingface/datasets/pull/5630 | 5,630 | adds early exit if url is `PathLike` | open | 1 | 2023-03-12T11:23:28 | 2023-03-15T11:58:38 | null | vvvm23 | [] | Closes #4864
Should fix errors thrown when attempting to load `json` dataset using `pathlib.Path` in `data_files` argument. | true |
1,619,921,247 | https://api.github.com/repos/huggingface/datasets/issues/5629 | https://github.com/huggingface/datasets/issues/5629 | 5,629 | load_dataset gives "403" error when using Financial phrasebank | open | 1 | 2023-03-11T07:46:39 | 2023-03-13T18:27:26 | null | Jimchoo91 | [] | When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
Has this been seen before? Thanks. The website loads ... | false |
1,619,641,810 | https://api.github.com/repos/huggingface/datasets/issues/5628 | https://github.com/huggingface/datasets/pull/5628 | 5,628 | add kwargs to index search | closed | 1 | 2023-03-10T21:24:58 | 2023-03-15T14:48:47 | 2023-03-15T14:46:04 | SaulLu | [] | This PR proposes to add kwargs to index search methods.
This is particularly useful for setting the timeout of a query on elasticsearch.
A typical use case would be:
```python
dset.add_elasticsearch_index("filename", es_client=es_client)
scores, examples = dset.get_nearest_examples("filename", "my_name-train_2... | true |
1,619,336,609 | https://api.github.com/repos/huggingface/datasets/issues/5627 | https://github.com/huggingface/datasets/issues/5627 | 5,627 | Unable to load AutoTrain-generated dataset from the hub | open | 2 | 2023-03-10T17:25:58 | 2023-03-11T15:44:42 | null | ijmiller2 | [] | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
... | false |
1,619,252,984 | https://api.github.com/repos/huggingface/datasets/issues/5626 | https://github.com/huggingface/datasets/pull/5626 | 5,626 | Support streaming datasets with numpy.load | closed | 2 | 2023-03-10T16:33:39 | 2023-03-21T06:36:05 | 2023-03-21T06:28:54 | albertvillanova | [] | Support streaming datasets with `numpy.load`.
See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1 | true |
1,618,971,855 | https://api.github.com/repos/huggingface/datasets/issues/5625 | https://github.com/huggingface/datasets/issues/5625 | 5,625 | Allow "jsonl" data type signifier | open | 2 | 2023-03-10T13:21:48 | 2023-03-11T10:35:39 | null | BramVanroy | [
"enhancement"
] | ### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset scri... | false |
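For reference, `.jsonl` files can already be loaded by naming the builder `json` explicitly; the request above is only about accepting `jsonl` as an alias. A minimal sketch, with illustrative file names:
```python
from datasets import load_dataset

# JSON Lines files are handled by the "json" builder.
dataset = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "dev.jsonl"},
)
```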
1,617,400,192 | https://api.github.com/repos/huggingface/datasets/issues/5624 | https://github.com/huggingface/datasets/issues/5624 | 5,624 | glue datasets returning -1 for test split | closed | 1 | 2023-03-09T14:47:18 | 2023-03-09T16:49:29 | 2023-03-09T16:49:29 | lithafnium | [] | ### Describe the bug
Any dataset downloaded from GLUE has -1 as the class labels for the test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
... | false |
1,616,712,665 | https://api.github.com/repos/huggingface/datasets/issues/5623 | https://github.com/huggingface/datasets/pull/5623 | 5,623 | Remove set_access_token usage + fail tests if FutureWarning | closed | 6 | 2023-03-09T08:46:01 | 2023-03-09T15:39:00 | 2023-03-09T15:31:59 | Wauplin | [] | `set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`.
This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere.
In the future, us... | true |
1,615,190,942 | https://api.github.com/repos/huggingface/datasets/issues/5622 | https://github.com/huggingface/datasets/pull/5622 | 5,622 | Update README template to better template | closed | 3 | 2023-03-08T12:30:23 | 2023-03-11T05:07:38 | 2023-03-11T05:07:38 | emiltj | [] | null | true |
1,615,029,615 | https://api.github.com/repos/huggingface/datasets/issues/5621 | https://github.com/huggingface/datasets/pull/5621 | 5,621 | Adding Oracle Cloud to docs | closed | 2 | 2023-03-08T10:22:50 | 2023-03-11T00:57:18 | 2023-03-11T00:49:56 | ahosler | [] | Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers. | true |
1,613,460,520 | https://api.github.com/repos/huggingface/datasets/issues/5620 | https://github.com/huggingface/datasets/pull/5620 | 5,620 | Bump pyarrow to 8.0.0 | closed | 12 | 2023-03-07T13:31:53 | 2023-03-08T14:01:27 | 2023-03-08T13:54:22 | lhoestq | [] | Fix those for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0):
```python
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory... | true |
1,613,439,709 | https://api.github.com/repos/huggingface/datasets/issues/5619 | https://github.com/huggingface/datasets/pull/5619 | 5,619 | unpin fsspec | closed | 3 | 2023-03-07T13:22:41 | 2023-03-07T13:47:01 | 2023-03-07T13:39:02 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/5618 | true |
1,612,977,934 | https://api.github.com/repos/huggingface/datasets/issues/5618 | https://github.com/huggingface/datasets/issues/5618 | 5,618 | Unpin fsspec < 2023.3.0 once issue fixed | closed | 0 | 2023-03-07T08:41:51 | 2023-03-07T13:39:03 | 2023-03-07T13:39:03 | albertvillanova | [] | Unpin `fsspec` upper version once root cause of our CI break is fixed.
See:
- #5614 | false |
1,612,947,422 | https://api.github.com/repos/huggingface/datasets/issues/5617 | https://github.com/huggingface/datasets/pull/5617 | 5,617 | Fix CI by temporarily pinning fsspec < 2023.3.0 | closed | 2 | 2023-03-07T08:18:20 | 2023-03-07T08:44:55 | 2023-03-07T08:37:28 | albertvillanova | [] | As a hotfix for our CI, temporarily pin `fsspec`:
Fix #5616.
Until root cause is fixed, see:
- #5614 | true |
1,612,932,508 | https://api.github.com/repos/huggingface/datasets/issues/5616 | https://github.com/huggingface/datasets/issues/5616 | 5,616 | CI is broken after fsspec-2023.3.0 release | closed | 0 | 2023-03-07T08:06:39 | 2023-03-07T08:37:29 | 2023-03-07T08:37:29 | albertvillanova | [
"bug"
] | As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677... | false |
1,612,552,653 | https://api.github.com/repos/huggingface/datasets/issues/5615 | https://github.com/huggingface/datasets/issues/5615 | 5,615 | IterableDataset.add_column is unable to accept another IterableDataset as a parameter. | closed | 1 | 2023-03-07T01:52:00 | 2023-03-09T15:24:05 | 2023-03-09T15:23:54 | zsaladin | [
"wontfix"
] | ### Describe the bug
`IterableDataset.add_column` raises an exception when passing another `IterableDataset` as a parameter.
The method seems to accept only eager evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
... | false |
1,611,896,357 | https://api.github.com/repos/huggingface/datasets/issues/5614 | https://github.com/huggingface/datasets/pull/5614 | 5,614 | Fix archive fs test | closed | 4 | 2023-03-06T17:28:09 | 2023-03-07T13:27:50 | 2023-03-07T13:20:57 | lhoestq | [] | null | true |
1,611,875,473 | https://api.github.com/repos/huggingface/datasets/issues/5613 | https://github.com/huggingface/datasets/issues/5613 | 5,613 | Version mismatch with multiprocess and dill on Python 3.10 | open | 6 | 2023-03-06T17:14:41 | 2024-04-05T20:13:52 | null | adampauls | [] | ### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/... | false |
1,611,262,510 | https://api.github.com/repos/huggingface/datasets/issues/5612 | https://github.com/huggingface/datasets/issues/5612 | 5,612 | Arrow map type in parquet files unsupported | open | 4 | 2023-03-06T12:03:24 | 2024-03-15T18:56:12 | null | TevenLeScao | [] | ### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce ... | false |
1,611,197,906 | https://api.github.com/repos/huggingface/datasets/issues/5611 | https://github.com/huggingface/datasets/pull/5611 | 5,611 | add Dataset.to_list | closed | 3 | 2023-03-06T11:21:57 | 2023-03-27T13:34:19 | 2023-03-27T13:26:38 | kyoto7250 | [] | close https://github.com/huggingface/datasets/issues/5606
This PR is for adding the `Dataset.to_list` method.
Thank you in advance.
| true |
1,610,698,006 | https://api.github.com/repos/huggingface/datasets/issues/5610 | https://github.com/huggingface/datasets/issues/5610 | 5,610 | use datasets streaming mode in trainer ddp mode cause memory leak | open | 3 | 2023-03-06T05:26:49 | 2024-03-07T01:11:32 | null | gromzhu | [] | ### Describe the bug
Using datasets streaming mode in Trainer DDP mode causes a memory leak.
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, Sequenti... | false |
1,610,062,862 | https://api.github.com/repos/huggingface/datasets/issues/5609 | https://github.com/huggingface/datasets/issues/5609 | 5,609 | `load_from_disk` vs `load_dataset` performance. | open | 4 | 2023-03-05T05:27:15 | 2023-07-13T18:48:05 | null | davidgilbertson | [] | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_di... | false |
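A minimal sketch of the save-then-reload workflow the author is weighing, using `save_to_disk` and `load_from_disk`; the filter function and paths here are illustrative:
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("openwebtext", split="train")
ds = ds.filter(lambda example: len(example["text"]) > 100)  # illustrative filter
ds.save_to_disk("openwebtext_filtered")

# Later: reload the already-filtered copy without re-running the filter.
ds = load_from_disk("openwebtext_filtered")
```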
1,609,996,563 | https://api.github.com/repos/huggingface/datasets/issues/5608 | https://github.com/huggingface/datasets/issues/5608 | 5,608 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files. | closed | 2 | 2023-03-05T00:14:45 | 2023-03-12T00:02:57 | 2023-03-12T00:02:57 | jcho19 | [] | ### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)
### Steps to reproduce the b... | false |
1,609,166,035 | https://api.github.com/repos/huggingface/datasets/issues/5607 | https://github.com/huggingface/datasets/pull/5607 | 5,607 | Fix outdated `verification_mode` values | closed | 2 | 2023-03-03T19:50:29 | 2023-03-09T17:34:13 | 2023-03-09T17:27:07 | polinaeterna | [] | ~I think it makes sense not to save `dataset_info.json` file to a dataset cache directory when loading dataset with `verification_mode="no_checks"` because otherwise when next time the dataset is loaded **without** `verification_mode="no_checks"`, it will be loaded successfully, despite some values in info might not co... | true |
1,608,911,632 | https://api.github.com/repos/huggingface/datasets/issues/5606 | https://github.com/huggingface/datasets/issues/5606 | 5,606 | Add `Dataset.to_list` to the API | closed | 3 | 2023-03-03T16:17:10 | 2023-03-27T13:26:40 | 2023-03-27T13:26:40 | mariosasko | [
"enhancement",
"good first issue"
] | Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`. | false |
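A minimal sketch of the proposed method's behaviour, mirroring the existing `Dataset.from_list`:
```python
from datasets import Dataset

rows = [{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]
ds = Dataset.from_list(rows)

# The proposed to_list would be the inverse of from_list,
# returning the rows as a list of dicts.
assert ds.to_list() == rows
```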
1,608,865,460 | https://api.github.com/repos/huggingface/datasets/issues/5605 | https://github.com/huggingface/datasets/pull/5605 | 5,605 | Update README logo | closed | 3 | 2023-03-03T15:46:31 | 2023-03-03T21:57:18 | 2023-03-03T21:50:17 | gary149 | [] | null | true |
1,608,304,775 | https://api.github.com/repos/huggingface/datasets/issues/5604 | https://github.com/huggingface/datasets/issues/5604 | 5,604 | Problems with downloading The Pile | closed | 7 | 2023-03-03T09:52:08 | 2023-10-14T02:15:52 | 2023-03-24T12:44:25 | sentialx | [] | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:
,
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingfa... | false |
1,606,585,596 | https://api.github.com/repos/huggingface/datasets/issues/5600 | https://github.com/huggingface/datasets/issues/5600 | 5,600 | Dataloader getitem not working for DreamboothDatasets | closed | 1 | 2023-03-02T11:00:27 | 2023-03-13T17:59:35 | 2023-03-13T17:59:35 | salahiguiliz | [] | ### Describe the bug
Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529))
moving Datasets to 2.8.0 solved the issue.
### Steps to reproduce the bug
1- using DreamBoothDataset ... | false |
1,605,018,478 | https://api.github.com/repos/huggingface/datasets/issues/5598 | https://github.com/huggingface/datasets/pull/5598 | 5,598 | Fix push_to_hub with no dataset_infos | closed | 2 | 2023-03-01T13:54:06 | 2023-03-02T13:47:13 | 2023-03-02T13:40:17 | lhoestq | [] | As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags
cc @clefourrier | true |
1,604,928,721 | https://api.github.com/repos/huggingface/datasets/issues/5597 | https://github.com/huggingface/datasets/issues/5597 | 5,597 | in-place dataset update | closed | 3 | 2023-03-01T12:58:18 | 2023-03-02T13:30:41 | 2023-03-02T03:47:00 | speedcell4 | [
"wontfix"
] | ### Motivation
When I create an empty `Dataset` and keep appending new rows to it, I found that it leads to creating a new dataset at each call. This looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | false |
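A minimal, self-contained sketch of the pattern being described (separate from the truncated snippet above), using the existing `Dataset.add_item` API, which returns a new dataset on every call; accumulating rows in a plain list and building the dataset once is the usual workaround:
```python
from datasets import Dataset

# Pattern from the issue: appending row by row builds a new Dataset each time.
ds = Dataset.from_dict({"text": ["row 0"]})
for i in range(1, 4):
    ds = ds.add_item({"text": f"row {i}"})  # each call returns a new Dataset

# Usual workaround: collect rows in a Python list, then build the Dataset once.
rows = [{"text": f"row {i}"} for i in range(4)]
ds = Dataset.from_list(rows)
```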
1,604,919,993 | https://api.github.com/repos/huggingface/datasets/issues/5596 | https://github.com/huggingface/datasets/issues/5596 | 5,596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | closed | 5 | 2023-03-01T12:53:08 | 2023-12-05T03:22:00 | 2023-03-02T11:12:11 | loubnabnl | [] | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | false |
1,604,070,629 | https://api.github.com/repos/huggingface/datasets/issues/5595 | https://github.com/huggingface/datasets/pull/5595 | 5,595 | Unpins sqlAlchemy | closed | 3 | 2023-03-01T01:33:45 | 2023-04-04T08:20:19 | 2023-04-04T08:19:14 | lazarust | [] | Closes #5477 | true |
1,603,980,995 | https://api.github.com/repos/huggingface/datasets/issues/5594 | https://github.com/huggingface/datasets/issues/5594 | 5,594 | Error while downloading the xtreme udpos dataset | closed | 21 | 2023-02-28T23:40:53 | 2023-11-04T20:45:56 | 2023-07-24T14:22:18 | simran-khanuja | [] | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | false |
1,603,619,124 | https://api.github.com/repos/huggingface/datasets/issues/5592 | https://github.com/huggingface/datasets/pull/5592 | 5,592 | Fix docstring example | closed | 2 | 2023-02-28T18:42:37 | 2023-02-28T19:26:33 | 2023-02-28T19:19:15 | stevhliu | [] | Fixes #5581 to use the correct output for the `set_format` method. | true |
1,603,571,407 | https://api.github.com/repos/huggingface/datasets/issues/5591 | https://github.com/huggingface/datasets/pull/5591 | 5,591 | set dev version | closed | 3 | 2023-02-28T18:09:05 | 2023-02-28T18:16:31 | 2023-02-28T18:09:15 | lhoestq | [] | null | true |
1,603,549,504 | https://api.github.com/repos/huggingface/datasets/issues/5590 | https://github.com/huggingface/datasets/pull/5590 | 5,590 | Release: 2.10.1 | closed | 5 | 2023-02-28T17:58:11 | 2023-02-28T18:16:27 | 2023-02-28T18:06:08 | lhoestq | [] | null | true |
1,603,535,704 | https://api.github.com/repos/huggingface/datasets/issues/5589 | https://github.com/huggingface/datasets/pull/5589 | 5,589 | Revert "pass the dataset features to the IterableDataset.from_generator" | closed | 5 | 2023-02-28T17:52:04 | 2023-09-24T10:07:33 | 2023-03-21T14:18:18 | lhoestq | [] | This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily)
It hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images unnecessarily). I think we need to fix this before re-adding it
cc @mariosasko @Hubert-Bonisseur | true |
1,603,304,766 | https://api.github.com/repos/huggingface/datasets/issues/5588 | https://github.com/huggingface/datasets/pull/5588 | 5,588 | Flatten dataset on the fly in `save_to_disk` | closed | 3 | 2023-02-28T15:37:46 | 2023-02-28T17:28:35 | 2023-02-28T17:21:17 | mariosasko | [] | Flatten a dataset on the fly in `save_to_disk` instead of doing it with `flatten_indices` to avoid creating an additional cache file.
(this is one of the sub-tasks in https://github.com/huggingface/datasets/issues/5507) | true |
1,603,139,420 | https://api.github.com/repos/huggingface/datasets/issues/5587 | https://github.com/huggingface/datasets/pull/5587 | 5,587 | Fix `sort` with indices mapping | closed | 3 | 2023-02-28T14:05:08 | 2023-02-28T17:28:57 | 2023-02-28T17:21:58 | mariosasko | [] | Fixes the `key` range in the `query_table` call in `sort` to account for an indices mapping
Fix #5586 | true |
1,602,961,544 | https://api.github.com/repos/huggingface/datasets/issues/5586 | https://github.com/huggingface/datasets/issues/5586 | 5,586 | .sort() is broken when used after .filter(), only in 2.10.0 | closed | 1 | 2023-02-28T12:18:09 | 2023-02-28T18:17:26 | 2023-02-28T17:21:59 | MattYoon | [
"bug"
] | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
... | false |
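A minimal sketch of a reproduction of the behaviour reported above (on 2.10.0 the `sort` call raised an `IndexError`; the fix is in #5587):
```python
from datasets import Dataset

ds = Dataset.from_dict({"label": [0, 1, 0, 1, 1]})
filtered = ds.filter(lambda x: x["label"] == 1)  # creates an indices mapping
sorted_ds = filtered.sort("label")               # raised IndexError on datasets==2.10.0
print(sorted_ds["label"])
```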
1,602,190,030 | https://api.github.com/repos/huggingface/datasets/issues/5585 | https://github.com/huggingface/datasets/issues/5585 | 5,585 | Cache is not transportable | closed | 2 | 2023-02-28T00:53:06 | 2023-02-28T21:26:52 | 2023-02-28T21:26:52 | davidgilbertson | [] | ### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I... | false |
1,601,821,808 | https://api.github.com/repos/huggingface/datasets/issues/5584 | https://github.com/huggingface/datasets/issues/5584 | 5,584 | Unable to load coyo700M dataset | closed | 1 | 2023-02-27T19:35:03 | 2023-02-28T07:27:59 | 2023-02-28T07:27:58 | manuaero | [] | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coy... | false |
1,601,583,625 | https://api.github.com/repos/huggingface/datasets/issues/5583 | https://github.com/huggingface/datasets/pull/5583 | 5,583 | Do no write index by default when exporting a dataset | closed | 3 | 2023-02-27T17:04:46 | 2023-02-28T13:52:15 | 2023-02-28T13:44:04 | mariosasko | [] | Ensures all the writers that use Pandas for conversion (JSON, CSV, SQL) do not export `index` by default (https://github.com/huggingface/datasets/pull/5490 only did this for CSV) | true |
1,600,932,092 | https://api.github.com/repos/huggingface/datasets/issues/5582 | https://github.com/huggingface/datasets/pull/5582 | 5,582 | Add column_names to IterableDataset | closed | 2 | 2023-02-27T10:50:07 | 2023-03-13T19:10:22 | 2023-03-13T19:03:32 | patrickloeber | [] | This PR closes #5383
* Add column_names property to IterableDataset
* Add multiple tests for this new property | true |
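A minimal usage sketch of the property this PR adds; the dataset name is just an example:
```python
from datasets import load_dataset

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(ids.column_names)  # ['text', 'label'] once the property is available
```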
1,600,675,489 | https://api.github.com/repos/huggingface/datasets/issues/5581 | https://github.com/huggingface/datasets/issues/5581 | 5,581 | [DOC] Mistaken docs on set_format | closed | 1 | 2023-02-27T08:03:09 | 2023-02-28T19:19:17 | 2023-02-28T19:19:17 | NightMachinery | [
"good first issue"
] | ### Describe the bug
https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format
<img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png">
While actually running it will result in:
<img w... | false |
1,600,431,792 | https://api.github.com/repos/huggingface/datasets/issues/5580 | https://github.com/huggingface/datasets/pull/5580 | 5,580 | Support cloud storage in load_dataset via fsspec | closed | 8 | 2023-02-27T04:06:05 | 2024-11-27T01:25:39 | 2023-03-11T00:55:40 | dwyatte | [] | Closes https://github.com/huggingface/datasets/issues/5281
This PR uses fsspec to support datasets on cloud storage (tested manually with GCS). ETags are currently unsupported for cloud storage. In general, a much larger refactor could be done to just use fsspec for all schemes (ftp, http/s, s3, gcs) to unify the in... | true |
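A minimal sketch of the kind of call this enables, assuming an fsspec-compatible URL and the matching filesystem package (e.g. `gcsfs`) installed; the bucket path is hypothetical:
```python
from datasets import load_dataset

# Parquet shards read directly from cloud storage through fsspec.
ds = load_dataset("parquet", data_files="gs://my-bucket/path/to/*.parquet")
```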
1,599,732,211 | https://api.github.com/repos/huggingface/datasets/issues/5579 | https://github.com/huggingface/datasets/pull/5579 | 5,579 | Add instructions to create `DataLoader` from augmented dataset in object detection guide | closed | 3 | 2023-02-25T14:53:17 | 2023-03-23T19:24:59 | 2023-03-23T19:24:50 | Laurent2916 | [] | The following adds instructions on how to create a `DataLoader` from the guide on how to use object detection with augmentations (#4710). I am open to hearing any suggestions for improvement ! | true |
1,598,863,119 | https://api.github.com/repos/huggingface/datasets/issues/5578 | https://github.com/huggingface/datasets/pull/5578 | 5,578 | Add `huggingface_hub` version to env cli command | closed | 4 | 2023-02-24T15:37:43 | 2023-02-27T17:28:25 | 2023-02-27T17:21:09 | mariosasko | [] | Add the `huggingface_hub` version to the `env` command's output. | true |
1,598,587,665 | https://api.github.com/repos/huggingface/datasets/issues/5577 | https://github.com/huggingface/datasets/issues/5577 | 5,577 | Cannot load `the_pile_openwebtext2` | closed | 1 | 2023-02-24T13:01:48 | 2023-02-24T14:01:09 | 2023-02-24T14:01:09 | wjfwzzc | [] | ### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load... | false |
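The failure boils down to PyArrow refusing to store an out-of-range value in an `int8` column, which the following sketch illustrates:
```python
import pyarrow as pa

# A reddit score of 528 cannot be stored in an int8 column (-128..127):
# pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127
pa.array([528], type=pa.int8())
```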
1,598,582,744 | https://api.github.com/repos/huggingface/datasets/issues/5576 | https://github.com/huggingface/datasets/issues/5576 | 5,576 | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. | closed | 1 | 2023-02-24T12:57:49 | 2023-02-24T12:58:31 | 2023-02-24T12:58:18 | wjfwzzc | [] | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked aro... | false |
1,598,396,552 | https://api.github.com/repos/huggingface/datasets/issues/5575 | https://github.com/huggingface/datasets/issues/5575 | 5,575 | Metadata for each column | open | 5 | 2023-02-24T10:53:44 | 2024-01-05T21:48:35 | null | parsa-ra | [
"enhancement"
] | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will motivate this with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing steps and see which on... | false |
1,598,104,691 | https://api.github.com/repos/huggingface/datasets/issues/5574 | https://github.com/huggingface/datasets/issues/5574 | 5,574 | c4 dataset streaming fails with `FileNotFoundError` | closed | 12 | 2023-02-24T07:57:32 | 2023-12-18T07:32:32 | 2023-02-27T04:03:38 | krasserm | [] | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | false |
1,597,400,836 | https://api.github.com/repos/huggingface/datasets/issues/5573 | https://github.com/huggingface/datasets/pull/5573 | 5,573 | Use soundfile for mp3 decoding instead of torchaudio | closed | 7 | 2023-02-23T19:19:44 | 2023-02-28T20:25:14 | 2023-02-28T20:16:02 | polinaeterna | [] | I've removed `torchaudio` completely and switched to use `soundfile` for everything. With the new version of `soundfile` package this should work smoothly because the `libsndfile` C library is bundled, in Linux wheels too.
Let me know if you think it's too harsh and we should continue to support `torchaudio` decodi... | true |
1,597,257,624 | https://api.github.com/repos/huggingface/datasets/issues/5572 | https://github.com/huggingface/datasets/issues/5572 | 5,572 | Datasets 2.10.0 does not reuse the dataset cache | closed | 0 | 2023-02-23T17:28:11 | 2023-02-23T18:03:55 | 2023-02-23T18:03:55 | lsb | [] | ### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, upon losing an internet connection trying to load a dataset for a second time in ten seconds, a connection error results, showing a breakpoint of:
```
File ~/jupyterlab/.direnv/python-... | false |
1,597,198,953 | https://api.github.com/repos/huggingface/datasets/issues/5571 | https://github.com/huggingface/datasets/issues/5571 | 5,571 | load_dataset fails for JSON in windows | closed | 2 | 2023-02-23T16:50:11 | 2023-02-24T13:21:47 | 2023-02-24T13:21:47 | abinashsahu | [] | ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is di... | false |
1,597,190,926 | https://api.github.com/repos/huggingface/datasets/issues/5570 | https://github.com/huggingface/datasets/issues/5570 | 5,570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | closed | 2 | 2023-02-23T16:44:32 | 2023-07-24T15:18:50 | 2023-07-24T15:18:50 | buoi | [] | ### Describe the bug
When calling `load_dataset('imagenet-1k')`, a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the license on the Hub. There is no error once the license is accepted.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | false |
1,597,132,383 | https://api.github.com/repos/huggingface/datasets/issues/5569 | https://github.com/huggingface/datasets/pull/5569 | 5,569 | pass the dataset features to the IterableDataset.from_generator function | closed | 3 | 2023-02-23T16:06:04 | 2023-02-24T14:06:37 | 2023-02-23T18:15:16 | bruno-hays | [] | [5558](https://github.com/huggingface/datasets/issues/5568) | true |
1,596,900,532 | https://api.github.com/repos/huggingface/datasets/issues/5568 | https://github.com/huggingface/datasets/issues/5568 | 5,568 | dataset.to_iterable_dataset() loses useful info like dataset features | closed | 3 | 2023-02-23T13:45:33 | 2023-02-24T13:22:36 | 2023-02-24T13:22:36 | bruno-hays | [
"enhancement",
"good first issue"
] | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | false |
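A minimal sketch of the behaviour being reported; the dataset name is illustrative:
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
print(ds.features)   # e.g. {'text': Value(dtype='string'), 'label': ClassLabel(...)}

ids = ds.to_iterable_dataset()
print(ids.features)  # None at the time this issue was opened
```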
1,595,916,674 | https://api.github.com/repos/huggingface/datasets/issues/5566 | https://github.com/huggingface/datasets/issues/5566 | 5,566 | Directly reading parquet files in a s3 bucket from the load_dataset method | open | 1 | 2023-02-22T22:13:40 | 2023-02-23T11:03:29 | null | shamanez | [
"duplicate",
"enhancement"
] | ### Feature request
Right now, we have to download the parquet file to local storage first. So having the ability to read it directly from the bucket address would be beneficial.
### Motivation
In a production setup, this feature can help us a lot, since we would not need to move training data files between storage locations.
### Yo... | false |
1,595,281,752 | https://api.github.com/repos/huggingface/datasets/issues/5565 | https://github.com/huggingface/datasets/pull/5565 | 5,565 | Add writer_batch_size for ArrowBasedBuilder | closed | 6 | 2023-02-22T15:09:30 | 2023-03-10T13:53:03 | 2023-03-10T13:45:43 | lhoestq | [] | This way we can control the size of the record_batches/row_groups of arrow/parquet files.
This can be useful for `datasets-server` to keep control of the row groups size which can affect random access performance for audio/image/video datasets
Right now having 1,000 examples per row group causes some image dataset... | true |
1,595,064,698 | https://api.github.com/repos/huggingface/datasets/issues/5564 | https://github.com/huggingface/datasets/pull/5564 | 5,564 | Set dev version | closed | 3 | 2023-02-22T13:00:09 | 2023-02-22T13:09:26 | 2023-02-22T13:00:25 | lhoestq | [] | null | true |
1,595,049,025 | https://api.github.com/repos/huggingface/datasets/issues/5563 | https://github.com/huggingface/datasets/pull/5563 | 5,563 | Release: 2.10.0 | closed | 4 | 2023-02-22T12:48:52 | 2023-02-22T13:05:55 | 2023-02-22T12:56:48 | lhoestq | [] | null | true |
1,594,625,539 | https://api.github.com/repos/huggingface/datasets/issues/5562 | https://github.com/huggingface/datasets/pull/5562 | 5,562 | Update csv.py | closed | 4 | 2023-02-22T07:56:10 | 2023-02-23T11:07:49 | 2023-02-23T11:00:58 | xdoubleu | [] | Removed mangle_dup_cols=True from BuilderConfig.
It triggered following deprecation warning:
/usr/local/lib/python3.8/dist-packages/datasets/download/streaming_download_manager.py:776: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the ... | true |
1,593,862,388 | https://api.github.com/repos/huggingface/datasets/issues/5561 | https://github.com/huggingface/datasets/pull/5561 | 5,561 | Add pre-commit config yaml file to enable automatic code formatting | closed | 6 | 2023-02-21T17:35:07 | 2023-02-28T15:37:22 | 2023-02-23T18:23:29 | polinaeterna | [] | @huggingface/datasets do you think it would be useful? Motivation - sometimes PRs are like 30% "fix: style" commits :)
If so - I need to double check the config but for me locally it works as expected. | true |
1,593,809,978 | https://api.github.com/repos/huggingface/datasets/issues/5560 | https://github.com/huggingface/datasets/pull/5560 | 5,560 | Ensure last tqdm update in `map` | closed | 10 | 2023-02-21T16:56:17 | 2023-02-21T18:26:23 | 2023-02-21T18:19:09 | mariosasko | [] | This PR modifies `map` to:
* ensure the TQDM bar gets the last progress update
* when a map function fails, avoid throwing a chained exception in the single-proc mode | true |
1,593,676,489 | https://api.github.com/repos/huggingface/datasets/issues/5559 | https://github.com/huggingface/datasets/pull/5559 | 5,559 | Fix map suffix_template | closed | 4 | 2023-02-21T15:26:26 | 2023-02-21T17:21:37 | 2023-02-21T17:14:29 | lhoestq | [] | #5455 introduced a small bug that lead `map` to ignore the `suffix_template` argument and not put suffixes to cached files in multiprocessing.
I fixed this and also improved a few things:
- regarding logging: "Loading cached processed dataset" is now logged only once even in multiprocessing (it used to be logged ... | true |
1,593,655,815 | https://api.github.com/repos/huggingface/datasets/issues/5558 | https://github.com/huggingface/datasets/pull/5558 | 5,558 | Remove instructions for `ffmpeg` system package installation on Colab | closed | 2 | 2023-02-21T15:13:36 | 2023-03-01T13:46:04 | 2023-02-23T13:50:27 | polinaeterna | [] | Colab now has Ubuntu 20.04 which already has `ffmpeg` of required (>4) version. | true |
1,593,545,324 | https://api.github.com/repos/huggingface/datasets/issues/5557 | https://github.com/huggingface/datasets/pull/5557 | 5,557 | Add filter desc | closed | 3 | 2023-02-21T14:04:42 | 2023-02-21T14:19:54 | 2023-02-21T14:12:39 | lhoestq | [] | Otherwise it would show a `Map` progress bar, since it uses `map` under the hood | true |
1,593,246,936 | https://api.github.com/repos/huggingface/datasets/issues/5556 | https://github.com/huggingface/datasets/pull/5556 | 5,556 | Use default audio resampling type | closed | 5 | 2023-02-21T10:45:50 | 2023-02-21T12:49:50 | 2023-02-21T12:42:52 | lhoestq | [] | ...instead of relying on the optional librosa dependency `resampy`.
It was only used for `_decode_non_mp3_file_like` anyway and not for the other ones - removing it fixes consistency between decoding methods (except torchaudio decoding)
Therefore I think it is a better solution than adding `resampy` as a dependen... | true |
1,592,469,938 | https://api.github.com/repos/huggingface/datasets/issues/5555 | https://github.com/huggingface/datasets/issues/5555 | 5,555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | open | 4 | 2023-02-20T21:33:45 | 2023-02-27T09:23:34 | null | prabhakar267 | [] | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | false |
1,592,285,062 | https://api.github.com/repos/huggingface/datasets/issues/5554 | https://github.com/huggingface/datasets/pull/5554 | 5,554 | Add resampy dep | closed | 5 | 2023-02-20T18:15:43 | 2023-09-24T10:07:29 | 2023-02-21T12:43:38 | lhoestq | [] | In librosa 0.10 they removed the `resmpy` dependency and set it to optional.
However it is necessary for resampling. I added it to the "audio" extra dependencies. | true |
1,592,236,998 | https://api.github.com/repos/huggingface/datasets/issues/5553 | https://github.com/huggingface/datasets/pull/5553 | 5,553 | improved message error row formatting | closed | 2 | 2023-02-20T17:29:14 | 2023-02-21T13:08:25 | 2023-02-21T12:58:12 | Plutone11011 | [] | Solves #5539 | true |
1,592,186,703 | https://api.github.com/repos/huggingface/datasets/issues/5552 | https://github.com/huggingface/datasets/pull/5552 | 5,552 | Make tiktoken tokenizers hashable | closed | 4 | 2023-02-20T16:50:09 | 2023-02-21T13:20:42 | 2023-02-21T13:13:05 | mariosasko | [] | Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
| true |
1,592,140,836 | https://api.github.com/repos/huggingface/datasets/issues/5551 | https://github.com/huggingface/datasets/pull/5551 | 5,551 | Suggest scikit-learn instead of sklearn | closed | 4 | 2023-02-20T16:16:57 | 2023-02-21T13:27:57 | 2023-02-21T13:21:07 | osbm | [] | This is a kinda unimportant fix, but the suggested `pip install sklearn` does not work.
The current error message if sklearn is not installed:
```
ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn.
Please install it using 'pip install sklearn' for instance.
```
... | true |
1,591,409,475 | https://api.github.com/repos/huggingface/datasets/issues/5550 | https://github.com/huggingface/datasets/pull/5550 | 5,550 | Resolve four broken refs in the docs | closed | 3 | 2023-02-20T08:52:11 | 2023-02-20T15:16:13 | 2023-02-20T15:09:13 | tomaarsen | [] | Hello!
## Pull Request overview
* Resolve 4 broken references in the docs
## The problems
Two broken references [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.class_encode_column):
 it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at HF_HOME but this is not the case for datasets, t... | false |
1,590,315,972 | https://api.github.com/repos/huggingface/datasets/issues/5545 | https://github.com/huggingface/datasets/pull/5545 | 5,545 | Added return methods for URL-references to the pushed dataset | open | 6 | 2023-02-18T11:26:25 | 2023-12-18T16:57:56 | null | davidberenstein1957 | [] | Hi,
I was missing the ability to easily open the pushed dataset and it seemed like a quick fix.
Maybe we also want to log this info somewhere, but let me know if I need to add that too.
Cheers,
David | true |