| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,291,118,869 | https://api.github.com/repos/huggingface/datasets/issues/6891 | https://github.com/huggingface/datasets/issues/6891 | 6,891 | Unable to load JSON saved using `to_json` | closed | 2 | 2024-05-12T01:02:51 | 2024-05-16T14:32:55 | 2024-05-12T07:02:02 | DarshanDeshpande | [] | ### Describe the bug
Datasets stored in the JSON format cannot be loaded using `json.load()`
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
dataset = load_dataset("squad")
train_dataset, test_dataset = dataset["train"], dataset["validation"]
test_dataset.to_json("full_dataset... | false |
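A plausible reading of this issue (assuming `to_json` was called with its defaults): `Dataset.to_json` writes JSON Lines — one JSON object per line — which `json.load()` rejects because the file is not a single JSON document. A minimal stdlib sketch of parsing such output:

```python
import json

# One JSON object per line, as Dataset.to_json emits by default (lines=True)
jsonl_text = '{"question": "q1"}\n{"question": "q2"}\n'

records = [json.loads(line) for line in jsonl_text.splitlines() if line]
print(records[0]["question"])  # q1
```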
2,288,699,041 | https://api.github.com/repos/huggingface/datasets/issues/6890 | https://github.com/huggingface/datasets/issues/6890 | 6,890 | add `with_transform` and/or `set_transform` to IterableDataset | open | 0 | 2024-05-10T01:00:12 | 2024-05-10T01:00:46 | null | not-lain | [
"enhancement"
] | ### Feature request
when working with a really large dataset it would save us a lot of time (and compute resources) to use either with_transform or the set_transform from the Dataset class instead of waiting for the entire dataset to map
### Motivation
don't want to wait for a really long dataset to map, this would ... | false |
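Worth noting for this request: `IterableDataset.map` already applies its function lazily, on access rather than upfront. The laziness itself can be sketched with a plain generator (the `lazy_map` helper is hypothetical, not a datasets API):

```python
def lazy_map(examples, fn):
    # The transform runs on access, one example at a time -- the behavior
    # requested here, and what IterableDataset.map already provides.
    for example in examples:
        yield fn(example)

stream = lazy_map(iter(range(3)), lambda x: x * 2)
first = next(stream)  # only the first element has been transformed so far
```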
2,287,720,539 | https://api.github.com/repos/huggingface/datasets/issues/6889 | https://github.com/huggingface/datasets/pull/6889 | 6,889 | fix bug #6877 | closed | 9 | 2024-05-09T13:38:40 | 2024-05-13T13:35:32 | 2024-05-13T13:35:32 | arthasking123 | [] | fix bug #6877, possibly caused by `f` becoming invalid after the yield
the results are below:
Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s]
Resolving data files: 100%|████████... | true |
2,287,169,676 | https://api.github.com/repos/huggingface/datasets/issues/6888 | https://github.com/huggingface/datasets/pull/6888 | 6,888 | Support WebDataset containing file basenames with dots | closed | 5 | 2024-05-09T08:25:30 | 2024-05-10T13:54:06 | 2024-05-10T13:54:06 | albertvillanova | [] | Support WebDataset containing file basenames with dots.
Fix #6880. | true |
2,286,786,396 | https://api.github.com/repos/huggingface/datasets/issues/6887 | https://github.com/huggingface/datasets/issues/6887 | 6,887 | FAISS load to None | open | 1 | 2024-05-09T02:43:50 | 2024-05-16T20:44:23 | null | brainer3220 | [] | ### Describe the bug
I've used FAISS with Datasets and saved a FAISS index.
Then loading the saved index raises no error, but `ds` ends up as None
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transf... | false |
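A likely cause (an assumption, since the body is truncated): assigning the return value, as in `ds = ds.load_faiss_index(...)`. `load_faiss_index` appears to attach the index to the existing dataset in place, so the call returns None — the same convention as the stdlib's in-place methods:

```python
items = [3, 1, 2]
result = items.sort()  # in-place: mutates `items` and returns None
```

The fix is to keep using the original `ds` after the call rather than rebinding it to the return value.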
2,286,328,984 | https://api.github.com/repos/huggingface/datasets/issues/6886 | https://github.com/huggingface/datasets/issues/6886 | 6,886 | load_dataset with data_dir and cache_dir set fail with not supported | open | 0 | 2024-05-08T19:52:35 | 2024-05-08T19:58:11 | null | fah | [] | ### Describe the bug
with python 3.11 I execute:
```py
from transformers import Wav2Vec2Processor, Data2VecAudioModel
import torch
from torch import nn
from datasets import load_dataset, concatenate_datasets
# load demo audio and set processor
dataset_clean = load_dataset("librispeech_asr", "clean", split="... | false |
2,285,115,400 | https://api.github.com/repos/huggingface/datasets/issues/6885 | https://github.com/huggingface/datasets/pull/6885 | 6,885 | Support jax 0.4.27 in CI tests | closed | 2 | 2024-05-08T09:19:37 | 2024-05-08T09:43:19 | 2024-05-08T09:35:16 | albertvillanova | [] | Support jax 0.4.27 in CI tests by using jax Array `devices` method instead of `device` (which no longer exists).
Fix #6884. | true |
2,284,839,687 | https://api.github.com/repos/huggingface/datasets/issues/6884 | https://github.com/huggingface/datasets/issues/6884 | 6,884 | CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device' | closed | 0 | 2024-05-08T07:01:47 | 2024-05-08T09:35:17 | 2024-05-08T09:35:17 | albertvillanova | [
"bug"
] | After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error:
```Python traceback
AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'?
```
See: https://github.com/huggingface/datasets/actions/runs/8997488... | false |
2,284,808,399 | https://api.github.com/repos/huggingface/datasets/issues/6883 | https://github.com/huggingface/datasets/pull/6883 | 6,883 | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset | closed | 10 | 2024-05-08T06:43:29 | 2024-08-28T13:13:57 | 2024-05-16T14:34:02 | albertvillanova | [] | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset.
The `PIL.Image.ExifTags` that we use in our code was implemented in Pillow-9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3
The bug #6881 was introduced in datasets-2.19.0 by this PR:
- #6739... | true |
2,284,803,158 | https://api.github.com/repos/huggingface/datasets/issues/6882 | https://github.com/huggingface/datasets/issues/6882 | 6,882 | Connection Error When Using By-pass Proxies | open | 1 | 2024-05-08T06:40:14 | 2024-05-17T06:38:30 | null | MRNOBODY-ZST | [] | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel; after exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(M... | false |
2,284,794,009 | https://api.github.com/repos/huggingface/datasets/issues/6881 | https://github.com/huggingface/datasets/issues/6881 | 6,881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | closed | 3 | 2024-05-08T06:33:57 | 2024-07-18T06:49:30 | 2024-05-16T14:34:03 | albertvillanova | [
"bug"
] | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1... | false |
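Until the pinned Pillow requirement lands, a defensive feature check is one way to avoid this crash in old environments. A sketch (the `has_exif_tags` flag is illustrative, not part of any library):

```python
# PIL.Image.ExifTags was added in Pillow 9.4.0; probe for it instead of
# assuming it exists.
try:
    from PIL import Image
    has_exif_tags = hasattr(Image, "ExifTags")
except ImportError:
    has_exif_tags = False  # Pillow not installed at all

print(has_exif_tags)
```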
2,283,278,337 | https://api.github.com/repos/huggingface/datasets/issues/6880 | https://github.com/huggingface/datasets/issues/6880 | 6,880 | Webdataset: KeyError: 'png' on some datasets when streaming | open | 5 | 2024-05-07T13:09:02 | 2024-05-14T20:34:05 | null | lhoestq | [] | reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
 1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
 619M/619M [00:11<00:00, 57.4MB/s]
Generating train sp... | false |
2,282,968,259 | https://api.github.com/repos/huggingface/datasets/issues/6879 | https://github.com/huggingface/datasets/issues/6879 | 6,879 | Batched mapping does not raise an error if values for an existing column are empty | open | 0 | 2024-05-07T11:02:40 | 2024-05-07T11:02:40 | null | felix-schneider | [] | ### Describe the bug
Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.
This is not the case if the... | false |
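The invariant this issue relies on — every column returned by a batched map must have the same length — can be stated as a small explicit check. A sketch with a hypothetical `check_batch_lengths` helper (not a datasets API):

```python
def check_batch_lengths(batch):
    """Raise if the columns of a batch (a dict of lists) disagree in length."""
    lengths = {name: len(column) for name, column in batch.items()}
    if len(set(lengths.values())) > 1:
        raise ValueError(f"Inconsistent column lengths: {lengths}")
    return batch

check_batch_lengths({"x": [1, 2], "y": ["a", "b"]})  # consistent: returns the batch
```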
2,282,879,491 | https://api.github.com/repos/huggingface/datasets/issues/6878 | https://github.com/huggingface/datasets/pull/6878 | 6,878 | Create function to convert to parquet | closed | 2 | 2024-05-07T10:27:07 | 2024-05-16T14:46:44 | 2024-05-16T14:38:23 | albertvillanova | [] | Analogously with `delete_from_hub`, this PR:
- creates the Python function `convert_to_parquet`
- makes the corresponding CLI command use that function.
This way, the functionality can be used both from a terminal and from a Python console.
This PR also implements a test for convert_to_parquet function. | true |
2,282,068,337 | https://api.github.com/repos/huggingface/datasets/issues/6877 | https://github.com/huggingface/datasets/issues/6877 | 6,877 | OSError: [Errno 24] Too many open files | closed | 5 | 2024-05-07T01:15:09 | 2024-06-02T14:22:23 | 2024-05-13T13:01:55 | loicmagne | [
"bug"
] | ### Describe the bug
I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb)
When trying to load it using the `load_dataset` function I get... | false |
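Independent of the fix in #6889, a common interim workaround for `[Errno 24]` on Unix is raising the process's soft open-file limit toward the hard limit. A sketch (the 4096 target is arbitrary):

```python
import resource

# Interim workaround for "[Errno 24] Too many open files" (Unix only):
# raise the soft limit toward the hard limit, without ever exceeding it.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))
```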
2,281,450,743 | https://api.github.com/repos/huggingface/datasets/issues/6876 | https://github.com/huggingface/datasets/pull/6876 | 6,876 | Unpin hfh | closed | 12 | 2024-05-06T18:10:49 | 2024-05-27T10:20:42 | 2024-05-27T10:14:40 | lhoestq | [] | Needed to use those in dataset-viewer:
- dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't spam the Hub with /paths-info requests
- dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write too big logs in the viewer
close https://github.com/hugging... | true |
2,281,428,826 | https://api.github.com/repos/huggingface/datasets/issues/6875 | https://github.com/huggingface/datasets/pull/6875 | 6,875 | Shorten long logs | closed | 2 | 2024-05-06T17:57:07 | 2024-05-07T12:31:46 | 2024-05-07T12:25:45 | lhoestq | [] | Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly).
In that case we should still be able to log something readable | true |
2,280,717,233 | https://api.github.com/repos/huggingface/datasets/issues/6874 | https://github.com/huggingface/datasets/pull/6874 | 6,874 | Use pandas ujson in JSON loader to improve performance | closed | 4 | 2024-05-06T12:01:27 | 2024-05-17T16:28:29 | 2024-05-17T16:22:27 | albertvillanova | [] | Use pandas ujson in JSON loader to improve performance.
Note that `datasets` has `pandas` as required dependency. And `pandas` includes `ujson` in `pd.io.json.ujson_loads`.
Fix #6867.
CC: @natolambert | true |
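The swap described here can be mimicked in user code: use pandas' bundled ujson when present and fall back to the stdlib parser otherwise. A sketch (the `fast_loads` alias is an assumption of this example):

```python
import json

# pandas bundles ujson; fall back to the stdlib parser when unavailable
try:
    from pandas.io.json import ujson_loads as fast_loads
except ImportError:
    fast_loads = json.loads

doc = '{"text": "hello", "score": 1}'
parsed = fast_loads(doc)
```

Either path yields the same parsed object; only the parsing speed differs.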
2,280,463,182 | https://api.github.com/repos/huggingface/datasets/issues/6873 | https://github.com/huggingface/datasets/pull/6873 | 6,873 | Set dev version | closed | 2 | 2024-05-06T09:43:18 | 2024-05-06T10:03:19 | 2024-05-06T09:57:12 | albertvillanova | [] | null | true |
2,280,438,432 | https://api.github.com/repos/huggingface/datasets/issues/6872 | https://github.com/huggingface/datasets/pull/6872 | 6,872 | Release 2.19.1 | closed | 0 | 2024-05-06T09:29:15 | 2024-05-06T09:35:33 | 2024-05-06T09:35:32 | albertvillanova | [] | null | true |
2,280,102,869 | https://api.github.com/repos/huggingface/datasets/issues/6871 | https://github.com/huggingface/datasets/pull/6871 | 6,871 | Fix download for dict of dicts of URLs | closed | 4 | 2024-05-06T06:06:52 | 2024-05-06T09:32:03 | 2024-05-06T09:25:52 | albertvillanova | [] | Fix download for a dict of dicts of URLs when batched (default), introduced by:
- #6794
This PR also implements regression tests.
Fix #6869, fix #6850. | true |
2,280,084,008 | https://api.github.com/repos/huggingface/datasets/issues/6870 | https://github.com/huggingface/datasets/pull/6870 | 6,870 | Update tqdm >= 4.66.3 to fix vulnerability | closed | 2 | 2024-05-06T05:49:36 | 2024-05-06T06:08:06 | 2024-05-06T06:02:00 | albertvillanova | [] | Update tqdm >= 4.66.3 to fix vulnerability. | true |
2,280,048,297 | https://api.github.com/repos/huggingface/datasets/issues/6869 | https://github.com/huggingface/datasets/issues/6869 | 6,869 | Download is broken for dict of dicts: FileNotFoundError | closed | 0 | 2024-05-06T05:13:36 | 2024-05-06T09:25:53 | 2024-05-06T09:25:53 | albertvillanova | [
"bug"
] | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-0000... | false |
2,279,385,159 | https://api.github.com/repos/huggingface/datasets/issues/6868 | https://github.com/huggingface/datasets/issues/6868 | 6,868 | datasets.BuilderConfig does not work. | closed | 1 | 2024-05-05T08:08:55 | 2024-05-05T12:15:02 | 2024-05-05T12:15:01 | jdm4pku | [] | ### Describe the bug
I custom a BuilderConfig and GeneratorBasedBuilder.
Here is the code for BuilderConfig
```
class UIEConfig(datasets.BuilderConfig):
def __init__(
self,
*args,
data_dir=None,
instruction_file=None,
instruction_strategy=None,... | false |
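The truncated snippet suggests a custom config subclass; a frequent pitfall with this pattern is not forwarding `*args`/`**kwargs` to the base `__init__`, so fields like `name` never get set. A sketch using a stand-in base class (not the real `datasets.BuilderConfig`):

```python
class BuilderConfigBase:
    """Stand-in for datasets.BuilderConfig, for illustration only."""
    def __init__(self, name="default", version="1.0.0", **kwargs):
        self.name = name
        self.version = version

class UIEConfig(BuilderConfigBase):
    def __init__(self, *args, data_dir=None, instruction_file=None, **kwargs):
        super().__init__(*args, **kwargs)  # forward name/version to the base
        self.data_dir = data_dir
        self.instruction_file = instruction_file

cfg = UIEConfig(name="uie", data_dir="/tmp/data")
```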
2,279,059,787 | https://api.github.com/repos/huggingface/datasets/issues/6867 | https://github.com/huggingface/datasets/issues/6867 | 6,867 | Improve performance of JSON loader | closed | 5 | 2024-05-04T15:04:16 | 2024-05-17T16:22:28 | 2024-05-17T16:22:28 | albertvillanova | [
"enhancement"
] | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that... | false |
2,278,736,221 | https://api.github.com/repos/huggingface/datasets/issues/6866 | https://github.com/huggingface/datasets/issues/6866 | 6,866 | DataFilesNotFoundError for datasets in the open-llm-leaderboard | closed | 3 | 2024-05-04T04:59:00 | 2024-05-14T08:09:56 | 2024-05-14T08:09:56 | jerome-white | [] | ### Describe the bug
When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started see... | false |
2,277,304,832 | https://api.github.com/repos/huggingface/datasets/issues/6865 | https://github.com/huggingface/datasets/issues/6865 | 6,865 | Example on Semantic segmentation contains bug | open | 0 | 2024-05-03T09:40:12 | 2024-05-03T09:40:12 | null | ducha-aiki | [] | ### Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows wrong example with torchvision transforms.
Specifically, as one can see in screenshot below, the object boundaries have weird colors.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59... | false |
2,276,986,981 | https://api.github.com/repos/huggingface/datasets/issues/6864 | https://github.com/huggingface/datasets/issues/6864 | 6,864 | Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub | closed | 1 | 2024-05-03T06:03:30 | 2024-05-06T06:36:42 | 2024-05-06T06:36:41 | vinodrajendran001 | [] | ### Describe the bug
The dataset `rewardsignal/reddit_writing_prompts` is missing in Huggingface Hub.
### Steps to reproduce the bug
```
from datasets import load_dataset
prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]... | false |
2,276,977,534 | https://api.github.com/repos/huggingface/datasets/issues/6863 | https://github.com/huggingface/datasets/issues/6863 | 6,863 | Revert temporary pin huggingface-hub < 0.23.0 | closed | 0 | 2024-05-03T05:53:55 | 2024-05-27T10:14:41 | 2024-05-27T10:14:41 | albertvillanova | [] | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | false |
2,276,763,745 | https://api.github.com/repos/huggingface/datasets/issues/6862 | https://github.com/huggingface/datasets/pull/6862 | 6,862 | Fix load_dataset for data_files with protocols other than HF | closed | 2 | 2024-05-03T01:43:47 | 2024-07-23T14:37:08 | 2024-07-23T14:30:09 | matstrand | [] | Fixes huggingface/datasets/issues/6598
I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue.
MRE:
```
pip install "datasets[s3]"
python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': ... | true |
2,275,988,990 | https://api.github.com/repos/huggingface/datasets/issues/6861 | https://github.com/huggingface/datasets/pull/6861 | 6,861 | Fix CI by temporarily pinning huggingface-hub < 0.23.0 | closed | 2 | 2024-05-02T16:40:04 | 2024-05-02T16:59:42 | 2024-05-02T16:53:42 | albertvillanova | [] | As a hotfix for CI, temporarily pin `huggingface-hub` upper version
Fix #6860.
Revert once root cause is fixed, see:
- https://github.com/huggingface/transformers/issues/30618 | true |
2,275,537,137 | https://api.github.com/repos/huggingface/datasets/issues/6860 | https://github.com/huggingface/datasets/issues/6860 | 6,860 | CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download" | closed | 3 | 2024-05-02T13:24:17 | 2024-05-02T16:53:45 | 2024-05-02T16:53:45 | albertvillanova | [
"bug"
] | CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume... | false |
2,274,996,774 | https://api.github.com/repos/huggingface/datasets/issues/6859 | https://github.com/huggingface/datasets/pull/6859 | 6,859 | Support folder-based datasets with large metadata.jsonl | open | 0 | 2024-05-02T09:07:26 | 2024-05-02T09:07:26 | null | gbenson | [] | I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests.
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="data-for-upload")
Traceback (mos... | true |
2,274,917,185 | https://api.github.com/repos/huggingface/datasets/issues/6858 | https://github.com/huggingface/datasets/issues/6858 | 6,858 | Segmentation fault | closed | 2 | 2024-05-02T08:28:49 | 2024-05-03T08:43:21 | 2024-05-03T08:42:36 | scampion | [] | ### Describe the bug
Across several versions of datasets, I'm no longer able to load that dataset without a segmentation fault.
Several other files are also affected.
### Steps to reproduce the bug
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate
# Install the latest versio... | false |
2,274,849,730 | https://api.github.com/repos/huggingface/datasets/issues/6857 | https://github.com/huggingface/datasets/pull/6857 | 6,857 | Fix line-endings in tests on Windows | closed | 2 | 2024-05-02T07:49:15 | 2024-05-02T11:49:35 | 2024-05-02T11:43:00 | albertvillanova | [] | EDIT:
~~Fix test_delete_from_hub on Windows by passing explicit encoding.~~
Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string), instead of writing a local file and uploading it.
Note that local files created on Windows will have "\r\n" line ending... | true |
2,274,828,933 | https://api.github.com/repos/huggingface/datasets/issues/6856 | https://github.com/huggingface/datasets/issues/6856 | 6,856 | CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character | closed | 1 | 2024-05-02T07:37:03 | 2024-05-02T11:43:01 | 2024-05-02T11:43:01 | albertvillanova | [
"bug"
] | CI fails on Windows for test_delete_from_hub after the merge of:
- #6820
This is weird because the CI was green in the PR branch before merging to main.
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
At index 1 ... | false |
2,274,777,812 | https://api.github.com/repos/huggingface/datasets/issues/6855 | https://github.com/huggingface/datasets/pull/6855 | 6,855 | Fix dataset name for community Hub script-datasets | closed | 6 | 2024-05-02T07:05:44 | 2024-05-03T15:58:00 | 2024-05-03T15:51:57 | albertvillanova | [] | Fix dataset name for community Hub script-datasets by passing explicit dataset_name to HubDatasetModuleFactoryWithScript.
Fix #6854.
CC: @Wauplin | true |
2,274,767,686 | https://api.github.com/repos/huggingface/datasets/issues/6854 | https://github.com/huggingface/datasets/issues/6854 | 6,854 | Wrong example of usage when config name is missing for community script-datasets | closed | 0 | 2024-05-02T06:59:39 | 2024-05-03T15:51:59 | 2024-05-03T15:51:58 | albertvillanova | [
"bug"
] | As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name i... | false |
2,272,570,000 | https://api.github.com/repos/huggingface/datasets/issues/6853 | https://github.com/huggingface/datasets/issues/6853 | 6,853 | Support soft links for load_datasets imagefolder | open | 0 | 2024-04-30T22:14:29 | 2024-04-30T22:14:29 | null | billytcl | [
"enhancement"
] | ### Feature request
`load_dataset` from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated.
### Motivation
Images are coming from a complex variety of sources and we'd like to be able to soft link directly from ... | false |
2,272,465,011 | https://api.github.com/repos/huggingface/datasets/issues/6852 | https://github.com/huggingface/datasets/issues/6852 | 6,852 | Write token isn't working while pushing to datasets | closed | 0 | 2024-04-30T21:18:20 | 2024-05-02T00:55:46 | 2024-05-02T00:55:46 | realzai | [] | ### Describe the bug
<img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc">
As you can see I logged in to my account and the write token is valid.
But I can't upload on my main account and I am getting that ... | false |
2,270,965,503 | https://api.github.com/repos/huggingface/datasets/issues/6851 | https://github.com/huggingface/datasets/issues/6851 | 6,851 | load_dataset('emotion') UnicodeDecodeError | open | 2 | 2024-04-30T09:25:01 | 2024-09-05T03:11:04 | null | L-Block-C | [] | ### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
success
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.... | false |
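Byte `0x8b` at position 1 is a strong hint, independent of this dataset: gzip streams begin with the magic bytes `0x1f 0x8b`, so the file was most likely compressed data decoded as text. A stdlib sketch:

```python
import gzip

# "can't decode byte 0x8b in position 1": gzip data starts with 0x1f 0x8b
payload = gzip.compress(b"i feel great")
print(payload[:2])  # b'\x1f\x8b'

# Decompress first, then decode as text
text = gzip.decompress(payload).decode("utf-8")
```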
2,269,500,624 | https://api.github.com/repos/huggingface/datasets/issues/6850 | https://github.com/huggingface/datasets/issues/6850 | 6,850 | Problem loading voxpopuli dataset | closed | 3 | 2024-04-29T16:46:51 | 2024-05-06T09:25:54 | 2024-05-06T09:25:54 | Namangarg110 | [] | ### Describe the bug
```
Exception has occurred: FileNotFoundError
Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'}
```
Error in logic for link url creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/da... | false |
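The malformed URL in the traceback — a whole dict rendered into the path — is the classic symptom of interpolating a dict instead of one of its values. A sketch of the buggy versus fixed pattern (the variable names are illustrative):

```python
files = {"en": "data/en/asr_train.tsv"}
base = "https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/"

buggy = f"{base}{files}"        # the whole dict lands in the URL
fixed = f"{base}{files['en']}"  # .../resolve/main/data/en/asr_train.tsv
```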
2,268,718,355 | https://api.github.com/repos/huggingface/datasets/issues/6849 | https://github.com/huggingface/datasets/pull/6849 | 6,849 | fix webdataset filename split | closed | 1 | 2024-04-29T10:57:18 | 2024-06-04T12:54:04 | 2024-06-04T12:54:04 | Bowser1704 | [] | use `os.path.splitext` to parse field_name.
fix filenames which contain dots, like:
```
a.b.jpeg
a.b.txt
``` | true |
2,268,622,609 | https://api.github.com/repos/huggingface/datasets/issues/6848 | https://github.com/huggingface/datasets/issues/6848 | 6,848 | Can't Download Common Voice 17.0 hy-AM | open | 3 | 2024-04-29T10:06:02 | 2025-04-01T20:48:09 | null | mheryerznkanyan | [] | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/pyth... | false |
2,268,589,177 | https://api.github.com/repos/huggingface/datasets/issues/6847 | https://github.com/huggingface/datasets/issues/6847 | 6,847 | [Streaming] Only load requested splits without resolving files for the other splits | open | 2 | 2024-04-29T09:49:32 | 2024-05-07T04:43:59 | null | lhoestq | [] | e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split.
This is due to `load_dataset()` resolving the files of all the splits even if only one is needed.
In `dataset-view... | false |
2,267,352,120 | https://api.github.com/repos/huggingface/datasets/issues/6846 | https://github.com/huggingface/datasets/issues/6846 | 6,846 | Unimaginable super slow iteration | closed | 1 | 2024-04-28T05:24:14 | 2024-05-06T08:30:03 | 2024-05-06T08:30:03 | rangehow | [] | ### Describe the bug
Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset... Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
n... | false |
2,265,876,551 | https://api.github.com/repos/huggingface/datasets/issues/6845 | https://github.com/huggingface/datasets/issues/6845 | 6,845 | load_dataset doesn't support list column | open | 1 | 2024-04-26T14:11:44 | 2024-05-15T12:06:59 | null | arthasking123 | [] | ### Describe the bug
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
got exception:
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
... | false |
2,265,870,546 | https://api.github.com/repos/huggingface/datasets/issues/6844 | https://github.com/huggingface/datasets/pull/6844 | 6,844 | Retry on HF Hub error when streaming | closed | 2 | 2024-04-26T14:09:04 | 2024-04-26T15:37:42 | 2024-04-26T15:37:42 | mariosasko | [] | Retry on the `huggingface_hub`'s `HfHubHTTPError` in the streaming mode.
Fix #6843 | true |
2,265,432,897 | https://api.github.com/repos/huggingface/datasets/issues/6843 | https://github.com/huggingface/datasets/issues/6843 | 6,843 | IterableDataset raises exception instead of retrying | open | 7 | 2024-04-26T10:00:43 | 2024-10-28T14:57:07 | null | bauwenst | [] | ### Describe the bug
In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Si... | false |
2,264,692,159 | https://api.github.com/repos/huggingface/datasets/issues/6842 | https://github.com/huggingface/datasets/issues/6842 | 6,842 | Datasets with files with colon : in filenames cannot be used on Windows | open | 0 | 2024-04-26T00:14:16 | 2024-04-26T00:14:16 | null | jacobjennings | [] | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows due to the fact that windows does not allow colons ":" in filenames. These should be converted into alternative strings.
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCo... | false |
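The suggested conversion could look like the sketch below — a hypothetical sanitizer replacing characters Windows forbids in filenames, not the datasets implementation:

```python
import re

def sanitize_for_windows(filename):
    # Replace characters Windows disallows in filenames (hypothetical helper)
    return re.sub(r'[<>:"|?*]', "_", filename)

print(sanitize_for_windows("audio:chunk:001.flac"))  # audio_chunk_001.flac
```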
2,264,687,683 | https://api.github.com/repos/huggingface/datasets/issues/6841 | https://github.com/huggingface/datasets/issues/6841 | 6,841 | Unable to load wiki_auto_asset_turk from GEM | closed | 8 | 2024-04-26T00:08:47 | 2024-05-29T13:54:03 | 2024-04-26T16:12:29 | abhinavsethy | [] | ### Describe the bug
I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in an os.path.join call
... | false |
2,264,604,766 | https://api.github.com/repos/huggingface/datasets/issues/6840 | https://github.com/huggingface/datasets/issues/6840 | 6,840 | Delete uploaded files from the UI | open | 1 | 2024-04-25T22:33:57 | 2025-01-21T09:44:22 | null | saicharan2804 | [
"enhancement"
] | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | false |
2,263,761,062 | https://api.github.com/repos/huggingface/datasets/issues/6839 | https://github.com/huggingface/datasets/pull/6839 | 6,839 | Remove token arg from CLI examples | closed | 2 | 2024-04-25T14:36:58 | 2024-04-26T17:03:51 | 2024-04-26T16:57:40 | albertvillanova | [] | Remove token arg from CLI examples.
Fix #6838.
CC: @Wauplin | true |
2,263,674,843 | https://api.github.com/repos/huggingface/datasets/issues/6838 | https://github.com/huggingface/datasets/issues/6838 | 6,838 | Remove token arg from CLI examples | closed | 0 | 2024-04-25T14:00:38 | 2024-04-26T16:57:41 | 2024-04-26T16:57:41 | albertvillanova | [] | As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603
> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) | false |
2,263,273,983 | https://api.github.com/repos/huggingface/datasets/issues/6837 | https://github.com/huggingface/datasets/issues/6837 | 6,837 | Cannot use cached dataset without Internet connection (or when servers are down) | open | 6 | 2024-04-25T10:48:20 | 2025-01-25T16:36:41 | null | DionisMuzenitov | [] | ### Describe the bug
I want to be able to use cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues).
The reason I can't use it:
the `data_files` argument of the `datasets.load_dataset()` function gets its updates from the serve... | false |
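Related to this request: `datasets` already honors the `HF_DATASETS_OFFLINE` environment variable, which makes it rely on the local cache without contacting the Hub. It has to be set before `datasets` is imported:

```python
import os

# Rely on the local cache only; set this before importing `datasets`
os.environ["HF_DATASETS_OFFLINE"] = "1"
```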
2,262,249,919 | https://api.github.com/repos/huggingface/datasets/issues/6836 | https://github.com/huggingface/datasets/issues/6836 | 6,836 | ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0 | open | 3 | 2024-04-24T21:52:35 | 2024-05-14T04:08:19 | null | ebsmothers | [] | ### Describe the bug
Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.
Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.
### Steps to re... | false |
2,261,079,263 | https://api.github.com/repos/huggingface/datasets/issues/6835 | https://github.com/huggingface/datasets/pull/6835 | 6,835 | Support pyarrow LargeListType | closed | 3 | 2024-04-24T11:34:24 | 2024-08-12T14:43:47 | 2024-08-12T14:43:47 | Modexus | [] | Fixes #6834 | true |
2,261,078,104 | https://api.github.com/repos/huggingface/datasets/issues/6834 | https://github.com/huggingface/datasets/issues/6834 | 6,834 | largelisttype not supported (.from_polars()) | closed | 0 | 2024-04-24T11:33:43 | 2024-08-12T14:43:46 | 2024-08-12T14:43:46 | Modexus | [] | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_pola... | false |
2,259,731,274 | https://api.github.com/repos/huggingface/datasets/issues/6833 | https://github.com/huggingface/datasets/issues/6833 | 6,833 | Super slow iteration with trivial custom transform | open | 7 | 2024-04-23T20:40:59 | 2024-10-08T15:41:18 | null | xslittlegrass | [] | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"... | false |
2,258,761,447 | https://api.github.com/repos/huggingface/datasets/issues/6832 | https://github.com/huggingface/datasets/pull/6832 | 6,832 | Support downloading specific splits in `load_dataset` | open | 5 | 2024-04-23T12:32:27 | 2025-07-28T18:30:25 | null | mariosasko | [] | This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` need to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, t... | true |
2,258,537,405 | https://api.github.com/repos/huggingface/datasets/issues/6831 | https://github.com/huggingface/datasets/pull/6831 | 6,831 | Add docs about the CLI | closed | 3 | 2024-04-23T10:41:03 | 2024-04-26T16:51:09 | 2024-04-25T10:44:10 | albertvillanova | [] | Add docs about the CLI.
Close #6830.
CC: @severo | true |
2,258,433,178 | https://api.github.com/repos/huggingface/datasets/issues/6830 | https://github.com/huggingface/datasets/issues/6830 | 6,830 | Add a doc page for the convert_to_parquet CLI | closed | 0 | 2024-04-23T09:49:04 | 2024-04-25T10:44:11 | 2024-04-25T10:44:11 | severo | [
"documentation"
] | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | false |
2,258,424,577 | https://api.github.com/repos/huggingface/datasets/issues/6829 | https://github.com/huggingface/datasets/issues/6829 | 6,829 | Load and save from/to disk no longer accept pathlib.Path | open | 0 | 2024-04-23T09:44:45 | 2024-04-23T09:44:46 | null | albertvillanova | [
"bug"
] | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to... | false |
2,258,420,421 | https://api.github.com/repos/huggingface/datasets/issues/6828 | https://github.com/huggingface/datasets/pull/6828 | 6,828 | Support PathLike input in save_to_disk / load_from_disk | open | 1 | 2024-04-23T09:42:38 | 2024-04-23T11:05:52 | null | lhoestq | [] | null | true |
2,254,011,833 | https://api.github.com/repos/huggingface/datasets/issues/6827 | https://github.com/huggingface/datasets/issues/6827 | 6,827 | Loading a remote dataset fails in the last release (v2.19.0) | open | 0 | 2024-04-19T21:11:58 | 2024-04-19T21:13:42 | null | zrthxn | [] | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test... | false |
2,252,445,242 | https://api.github.com/repos/huggingface/datasets/issues/6826 | https://github.com/huggingface/datasets/pull/6826 | 6,826 | Set dev version | closed | 2 | 2024-04-19T08:51:42 | 2024-04-19T09:05:25 | 2024-04-19T08:52:14 | albertvillanova | [] | null | true |
2,252,404,599 | https://api.github.com/repos/huggingface/datasets/issues/6825 | https://github.com/huggingface/datasets/pull/6825 | 6,825 | Release: 2.19.0 | closed | 2 | 2024-04-19T08:29:02 | 2024-05-04T12:23:26 | 2024-04-19T08:44:57 | albertvillanova | [] | null | true |
2,251,076,197 | https://api.github.com/repos/huggingface/datasets/issues/6824 | https://github.com/huggingface/datasets/issues/6824 | 6,824 | Winogrande does not seem to be compatible with datasets version of 1.18.0 | closed | 2 | 2024-04-18T16:11:04 | 2024-04-19T09:53:15 | 2024-04-19T09:52:33 | spliew | [] | ### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I do not have such an issue in the 1.17.0 version.
```Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line... | false |
2,250,775,569 | https://api.github.com/repos/huggingface/datasets/issues/6823 | https://github.com/huggingface/datasets/issues/6823 | 6,823 | Loading problems of Datasets with a single shard | open | 2 | 2024-04-18T13:59:00 | 2024-11-25T05:40:09 | null | andjoer | [] | ### Describe the bug
When a dataset saved on disk has a single shard, it is not loaded the same way as when it is saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. All works well when the range of the loop is 10000 bu... | false |
2,250,316,258 | https://api.github.com/repos/huggingface/datasets/issues/6822 | https://github.com/huggingface/datasets/pull/6822 | 6,822 | Fix parquet export infos | closed | 2 | 2024-04-18T10:21:41 | 2024-04-18T11:15:41 | 2024-04-18T11:09:13 | lhoestq | [] | Don't use the parquet export infos when USE_PARQUET_EXPORT is False.
Otherwise the `datasets-server` might reuse erroneous data when re-running a job.
This follows https://github.com/huggingface/datasets/pull/6714 | true |
2,248,471,673 | https://api.github.com/repos/huggingface/datasets/issues/6820 | https://github.com/huggingface/datasets/pull/6820 | 6,820 | Allow deleting a subset/config from a no-script dataset | closed | 6 | 2024-04-17T14:41:12 | 2024-05-02T07:31:03 | 2024-04-30T09:44:24 | albertvillanova | [] | TODO:
- [x] Add docs
- [x] Delete token arg from CLI example
- See: #6839
Close #6810. | true |
2,248,043,797 | https://api.github.com/repos/huggingface/datasets/issues/6819 | https://github.com/huggingface/datasets/issues/6819 | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | open | 0 | 2024-04-17T11:19:47 | 2024-04-17T11:19:47 | null | severo | [
"enhancement"
] | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (support... | false |
2,246,578,480 | https://api.github.com/repos/huggingface/datasets/issues/6817 | https://github.com/huggingface/datasets/pull/6817 | 6,817 | Support indexable objects in `Dataset.__getitem__` | closed | 2 | 2024-04-16T17:41:27 | 2024-04-16T18:27:44 | 2024-04-16T18:17:29 | mariosasko | [] | As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__` such as `np.int64` in `Dataset.__getitem__`. | true |
2,246,264,911 | https://api.github.com/repos/huggingface/datasets/issues/6816 | https://github.com/huggingface/datasets/pull/6816 | 6,816 | Improve typing of Dataset.search, matching definition | closed | 3 | 2024-04-16T14:53:39 | 2024-04-16T15:54:10 | 2024-04-16T15:54:10 | Dref360 | [] | Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays.
The definition in `SearchResult` is a `List[int]`, so this PR now matches the expected type.
The previous behavior is a bit annoying as `Dataset.__getitem__` doesn't support `numpy.int64` which forced me to convert `indices` to... | true |
2,246,197,070 | https://api.github.com/repos/huggingface/datasets/issues/6815 | https://github.com/huggingface/datasets/pull/6815 | 6,815 | Remove `os.path.relpath` in `resolve_patterns` | closed | 2 | 2024-04-16T14:23:13 | 2024-04-16T16:06:48 | 2024-04-16T15:58:22 | mariosasko | [] | ... to save a few seconds when resolving repos with many data files. | true |
2,245,857,902 | https://api.github.com/repos/huggingface/datasets/issues/6814 | https://github.com/huggingface/datasets/issues/6814 | 6,814 | `map` with `num_proc` > 1 leads to OOM | open | 1 | 2024-04-16T11:56:03 | 2024-04-19T11:53:41 | null | bhavitvyamalik | [] | ### Describe the bug
When running `map` on a parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps in order to prevent this?
### Steps to reproduce the bug
```
ds = load_dataset("parquet", data... | false |
2,245,626,870 | https://api.github.com/repos/huggingface/datasets/issues/6813 | https://github.com/huggingface/datasets/pull/6813 | 6,813 | Add Dataset.take and Dataset.skip | closed | 2 | 2024-04-16T09:53:42 | 2024-04-16T14:12:14 | 2024-04-16T14:06:07 | lhoestq | [] | ...to be aligned with IterableDataset.take and IterableDataset.skip | true |
2,244,898,824 | https://api.github.com/repos/huggingface/datasets/issues/6812 | https://github.com/huggingface/datasets/pull/6812 | 6,812 | Run CI | closed | 1 | 2024-04-16T01:12:36 | 2024-04-16T01:14:16 | 2024-04-16T01:12:41 | charliermarsh | [] | null | true |
2,243,656,096 | https://api.github.com/repos/huggingface/datasets/issues/6811 | https://github.com/huggingface/datasets/pull/6811 | 6,811 | add allow_primitive_to_str and allow_decimal_to_str instead of allow_number_to_str | closed | 6 | 2024-04-15T13:14:38 | 2024-07-03T14:59:42 | 2024-04-16T17:03:17 | Modexus | [] | Fix #6805 | true |
2,242,968,745 | https://api.github.com/repos/huggingface/datasets/issues/6810 | https://github.com/huggingface/datasets/issues/6810 | 6,810 | Allow deleting a subset/config from a no-script dataset | closed | 3 | 2024-04-15T07:53:26 | 2025-01-11T18:40:40 | 2024-04-30T09:44:25 | albertvillanova | [
"enhancement"
] | As proposed by @BramVanroy, it would be neat to have this functionality through the API. | false |
2,242,956,297 | https://api.github.com/repos/huggingface/datasets/issues/6809 | https://github.com/huggingface/datasets/pull/6809 | 6,809 | Make convert_to_parquet CLI command create script branch | closed | 3 | 2024-04-15T07:47:26 | 2024-04-17T08:44:26 | 2024-04-17T08:38:18 | albertvillanova | [] | Make convert_to_parquet CLI command create a "script" branch and keep the script file on it.
This PR proposes the simplest UX approach: whenever `--revision` is not explicitly passed (i.e., when the script is in the main branch), try to create a "script" branch from the "main" branch; if the "script" branch exists a... | true |
2,242,843,611 | https://api.github.com/repos/huggingface/datasets/issues/6808 | https://github.com/huggingface/datasets/issues/6808 | 6,808 | Make convert_to_parquet CLI command create script branch | closed | 0 | 2024-04-15T06:46:07 | 2024-04-17T08:38:19 | 2024-04-17T08:38:19 | albertvillanova | [
"enhancement"
] | As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168
> When providing support, we sometimes suggest that users store their script in a script branch. What do you th... | false |
2,239,435,074 | https://api.github.com/repos/huggingface/datasets/issues/6806 | https://github.com/huggingface/datasets/pull/6806 | 6,806 | Fix hf-internal-testing/dataset_with_script commit SHA in CI test | closed | 2 | 2024-04-12T08:47:50 | 2024-04-12T09:08:23 | 2024-04-12T09:02:12 | albertvillanova | [] | Fix test using latest commit SHA in hf-internal-testing/dataset_with_script dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script/commits/refs%2Fconvert%2Fparquet
Fix #6796. | true |
2,239,034,951 | https://api.github.com/repos/huggingface/datasets/issues/6805 | https://github.com/huggingface/datasets/issues/6805 | 6,805 | Batched mapping of existing string column casts boolean to string | closed | 7 | 2024-04-12T04:21:41 | 2024-07-03T15:00:07 | 2024-07-03T15:00:07 | starmpcc | [] | ### Describe the bug
Let the dataset contain a column named 'a', which is of the string type.
If 'a' is converted to a boolean using batched mapping, the mapper automatically casts the boolean to a string (e.g., True -> 'true').
It only happens when the original column and the mapped column name are identical.
Th... | false |
2,238,035,124 | https://api.github.com/repos/huggingface/datasets/issues/6804 | https://github.com/huggingface/datasets/pull/6804 | 6,804 | Fix --repo-type order in cli upload docs | closed | 2 | 2024-04-11T15:39:09 | 2024-04-11T16:24:57 | 2024-04-11T16:18:47 | lhoestq | [] | null | true |
2,237,933,090 | https://api.github.com/repos/huggingface/datasets/issues/6803 | https://github.com/huggingface/datasets/pull/6803 | 6,803 | #6791 Improve type checking around FAISS | closed | 3 | 2024-04-11T14:54:30 | 2024-04-11T15:44:09 | 2024-04-11T15:38:04 | Dref360 | [] | Fixes #6791
Small PR to raise a better error when a dataset is not embedded properly. | true |
2,237,365,489 | https://api.github.com/repos/huggingface/datasets/issues/6802 | https://github.com/huggingface/datasets/pull/6802 | 6,802 | Fix typo in docs (upload CLI) | closed | 4 | 2024-04-11T10:05:05 | 2024-04-11T16:19:00 | 2024-04-11T13:19:43 | Wauplin | [] | Related to https://huggingface.slack.com/archives/C04RG8YRVB8/p1712643948574129 (interal)
Positional args must be placed before optional args.
Feel free to merge whenever it's ready. | true |
2,236,911,556 | https://api.github.com/repos/huggingface/datasets/issues/6801 | https://github.com/huggingface/datasets/issues/6801 | 6,801 | got fileNotFound | closed | 2 | 2024-04-11T04:57:41 | 2024-04-12T16:47:43 | 2024-04-12T16:47:43 | laoniandisko | [] | ### Describe the bug
When I use load_dataset to load the nyanko7/danbooru2023 dataset, the cache is read in the form of a symlink. There may be a problem with the arrow_dataset initialization process and I get FileNotFoundError: [Errno 2] No such file or directory: '2945000.jpg'
### Steps to reproduce the bug
#code... | false |
2,236,431,288 | https://api.github.com/repos/huggingface/datasets/issues/6800 | https://github.com/huggingface/datasets/issues/6800 | 6,800 | High overhead when loading lots of subsets from the same dataset | open | 6 | 2024-04-10T21:08:57 | 2024-04-24T13:48:05 | null | loicmagne | [] | ### Describe the bug
I have a multilingual dataset that contains a lot of subsets. Each subset corresponds to a pair of languages, you can see here an example with 250 subsets: [https://hf.co/datasets/loicmagne/open-subtitles-250-bitext-mining](). As part of the MTEB benchmark, we may need to load all the subsets of t... | false |
2,236,124,531 | https://api.github.com/repos/huggingface/datasets/issues/6799 | https://github.com/huggingface/datasets/pull/6799 | 6,799 | fix `DatasetBuilder._split_generators` incomplete type annotation | closed | 3 | 2024-04-10T17:46:08 | 2024-04-11T15:41:06 | 2024-04-11T15:34:58 | JonasLoos | [] | solve #6798:
add missing `StreamingDownloadManager` type annotation to the `dl_manager` argument of the `DatasetBuilder._split_generators` function | true |
2,235,768,891 | https://api.github.com/repos/huggingface/datasets/issues/6798 | https://github.com/huggingface/datasets/issues/6798 | 6,798 | `DatasetBuilder._split_generators` incomplete type annotation | closed | 3 | 2024-04-10T14:38:50 | 2024-04-11T15:34:59 | 2024-04-11T15:34:59 | JonasLoos | [] | ### Describe the bug
The [`DatasetBuilder._split_generators`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/builder.py#L1449) function has currently the following signature:
```python
class DatasetBuilder:
def _split_generators(self, dl_manager: DownloadMan... | false |
2,234,890,097 | https://api.github.com/repos/huggingface/datasets/issues/6797 | https://github.com/huggingface/datasets/pull/6797 | 6,797 | Fix CI test_load_dataset_distributed_with_script | closed | 2 | 2024-04-10T06:57:48 | 2024-04-10T08:25:00 | 2024-04-10T08:18:01 | albertvillanova | [] | Fix #6796. | true |
2,234,887,618 | https://api.github.com/repos/huggingface/datasets/issues/6796 | https://github.com/huggingface/datasets/issues/6796 | 6,796 | CI is broken due to hf-internal-testing/dataset_with_script | closed | 4 | 2024-04-10T06:56:02 | 2024-04-12T09:02:13 | 2024-04-12T09:02:13 | albertvillanova | [
"bug"
] | CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127
```
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_scr... | false |
2,233,618,719 | https://api.github.com/repos/huggingface/datasets/issues/6795 | https://github.com/huggingface/datasets/pull/6795 | 6,795 | Add CLI function to convert script-dataset to Parquet | closed | 3 | 2024-04-09T14:45:12 | 2024-04-17T08:41:23 | 2024-04-12T15:27:04 | albertvillanova | [] | Close #6690. | true |
2,233,202,088 | https://api.github.com/repos/huggingface/datasets/issues/6794 | https://github.com/huggingface/datasets/pull/6794 | 6,794 | Multithreaded downloads | closed | 4 | 2024-04-09T11:13:19 | 2024-04-15T21:24:13 | 2024-04-15T21:18:08 | lhoestq | [] | ...for faster dataset download when there are many many small files (e.g. imagefolder, audiofolder)
### Benchmark
for example on [lhoestq/tmp-images-writer_batch_size](https://hf.co/datasets/lhoestq/tmp-images-writer_batch_size) (128 images)
| | duration of the download step in `load_dataset()` |
|--| ----... | true |
2,231,400,200 | https://api.github.com/repos/huggingface/datasets/issues/6793 | https://github.com/huggingface/datasets/issues/6793 | 6,793 | Loading just one particular split is not possible for imagenet-1k | open | 2 | 2024-04-08T14:39:14 | 2025-06-23T09:55:08 | null | PaulPSta | [] | ### Describe the bug
I'd expect the following code to download just the validation split but instead I get all data on my disk (train, test and validation splits)
`
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
`
Is it expected to work li... | false |
2,231,318,682 | https://api.github.com/repos/huggingface/datasets/issues/6792 | https://github.com/huggingface/datasets/pull/6792 | 6,792 | Fix cache conflict in `_check_legacy_cache2` | closed | 2 | 2024-04-08T14:05:42 | 2024-04-09T11:34:08 | 2024-04-09T11:27:58 | lhoestq | [] | It was reloading from the wrong cache dir because of a bug in `_check_legacy_cache2`. This function should not trigger if there are config_kwars like `sample_by=`
fix https://github.com/huggingface/datasets/issues/6758 | true |
2,230,102,332 | https://api.github.com/repos/huggingface/datasets/issues/6791 | https://github.com/huggingface/datasets/issues/6791 | 6,791 | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | closed | 3 | 2024-04-08T01:57:03 | 2024-04-11T15:38:05 | 2024-04-11T15:38:05 | NeuralFlux | [] | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 Th... | false |
2,229,915,236 | https://api.github.com/repos/huggingface/datasets/issues/6790 | https://github.com/huggingface/datasets/issues/6790 | 6,790 | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | open | 3 | 2024-04-07T19:25:39 | 2025-06-12T07:31:44 | null | lasuomela | [] | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggi... | false |
2,229,527,001 | https://api.github.com/repos/huggingface/datasets/issues/6789 | https://github.com/huggingface/datasets/issues/6789 | 6,789 | Issue with map | open | 8 | 2024-04-07T02:52:06 | 2024-07-23T12:41:38 | null | Nsohko | [] | ### Describe the bug
Map has been taking extremely long to preprocess my data.
It seems to process 1000 examples (which it does really fast in about 10 seconds), then it hangs for a good 1-2 minutes, before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space for some reaso... | false |