| id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user_login (string) | labels (list) | body (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,454,418,130 | https://api.github.com/repos/huggingface/datasets/issues/7094 | https://github.com/huggingface/datasets/pull/7094 | 7,094 | Add Arabic Docs to Datasets | open | 0 | 2024-08-07T21:53:06 | 2024-08-07T21:53:06 | null | AhmedAlmaghz | [] | Translate Docs into Arabic. Issue number: #7093
[Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
[English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx)
@stevhliu | true |
2,454,413,074 | https://api.github.com/repos/huggingface/datasets/issues/7093 | https://github.com/huggingface/datasets/issues/7093 | 7,093 | Add Arabic Docs to datasets | open | 0 | 2024-08-07T21:48:05 | 2024-08-07T21:48:05 | null | AhmedAlmaghz | [
"enhancement"
] | ### Feature request
Add Arabic Docs to datasets
[Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
### Motivation
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
### Your contribution
@AhmedAlmaghz
https://github.com/AhmedAlma... | false |
2,451,393,658 | https://api.github.com/repos/huggingface/datasets/issues/7092 | https://github.com/huggingface/datasets/issues/7092 | 7,092 | load_dataset with multiple jsonlines files interprets datastructure too early | open | 5 | 2024-08-06T17:42:55 | 2024-08-08T16:35:01 | null | Vipitis | [] | ### Describe the bug
likely related to #6460
using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data.
### Steps to reproduce the bug
real world example:
data is available in this [PR-bra... | false |
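Based on the description above, a minimal reproduction can be sketched; the file names, contents, and the all-null column below are illustrative assumptions rather than the data from the linked PR branch:

```python
# Sketch of the reported setup: two JSON Lines files in one directory, where the
# first file's "extra" column is entirely null (names and values are made up).
import json
import os

from datasets import load_dataset

os.makedirs("data", exist_ok=True)
with open("data/a.jsonl", "w") as f:
    f.write(json.dumps({"text": "hi", "extra": None}) + "\n")
with open("data/b.jsonl", "w") as f:
    f.write(json.dumps({"text": "bye", "extra": "value"}) + "\n")

# Per the report, this can fail because the column type is inferred from the
# first file alone (where "extra" is always null).
ds = load_dataset("json", data_dir="data")
```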
2,449,699,490 | https://api.github.com/repos/huggingface/datasets/issues/7090 | https://github.com/huggingface/datasets/issues/7090 | 7,090 | The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name | open | 0 | 2024-08-06T00:35:05 | 2024-08-06T00:35:05 | null | yurivict | [] | ### Describe the bug
Tests should use the same python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11
Failure:
```
if err_filename is not None:
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFo... | false |
2,449,479,500 | https://api.github.com/repos/huggingface/datasets/issues/7089 | https://github.com/huggingface/datasets/issues/7089 | 7,089 | Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped | open | 0 | 2024-08-05T21:05:11 | 2024-08-05T21:05:11 | null | yurivict | [] | ### Describe the bug
see the subject
### Steps to reproduce the bug
regular tests
### Expected behavior
n/a
### Environment info
version 2.20.0 | false |
2,447,383,940 | https://api.github.com/repos/huggingface/datasets/issues/7088 | https://github.com/huggingface/datasets/issues/7088 | 7,088 | Disable warning when using with_format format on tensors | open | 0 | 2024-08-05T00:45:50 | 2024-08-05T00:45:50 | null | Haislich | [
"enhancement"
] | ### Feature request
If we write this code:
```python
"""Get data and define datasets."""
from enum import StrEnum
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms
class Split(StrEnum):
"""Describes what type of split to use in the dataloa... | false |
2,447,158,643 | https://api.github.com/repos/huggingface/datasets/issues/7087 | https://github.com/huggingface/datasets/issues/7087 | 7,087 | Unable to create dataset card for Lushootseed language | closed | 2 | 2024-08-04T14:27:04 | 2024-08-06T06:59:23 | 2024-08-06T06:59:22 | vaishnavsudarshan | [
"enhancement"
] | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering la... | false |
2,445,516,829 | https://api.github.com/repos/huggingface/datasets/issues/7086 | https://github.com/huggingface/datasets/issues/7086 | 7,086 | load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors | open | 1 | 2024-08-02T18:12:23 | 2025-06-16T18:43:29 | null | tginart | [] | ### Describe the bug
I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.
### Steps to reproduce the bug
1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset... | false |
2,440,008,618 | https://api.github.com/repos/huggingface/datasets/issues/7085 | https://github.com/huggingface/datasets/issues/7085 | 7,085 | [Regression] IterableDataset is broken on 2.20.0 | closed | 3 | 2024-07-31T13:01:59 | 2024-08-22T14:49:37 | 2024-08-22T14:49:07 | AjayP13 | [] | ### Describe the bug
In the latest version of datasets there is a major regression: after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable Itera... | false |
2,439,519,534 | https://api.github.com/repos/huggingface/datasets/issues/7084 | https://github.com/huggingface/datasets/issues/7084 | 7,084 | More easily support streaming local files | open | 0 | 2024-07-31T09:03:15 | 2024-07-31T09:05:58 | null | fschlatt | [
"enhancement"
] | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and currently trying to stream the d... | false |
2,439,518,466 | https://api.github.com/repos/huggingface/datasets/issues/7083 | https://github.com/huggingface/datasets/pull/7083 | 7,083 | fix streaming from arrow files | closed | 0 | 2024-07-31T09:02:42 | 2024-08-30T15:17:03 | 2024-08-30T15:17:03 | fschlatt | [] | null | true |
2,437,354,975 | https://api.github.com/repos/huggingface/datasets/issues/7082 | https://github.com/huggingface/datasets/pull/7082 | 7,082 | Support HTTP authentication in non-streaming mode | closed | 2 | 2024-07-30T09:25:49 | 2024-08-08T08:29:55 | 2024-08-08T08:24:06 | albertvillanova | [] | Support HTTP authentication in non-streaming mode, by supporting HTTP storage_options in non-streaming mode.
- Note that currently, HTTP authentication is supported only in streaming mode.
For example, this is necessary if a remote HTTP host requires authentication to download the data. | true |
2,437,059,657 | https://api.github.com/repos/huggingface/datasets/issues/7081 | https://github.com/huggingface/datasets/pull/7081 | 7,081 | Set load_from_disk path type as PathLike | closed | 2 | 2024-07-30T07:00:38 | 2024-07-30T08:30:37 | 2024-07-30T08:21:50 | albertvillanova | [] | Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`. | true |
2,434,275,664 | https://api.github.com/repos/huggingface/datasets/issues/7080 | https://github.com/huggingface/datasets/issues/7080 | 7,080 | Generating train split takes a long time | open | 2 | 2024-07-29T01:42:43 | 2024-10-02T15:31:22 | null | alexanderswerdlow | [] | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebD... | false |
2,433,363,298 | https://api.github.com/repos/huggingface/datasets/issues/7079 | https://github.com/huggingface/datasets/issues/7079 | 7,079 | HfHubHTTPError: 500 Server Error: Internal Server Error for url: | closed | 17 | 2024-07-27T08:21:03 | 2024-09-20T13:26:25 | 2024-07-27T19:52:30 | neoneye | [] | ### Describe the bug
Newly uploaded datasets, since yesterday, yield an error.
Old datasets work fine.
It seems like the datasets API server returns a 500.
I'm getting the same error, when I invoke `load_dataset` with my dataset.
Long discussion about it here, but I'm not sure anyone from huggingface have s... | false |
2,433,270,271 | https://api.github.com/repos/huggingface/datasets/issues/7078 | https://github.com/huggingface/datasets/pull/7078 | 7,078 | Fix CI test_convert_to_parquet | closed | 2 | 2024-07-27T05:32:40 | 2024-07-27T05:50:57 | 2024-07-27T05:44:32 | albertvillanova | [] | Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix:
- #7074 | true |
2,432,345,489 | https://api.github.com/repos/huggingface/datasets/issues/7077 | https://github.com/huggingface/datasets/issues/7077 | 7,077 | column_names ignored by load_dataset() when loading CSV file | open | 1 | 2024-07-26T14:18:04 | 2024-07-30T07:52:26 | null | luismsgomes | [] | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg.
### Expected behavior
The resulting da... | false |
2,432,275,393 | https://api.github.com/repos/huggingface/datasets/issues/7076 | https://github.com/huggingface/datasets/pull/7076 | 7,076 | 🧪 Do not mock create_commit | closed | 1 | 2024-07-26T13:44:42 | 2024-07-27T05:48:17 | 2024-07-27T05:48:17 | coyotte508 | [] | null | true |
2,432,027,412 | https://api.github.com/repos/huggingface/datasets/issues/7075 | https://github.com/huggingface/datasets/pull/7075 | 7,075 | Update required soxr version from pre-release to release | closed | 2 | 2024-07-26T11:24:35 | 2024-07-26T11:46:52 | 2024-07-26T11:40:49 | albertvillanova | [] | Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0 | true |
2,431,772,703 | https://api.github.com/repos/huggingface/datasets/issues/7074 | https://github.com/huggingface/datasets/pull/7074 | 7,074 | Fix CI by temporarily marking test_convert_to_parquet as expected to fail | closed | 2 | 2024-07-26T09:03:33 | 2024-07-26T09:23:33 | 2024-07-26T09:16:12 | albertvillanova | [] | As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail.
Fix #7073.
Revert once root cause is fixed. | true |
2,431,706,568 | https://api.github.com/repos/huggingface/datasets/issues/7073 | https://github.com/huggingface/datasets/issues/7073 | 7,073 | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | closed | 9 | 2024-07-26T08:27:41 | 2024-07-27T05:48:02 | 2024-07-26T09:16:13 | albertvillanova | [] | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision N... | false |
2,430,577,916 | https://api.github.com/repos/huggingface/datasets/issues/7072 | https://github.com/huggingface/datasets/issues/7072 | 7,072 | nm | closed | 0 | 2024-07-25T17:03:24 | 2024-07-25T20:36:11 | 2024-07-25T20:36:11 | brettdavies | [] | null | false |
2,430,313,011 | https://api.github.com/repos/huggingface/datasets/issues/7071 | https://github.com/huggingface/datasets/issues/7071 | 7,071 | Filter hangs | open | 0 | 2024-07-25T15:29:05 | 2024-07-25T15:36:59 | null | lucienwalewski | [] | ### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I hav... | false |
2,430,285,235 | https://api.github.com/repos/huggingface/datasets/issues/7070 | https://github.com/huggingface/datasets/issues/7070 | 7,070 | how set_transform affects batch size? | open | 0 | 2024-07-25T15:19:34 | 2024-07-25T15:19:34 | null | VafaKnm | [] | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_feat... | false |
2,429,281,339 | https://api.github.com/repos/huggingface/datasets/issues/7069 | https://github.com/huggingface/datasets/pull/7069 | 7,069 | Fix push_to_hub by not calling create_branch if PR branch | closed | 8 | 2024-07-25T07:50:04 | 2024-07-31T07:10:07 | 2024-07-30T10:51:01 | albertvillanova | [] | Fix push_to_hub by not calling create_branch if PR branch (e.g. `refs/pr/1`).
Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`).
EDIT:
~~Fix push_to_hub by not calling create_branch if branch exists.~~
Note that currently create_branch raises a ... | true |
2,426,657,434 | https://api.github.com/repos/huggingface/datasets/issues/7068 | https://github.com/huggingface/datasets/pull/7068 | 7,068 | Fix prepare_single_hop_path_and_storage_options | closed | 2 | 2024-07-24T05:52:34 | 2024-07-29T07:02:07 | 2024-07-29T06:56:15 | albertvillanova | [] | Fix `_prepare_single_hop_path_and_storage_options`:
- Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs
- Do not overwrite passed `storage_options` nested values:
- Before, when passed
```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```,
... | true |
2,425,460,168 | https://api.github.com/repos/huggingface/datasets/issues/7067 | https://github.com/huggingface/datasets/issues/7067 | 7,067 | Convert_to_parquet fails for datasets with multiple configs | closed | 3 | 2024-07-23T15:09:33 | 2024-07-30T10:51:02 | 2024-07-30T10:51:02 | HuangZhen02 | [] | If the dataset has multiple configs, the `datasets-cli convert_to_parquet` command (used to avoid data-viewer issues caused by loading scripts) only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
... | false |
2,425,125,160 | https://api.github.com/repos/huggingface/datasets/issues/7066 | https://github.com/huggingface/datasets/issues/7066 | 7,066 | One subset per file in repo ? | open | 1 | 2024-07-23T12:43:59 | 2025-06-26T08:24:50 | null | lhoestq | [] | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jso... | false |
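Until per-file subsets are supported natively, one rough workaround sketch is to map names to files explicitly via `data_files`; the subset and file names below are hypothetical:

```python
# Load each file as its own subset by passing an explicit mapping
# (subset and file names are hypothetical).
from datasets import load_dataset

subset_files = {
    "subset_a": "many_subsets_dataset/subset_a.jsonl",
    "subset_b": "many_subsets_dataset/subset_b.jsonl",
}
subsets = {
    name: load_dataset("json", data_files=path, split="train")
    for name, path in subset_files.items()
}
```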
2,424,734,953 | https://api.github.com/repos/huggingface/datasets/issues/7065 | https://github.com/huggingface/datasets/issues/7065 | 7,065 | Cannot get item after loading from disk and then converting to iterable. | open | 0 | 2024-07-23T09:37:56 | 2024-07-23T09:37:56 | null | happyTonakai | [] | ### Describe the bug
The dataset generated from local file works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Au... | false |
2,424,613,104 | https://api.github.com/repos/huggingface/datasets/issues/7064 | https://github.com/huggingface/datasets/pull/7064 | 7,064 | Add `batch` method to `Dataset` class | closed | 6 | 2024-07-23T08:40:43 | 2024-07-25T13:51:25 | 2024-07-25T13:45:20 | lappemic | [] | This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation uses as well the existing `map` method for efficient batching of examples.
Key changes:
- Add `batch` method to `Dataset` class in `arrow_da... | true |
2,424,488,648 | https://api.github.com/repos/huggingface/datasets/issues/7063 | https://github.com/huggingface/datasets/issues/7063 | 7,063 | Add `batch` method to `Dataset` | closed | 0 | 2024-07-23T07:36:59 | 2024-07-25T13:45:21 | 2024-07-25T13:45:21 | lappemic | [
"enhancement"
] | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | false |
2,424,467,484 | https://api.github.com/repos/huggingface/datasets/issues/7062 | https://github.com/huggingface/datasets/pull/7062 | 7,062 | Avoid calling http_head for non-HTTP URLs | closed | 2 | 2024-07-23T07:25:09 | 2024-07-23T14:28:27 | 2024-07-23T14:21:08 | albertvillanova | [] | Avoid calling `http_head` for non-HTTP URLs, by adding an `else` statement.
Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,...
I discovered this while working in an unrelated issue. | true |
2,423,786,881 | https://api.github.com/repos/huggingface/datasets/issues/7061 | https://github.com/huggingface/datasets/issues/7061 | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | open | 0 | 2024-07-22T21:18:12 | 2024-09-09T14:48:07 | null | hahmad2008 | [] | ### Describe the bug
I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the execution.
`... | false |
2,423,188,419 | https://api.github.com/repos/huggingface/datasets/issues/7060 | https://github.com/huggingface/datasets/pull/7060 | 7,060 | WebDataset BuilderConfig | closed | 1 | 2024-07-22T15:41:07 | 2024-07-23T13:28:44 | 2024-07-23T13:28:44 | hlky | [] | This PR adds `WebDatasetConfig`.
Closes #7055 | true |
2,422,827,892 | https://api.github.com/repos/huggingface/datasets/issues/7059 | https://github.com/huggingface/datasets/issues/7059 | 7,059 | None values are skipped when reading jsonl in subobjects | open | 0 | 2024-07-22T13:02:42 | 2024-07-22T13:02:53 | null | PonteIneptique | [] | ### Describe the bug
I have been fighting with my machine since this morning, only to find out this is some kind of bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., let's take this example
... | false |
2,422,560,355 | https://api.github.com/repos/huggingface/datasets/issues/7058 | https://github.com/huggingface/datasets/issues/7058 | 7,058 | New feature type: Document | open | 0 | 2024-07-22T10:49:20 | 2024-07-22T10:49:20 | null | severo | [] | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | false |
2,422,498,520 | https://api.github.com/repos/huggingface/datasets/issues/7057 | https://github.com/huggingface/datasets/pull/7057 | 7,057 | Update load_hub.mdx | closed | 2 | 2024-07-22T10:17:46 | 2024-07-22T10:34:14 | 2024-07-22T10:28:10 | severo | [] | null | true |
2,422,192,257 | https://api.github.com/repos/huggingface/datasets/issues/7056 | https://github.com/huggingface/datasets/pull/7056 | 7,056 | Make `BufferShuffledExamplesIterable` resumable | closed | 8 | 2024-07-22T07:50:02 | 2025-01-31T05:34:20 | 2025-01-31T05:34:19 | yzhangcs | [] | This PR aims to implement a resumable `BufferShuffledExamplesIterable`.
Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first e... | true |
2,421,708,891 | https://api.github.com/repos/huggingface/datasets/issues/7055 | https://github.com/huggingface/datasets/issues/7055 | 7,055 | WebDataset with different prefixes are unsupported | closed | 8 | 2024-07-22T01:14:19 | 2024-07-24T13:26:30 | 2024-07-23T13:28:46 | hlky | [] | ### Describe the bug
Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)
Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules... | false |
2,418,548,995 | https://api.github.com/repos/huggingface/datasets/issues/7054 | https://github.com/huggingface/datasets/pull/7054 | 7,054 | Add batching to `IterableDataset` | closed | 5 | 2024-07-19T10:11:47 | 2024-07-23T13:25:13 | 2024-07-23T10:34:28 | lappemic | [] | I've taken a try at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class.
The main changes are:
1. A new `BatchedExamplesIterable` that groups examples into batches.
2. A `.batch()` method for... | true |
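For context, a usage sketch of the method this PR adds (available in later `datasets` releases); the generator and batch size are illustrative:

```python
# Group examples of an IterableDataset into batches with the new batch() method
# (generator and batch size are illustrative).
from datasets import IterableDataset

def gen():
    for i in range(10):
        yield {"value": i}

ids = IterableDataset.from_generator(gen)
for batch in ids.batch(batch_size=4):
    print(batch)  # e.g. {"value": [0, 1, 2, 3]}
```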
2,416,423,791 | https://api.github.com/repos/huggingface/datasets/issues/7053 | https://github.com/huggingface/datasets/issues/7053 | 7,053 | Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple` | closed | 2 | 2024-07-18T13:42:35 | 2024-07-18T15:17:42 | 2024-07-18T15:16:18 | MatthewYZhang | [] | ### Describe the bug
in data_files.py, line 332,
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`
So, `isinstance(fs.protocol, str) == False` and
`protocol_prefix = fs.protocol + "://" if fs.protocol != ... | false |
2,411,682,730 | https://api.github.com/repos/huggingface/datasets/issues/7052 | https://github.com/huggingface/datasets/pull/7052 | 7,052 | Adding `Music` feature for symbolic music modality (MIDI, abc) | closed | 0 | 2024-07-16T17:26:04 | 2024-07-29T06:47:55 | 2024-07-29T06:47:55 | Natooz | [] | ⚠️ (WIP) ⚠️
### What this PR does
This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files.
### Motivations
These two file formats are widely used in th... | true |
2,409,353,929 | https://api.github.com/repos/huggingface/datasets/issues/7051 | https://github.com/huggingface/datasets/issues/7051 | 7,051 | How to set_epoch with interleave_datasets? | closed | 7 | 2024-07-15T18:24:52 | 2024-08-05T20:58:04 | 2024-08-05T20:58:04 | jonathanasdf | [] | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch)
Of course I... | false |
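The setup being asked about can be sketched roughly as follows (dataset names and sizes are hypothetical); where exactly to call `set_epoch` so that only dataset A is reshuffled is the open question in this thread:

```python
# Hypothetical sketch: interleave a small dataset A with a large dataset B so
# that B is never repeated, shuffling A with a buffer.
from datasets import load_dataset, interleave_datasets

ds_a = load_dataset("user/small_dataset_a", split="train", streaming=True)
ds_b = load_dataset("user/large_dataset_b", split="train", streaming=True)

ds_a = ds_a.shuffle(seed=42, buffer_size=10_000)
mixed = interleave_datasets([ds_a, ds_b], stopping_strategy="all_exhausted")
```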
2,409,048,733 | https://api.github.com/repos/huggingface/datasets/issues/7050 | https://github.com/huggingface/datasets/pull/7050 | 7,050 | add checkpoint and resume title in docs | closed | 2 | 2024-07-15T15:38:04 | 2024-07-15T16:06:15 | 2024-07-15T15:59:56 | lhoestq | [] | (minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata | true |
2,408,514,366 | https://api.github.com/repos/huggingface/datasets/issues/7049 | https://github.com/huggingface/datasets/issues/7049 | 7,049 | Save nparray as list | closed | 5 | 2024-07-15T11:36:11 | 2024-07-18T11:33:34 | 2024-07-18T11:33:34 | Sakurakdx | [] | ### Describe the bug
When I use the `map` function to convert images into features, datasets saves nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, ... | false |
2,408,487,547 | https://api.github.com/repos/huggingface/datasets/issues/7048 | https://github.com/huggingface/datasets/issues/7048 | 7,048 | ImportError: numpy.core.multiarray when using `filter` | closed | 4 | 2024-07-15T11:21:04 | 2024-07-16T10:11:25 | 2024-07-16T10:11:25 | kamilakesbi | [] | ### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet generates a bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
... | false |
2,406,495,084 | https://api.github.com/repos/huggingface/datasets/issues/7047 | https://github.com/huggingface/datasets/issues/7047 | 7,047 | Save Dataset as Sharded Parquet | open | 2 | 2024-07-12T23:47:51 | 2024-07-17T12:07:08 | null | tom-p-reichel | [
"enhancement"
] | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_... | false |
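Until sharded output is built in, a possible workaround is to shard manually before writing; the dataset name, shard count, and output paths below are illustrative:

```python
# Write one Parquet file per shard instead of a single monolithic file
# (dataset name, shard count, and output paths are illustrative).
from datasets import load_dataset

ds = load_dataset("user/some_dataset", split="train")
num_shards = 16
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    shard.to_parquet(f"output/data-{index:05d}-of-{num_shards:05d}.parquet")
```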
2,405,485,582 | https://api.github.com/repos/huggingface/datasets/issues/7046 | https://github.com/huggingface/datasets/pull/7046 | 7,046 | Support librosa and numpy 2.0 for Python 3.10 | closed | 2 | 2024-07-12T12:42:47 | 2024-07-12T13:04:40 | 2024-07-12T12:58:17 | albertvillanova | [] | Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release:
- https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1
- https://github.com/dofuuz/python-soxr/issues/28 | true |
2,405,447,858 | https://api.github.com/repos/huggingface/datasets/issues/7045 | https://github.com/huggingface/datasets/pull/7045 | 7,045 | Fix tensorflow min version depending on Python version | closed | 2 | 2024-07-12T12:20:23 | 2024-07-12T12:38:53 | 2024-07-12T12:33:00 | albertvillanova | [] | Fix tensorflow min version depending on Python version.
Related to:
- #6991 | true |
2,405,002,987 | https://api.github.com/repos/huggingface/datasets/issues/7044 | https://github.com/huggingface/datasets/pull/7044 | 7,044 | Mark tests that require librosa | closed | 2 | 2024-07-12T08:06:59 | 2024-07-12T09:06:32 | 2024-07-12T09:00:09 | albertvillanova | [] | Mark tests that require `librosa`.
Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`:
-... | true |
2,404,951,714 | https://api.github.com/repos/huggingface/datasets/issues/7043 | https://github.com/huggingface/datasets/pull/7043 | 7,043 | Add decorator as explicit test dependency | closed | 2 | 2024-07-12T07:35:23 | 2024-07-12T08:12:55 | 2024-07-12T08:07:10 | albertvillanova | [] | Add decorator as explicit test dependency.
We use `decorator` library in our CI test since PR:
- #4845
However we did not add it as an explicit test requirement, and we depended on it indirectly through other libraries' dependencies.
I discovered this while testing Numpy 2.0 and removing incompatible librarie... | true |
2,404,605,836 | https://api.github.com/repos/huggingface/datasets/issues/7042 | https://github.com/huggingface/datasets/pull/7042 | 7,042 | Improved the tutorial by adding a link for loading datasets | closed | 1 | 2024-07-12T03:49:54 | 2024-08-15T10:07:44 | 2024-08-15T10:01:59 | AmboThom | [] | Improved the tutorial by letting readers know about loading datasets with common files and including a link. I left the local files section alone because the methods were already listed with code snippets. | true |
2,404,576,038 | https://api.github.com/repos/huggingface/datasets/issues/7041 | https://github.com/huggingface/datasets/issues/7041 | 7,041 | `sort` after `filter` unreasonably slow | closed | 2 | 2024-07-12T03:29:27 | 2025-04-29T09:49:25 | 2025-04-29T09:49:25 | Tobin-rgb | [] | ### Describe the bug
as the title says ...
### Steps to reproduce the bug
`sort` seems to be normal.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("f... | false |
2,402,918,335 | https://api.github.com/repos/huggingface/datasets/issues/7040 | https://github.com/huggingface/datasets/issues/7040 | 7,040 | load `streaming=True` dataset with downloaded cache | open | 2 | 2024-07-11T11:14:13 | 2024-07-11T14:11:56 | null | wanghaoyucn | [] | ### Describe the bug
We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes more disk space. So we hope to try streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 f... | false |
2,402,403,390 | https://api.github.com/repos/huggingface/datasets/issues/7039 | https://github.com/huggingface/datasets/pull/7039 | 7,039 | Fix export to JSON when dataset larger than batch size | open | 3 | 2024-07-11T06:52:22 | 2024-09-28T06:10:00 | null | albertvillanova | [] | Fix export to JSON (`lines=False`) when dataset larger than batch size.
Fix #7037. | true |
2,400,192,419 | https://api.github.com/repos/huggingface/datasets/issues/7037 | https://github.com/huggingface/datasets/issues/7037 | 7,037 | A bug of Dataset.to_json() function | open | 2 | 2024-07-10T09:11:22 | 2024-09-22T13:16:07 | null | LinglingGreat | [
"bug"
] | ### Describe the bug
When using the Dataset.to_json() function, an unexpected error occurs if the parameter is set to lines=False. The stored data should be in the form of a list, but it actually turns into multiple lists, which causes an error when reading the data again.
The reason is that to_json() writes to the f... | false |
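The failure mode described above can be sketched as follows (size and path are illustrative); with `lines=False` the file is reportedly written as several back-to-back JSON lists rather than one document:

```python
# Illustrative sketch of the reported lines=False behaviour.
import json

from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(200_000))})  # larger than the write batch size
ds.to_json("out.json", lines=False)

with open("out.json") as f:
    data = json.load(f)  # reportedly fails when several lists were written back to back
```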
2,400,035,672 | https://api.github.com/repos/huggingface/datasets/issues/7036 | https://github.com/huggingface/datasets/pull/7036 | 7,036 | Fix doc generation when NamedSplit is used as parameter default value | closed | 2 | 2024-07-10T07:58:46 | 2024-07-26T07:58:00 | 2024-07-26T07:51:52 | albertvillanova | [] | Fix doc generation when `NamedSplit` is used as parameter default value.
Fix #7035. | true |
2,400,021,225 | https://api.github.com/repos/huggingface/datasets/issues/7035 | https://github.com/huggingface/datasets/issues/7035 | 7,035 | Docs are not generated when a parameter defaults to a NamedSplit value | closed | 0 | 2024-07-10T07:51:24 | 2024-07-26T07:51:53 | 2024-07-26T07:51:53 | albertvillanova | [
"maintenance"
] | While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like:
```python
def call_function(split=Split.TRAIN):
...
```
The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'>
See: https://github.com/huggingface/datasets/action... | false |
2,397,525,974 | https://api.github.com/repos/huggingface/datasets/issues/7034 | https://github.com/huggingface/datasets/pull/7034 | 7,034 | chore: fix typos in docs | closed | 1 | 2024-07-09T08:35:05 | 2024-08-13T08:22:25 | 2024-08-13T08:16:22 | hattizai | [] | null | true |
2,397,419,768 | https://api.github.com/repos/huggingface/datasets/issues/7033 | https://github.com/huggingface/datasets/issues/7033 | 7,033 | `from_generator` does not allow to specify the split name | closed | 2 | 2024-07-09T07:47:58 | 2024-07-26T12:56:16 | 2024-07-26T09:31:56 | pminervini | [] | ### Describe the bug
I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:`
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/g... | false |
2,395,531,699 | https://api.github.com/repos/huggingface/datasets/issues/7032 | https://github.com/huggingface/datasets/pull/7032 | 7,032 | Register `.zstd` extension for zstd-compressed files | closed | 8 | 2024-07-08T12:39:50 | 2024-07-12T15:07:03 | 2024-07-12T15:07:03 | polinaeterna | [] | For example, https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have `.zstd` extension which is currently ignored (only `.zst` is registered). | true |
2,395,401,692 | https://api.github.com/repos/huggingface/datasets/issues/7031 | https://github.com/huggingface/datasets/issues/7031 | 7,031 | CI quality is broken: use ruff check instead | closed | 0 | 2024-07-08T11:42:24 | 2024-07-08T11:47:29 | 2024-07-08T11:47:29 | albertvillanova | [] | CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027
```
error: `ruff <path>` has been removed. Use `ruff check <path>` instead.
``` | false |
2,393,411,631 | https://api.github.com/repos/huggingface/datasets/issues/7030 | https://github.com/huggingface/datasets/issues/7030 | 7,030 | Add option to disable progress bar when reading a dataset ("Loading dataset from disk") | closed | 2 | 2024-07-06T05:43:37 | 2024-07-13T14:35:59 | 2024-07-13T14:35:59 | yuvalkirstain | [
"enhancement"
] | ### Feature request
Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16.
### Motivation
I am reading a lot of datasets, and it creates lots of logs.
<img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-... | false |
2,391,366,696 | https://api.github.com/repos/huggingface/datasets/issues/7029 | https://github.com/huggingface/datasets/issues/7029 | 7,029 | load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error | open | 1 | 2024-07-04T19:15:16 | 2024-07-17T12:44:03 | null | sugam-nexusflow | [] | ### Describe the bug
I'm using AWS lambda to run a python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF envs to point to the /tmp dir but the issue still persists. I can confirm that I can write to /... | false |
2,391,077,531 | https://api.github.com/repos/huggingface/datasets/issues/7028 | https://github.com/huggingface/datasets/pull/7028 | 7,028 | Fix ci | closed | 2 | 2024-07-04T15:11:08 | 2024-07-04T15:26:35 | 2024-07-04T15:19:16 | lhoestq | [] | ...after last pr errors | true |
2,391,013,330 | https://api.github.com/repos/huggingface/datasets/issues/7027 | https://github.com/huggingface/datasets/pull/7027 | 7,027 | Missing line from previous pr | closed | 2 | 2024-07-04T14:34:29 | 2024-07-04T14:40:46 | 2024-07-04T14:34:36 | lhoestq | [] | null | true |
2,390,983,889 | https://api.github.com/repos/huggingface/datasets/issues/7026 | https://github.com/huggingface/datasets/pull/7026 | 7,026 | Fix check_library_imports | closed | 2 | 2024-07-04T14:18:38 | 2024-07-04T14:28:36 | 2024-07-04T14:20:02 | lhoestq | [] | move it to after the `trust_remote_code` check
Note that it only affects local datasets that already exist on disk, not datasets loaded from HF directly | true |
2,390,488,546 | https://api.github.com/repos/huggingface/datasets/issues/7025 | https://github.com/huggingface/datasets/pull/7025 | 7,025 | feat: support non streamable arrow file binary format | closed | 7 | 2024-07-04T10:11:12 | 2024-07-31T06:15:50 | 2024-07-31T06:09:31 | kmehant | [] | Support Arrow files (`.arrow`) that are in non streamable binary file formats. | true |
2,390,141,626 | https://api.github.com/repos/huggingface/datasets/issues/7024 | https://github.com/huggingface/datasets/issues/7024 | 7,024 | Streaming dataset not returning data | open | 0 | 2024-07-04T07:21:47 | 2024-07-04T07:21:47 | null | johnwee1 | [] | ### Describe the bug
I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets wrongly.
I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning ... | false |
2,388,090,424 | https://api.github.com/repos/huggingface/datasets/issues/7023 | https://github.com/huggingface/datasets/pull/7023 | 7,023 | Remove dead code for pyarrow < 15.0.0 | closed | 2 | 2024-07-03T09:05:03 | 2024-07-03T09:24:46 | 2024-07-03T09:17:35 | albertvillanova | [] | Remove dead code for pyarrow < 15.0.0.
Code is dead since the merge of:
- #6892
Fix #7022. | true |
2,388,064,650 | https://api.github.com/repos/huggingface/datasets/issues/7022 | https://github.com/huggingface/datasets/issues/7022 | 7,022 | There is dead code after we require pyarrow >= 15.0.0 | closed | 0 | 2024-07-03T08:52:57 | 2024-07-03T09:17:36 | 2024-07-03T09:17:36 | albertvillanova | [
"maintenance"
] | There are code lines specific for pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those code lines are now dead code and should be removed. | false |
2,387,948,935 | https://api.github.com/repos/huggingface/datasets/issues/7021 | https://github.com/huggingface/datasets/pull/7021 | 7,021 | Fix casting list array to fixed size list | closed | 2 | 2024-07-03T07:58:57 | 2024-07-03T08:47:49 | 2024-07-03T08:41:55 | albertvillanova | [] | Fix casting list array to fixed size list.
This bug was introduced in [datasets-2.17.0](https://github.com/huggingface/datasets/releases/tag/2.17.0) by PR: https://github.com/huggingface/datasets/pull/6283/files#diff-1cb2b66aa9311d729cfd83013dad56cf5afcda35b39dfd0bfe9c3813a049eab0R1899
- #6283
Fix #7020. | true |
2,387,940,990 | https://api.github.com/repos/huggingface/datasets/issues/7020 | https://github.com/huggingface/datasets/issues/7020 | 7,020 | Casting list array to fixed size list raises error | closed | 0 | 2024-07-03T07:54:49 | 2024-07-03T08:41:56 | 2024-07-03T08:41:56 | albertvillanova | [
"bug"
] | When trying to cast a list array to fixed size list, an AttributeError is raised:
> AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
Steps to reproduce the bug:
```python
import pyarrow as pa
from datasets.table import array_cast
arr = pa.array([[0, 1]])
array_cast(arr, pa.lis... | false |
2,385,793,897 | https://api.github.com/repos/huggingface/datasets/issues/7019 | https://github.com/huggingface/datasets/pull/7019 | 7,019 | Support pyarrow large_list | closed | 10 | 2024-07-02T09:52:52 | 2024-08-12T14:49:45 | 2024-08-12T14:43:45 | albertvillanova | [] | Allow Polars round trip by supporting pyarrow large list.
Fix #6834, fix #6984.
Supersede and close #4800, close #6835, close #6986. | true |
2,383,700,286 | https://api.github.com/repos/huggingface/datasets/issues/7018 | https://github.com/huggingface/datasets/issues/7018 | 7,018 | `load_dataset` fails to load dataset saved by `save_to_disk` | open | 5 | 2024-07-01T12:19:19 | 2025-05-24T05:21:12 | null | sliedes | [] | ### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_functi... | false |
2,383,647,419 | https://api.github.com/repos/huggingface/datasets/issues/7017 | https://github.com/huggingface/datasets/pull/7017 | 7,017 | Support fsspec 2024.6.1 | closed | 2 | 2024-07-01T11:57:15 | 2024-07-01T12:12:32 | 2024-07-01T12:06:24 | albertvillanova | [] | Support fsspec 2024.6.1. | true |
2,383,262,608 | https://api.github.com/repos/huggingface/datasets/issues/7016 | https://github.com/huggingface/datasets/issues/7016 | 7,016 | `drop_duplicates` method | open | 1 | 2024-07-01T09:01:06 | 2024-07-20T06:51:58 | null | MohamedAliRashad | [
"duplicate",
"enhancement"
] | ### Feature request
`drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one)
### Motivation
Ease of use
### Your contribution
I don't think I am good enough to help | false |
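Until a built-in method exists, one possible workaround is deduplicating with `filter` and a seen-set (single-process only); the column name and data are illustrative:

```python
# Keep only the first occurrence of each value in a column
# (works single-process; a plain Python set is not shared across num_proc workers).
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c"]})
seen = set()

def first_occurrence(example):
    key = example["text"]
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = ds.filter(first_occurrence)
print(deduped["text"])  # ['a', 'b', 'c']
```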
2,383,151,220 | https://api.github.com/repos/huggingface/datasets/issues/7015 | https://github.com/huggingface/datasets/pull/7015 | 7,015 | add split argument to Generator | closed | 5 | 2024-07-01T08:09:25 | 2024-07-26T09:37:51 | 2024-07-26T09:31:56 | piercus | [] | ## Actual
When creating a multi-split dataset using generators like
```python
datasets.DatasetDict({
"val": datasets.Dataset.from_generator(
generator=generator_val,
features=features
),
"test": datasets.Dataset.from_generator(
generator=generator_test,
features=features,
... | true |
2,382,985,847 | https://api.github.com/repos/huggingface/datasets/issues/7014 | https://github.com/huggingface/datasets/pull/7014 | 7,014 | Skip faiss tests on Windows to avoid running CI for 360 minutes | closed | 3 | 2024-07-01T06:45:35 | 2024-07-01T07:16:36 | 2024-07-01T07:10:27 | albertvillanova | [] | Skip faiss tests on Windows to avoid running CI for 360 minutes.
Fix #7013.
Revert once the underlying issue is fixed. | true |
2,382,976,738 | https://api.github.com/repos/huggingface/datasets/issues/7013 | https://github.com/huggingface/datasets/issues/7013 | 7,013 | CI is broken for faiss tests on Windows: node down: Not properly terminated | closed | 0 | 2024-07-01T06:40:03 | 2024-07-01T07:10:28 | 2024-07-01T07:10:28 | albertvillanova | [
"maintenance"
] | Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time o... | false |
2,380,934,047 | https://api.github.com/repos/huggingface/datasets/issues/7012 | https://github.com/huggingface/datasets/pull/7012 | 7,012 | Raise an error when a nested object is expected to be a mapping that displays the object | closed | 0 | 2024-06-28T18:10:59 | 2024-07-11T02:06:16 | 2024-07-11T02:06:16 | sebbyjp | [] | null | true |
2,379,785,262 | https://api.github.com/repos/huggingface/datasets/issues/7011 | https://github.com/huggingface/datasets/pull/7011 | 7,011 | Re-enable raising error from huggingface-hub FutureWarning in CI | closed | 2 | 2024-06-28T07:28:32 | 2024-06-28T12:25:25 | 2024-06-28T12:19:28 | albertvillanova | [] | Re-enable raising error from huggingface-hub FutureWarning in tests, now that the fix in transformers
- https://github.com/huggingface/transformers/pull/31007
was just released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0
Fix #7010. | true |
2,379,777,480 | https://api.github.com/repos/huggingface/datasets/issues/7010 | https://github.com/huggingface/datasets/issues/7010 | 7,010 | Re-enable raising error from huggingface-hub FutureWarning in CI | closed | 0 | 2024-06-28T07:23:40 | 2024-06-28T12:19:30 | 2024-06-28T12:19:29 | albertvillanova | [
"maintenance"
] | Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR:
- #6876
Note that this can only be done once transformers releases the fix:
- https://github.com/huggingface/transformers/pull/31007 | false |
2,379,619,132 | https://api.github.com/repos/huggingface/datasets/issues/7009 | https://github.com/huggingface/datasets/pull/7009 | 7,009 | Support ruff 0.5.0 in CI | closed | 2 | 2024-06-28T05:37:36 | 2024-06-28T07:17:26 | 2024-06-28T07:11:17 | albertvillanova | [] | Support ruff 0.5.0 in CI and revert:
- #7007
Fix #7008. | true |
2,379,591,141 | https://api.github.com/repos/huggingface/datasets/issues/7008 | https://github.com/huggingface/datasets/issues/7008 | 7,008 | Support ruff 0.5.0 in CI | closed | 0 | 2024-06-28T05:11:26 | 2024-06-28T07:11:18 | 2024-06-28T07:11:18 | albertvillanova | [
"maintenance"
] | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | false |
2,379,588,676 | https://api.github.com/repos/huggingface/datasets/issues/7007 | https://github.com/huggingface/datasets/pull/7007 | 7,007 | Fix CI by temporarily pinning ruff < 0.5.0 | closed | 2 | 2024-06-28T05:09:17 | 2024-06-28T05:31:21 | 2024-06-28T05:25:17 | albertvillanova | [] | As a hotfix for CI, temporarily pin ruff upper version < 0.5.0.
Fix #7006.
Revert once root cause is fixed. | true |
2,379,581,543 | https://api.github.com/repos/huggingface/datasets/issues/7006 | https://github.com/huggingface/datasets/issues/7006 | 7,006 | CI is broken after ruff-0.5.0: E721 | closed | 0 | 2024-06-28T05:03:28 | 2024-06-28T05:25:18 | 2024-06-28T05:25:18 | albertvillanova | [
"maintenance"
] | After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstanc... | false |
2,378,424,349 | https://api.github.com/repos/huggingface/datasets/issues/7005 | https://github.com/huggingface/datasets/issues/7005 | 7,005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | closed | 3 | 2024-06-27T15:08:26 | 2024-06-28T09:56:19 | 2024-06-28T09:56:19 | Aki1991 | [] | ### Describe the bug
While trying to load a custom dataset from a jsonl file, I get the error: "metadata.jsonl doesn't contain any data files".
### Steps to reproduce the bug
This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all ... | false |
2,376,064,264 | https://api.github.com/repos/huggingface/datasets/issues/7004 | https://github.com/huggingface/datasets/pull/7004 | 7,004 | Fix WebDatasets KeyError for user-defined Features when a field is missing in an example | closed | 3 | 2024-06-26T18:58:05 | 2024-06-29T00:15:49 | 2024-06-28T09:30:12 | ProGamerGov | [] | Fixes: https://github.com/huggingface/datasets/issues/6900
Not sure if this needs any additional stuff before merging | true |
2,373,084,132 | https://api.github.com/repos/huggingface/datasets/issues/7003 | https://github.com/huggingface/datasets/pull/7003 | 7,003 | minor fix for bfloat16 | closed | 2 | 2024-06-25T16:10:04 | 2024-06-25T16:16:11 | 2024-06-25T16:10:10 | lhoestq | [] | null | true |
2,373,010,351 | https://api.github.com/repos/huggingface/datasets/issues/7002 | https://github.com/huggingface/datasets/pull/7002 | 7,002 | Fix dump of bfloat16 torch tensor | closed | 2 | 2024-06-25T15:38:09 | 2024-06-25T16:10:16 | 2024-06-25T15:51:52 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7000 | true |
2,372,930,879 | https://api.github.com/repos/huggingface/datasets/issues/7001 | https://github.com/huggingface/datasets/issues/7001 | 7,001 | Datasetbuilder Local Download FileNotFoundError | open | 1 | 2024-06-25T15:02:34 | 2024-06-25T15:21:19 | null | purefall | [] | ### Describe the bug
So I was trying to download a dataset and save it as Parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Hugging Face. However, during execution I face a FileNotFoundError.
I debugged the code and it seems... | false |
2,372,887,585 | https://api.github.com/repos/huggingface/datasets/issues/7000 | https://github.com/huggingface/datasets/issues/7000 | 7,000 | IterableDataset: Unsupported ScalarType BFloat16 | closed | 3 | 2024-06-25T14:43:26 | 2024-06-25T16:04:00 | 2024-06-25T15:51:53 | stoical07 | [] | ### Describe the bug
`IterableDataset.from_generator` crashes when using BFloat16:
```
File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor
args = (obj.detach().cpu().numpy(),)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType ... | false |
2,372,124,589 | https://api.github.com/repos/huggingface/datasets/issues/6999 | https://github.com/huggingface/datasets/pull/6999 | 6,999 | Remove tasks | closed | 2 | 2024-06-25T09:06:16 | 2024-08-21T09:07:07 | 2024-08-21T09:01:18 | albertvillanova | [] | Remove tasks, as part of the 3.0 release. | true |
2,371,973,926 | https://api.github.com/repos/huggingface/datasets/issues/6998 | https://github.com/huggingface/datasets/pull/6998 | 6,998 | Fix tests using hf-internal-testing/librispeech_asr_dummy | closed | 2 | 2024-06-25T07:59:44 | 2024-06-25T08:22:38 | 2024-06-25T08:13:42 | albertvillanova | [] | Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet.
Fix #6997. | true |
2,371,966,127 | https://api.github.com/repos/huggingface/datasets/issues/6997 | https://github.com/huggingface/datasets/issues/6997 | 6,997 | CI is broken for tests using hf-internal-testing/librispeech_asr_dummy | closed | 0 | 2024-06-25T07:55:44 | 2024-06-25T08:13:43 | 2024-06-25T08:13:43 | albertvillanova | [
"maintenance"
] | CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996
```
FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other']
Right contains one more item: 'othe... | false |
2,371,841,671 | https://api.github.com/repos/huggingface/datasets/issues/6996 | https://github.com/huggingface/datasets/pull/6996 | 6,996 | Remove deprecated code | closed | 2 | 2024-06-25T06:54:40 | 2024-08-21T09:42:52 | 2024-08-21T09:35:06 | albertvillanova | [] | Remove deprecated code, as part of the 3.0 release.
First merge:
- [x] #6983
- [x] #6987
- [x] #6999 | true |
2,370,713,475 | https://api.github.com/repos/huggingface/datasets/issues/6995 | https://github.com/huggingface/datasets/issues/6995 | 6,995 | ImportError when importing datasets.load_dataset | closed | 9 | 2024-06-24T17:07:22 | 2024-11-14T01:42:09 | 2024-06-25T06:11:37 | Leo-Lsc | [] | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. f... | false |
2,370,491,689 | https://api.github.com/repos/huggingface/datasets/issues/6994 | https://github.com/huggingface/datasets/pull/6994 | 6,994 | Fix incorrect rank value in data splitting | closed | 3 | 2024-06-24T15:07:47 | 2024-06-26T04:37:35 | 2024-06-25T16:19:17 | yzhangcs | [] | Fix #6990. | true |
2,370,444,104 | https://api.github.com/repos/huggingface/datasets/issues/6993 | https://github.com/huggingface/datasets/pull/6993 | 6,993 | less script docs | closed | 6 | 2024-06-24T14:45:28 | 2024-07-08T13:10:53 | 2024-06-27T09:31:21 | lhoestq | [] | + mark as legacy in some parts of the docs since we'll not build new features for script datasets | true |