| id (int64, 1.9B to 3.25B) | title (string, length 2 to 244) | state (string, 2 values) | body (string, length 3 to 58.6k, nullable) | created_at (timestamp[s], 2023-09-15 14:23:33 to 2025-07-22 09:33:54) | updated_at (timestamp[s], 2023-09-18 16:20:09 to 2025-07-22 10:44:03) | closed_at (timestamp[s], 2023-09-18 16:20:09 to 2025-07-19 22:45:08, nullable) | html_url (string, length 49 to 51) | pull_request (dict) | number (int64, 6.24k to 7.7k) | is_pull_request (bool, 2 classes) | comments (list, length 0 to 24) |
|---|---|---|---|---|---|---|---|---|---|---|---|
3,251,904,843 | Support downloading specific splits in load_dataset | open | This PR builds on #6832 by @mariosasko.
May close - #4101, #2538
Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
---
### Note - This PR is a work in progress and frequent changes will be pushed. | 2025-07-22T09:33:54 | 2025-07-22T09:45:18 | null | https://github.com/huggingface/datasets/pull/7695 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7695",
"html_url": "https://github.com/huggingface/datasets/pull/7695",
"diff_url": "https://github.com/huggingface/datasets/pull/7695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7695.patch",
"merged_at": null
} | 7,695 | true | [
"Hi @lhoestq 👋\r\n\r\nI’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI made changes on top of what Mario had done. Here are some of those changes:\r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now co... |
3,247,600,408 | Dataset.to_json consumes excessive memory, appears to not be a streaming operation | open | ### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant-memory streaming operation.
This behavior ... | 2025-07-21T07:51:25 | 2025-07-21T07:51:25 | null | https://github.com/huggingface/datasets/issues/7694 | null | 7,694 | false | [] |
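The constant-memory behavior the report above asks for can be sketched with the standard library alone: write rows to JSON Lines in small batches, so peak memory tracks the batch size rather than the dataset size. This is an illustrative pattern, not the `datasets` implementation; `write_jsonl_streaming` and its arguments are hypothetical names.

```python
import json
import os
import tempfile

def write_jsonl_streaming(rows, path, batch_size=1000):
    """Write an iterable of dicts to a JSON Lines file, flushing in small
    batches so memory use is bounded by batch_size, not the dataset size."""
    with open(path, "w", encoding="utf-8") as f:
        buf = []
        for row in rows:
            buf.append(json.dumps(row, ensure_ascii=False))
            if len(buf) >= batch_size:
                f.write("\n".join(buf) + "\n")
                buf.clear()
        if buf:  # flush the final partial batch
            f.write("\n".join(buf) + "\n")

path = os.path.join(tempfile.mkdtemp(), "out.jsonl")
write_jsonl_streaming(({"i": i} for i in range(2500)), path, batch_size=100)
with open(path) as f:
    print(sum(1 for _ in f))  # 2500
```

Because the input is a generator, only one batch of serialized rows ever lives in memory at once.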
3,246,369,678 | Dataset scripts are no longer supported, but found superb.py | open | ### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions.
I then get the error :
```
--------------------------------------------------------------------------
... | 2025-07-20T13:48:06 | 2025-07-22T10:44:03 | null | https://github.com/huggingface/datasets/issues/7693 | null | 7,693 | false | [
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)... |
3,246,268,635 | xopen: invalid start byte for streaming dataset with trust_remote_code=True | open | ### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid ... | 2025-07-20T11:08:20 | 2025-07-20T11:08:20 | null | https://github.com/huggingface/datasets/issues/7692 | null | 7,692 | false | [] |
3,245,547,170 | Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming | open | ### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit one of the shards with one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that specifically inclu... | 2025-07-19T18:40:27 | 2025-07-21T19:17:33 | null | https://github.com/huggingface/datasets/issues/7691 | null | 7,691 | false | [
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to j... |
3,244,380,691 | HDF5 support | open | This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes including up to 5-dimensional arrays as well as support for complex/compound types by splitting them into several columns. All datasets within the HDF5 file should have rows on the first dim... | 2025-07-18T21:09:41 | 2025-07-19T06:09:00 | null | https://github.com/huggingface/datasets/pull/7690 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7690",
"html_url": "https://github.com/huggingface/datasets/pull/7690",
"diff_url": "https://github.com/huggingface/datasets/pull/7690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7690.patch",
"merged_at": null
} | 7,690 | true | [
"@lhoestq This is ready for review now. Note that it doesn't support *all* HDF5 files (and I don't think that's worth attempting)... the biggest assumption is that the first dimension of each dataset corresponds to rows in the split."
] |
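The row-axis assumption stated in the comment above can be illustrated with a small stdlib-only sketch (not the PR's actual implementation; `hdf5_rows` and its inputs are hypothetical): every dataset in the file must share the same first dimension, which then becomes the number of rows.

```python
def hdf5_rows(columns):
    """Yield one dict per row from a mapping of name -> column, assuming the
    first dimension of every column is the row axis."""
    lengths = {len(col) for col in columns.values()}
    if len(lengths) != 1:
        raise ValueError("all datasets must share the same first dimension")
    (n,) = lengths
    for i in range(n):
        yield {name: col[i] for name, col in columns.items()}

# A scalar column and a 2-dimensional column, both with 3 rows on the first axis.
cols = {"x": [1, 2, 3], "y": [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]}
for row in hdf5_rows(cols):
    print(row)
```

A file whose datasets disagree on the first dimension has no coherent row count, which is why the sketch rejects it up front.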
3,242,580,301 | BadRequestError for loading dataset? | closed | ### Describe the bug
Up until a couple of days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:
```
huggingface_hub.errors.BadRequestError: (Request ID: ...)
Bad request:
* Invalid input: expected array, received string * at paths * Invalid... | 2025-07-18T09:30:04 | 2025-07-18T11:59:51 | 2025-07-18T11:52:29 | https://github.com/huggingface/datasets/issues/7689 | null | 7,689 | false | [
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working su... |
3,238,851,443 | No module named "distributed" | open | ### Describe the bug
Hello, when I run the command `from datasets.distributed import split_dataset_by_node`, I always hit the error "No module named 'datasets.distributed'" in different versions such as 4.0.0, 2.21.0, and so on. How can I solve this?
### Steps to reproduce the bug
1. pip install datasets
2. from datasets.di... | 2025-07-17T09:32:35 | 2025-07-21T13:50:27 | null | https://github.com/huggingface/datasets/issues/7688 | null | 7,688 | false | [
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of the library you are using (in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to ...
3,238,760,301 | Datasets keeps rebuilding the dataset every time i call the python script | open | ### Describe the bug
Every time the script runs, the number of samples somehow increases.
This can cause a 12 MB dataset to accumulate rebuilt versions totaling 400 MB+.
<img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" />
### Steps to reproduce the bug
`from datasets... | 2025-07-17T09:03:38 | 2025-07-17T09:03:38 | null | https://github.com/huggingface/datasets/issues/7687 | null | 7,687 | false | [] |
3,237,201,090 | load_dataset does not check .no_exist files in the hub cache | open | ### Describe the bug
I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack.
The fundamental issue is that the `load_datasets` api doesn't use the `.no_exist` files in the hub cache unlike other wr... | 2025-07-16T20:04:00 | 2025-07-16T20:04:00 | null | https://github.com/huggingface/datasets/issues/7686 | null | 7,686 | false | [] |
3,236,979,340 | Inconsistent range request behavior for parquet REST api | open | ### Describe the bug
First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere.
The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. Mor... | 2025-07-16T18:39:44 | 2025-07-16T18:41:53 | null | https://github.com/huggingface/datasets/issues/7685 | null | 7,685 | false | [] |
3,231,680,474 | fix audio cast storage from array + sampling_rate | closed | fix https://github.com/huggingface/datasets/issues/7682 | 2025-07-15T10:13:42 | 2025-07-15T10:24:08 | 2025-07-15T10:24:07 | https://github.com/huggingface/datasets/pull/7684 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7684",
"html_url": "https://github.com/huggingface/datasets/pull/7684",
"diff_url": "https://github.com/huggingface/datasets/pull/7684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7684.patch",
"merged_at": "2025-07-15T10:24... | 7,684 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,231,553,161 | Convert to string when needed + faster .zstd | closed | for https://huggingface.co/datasets/allenai/olmo-mix-1124 | 2025-07-15T09:37:44 | 2025-07-15T10:13:58 | 2025-07-15T10:13:56 | https://github.com/huggingface/datasets/pull/7683 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7683",
"html_url": "https://github.com/huggingface/datasets/pull/7683",
"diff_url": "https://github.com/huggingface/datasets/pull/7683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7683.patch",
"merged_at": "2025-07-15T10:13... | 7,683 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,229,687,253 | Fail to cast Audio feature for numpy arrays in datasets 4.0.0 | closed | ### Describe the bug
Casting features with Audio for numpy arrays (done here with `ds.map(gen_sine, features=features)`) fails
in version 4.0.0 but not in version 3.6.0
### Steps to reproduce the bug
The following `uv script` should be able to reproduce the bug in version 4.0.0
and pass in version 3.6.0 on a macOS ... | 2025-07-14T18:41:02 | 2025-07-15T12:10:39 | 2025-07-15T10:24:08 | https://github.com/huggingface/datasets/issues/7682 | null | 7,682 | false | [
"thanks for reporting, I opened a PR and I'll make a patch release soon ",
"> thanks for reporting, I opened a PR and I'll make a patch release soon\n\nThank you very much @lhoestq!"
] |
3,227,112,736 | Probabilistic High Memory Usage and Freeze on Python 3.10 | open | ### Describe the bug
A probabilistic issue encountered when processing datasets containing PIL.Image columns using the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization, leading to a complete freeze. During this freeze, th... | 2025-07-14T01:57:16 | 2025-07-14T01:57:16 | null | https://github.com/huggingface/datasets/issues/7681 | null | 7,681 | false | [] |
3,224,824,151 | Question about iterable dataset and streaming | open | In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78
I am confused,
1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style datase... | 2025-07-12T04:48:30 | 2025-07-15T13:39:38 | null | https://github.com/huggingface/datasets/issues/7680 | null | 7,680 | false | [
"> If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n\nyes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\n> load_dataset(streaming=True) is useful for huge datase... |
3,220,787,371 | metric glue breaks with 4.0.0 | closed | ### Describe the bug
This worked fine with 3.6.0, but with 4.0.0, `eval_metric = metric.compute()` in HF Accelerate breaks.
The code that fails is:
https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84
```
def simple_accuracy(preds, labels):
print(preds, labels)
print(f"{preds==labels}")
r... | 2025-07-10T21:39:50 | 2025-07-11T17:42:01 | 2025-07-11T17:42:01 | https://github.com/huggingface/datasets/issues/7679 | null | 7,679 | false | [
"I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:\n\n```\npip install -U evaluate\n```",
"Thanks so much, @lhoestq!"
] |
3,218,625,544 | To support decoding audio data, please install 'torchcodec'. | closed |
In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook, but it works with version 3.6.0.
!pip install -q -U datasets huggingface_hub fsspec
from datasets import load_dataset
downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_datase... | 2025-07-10T09:43:13 | 2025-07-22T03:46:52 | 2025-07-11T05:05:42 | https://github.com/huggingface/datasets/issues/7678 | null | 7,678 | false | [
"Hi ! yes you should `!pip install -U datasets[audio]` to have the required dependencies.\n\n`datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` AudioDecoder enables streaming from HF and also allows to decode ranges of audio",
"Same issues on Colab.\n\n> !pip install -U datasets[audio]... |
3,218,044,656 | Toxicity fails with datasets 4.0.0 | closed | ### Describe the bug
With the latest 4.0.0 release, huggingface toxicity evaluation module fails with error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).`
### Steps to reproduce the bug
Repro:... | 2025-07-10T06:15:22 | 2025-07-11T04:40:59 | 2025-07-11T04:40:59 | https://github.com/huggingface/datasets/issues/7677 | null | 7,677 | false | [
"Hi ! You can fix this by upgrading `evaluate`:\n\n```\npip install -U evaluate\n```",
"Thanks, verified evaluate 0.4.5 works!"
] |
3,216,857,559 | Many things broken since the new 4.0.0 release | open | ### Describe the bug
The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness.
I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting:
``` Python
File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in genera... | 2025-07-09T18:59:50 | 2025-07-21T10:38:01 | null | https://github.com/huggingface/datasets/issues/7676 | null | 7,676 | false | [
"Happy to take a look, do you have a list of impacted datasets ?",
"Thanks @lhoestq , related to lm-eval, at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like [bbh](https://huggingface.co/datasets/lukaemon/bbh), most probably others too.",
"Hi @mobicham ... |
3,216,699,094 | common_voice_11_0.py failure in dataset library | open | ### Describe the bug
I tried to download a dataset but got this error:
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
---------------------------------------------------------------------------
RuntimeError Tr... | 2025-07-09T17:47:59 | 2025-07-22T09:35:42 | null | https://github.com/huggingface/datasets/issues/7675 | null | 7,675 | false | [
"Hi ! This dataset is not in a supported format and `datasets` 4 doesn't support datasets that based on python scripts which are often source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussio... |
3,216,251,069 | set dev version | closed | null | 2025-07-09T15:01:25 | 2025-07-09T15:04:01 | 2025-07-09T15:01:33 | https://github.com/huggingface/datasets/pull/7674 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7674",
"html_url": "https://github.com/huggingface/datasets/pull/7674",
"diff_url": "https://github.com/huggingface/datasets/pull/7674.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7674.patch",
"merged_at": "2025-07-09T15:01... | 7,674 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7674). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,216,075,633 | Release: 4.0.0 | closed | null | 2025-07-09T14:03:16 | 2025-07-09T14:36:19 | 2025-07-09T14:36:18 | https://github.com/huggingface/datasets/pull/7673 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7673",
"html_url": "https://github.com/huggingface/datasets/pull/7673",
"diff_url": "https://github.com/huggingface/datasets/pull/7673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7673.patch",
"merged_at": "2025-07-09T14:36... | 7,673 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,215,287,164 | Fix double sequence | closed | ```python
>>> Features({"a": Sequence(Sequence({"c": Value("int64")}))})
{'a': List({'c': List(Value('int64'))})}
```
instead of `{'a': {'c': List(List(Value('int64')))}}` | 2025-07-09T09:53:39 | 2025-07-09T09:56:29 | 2025-07-09T09:56:28 | https://github.com/huggingface/datasets/pull/7672 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7672",
"html_url": "https://github.com/huggingface/datasets/pull/7672",
"diff_url": "https://github.com/huggingface/datasets/pull/7672.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7672.patch",
"merged_at": "2025-07-09T09:56... | 7,672 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,213,223,886 | Mapping function not working if the first example is returned as None | closed | ### Describe the bug
https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
Here we can see the writer is initialized on `i==0`. However, there can be cases where in the user mapping function, the first example is filtered out (length cons... | 2025-07-08T17:07:47 | 2025-07-09T12:30:32 | 2025-07-09T12:30:32 | https://github.com/huggingface/datasets/issues/7671 | null | 7,671 | false | [
"Hi, map() always expect an output.\n\nIf you wish to filter examples, you should use filter(), in your case it could be something like this:\n\n```python\nds = ds.map(my_processing_function).filter(ignore_long_prompts)\n```",
"Realized this! Thanks a lot, I will close this issue then."
] |
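The suggested `ds.map(my_processing_function).filter(ignore_long_prompts)` chain has the same shape as plain Python map-then-filter; a minimal stdlib sketch with hypothetical functions (not the reporter's actual pipeline):

```python
def process(example):
    """Hypothetical map step: normalize the prompt text. Always returns an
    example; it never returns None."""
    example = dict(example)
    example["prompt"] = example["prompt"].strip()
    return example

def short_enough(example, max_len=10):
    """Hypothetical filter step: keep only short prompts."""
    return len(example["prompt"]) <= max_len

data = [{"prompt": "  hello  "}, {"prompt": "x" * 50}]
# map() transforms every example; filter() then drops the unwanted ones.
kept = [ex for ex in map(process, data) if short_enough(ex)]
print(kept)  # [{'prompt': 'hello'}]
```

The key point from the resolution above is the separation of concerns: the map step must emit an output for every input, and dropping examples is the filter step's job.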
3,208,962,372 | Fix audio bytes | closed | null | 2025-07-07T13:05:15 | 2025-07-07T13:07:47 | 2025-07-07T13:05:33 | https://github.com/huggingface/datasets/pull/7670 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7670",
"html_url": "https://github.com/huggingface/datasets/pull/7670",
"diff_url": "https://github.com/huggingface/datasets/pull/7670.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7670.patch",
"merged_at": "2025-07-07T13:05... | 7,670 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7670). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,203,541,091 | How can I add my custom data to huggingface datasets | open | I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that. | 2025-07-04T19:19:54 | 2025-07-05T18:19:37 | null | https://github.com/huggingface/datasets/issues/7669 | null | 7,669 | false | [
"Hey @xiagod \n\nThe easiest way to add your custom data to Hugging Face Datasets is to use the built-in load_dataset function with your local files. Some examples include:\n\nCSV files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\")\n\nJSON or JSONL files:\nfrom dat... |
3,199,039,322 | Broken EXIF crash the whole program | open | ### Describe the bug
When parsing this image in the ImageNet1K dataset, the `datasets` library crashes the whole training process just because it is unable to parse an invalid EXIF tag.

### Steps to reproduce the bug
Use the `datasets.Image.decod... | 2025-07-03T11:24:15 | 2025-07-03T12:27:16 | null | https://github.com/huggingface/datasets/issues/7668 | null | 7,668 | false | [
"There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)"
] |
3,196,251,707 | Fix infer list of images | closed | cc @kashif | 2025-07-02T15:07:58 | 2025-07-02T15:10:28 | 2025-07-02T15:08:03 | https://github.com/huggingface/datasets/pull/7667 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7667",
"html_url": "https://github.com/huggingface/datasets/pull/7667",
"diff_url": "https://github.com/huggingface/datasets/pull/7667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7667.patch",
"merged_at": "2025-07-02T15:08... | 7,667 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7667). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,196,220,722 | Backward compat list feature | closed | cc @kashif | 2025-07-02T14:58:00 | 2025-07-02T15:00:37 | 2025-07-02T14:59:40 | https://github.com/huggingface/datasets/pull/7666 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7666",
"html_url": "https://github.com/huggingface/datasets/pull/7666",
"diff_url": "https://github.com/huggingface/datasets/pull/7666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7666.patch",
"merged_at": "2025-07-02T14:59... | 7,666 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,193,239,955 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | closed | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action:... | 2025-07-01T17:14:53 | 2025-07-01T17:17:48 | 2025-07-01T17:17:48 | https://github.com/huggingface/datasets/issues/7665 | null | 7,665 | false | [
"Somehow I created the issue twice🙈 This one is an exact duplicate of #7664."
] |
3,193,239,035 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | open | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action:... | 2025-07-01T17:14:32 | 2025-07-09T13:14:11 | null | https://github.com/huggingface/datasets/issues/7664 | null | 7,664 | false | [
"Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stack... |
3,192,582,371 | Custom metadata filenames | closed | example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main
To make multiple subsets for an imagefolder (one metadata file per subset), e.g.
```yaml
configs:
- config_name: default
metadata_filenames:
- metadata.csv
- config_name: other
metadata_filenames:
... | 2025-07-01T13:50:36 | 2025-07-01T13:58:41 | 2025-07-01T13:58:39 | https://github.com/huggingface/datasets/pull/7663 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7663",
"html_url": "https://github.com/huggingface/datasets/pull/7663",
"diff_url": "https://github.com/huggingface/datasets/pull/7663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7663.patch",
"merged_at": "2025-07-01T13:58... | 7,663 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7663). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,190,805,531 | Applying map after transform with multiprocessing will cause OOM | open | ### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I f... | 2025-07-01T05:45:57 | 2025-07-10T06:17:40 | null | https://github.com/huggingface/datasets/issues/7662 | null | 7,662 | false | [
"Hi ! `add_column` loads the full column data in memory:\n\nhttps://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021\n\na workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a ... |
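The workaround described in the comment above, producing the new column inside a batched map so that only one batch of new values is materialized at a time, can be sketched with plain dict batches (a hypothetical stand-in, not the real 30TB pipeline or the `datasets` internals):

```python
def map_with_new_column(batches, value_fn):
    """Attach a computed column batch by batch. Only one batch of the new
    column's data exists in memory at any time, unlike add_column, which
    would load the full column up front."""
    i = 0  # global row index across batches
    for batch in batches:
        n = len(next(iter(batch.values())))  # rows in this batch
        out = dict(batch)
        out["new_col"] = [value_fn(i + k) for k in range(n)]
        i += n
        yield out

batches = [{"text": ["a", "b"]}, {"text": ["c"]}]
print(list(map_with_new_column(batches, lambda i: i * 10)))
# [{'text': ['a', 'b'], 'new_col': [0, 10]}, {'text': ['c'], 'new_col': [20]}]
```

With `datasets`, the equivalent idea is returning the extra key from the function passed to a batched `map()` instead of calling `add_column` first.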
3,190,408,237 | fix del tqdm lock error | open | fixes https://github.com/huggingface/datasets/issues/7660 | 2025-07-01T02:04:02 | 2025-07-08T01:38:46 | null | https://github.com/huggingface/datasets/pull/7661 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7661",
"html_url": "https://github.com/huggingface/datasets/pull/7661",
"diff_url": "https://github.com/huggingface/datasets/pull/7661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7661.patch",
"merged_at": null
} | 7,661 | true | [] |
3,189,028,251 | AttributeError: type object 'tqdm' has no attribute '_lock' | open | ### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to f... | 2025-06-30T15:57:16 | 2025-07-03T15:14:27 | null | https://github.com/huggingface/datasets/issues/7660 | null | 7,660 | false | [
"Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n ... |
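The guard sketched in the comment above can be shown self-contained at the instance level (toy classes, not the actual tqdm hierarchy): deleting an attribute that was never set is swallowed instead of raising AttributeError.

```python
class Base:
    """Stand-in for the original class whose attribute may be missing."""

class SafeDel(Base):
    # Mirror the pattern from the comment: attempt the normal deletion,
    # but ignore the AttributeError raised when the attribute is absent.
    def __delattr__(self, attr):
        try:
            super().__delattr__(attr)
        except AttributeError:
            pass

obj = SafeDel()
obj.lock = object()
del obj.lock       # normal deletion still works
del obj.never_set  # would raise AttributeError on Base; silently ignored here
print("ok")  # ok
```

Note the original report concerns a class-level attribute (`tqdm._lock`); intercepting deletion at the class level would require the same override on a metaclass, which this instance-level sketch deliberately sidesteps.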
3,187,882,217 | Update the beans dataset link in Preprocess | closed | In the Preprocess tutorial, the to "the beans dataset" is incorrect. Fixed. | 2025-06-30T09:58:44 | 2025-07-07T08:38:19 | 2025-07-01T14:01:42 | https://github.com/huggingface/datasets/pull/7659 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7659",
"html_url": "https://github.com/huggingface/datasets/pull/7659",
"diff_url": "https://github.com/huggingface/datasets/pull/7659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7659.patch",
"merged_at": "2025-07-01T14:01... | 7,659 | true | [] |
3,187,800,504 | Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None | closed | This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_name... | 2025-06-30T09:31:12 | 2025-07-01T16:26:30 | 2025-07-01T16:26:12 | https://github.com/huggingface/datasets/pull/7658 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7658",
"html_url": "https://github.com/huggingface/datasets/pull/7658",
"diff_url": "https://github.com/huggingface/datasets/pull/7658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7658.patch",
"merged_at": null
} | 7,658 | true | [
"Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!",
"we can't know in advance the `features` after map() (it transforms the data !), so you can reuse the `fea... |
3,186,036,016 | feat: add subset_name as alias for name in load_dataset | open | fixes #7637
This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users.
Supports `subset_name` in `load_dataset()`
Adds `.subset_name` propert... | 2025-06-29T10:39:00 | 2025-07-18T17:45:41 | null | https://github.com/huggingface/datasets/pull/7657 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7657",
"html_url": "https://github.com/huggingface/datasets/pull/7657",
"diff_url": "https://github.com/huggingface/datasets/pull/7657.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7657.patch",
"merged_at": null
} | 7,657 | true | [] |
3,185,865,686 | fix(iterable): ensure MappedExamplesIterable supports state_dict for resume | open | Fixes #7630
### Problem
When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable.
### What Thi... | 2025-06-29T07:50:13 | 2025-06-29T07:50:13 | null | https://github.com/huggingface/datasets/pull/7656 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7656",
"html_url": "https://github.com/huggingface/datasets/pull/7656",
"diff_url": "https://github.com/huggingface/datasets/pull/7656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7656.patch",
"merged_at": null
} | 7,656 | true | [] |
3,185,382,105 | Added specific use cases in Improve Performace | open | Fixes #2494 | 2025-06-28T19:00:32 | 2025-06-28T19:00:32 | null | https://github.com/huggingface/datasets/pull/7655 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7655",
"html_url": "https://github.com/huggingface/datasets/pull/7655",
"diff_url": "https://github.com/huggingface/datasets/pull/7655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7655.patch",
"merged_at": null
} | 7,655 | true | [] |
3,184,770,992 | fix(load): strip deprecated use_auth_token from config_kwargs | open | Fixes #7504
This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`.
**What was happening:**
Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have... | 2025-06-28T09:20:21 | 2025-06-28T09:20:21 | null | https://github.com/huggingface/datasets/pull/7654 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7654",
"html_url": "https://github.com/huggingface/datasets/pull/7654",
"diff_url": "https://github.com/huggingface/datasets/pull/7654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7654.patch",
"merged_at": null
} | 7,654 | true | [] |
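The fix can be sketched as a pre-processing step on the keyword arguments. This is a hypothetical helper, not the PR's actual code: it pops the deprecated key so it never reaches `BuilderConfig`, and forwards the value under the modern `token` name:

```python
import warnings

def strip_deprecated_kwargs(config_kwargs):
    # Hypothetical sketch: remove 'use_auth_token' before the config is built,
    # so BuilderConfig never sees an argument it does not define.
    config_kwargs = dict(config_kwargs)
    if "use_auth_token" in config_kwargs:
        warnings.warn(
            "'use_auth_token' is deprecated; pass 'token' instead.",
            FutureWarning,
        )
        config_kwargs.setdefault("token", config_kwargs.pop("use_auth_token"))
    return config_kwargs
```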
3,184,746,093 | feat(load): fallback to `load_from_disk()` when loading a saved dataset directory | open | ### Related Issue
Fixes #7503
Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets.
---
### What does this PR do?
This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `p... | 2025-06-28T08:47:36 | 2025-06-28T08:47:36 | null | https://github.com/huggingface/datasets/pull/7653 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7653",
"html_url": "https://github.com/huggingface/datasets/pull/7653",
"diff_url": "https://github.com/huggingface/datasets/pull/7653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7653.patch",
"merged_at": null
} | 7,653 | true | [] |
3,183,372,055 | Add columns support to JSON loader for selective key filtering | open | Fixes #7594
This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files — similar to how the columns=... argument works for Parquet.
As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading v... | 2025-06-27T16:18:42 | 2025-07-14T10:41:53 | null | https://github.com/huggingface/datasets/pull/7652 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7652",
"html_url": "https://github.com/huggingface/datasets/pull/7652",
"diff_url": "https://github.com/huggingface/datasets/pull/7652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7652.patch",
"merged_at": null
} | 7,652 | true | [
"I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.",
"> I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.\r\n\r\nHi @aihao2000, Just... |
3,182,792,775 | fix: Extended metadata file names for folder_based_builder | open | Fixes #7650.
The metadata files generated by the `DatasetDict.save_to_disk` function are not included in the folder_based_builder's metadata list, causing issues when only 1 actual data file is present, as described in issue #7650.
This PR adds these filenames to the builder, allowing correct loading. | 2025-06-27T13:12:11 | 2025-06-30T08:19:37 | null | https://github.com/huggingface/datasets/pull/7651 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7651",
"html_url": "https://github.com/huggingface/datasets/pull/7651",
"diff_url": "https://github.com/huggingface/datasets/pull/7651.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7651.patch",
"merged_at": null
} | 7,651 | true | [] |
3,182,745,315 | `load_dataset` defaults to json file format for datasets with 1 shard | open | ### Describe the bug
I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation pair is small enough to fit into a single shard and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for st... | 2025-06-27T12:54:25 | 2025-06-27T12:54:25 | null | https://github.com/huggingface/datasets/issues/7650 | null | 7,650 | false | [] |
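One possible fix for this misdetection, in the spirit of PR #7651 above, is to exclude known `save_to_disk` metadata files before voting on the file extension, so a lone `.arrow` shard is not outvoted by its `.json` metadata. The function name and marker-file list below are illustrative assumptions, not the library's actual logic:

```python
import os
from collections import Counter

METADATA_FILENAMES = {"dataset_info.json", "state.json", "dataset_dict.json"}

def infer_data_format(files):
    # Sketch: ignore metadata files, then pick the most common extension.
    extensions = Counter(
        os.path.splitext(f)[1].lstrip(".")
        for f in files
        if os.path.basename(f) not in METADATA_FILENAMES
    )
    return extensions.most_common(1)[0][0] if extensions else None
```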
3,181,481,444 | Enable parallel shard upload in push_to_hub() using num_proc | closed | Fixes #7591
### Add num_proc support to `push_to_hub()` for parallel shard upload
This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`.
📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_p... | 2025-06-27T05:59:03 | 2025-07-07T18:13:53 | 2025-07-07T18:13:52 | https://github.com/huggingface/datasets/pull/7649 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7649",
"html_url": "https://github.com/huggingface/datasets/pull/7649",
"diff_url": "https://github.com/huggingface/datasets/pull/7649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7649.patch",
"merged_at": null
} | 7,649 | true | [
"it was already added in https://github.com/huggingface/datasets/pull/7606 actually ^^'",
"Oh sure sure, Closing this one as redundant."
] |
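The fan-out described in this (redundant) PR and in the merged #7606 can be sketched with `concurrent.futures`. `upload_one` is a hypothetical stand-in for the per-shard upload call:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_shards(shards, upload_one, num_proc=1):
    # Sketch: upload shards in parallel when num_proc > 1, preserving order.
    if num_proc is None or num_proc <= 1:
        return [upload_one(shard) for shard in shards]
    with ThreadPoolExecutor(max_workers=num_proc) as pool:
        return list(pool.map(upload_one, shards))
```

`ThreadPoolExecutor.map` returns results in input order, so the shard index to result mapping is stable regardless of which upload finishes first.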
3,181,409,736 | Fix misleading add_column() usage example in docstring | closed | Fixes #7611
This PR fixes the usage example in the Dataset.add_column() docstring, which previously implied that add_column() modifies the dataset in-place.
Why:
The method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change.
This should make... | 2025-06-27T05:27:04 | 2025-07-20T16:07:49 | 2025-07-17T13:14:17 | https://github.com/huggingface/datasets/pull/7648 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7648",
"html_url": "https://github.com/huggingface/datasets/pull/7648",
"diff_url": "https://github.com/huggingface/datasets/pull/7648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7648.patch",
"merged_at": "2025-07-17T13:14... | 7,648 | true | [
"I believe there are other occurences of cases like this, like select_columns, select, filter, shard and flatten, could you also fix the docstring for them as well before we merge ?",
"Done! @lhoestq! I've updated the docstring examples for the following methods to clarify that they return new datasets instead of... |
3,178,952,517 | loading mozilla-foundation--common_voice_11_0 fails | open | ### Describe the bug
Hello everyone,
I am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/py... | 2025-06-26T12:23:48 | 2025-07-10T14:49:30 | null | https://github.com/huggingface/datasets/issues/7647 | null | 7,647 | false | [
"@claude Could you please address this issue",
"kinda related: https://github.com/huggingface/datasets/issues/7675"
] |
3,178,036,854 | Introduces automatic subset-level grouping for folder-based dataset builders #7066 | open | Fixes #7066
This PR introduces automatic **subset-level grouping** for folder-based dataset builders by:
1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes).
2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one... | 2025-06-26T07:01:37 | 2025-07-14T10:42:56 | null | https://github.com/huggingface/datasets/pull/7646 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7646",
"html_url": "https://github.com/huggingface/datasets/pull/7646",
"diff_url": "https://github.com/huggingface/datasets/pull/7646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7646.patch",
"merged_at": null
} | 7,646 | true | [
"It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!",
"Hi ! I believ... |
3,176,810,164 | `ClassLabel` docs: Correct value for unknown labels | open | This small change fixes the documentation to to be compliant with what happens in `encode_example`.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129 | 2025-06-25T20:01:35 | 2025-06-25T20:01:35 | null | https://github.com/huggingface/datasets/pull/7645 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7645",
"html_url": "https://github.com/huggingface/datasets/pull/7645",
"diff_url": "https://github.com/huggingface/datasets/pull/7645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7645.patch",
"merged_at": null
} | 7,645 | true | [] |
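The behavior the doc fix describes can be mirrored with a toy class. This is a sketch of the range check in the linked `encode_example` snippet, where `-1` is the accepted value for an unknown/missing label, assuming the check reads as in the referenced lines:

```python
class TinyClassLabel:
    """Toy sketch of ClassLabel: -1 is allowed as the unknown-label value."""
    def __init__(self, names):
        self.names = list(names)
        self.num_classes = len(self.names)
        self._str2int = {name: i for i, name in enumerate(self.names)}

    def encode_example(self, value):
        if isinstance(value, str):
            value = self._str2int[value]
        if not -1 <= value < self.num_classes:
            raise ValueError(f"Class label {value} out of range for {self.num_classes} classes")
        return value
```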
3,176,363,492 | fix sequence ci | closed | fix error from https://github.com/huggingface/datasets/pull/7643 | 2025-06-25T17:07:55 | 2025-06-25T17:10:30 | 2025-06-25T17:08:01 | https://github.com/huggingface/datasets/pull/7644 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7644",
"html_url": "https://github.com/huggingface/datasets/pull/7644",
"diff_url": "https://github.com/huggingface/datasets/pull/7644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7644.patch",
"merged_at": "2025-06-25T17:08... | 7,644 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,176,354,431 | Backward compat sequence instance | closed | useful to still get `isinstance(Sequence(Value("int64")), Sequence)`for downstream libs like evaluate | 2025-06-25T17:05:09 | 2025-06-25T17:07:40 | 2025-06-25T17:05:44 | https://github.com/huggingface/datasets/pull/7643 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7643",
"html_url": "https://github.com/huggingface/datasets/pull/7643",
"diff_url": "https://github.com/huggingface/datasets/pull/7643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7643.patch",
"merged_at": "2025-06-25T17:05... | 7,643 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,176,025,890 | fix length for ci | closed | null | 2025-06-25T15:10:38 | 2025-06-25T15:11:53 | 2025-06-25T15:11:51 | https://github.com/huggingface/datasets/pull/7642 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7642",
"html_url": "https://github.com/huggingface/datasets/pull/7642",
"diff_url": "https://github.com/huggingface/datasets/pull/7642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7642.patch",
"merged_at": "2025-06-25T15:11... | 7,642 | true | [] |
3,175,953,405 | update docs and docstrings | closed | null | 2025-06-25T14:48:58 | 2025-06-25T14:51:46 | 2025-06-25T14:49:33 | https://github.com/huggingface/datasets/pull/7641 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7641",
"html_url": "https://github.com/huggingface/datasets/pull/7641",
"diff_url": "https://github.com/huggingface/datasets/pull/7641.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7641.patch",
"merged_at": "2025-06-25T14:49... | 7,641 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,175,914,924 | better features repr | closed | following the addition of List in #7634
before:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value(dtype='string', id=None),
'metadata:transcript': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'transcript': Value(dtype='string', id=None),
'wor... | 2025-06-25T14:37:32 | 2025-06-25T14:46:47 | 2025-06-25T14:46:45 | https://github.com/huggingface/datasets/pull/7640 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7640",
"html_url": "https://github.com/huggingface/datasets/pull/7640",
"diff_url": "https://github.com/huggingface/datasets/pull/7640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7640.patch",
"merged_at": "2025-06-25T14:46... | 7,640 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,175,616,169 | fix save_infos | closed | null | 2025-06-25T13:16:26 | 2025-06-25T13:19:33 | 2025-06-25T13:16:33 | https://github.com/huggingface/datasets/pull/7639 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7639",
"html_url": "https://github.com/huggingface/datasets/pull/7639",
"diff_url": "https://github.com/huggingface/datasets/pull/7639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7639.patch",
"merged_at": "2025-06-25T13:16... | 7,639 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,172,645,391 | Add ignore_decode_errors option to Image feature for robust decoding #7612 | open | This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612.
## 🔧 What was added
- A new boolean field: `ignore_decode_errors` (default: `False`)
- If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error
... | 2025-06-24T16:47:51 | 2025-07-04T07:07:30 | null | https://github.com/huggingface/datasets/pull/7638 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7638",
"html_url": "https://github.com/huggingface/datasets/pull/7638",
"diff_url": "https://github.com/huggingface/datasets/pull/7638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7638.patch",
"merged_at": null
} | 7,638 | true | [
"cc @lhoestq",
"I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n\r\nThe [`deco... |
3,171,883,522 | Introduce subset_name as an alias of config_name | open | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call... | 2025-06-24T12:49:01 | 2025-07-01T16:08:33 | null | https://github.com/huggingface/datasets/issues/7637 | null | 7,637 | false | [
"I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly.",
"I've submitted PR [#7657](https://github.com/huggingface/datasets/pull/7657) to introduce subset_name as a user-facing al... |
3,170,878,167 | "open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable" | open | When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"
```python
print("open" in globals()["__builtins__"])
```
Traceback (most recent call last):
File "./main.py", line 2, in <module>
print("open" in globals()["__builtins__"])
^^^^^^^^^^^^^^^^^^^^^^
TypeE... | 2025-06-24T08:09:39 | 2025-07-10T04:13:16 | null | https://github.com/huggingface/datasets/issues/7636 | null | 7,636 | false | [
"@kuanyan9527 Your query is indeed valid. Following could be its reasoning:\n\nQuoting from https://stackoverflow.com/a/11181607:\n\"By default, when in the `__main__` module,` __builtins__` is the built-in module `__builtin__` (note: no 's'); when in any other module, `__builtins__` is an alias for the dictionary ... |
3,170,486,408 | Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0) | open | This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference.
This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` inst... | 2025-06-24T06:16:48 | 2025-06-24T06:16:48 | null | https://github.com/huggingface/datasets/pull/7635 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7635",
"html_url": "https://github.com/huggingface/datasets/pull/7635",
"diff_url": "https://github.com/huggingface/datasets/pull/7635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7635.patch",
"merged_at": null
} | 7,635 | true | [] |
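The intended inference rule can be sketched without pandas or Arrow: the presence of any Python `float`, even an integer-like `0.0`, should keep the whole column as `float64`. The function name and dtype strings below are illustrative, not the PR's actual code:

```python
def infer_column_dtype(values):
    # Sketch: never downcast floats that happen to be integer-like.
    non_null = [v for v in values if v is not None]
    if any(isinstance(v, float) for v in non_null):
        return "float64"
    if non_null and all(type(v) is int for v in non_null):
        return "int64"
    return "object"
```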
3,169,389,653 | Replace Sequence by List | closed | Sequence is just a utility that we need to keep for backward compatibility. And `[ ]` was used instead but doesn't allow passing the length of the list.
This PR removes most mentions of Sequence and usage of `[ ]` and defines a proper List type instead.
before: `Sequence(Value("int64"))` or `[Value("int64")]`
no... | 2025-06-23T20:35:48 | 2025-06-25T13:59:13 | 2025-06-25T13:59:11 | https://github.com/huggingface/datasets/pull/7634 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7634",
"html_url": "https://github.com/huggingface/datasets/pull/7634",
"diff_url": "https://github.com/huggingface/datasets/pull/7634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7634.patch",
"merged_at": "2025-06-25T13:59... | 7,634 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,168,399,637 | Proposal: Small Tamil Discourse Coherence Dataset. | open | I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.
- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence
I’ll use GitHub’s web edit... | 2025-06-23T14:24:40 | 2025-06-23T14:24:40 | null | https://github.com/huggingface/datasets/issues/7633 | null | 7,633 | false | [] |
3,168,283,589 | Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets | open | ### Feature request
Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples a... | 2025-06-23T13:49:24 | 2025-07-08T06:52:53 | null | https://github.com/huggingface/datasets/issues/7632 | null | 7,632 | false | [
"Hi! This is now handled in PR #7638",
"Thank you for implementing the suggestion it would be great help in our use case. "
] |
github issues