Dataset columns (name: dtype, range):
id: int64 (min 1.9B, max 3.25B)
title: string (length 2 to 244)
state: string (2 values)
body: string (length 3 to 58.6k)
created_at: timestamp[s] (2023-09-15 14:23:33 to 2025-07-22 09:33:54)
updated_at: timestamp[s] (2023-09-18 16:20:09 to 2025-07-22 10:44:03)
closed_at: timestamp[s] (2023-09-18 16:20:09 to 2025-07-19 22:45:08)
html_url: string (length 49 to 51)
pull_request: dict
number: int64 (min 6.24k, max 7.7k)
is_pull_request: bool (2 classes)
comments: list (length 0 to 24)
1,959,004,835
Incorrect example code in 'Create a dataset' docs
closed
### Describe the bug On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect. Currently, examples are: ``` python from datasets import ImageFolder dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon") ``` and ``` python...
2023-10-24T11:01:21
2023-10-25T13:05:21
2023-10-25T13:05:21
https://github.com/huggingface/datasets/issues/6347
null
6,347
false
[ "This was fixed in https://github.com/huggingface/datasets/pull/6247. You can find the fix in the `main` version of the docs", "Ah great, thanks :)" ]
1,958,777,076
Fix UnboundLocalError if preprocessing returns an empty list
closed
If this tokenization function is used with IterableDatasets and no sample is as big as the context length, `input_batch` will be an empty list. ``` def tokenize(batch, tokenizer, context_length): outputs = tokenizer( batch["text"], truncation=True, max_length=context_length, r...
2023-10-24T08:38:43
2023-10-25T17:39:17
2023-10-25T16:36:38
https://github.com/huggingface/datasets/pull/6346
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6346", "html_url": "https://github.com/huggingface/datasets/pull/6346", "diff_url": "https://github.com/huggingface/datasets/pull/6346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6346.patch", "merged_at": "2023-10-25T16:36...
6,346
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
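The bug fixed by PR #6346 can be sketched without the library: if `input_batch` is only assigned inside the filtering loop, a batch where no sample reaches the context length leaves the name unbound. Initializing the list up front makes an empty result a valid empty batch. The stub tokenizer below is hypothetical, for illustration only.

```python
def tokenize(batch, tokenizer, context_length):
    outputs = tokenizer(batch["text"], max_length=context_length)
    # Initialize before the loop so an empty result is an empty batch,
    # not an UnboundLocalError when input_batch is referenced below.
    input_batch = []
    for length, input_ids in zip(outputs["length"], outputs["input_ids"]):
        if length == context_length:
            input_batch.append(input_ids)
    return {"input_ids": input_batch}

def stub_tokenizer(texts, max_length):
    # Hypothetical stand-in for a real tokenizer: one id per character.
    ids = [[ord(c) for c in t][:max_length] for t in texts]
    return {"input_ids": ids, "length": [len(x) for x in ids]}

result = tokenize({"text": ["ab", "cd"]}, stub_tokenizer, context_length=8)
print(result)  # no sample reaches length 8, so the batch is empty
```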
1,957,707,870
support squad structure datasets using a YAML parameter
open
### Feature request Since the squad structure is widely used, I think it could be beneficial to support it using a YAML parameter. could you implement automatic data loading of squad-like data using squad JSON format, to read it from JSON files and view it in the correct squad structure. The dataset structure should...
2023-10-23T17:55:37
2023-10-23T17:55:37
null
https://github.com/huggingface/datasets/issues/6345
null
6,345
false
[]
1,957,412,169
set dev version
closed
null
2023-10-23T15:13:28
2023-10-23T15:24:31
2023-10-23T15:13:38
https://github.com/huggingface/datasets/pull/6344
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6344", "html_url": "https://github.com/huggingface/datasets/pull/6344", "diff_url": "https://github.com/huggingface/datasets/pull/6344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6344.patch", "merged_at": "2023-10-23T15:13...
6,344
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6344). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
1,957,370,711
Remove unused argument in `_get_data_files_patterns`
closed
null
2023-10-23T14:54:18
2023-11-16T09:09:42
2023-11-16T09:03:39
https://github.com/huggingface/datasets/pull/6343
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6343", "html_url": "https://github.com/huggingface/datasets/pull/6343", "diff_url": "https://github.com/huggingface/datasets/pull/6343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6343.patch", "merged_at": "2023-11-16T09:03...
6,343
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,957,344,445
Release: 2.14.6
closed
null
2023-10-23T14:43:26
2023-10-23T15:21:54
2023-10-23T15:07:25
https://github.com/huggingface/datasets/pull/6342
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6342", "html_url": "https://github.com/huggingface/datasets/pull/6342", "diff_url": "https://github.com/huggingface/datasets/pull/6342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6342.patch", "merged_at": "2023-10-23T15:07...
6,342
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,956,917,893
Release 2.14.5
closed
(wrong release number - I was continuing the 2.14 branch but 2.14.5 was released from `main`)
2023-10-23T11:10:22
2023-10-23T14:20:46
2023-10-23T11:12:40
https://github.com/huggingface/datasets/pull/6340
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6340", "html_url": "https://github.com/huggingface/datasets/pull/6340", "diff_url": "https://github.com/huggingface/datasets/pull/6340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6340.patch", "merged_at": null }
6,340
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6340). All of your documentation changes will be reflected on that endpoint." ]
1,956,912,627
minor release step improvement
closed
null
2023-10-23T11:07:04
2023-11-07T10:38:54
2023-11-07T10:32:41
https://github.com/huggingface/datasets/pull/6339
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6339", "html_url": "https://github.com/huggingface/datasets/pull/6339", "diff_url": "https://github.com/huggingface/datasets/pull/6339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6339.patch", "merged_at": "2023-11-07T10:32...
6,339
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,956,886,072
pin fsspec before it switches to glob.glob
closed
null
2023-10-23T10:50:54
2024-01-11T06:32:56
2023-10-23T10:51:52
https://github.com/huggingface/datasets/pull/6338
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6338", "html_url": "https://github.com/huggingface/datasets/pull/6338", "diff_url": "https://github.com/huggingface/datasets/pull/6338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6338.patch", "merged_at": null }
6,338
true
[ "closing in favor of https://github.com/huggingface/datasets/pull/6337", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6338). All of your documentation changes will be reflected on that endpoint." ]
1,956,875,259
Pin supported upper version of fsspec
closed
Pin upper version of `fsspec` to avoid disruptions introduced by breaking changes (and the need of urgent patch releases with hotfixes) on each release on their side. See: - #6331 - #6210 - #5731 - #5617 - #5447 I propose that we explicitly test, introduce fixes and support each new `fsspec` version release. ...
2023-10-23T10:44:16
2023-10-23T12:13:20
2023-10-23T12:04:36
https://github.com/huggingface/datasets/pull/6337
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6337", "html_url": "https://github.com/huggingface/datasets/pull/6337", "diff_url": "https://github.com/huggingface/datasets/pull/6337.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6337.patch", "merged_at": "2023-10-23T12:04...
6,337
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,956,827,232
unpin-fsspec
closed
Close #6333.
2023-10-23T10:16:46
2024-02-07T12:41:35
2023-10-23T10:17:48
https://github.com/huggingface/datasets/pull/6336
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6336", "html_url": "https://github.com/huggingface/datasets/pull/6336", "diff_url": "https://github.com/huggingface/datasets/pull/6336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6336.patch", "merged_at": "2023-10-23T10:17...
6,336
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6336). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
1,956,740,818
Support fsspec 2023.10.0
closed
Fix #6333.
2023-10-23T09:29:17
2024-01-11T06:33:35
2023-11-14T14:17:40
https://github.com/huggingface/datasets/pull/6335
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6335", "html_url": "https://github.com/huggingface/datasets/pull/6335", "diff_url": "https://github.com/huggingface/datasets/pull/6335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6335.patch", "merged_at": null }
6,335
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,956,719,774
datasets.filesystems: fix is_remote_filesystems
closed
Close #6330, close #6333. `fsspec.implementations.LocalFilesystem.protocol` was changed from `str` "file" to `tuple[str,...]` ("file", "local") in `fsspec>=2023.10.0` This commit supports both styles.
2023-10-23T09:17:54
2024-02-07T12:41:15
2023-10-23T10:14:10
https://github.com/huggingface/datasets/pull/6334
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6334", "html_url": "https://github.com/huggingface/datasets/pull/6334", "diff_url": "https://github.com/huggingface/datasets/pull/6334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6334.patch", "merged_at": "2023-10-23T10:14...
6,334
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
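The compatibility shim from PR #6334 can be sketched in a few lines: `fsspec>=2023.10.0` changed `LocalFileSystem.protocol` from the string `"file"` to the tuple `("file", "local")`, so any membership check must accept both shapes. This is an illustrative sketch, not the library's actual implementation.

```python
def is_local_protocol(protocol):
    # Normalize: older fsspec exposes a str, newer fsspec a tuple of strs.
    protocols = (protocol,) if isinstance(protocol, str) else tuple(protocol)
    return "file" in protocols or "local" in protocols

print(is_local_protocol("file"))             # old fsspec style -> True
print(is_local_protocol(("file", "local")))  # new fsspec style -> True
print(is_local_protocol("s3"))               # remote filesystem -> False
```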
1,956,714,423
Support fsspec 2023.10.0
closed
Once root issue is fixed, remove temporary pin of fsspec < 2023.10.0 introduced by: - #6331 Related to issue: - #6330 As @ZachNagengast suggested, the issue might be related to: - https://github.com/fsspec/filesystem_spec/pull/1381
2023-10-23T09:14:53
2024-02-07T12:39:58
2024-02-07T12:39:58
https://github.com/huggingface/datasets/issues/6333
null
6,333
false
[ "Hi @albertvillanova @lhoestq \r\n\r\nI believe the pull request that pins the fsspec version (https://github.com/huggingface/datasets/pull/6331) was merged by mistake. Another fix for the issue was merged on the same day an hour apart. See https://github.com/huggingface/datasets/pull/6334\r\n\r\nI'm now having an ...
1,956,697,328
Replace deprecated license_file in setup.cfg
closed
Replace deprecated license_file in `setup.cfg`. See: https://github.com/huggingface/datasets/actions/runs/6610930650/job/17953825724?pr=6331 ``` /tmp/pip-build-env-a51hls20/overlay/lib/python3.8/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg` !! ...
2023-10-23T09:05:26
2023-11-07T08:23:10
2023-11-07T08:09:06
https://github.com/huggingface/datasets/pull/6332
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6332", "html_url": "https://github.com/huggingface/datasets/pull/6332", "diff_url": "https://github.com/huggingface/datasets/pull/6332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6332.patch", "merged_at": "2023-11-07T08:09...
6,332
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,956,671,256
Temporarily pin fsspec < 2023.10.0
closed
Temporarily pin fsspec < 2023.10.0 until permanent solution is found. Hot fix #6330. See: https://github.com/huggingface/datasets/actions/runs/6610904287/job/17953774987 ``` ... ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NotImplementedError: Loading a dataset cached in a LocalFileS...
2023-10-23T08:51:50
2023-10-23T09:26:42
2023-10-23T09:17:55
https://github.com/huggingface/datasets/pull/6331
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6331", "html_url": "https://github.com/huggingface/datasets/pull/6331", "diff_url": "https://github.com/huggingface/datasets/pull/6331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6331.patch", "merged_at": "2023-10-23T09:17...
6,331
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,956,053,294
Latest fsspec==2023.10.0 issue with streaming datasets
closed
### Describe the bug Loading a streaming dataset with this version of fsspec fails with the following error: `NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.` I suspect the issue is with this PR https://github.com/fsspec/filesystem_spec/pull/1381 ### Steps ...
2023-10-22T20:57:10
2025-06-09T22:00:16
2023-10-23T09:17:56
https://github.com/huggingface/datasets/issues/6330
null
6,330
false
[ "I also encountered a similar error below.\r\nAppreciate the team could shed some light on this issue.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n[/home/ubuntu/work/EveryDream2trainer/pre...
1,955,858,020
Text-to-speech networks first map the given text to an intermediate representation
closed
Text-to-speech networks first map the given text to an intermediate representation
2023-10-22T11:07:46
2023-10-23T09:22:58
2023-10-23T09:22:58
https://github.com/huggingface/datasets/issues/6329
null
6,329
false
[]
1,955,857,904
Text-to-speech networks first map the given text to an intermediate representation
closed
null
2023-10-22T11:07:21
2023-10-23T09:22:38
2023-10-23T09:22:38
https://github.com/huggingface/datasets/issues/6328
null
6,328
false
[ "Text-to-speech networks first map the given text to an intermediate representation" ]
1,955,470,755
FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)`
closed
### Describe the bug Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but `FileNotFoundError` occurs. ### Steps to reproduce the bug I've downloaded the dataset and save it to the cache dir in advance. My hope is loadi...
2023-10-21T12:27:03
2023-10-23T18:50:07
2023-10-23T18:50:07
https://github.com/huggingface/datasets/issues/6327
null
6,327
false
[ "You can clone the `togethercomputer/RedPajama-Data-1T-Sample` repo and load the dataset with `load_dataset(\"path/to/cloned_repo\")` to use it offline.", "@mariosasko Thank you for your kind reply! I'll try it as a workaround.\r\nDoes that mean that currently it's not supported to simply load with a short name?"...
1,955,420,536
Create battery_analysis.py
closed
null
2023-10-21T10:07:48
2023-10-23T14:56:20
2023-10-23T14:56:20
https://github.com/huggingface/datasets/pull/6326
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6326", "html_url": "https://github.com/huggingface/datasets/pull/6326", "diff_url": "https://github.com/huggingface/datasets/pull/6326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6326.patch", "merged_at": null }
6,326
true
[]
1,955,420,178
Create battery_analysis.py
closed
null
2023-10-21T10:06:37
2023-10-23T14:55:58
2023-10-23T14:55:58
https://github.com/huggingface/datasets/pull/6325
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6325", "html_url": "https://github.com/huggingface/datasets/pull/6325", "diff_url": "https://github.com/huggingface/datasets/pull/6325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6325.patch", "merged_at": null }
6,325
true
[]
1,955,126,687
Conversion to Arrow fails due to wrong type heuristic
closed
### Describe the bug I have a list of dictionaries with valid/JSON-serializable values. One key is the denominator for a paragraph. In 99.9% of cases its a number, but there are some occurences of '1a', '2b' and so on. If trying to convert this list to a dataset with `Dataset.from_list()`, I always get `ArrowI...
2023-10-20T23:20:58
2023-10-23T20:52:57
2023-10-23T20:52:57
https://github.com/huggingface/datasets/issues/6324
null
6,324
false
[ "Unlike Pandas, Arrow is strict with types, so converting the problematic strings to ints (or ints to strings) to ensure all the values have the same type is the only fix. \r\n\r\nJSON support has been requested in Arrow [here](https://github.com/apache/arrow/issues/32538), but I don't expect this to be implemented...
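As the comment above notes, Arrow is strict about column types, so the fix for rows like `'1a'` mixed in with integers is to normalize the column to one type before calling `Dataset.from_list`. The helper below is hypothetical, showing one way to cast every value of a problematic key to `str`.

```python
def normalize_column(rows, key):
    # Cast the given key to str in every row so Arrow sees one consistent type.
    return [{**row, key: str(row[key])} for row in rows]

rows = [{"paragraph": 1}, {"paragraph": 2}, {"paragraph": "2b"}]
print(normalize_column(rows, "paragraph"))
# The normalized rows can then be passed to datasets.Dataset.from_list.
```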
1,954,245,980
Loading dataset from large GCS bucket very slow since 2.14
open
### Describe the bug Since updating to >2.14 we have very slow access to our parquet files on GCS when loading a dataset (>30 min vs 3s). Our GCS bucket has many objects and resolving globs is very slow. I could track down the problem to this change: https://github.com/huggingface/datasets/blame/bade7af74437347a76083...
2023-10-20T12:59:55
2024-09-03T18:42:33
null
https://github.com/huggingface/datasets/issues/6323
null
6,323
false
[ "I've also encountered this issue recently and want to ask if this has been seen.\r\n\r\n@albertvillanova for visibility - I'm not sure who the right person is to tag, but I noticed you were active recently so perhaps you can direct this to the right person.\r\n\r\nThanks!" ]
1,952,947,461
Fix regex `get_data_files` formatting for base paths
closed
With this pr https://github.com/huggingface/datasets/pull/6309, it is formatting the entire base path into regex, which results in the undesired formatting error `doesn't match the pattern` because of the line in `glob_pattern_to_regex`: `.replace("//", "/")`: - Input: `hf://datasets/...` - Output: `hf:/datasets/...`...
2023-10-19T19:45:10
2023-10-23T14:40:45
2023-10-23T14:31:21
https://github.com/huggingface/datasets/pull/6322
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6322", "html_url": "https://github.com/huggingface/datasets/pull/6322", "diff_url": "https://github.com/huggingface/datasets/pull/6322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6322.patch", "merged_at": "2023-10-23T14:31...
6,322
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "> The reason why I used the the glob_pattern_to_regex in the entire pattern is because otherwise I got an error for Windows local paths: a base_path like 'C:\\\\Users\\\\runneradmin... made the function string_to_dict raise re.error:...
1,952,643,483
Fix typos
closed
null
2023-10-19T16:24:35
2023-10-19T17:18:00
2023-10-19T17:07:35
https://github.com/huggingface/datasets/pull/6321
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6321", "html_url": "https://github.com/huggingface/datasets/pull/6321", "diff_url": "https://github.com/huggingface/datasets/pull/6321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6321.patch", "merged_at": "2023-10-19T17:07...
6,321
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,952,618,316
Dataset slice splits can't load training and validation at the same time
closed
### Describe the bug According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) is should be possible to run the following command: `train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")` to load the train and test sets from the dataset. However ex...
2023-10-19T16:09:22
2023-11-30T16:21:15
2023-11-30T16:21:15
https://github.com/huggingface/datasets/issues/6320
null
6,320
false
[ "The expression \"train+test\" concatenates the splits.\r\n\r\nThe individual splits as separate datasets can be obtained as follows:\r\n```python\r\ntrain_ds, test_ds = load_dataset(\"<dataset_name>\", split=[\"train\", \"test\"])\r\ntrain_10pct_ds, test_10pct_ds = load_dataset(\"<dataset_name>\", split=[\"train[:...
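The split semantics described in the comment can be illustrated with a toy model (this mimics the behavior of `load_dataset`'s `split` argument, not its internals): a `"train+test"` expression concatenates splits into one dataset, while a list of split names returns one dataset per name.

```python
splits = {"train": [1, 2, 3], "test": [4, 5]}

def get_split(expr):
    if isinstance(expr, list):
        # A list of names yields separate datasets, one per split.
        return [splits[name] for name in expr]
    # A "+" expression concatenates the named splits into one dataset.
    combined = []
    for name in expr.split("+"):
        combined.extend(splits[name])
    return combined

print(get_split("train+test"))       # one concatenated dataset
print(get_split(["train", "test"]))  # two separate datasets
```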
1,952,101,717
Datasets.map is severely broken
open
### Describe the bug Regardless of how many cores I used, I have 16 or 32 threads, map slows down to a crawl at around 80% done, lingers maybe until 97% extremely slowly and NEVER finishes the job. It just hangs. After watching this for 27 hours I control-C out of it. Until the end one process appears to be doing s...
2023-10-19T12:19:33
2024-08-08T17:05:08
null
https://github.com/huggingface/datasets/issues/6319
null
6,319
false
[ "Hi! Instead of processing a single example at a time, you should use the batched `map` for the best performance (with `num_proc=1`) - the fast tokenizers can process a batch's samples in parallel in that scenario.\r\n\r\nE.g., the following code in Colab takes an hour to complete:\r\n```python\r\n# !pip install da...
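The batched-`map` advice in the comment can be sketched in plain Python: the mapped function receives a batch (a dict of lists) instead of a single example, so per-call overhead is paid once per batch and a fast tokenizer can process the whole batch at once. This is a toy model of the mechanism, not the library's implementation.

```python
def batched_map(examples, fn, batch_size=2):
    out = []
    for i in range(0, len(examples), batch_size):
        # Build a columnar batch: one call to fn handles batch_size examples.
        batch = {"text": [e["text"] for e in examples[i : i + batch_size]]}
        result = fn(batch)
        out.extend({"n_chars": n} for n in result["n_chars"])
    return out

examples = [{"text": "a"}, {"text": "bb"}, {"text": "ccc"}]
print(batched_map(examples, lambda b: {"n_chars": [len(t) for t in b["text"]]}))
```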
1,952,100,706
Deterministic set hash
closed
Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets. This is useful to get deterministic hashes of tokenizers that use a trie based on python sets. reported in https://github.com/huggingface/datasets/issues/3847
2023-10-19T12:19:13
2023-10-19T16:27:20
2023-10-19T16:16:31
https://github.com/huggingface/datasets/pull/6318
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6318", "html_url": "https://github.com/huggingface/datasets/pull/6318", "diff_url": "https://github.com/huggingface/datasets/pull/6318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6318.patch", "merged_at": "2023-10-19T16:16...
6,318
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
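The idea in PR #6318 can be reproduced with stdlib hashing: hash each element, sort the digests, then hash the sorted sequence, so the set's iteration order cannot affect the result. Here `hashlib.sha256` is a stand-in for `datasets.fingerprint.Hasher.hash`.

```python
import hashlib

def hash_set(s):
    # Per-element digests are sorted before the final hash, making the
    # result independent of set iteration order.
    digests = sorted(hashlib.sha256(repr(x).encode()).hexdigest() for x in s)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

a = hash_set({"ing", "token", "##izer"})
b = hash_set({"token", "##izer", "ing"})
print(a == b)  # True: same elements give the same hash in any order
```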
1,951,965,668
sentiment140 dataset unavailable
closed
### Describe the bug loading the dataset using load_dataset("sentiment140") returns the following error ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403) ### Steps to reproduce the bug Run the following code (version should not matter). ``` from data...
2023-10-19T11:25:21
2023-10-19T13:04:56
2023-10-19T13:04:56
https://github.com/huggingface/datasets/issues/6317
null
6,317
false
[ "Thanks for reporting. We are investigating the issue.", "We have opened an issue in the corresponding Hub dataset: https://huggingface.co/datasets/sentiment140/discussions/3\r\n\r\nLet's continue the discussion there." ]
1,951,819,869
Fix loading Hub datasets with CSV metadata file
closed
Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have file extension. For example: - the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl` - correspon...
2023-10-19T10:21:34
2023-10-20T06:23:21
2023-10-20T06:14:09
https://github.com/huggingface/datasets/pull/6316
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6316", "html_url": "https://github.com/huggingface/datasets/pull/6316", "diff_url": "https://github.com/huggingface/datasets/pull/6316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6316.patch", "merged_at": "2023-10-20T06:14...
6,316
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,951,800,819
Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0
closed
When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error: ``` E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0 pyarrow/error.pxi:100: ArrowInvalid ``` See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1
2023-10-19T10:11:29
2023-10-20T06:14:10
2023-10-20T06:14:10
https://github.com/huggingface/datasets/issues/6315
null
6,315
false
[]
1,951,684,763
Support creating new branch in push_to_hub
closed
This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created.
2023-10-19T09:12:39
2023-10-19T09:20:06
2023-10-19T09:19:48
https://github.com/huggingface/datasets/pull/6314
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6314", "html_url": "https://github.com/huggingface/datasets/pull/6314", "diff_url": "https://github.com/huggingface/datasets/pull/6314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6314.patch", "merged_at": null }
6,314
true
[]
1,951,527,712
Fix commit message formatting in multi-commit uploads
closed
Currently, the commit message keeps on adding: - `Upload dataset (part 00000-of-00002)` - `Upload dataset (part 00000-of-00002) (part 00001-of-00002)` Introduced in https://github.com/huggingface/datasets/pull/6269 This PR fixes this issue to have - `Upload dataset (part 00000-of-00002)` - `Upload dataset...
2023-10-19T07:53:56
2023-10-20T14:06:13
2023-10-20T13:57:39
https://github.com/huggingface/datasets/pull/6313
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6313", "html_url": "https://github.com/huggingface/datasets/pull/6313", "diff_url": "https://github.com/huggingface/datasets/pull/6313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6313.patch", "merged_at": "2023-10-20T13:57...
6,313
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
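The fixed behavior from PR #6313 amounts to building each part's commit message from scratch rather than appending a new suffix to the previous message. A minimal sketch:

```python
def commit_messages(num_parts):
    # Each message is generated independently, so suffixes never accumulate.
    return [
        f"Upload dataset (part {i:05d}-of-{num_parts:05d})"
        for i in range(num_parts)
    ]

print(commit_messages(2))
# ['Upload dataset (part 00000-of-00002)', 'Upload dataset (part 00001-of-00002)']
```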
1,950,128,416
docs: resolving namespace conflict, refactored variable
closed
In docs of about_arrow.md, in the below example code ![image](https://github.com/huggingface/datasets/assets/74114936/fc70e152-e15f-422e-949a-1c4c4c9aa116) The variable name 'time' was being used in a way that could potentially lead to a namespace conflict with Python's built-in 'time' module. It is not a good conven...
2023-10-18T16:10:59
2023-10-19T16:31:59
2023-10-19T16:23:07
https://github.com/huggingface/datasets/pull/6312
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6312", "html_url": "https://github.com/huggingface/datasets/pull/6312", "diff_url": "https://github.com/huggingface/datasets/pull/6312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6312.patch", "merged_at": "2023-10-19T16:23...
6,312
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,949,304,993
cast_column to Sequence with length=4 raises an exception at datasets/table.py:2146
closed
### Describe the bug i load a dataset from local csv file which has 187383612 examples, then use `map` to generate new columns for test. here is my code : ``` import os from datasets import load_dataset from datasets.features import Sequence, Value def add_new_path(example): example["ais_bbox"] =...
2023-10-18T09:38:05
2024-02-06T19:24:20
2024-02-06T19:24:20
https://github.com/huggingface/datasets/issues/6311
null
6,311
false
[ "Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in https://github.com/huggingface/datasets/pull/6283 (should be part of the next release).", "> Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in #6283 (should be p...
1,947,457,988
Add return_file_name in load_dataset
closed
Proposition to fix #5806. Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output. There is a difference between arrow-based and folder-based datasets to return the file name: - fo...
2023-10-17T13:36:57
2024-08-09T11:51:55
2024-07-31T13:56:50
https://github.com/huggingface/datasets/pull/6310
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6310", "html_url": "https://github.com/huggingface/datasets/pull/6310", "diff_url": "https://github.com/huggingface/datasets/pull/6310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6310.patch", "merged_at": null }
6,310
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6310). All of your documentation changes will be reflected on that endpoint.", "> Thanks for the change !\r\n> \r\n> Since `return` in python often refers to what is actually returned by the function (here `load_dataset`), I th...
1,946,916,969
Fix get_data_patterns for directories with the word data twice
closed
Before the fix, `get_data_patterns` inferred wrongly the split name for paths with the word "data" twice: - For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`) - The in...
2023-10-17T09:00:39
2023-10-18T14:01:52
2023-10-18T13:50:35
https://github.com/huggingface/datasets/pull/6309
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6309", "html_url": "https://github.com/huggingface/datasets/pull/6309", "diff_url": "https://github.com/huggingface/datasets/pull/6309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6309.patch", "merged_at": "2023-10-18T13:50...
6,309
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
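The bug fixed in PR #6309 can be shown with a toy version of the inference: searching the whole URL for "data" matches inside the org name `piuba-bigdata`, while restricting the match to the path relative to the repo root finds the real `data/` directory. The regex below is illustrative, not the library's actual pattern.

```python
import re

url = ("hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac"
       "5d098c8408f437bffdd357/data/train-00001-of-00009.parquet")

# Strip everything up to and including the revision so only the in-repo
# path remains; "data" in the org name can no longer match.
relative_path = url.split("@", 1)[1].split("/", 1)[1]
match = re.match(r"data/(\w+)-\d{5}-of-\d{5}\.parquet$", relative_path)
print(match.group(1))  # split name inferred from the file, not the org name
```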
1,946,810,625
module 'resource' has no attribute 'error'
closed
### Describe the bug just run import: `from datasets import load_dataset` and then: ``` File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module> from .arrow_dataset import Dataset File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow...
2023-10-17T08:08:54
2023-10-25T17:09:22
2023-10-25T17:09:22
https://github.com/huggingface/datasets/issues/6308
null
6,308
false
[ "This (Windows) issue was fixed in `fsspec` in https://github.com/fsspec/filesystem_spec/pull/1275. So, to avoid the error, update the `fsspec` installation with `pip install -U fsspec`.", "> This (Windows) issue was fixed in `fsspec` in [fsspec/filesystem_spec#1275](https://github.com/fsspec/filesystem_spec/pul...
1,946,414,808
Fix typo in code example in docs
closed
null
2023-10-17T02:28:50
2023-10-17T12:59:26
2023-10-17T06:36:19
https://github.com/huggingface/datasets/pull/6307
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6307", "html_url": "https://github.com/huggingface/datasets/pull/6307", "diff_url": "https://github.com/huggingface/datasets/pull/6307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6307.patch", "merged_at": "2023-10-17T06:36...
6,307
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,946,363,452
pyinstaller : OSError: could not get source code
closed
### Describe the bug I ran a package with pyinstaller and got the following error: ### Steps to reproduce the bug ``` ... File "datasets\__init__.py", line 52, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_an...
2023-10-17T01:41:51
2023-11-02T07:24:51
2023-10-18T14:03:42
https://github.com/huggingface/datasets/issues/6306
null
6,306
false
[ "more information:\r\n``` \r\nFile \"text2vec\\__init__.py\", line 8, in <module>\r\nFile \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\nFile \"...
1,946,010,912
Cannot load dataset with `2.14.5`: `FileNotFound` error
closed
### Describe the bug I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`. ### Steps to reproduce the bug [Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg) ```python D...
2023-10-16T20:11:27
2023-10-18T13:50:36
2023-10-18T13:50:36
https://github.com/huggingface/datasets/issues/6305
null
6,305
false
[ "Thanks for reporting, @finiteautomata.\r\n\r\nWe are investigating it. ", "There is a bug in `datasets`. You can see our proposed fix:\r\n- #6309 " ]
1,945,913,521
Update README.md
closed
Fixed typos in ReadMe and added punctuation marks Tensorflow --> TensorFlow
2023-10-16T19:10:39
2023-10-17T15:13:37
2023-10-17T15:04:52
https://github.com/huggingface/datasets/pull/6304
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6304", "html_url": "https://github.com/huggingface/datasets/pull/6304", "diff_url": "https://github.com/huggingface/datasets/pull/6304.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6304.patch", "merged_at": "2023-10-17T15:04...
6,304
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,943,466,532
Parquet uploads off-by-one naming scheme
open
### Describe the bug I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored? <img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71...
2023-10-14T18:31:03
2023-10-16T16:33:21
null
https://github.com/huggingface/datasets/issues/6303
null
6,303
false
[ "You can find the reasoning behind this naming scheme [here](https://github.com/huggingface/transformers/pull/16343#discussion_r931182168).\r\n\r\nThis point has been raised several times, so I'd be okay with starting with `00001-` (also to be consistent with the `transformers` sharding), but I'm not sure @lhoestq ...
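The naming scheme discussed in this issue can be sketched as follows. The helper below is illustrative (the convention is assumed from the file names shown in the issue): with zero-based indices, N shards are numbered 00000 through N-1, so the count in the suffix is one greater than the last index.

```python
# Illustrative helper for the shard naming scheme discussed above
# (zero-based index, fixed-width zero padding).
def shard_name(split, index, num_shards):
    return f"{split}-{index:05d}-of-{num_shards:05d}.parquet"

names = [shard_name("train", i, 3) for i in range(3)]
print(names)
# The last shard is numbered num_shards - 1, which is the
# "off-by-one" appearance the issue asks about.
```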
1,942,096,078
ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size`
closed
### Describe the bug An example from [1], does not work when limiting shards with `max_shard_size`. Try the following example with low `max_shard_size`, such as: ```python builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB") ``` The reason f...
2023-10-13T14:43:36
2023-10-17T06:52:12
2023-10-17T06:52:11
https://github.com/huggingface/datasets/issues/6302
null
6,302
false
[ "`writer._num_bytes` is updated every `writer_batch_size`-th call to the `write` method (default `writer_batch_size` is 1000 (examples)). You should be able to see the update by passing a smaller `writer_batch_size` to the `load_dataset_builder`.\r\n\r\nWe could improve this by supporting the string `writer_batch_s...
1,940,183,999
Unpin `tensorflow` maximum version
closed
Removes the temporary pin introduced in #6264
2023-10-12T14:58:07
2023-10-12T15:58:20
2023-10-12T15:49:54
https://github.com/huggingface/datasets/pull/6301
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6301", "html_url": "https://github.com/huggingface/datasets/pull/6301", "diff_url": "https://github.com/huggingface/datasets/pull/6301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6301.patch", "merged_at": "2023-10-12T15:49...
6,301
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,940,153,432
Unpin `jax` maximum version
closed
fix #6299 fix #6202
2023-10-12T14:42:40
2023-10-12T16:37:55
2023-10-12T16:28:57
https://github.com/huggingface/datasets/pull/6300
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6300", "html_url": "https://github.com/huggingface/datasets/pull/6300", "diff_url": "https://github.com/huggingface/datasets/pull/6300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6300.patch", "merged_at": "2023-10-12T16:28...
6,300
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,939,649,238
Support for newer versions of JAX
closed
### Feature request Hi, I like your idea of adapting the datasets library to be usable with JAX. Thank you for that. However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome! What is the rationale for such a lim...
2023-10-12T10:03:46
2023-10-12T16:28:59
2023-10-12T16:28:59
https://github.com/huggingface/datasets/issues/6299
null
6,299
false
[]
1,938,797,389
Doc readme improvements
closed
Changes in the doc READMe: * adds two new sections (to be aligned with `transformers` and `hfh`): "Previewing the documentation" and "Writing documentation examples" * replaces the mentions of `transformers` with `datasets` * fixes some dead links
2023-10-11T21:51:12
2023-10-12T12:47:15
2023-10-12T12:38:19
https://github.com/huggingface/datasets/pull/6298
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6298", "html_url": "https://github.com/huggingface/datasets/pull/6298", "diff_url": "https://github.com/huggingface/datasets/pull/6298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6298.patch", "merged_at": "2023-10-12T12:38...
6,298
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,938,752,707
Fix ArrayXD cast
closed
Fix #6291
2023-10-11T21:14:59
2023-10-13T13:54:00
2023-10-13T13:45:30
https://github.com/huggingface/datasets/pull/6297
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6297", "html_url": "https://github.com/huggingface/datasets/pull/6297", "diff_url": "https://github.com/huggingface/datasets/pull/6297.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6297.patch", "merged_at": "2023-10-13T13:45...
6,297
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,938,453,845
Move `exceptions.py` to `utils/exceptions.py`
closed
I didn't notice the path while reviewing the PR yesterday :(
2023-10-11T18:28:00
2024-09-03T16:00:04
2024-09-03T16:00:03
https://github.com/huggingface/datasets/pull/6296
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6296", "html_url": "https://github.com/huggingface/datasets/pull/6296", "diff_url": "https://github.com/huggingface/datasets/pull/6296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6296.patch", "merged_at": null }
6,296
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,937,362,102
Fix parquet columns argument in streaming mode
closed
It was failing when there's a DatasetInfo with non-None info.features from the YAML (therefore containing columns that should be ignored) Fix https://github.com/huggingface/datasets/issues/6293
2023-10-11T10:01:01
2023-10-11T16:30:24
2023-10-11T16:21:36
https://github.com/huggingface/datasets/pull/6295
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6295", "html_url": "https://github.com/huggingface/datasets/pull/6295", "diff_url": "https://github.com/huggingface/datasets/pull/6295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6295.patch", "merged_at": "2023-10-11T16:21...
6,295
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,937,359,605
IndexError: Invalid key is out of bounds for size 0 despite having a populated dataset
closed
### Describe the bug I am encountering an `IndexError` when trying to access data from a DataLoader which wraps around a dataset I've loaded using the `datasets` library. The error suggests that the dataset size is `0`, but when I check the length and print the dataset, it's clear that it has `1166` entries. ### Step...
2023-10-11T09:59:38
2023-10-17T11:24:06
2023-10-17T11:24:06
https://github.com/huggingface/datasets/issues/6294
null
6,294
false
[ "It looks to be the same issue as the one reported in https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0.\r\n\r\nCan you check the length of `train_dataset` before the `train_sampler = self._get_train_sampler()` (and after `_remove_unused_columns`) line?" ]
1,937,238,047
Choose columns to stream parquet data in streaming mode
closed
Currently passing columns= to load_dataset in streaming mode fails ``` Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=...
2023-10-11T08:59:36
2023-10-11T16:21:38
2023-10-11T16:21:38
https://github.com/huggingface/datasets/issues/6293
null
6,293
false
[]
1,937,050,470
how to load the image of dtype float32 or float64
open
_FEATURES = datasets.Features( { "image": datasets.Image(), "text": datasets.Value("string"), }, ) The datasets builder seems to only support uint8 data. How can I load float dtype data?
2023-10-11T07:27:16
2023-10-11T13:19:11
null
https://github.com/huggingface/datasets/issues/6292
null
6,292
false
[ "Hi! Can you provide a code that reproduces the issue?\r\n\r\nAlso, which version of `datasets` are you using? You can check this by running `python -c \"import datasets; print(datasets.__version__)\"` inside the env. We added support for \"float images\" in `datasets 2.9`." ]
1,936,129,871
Casting type from Array2D int to Array2D float crashes
closed
### Describe the bug I am on a school project and the initial type for feature annotations are `Array2D(shape=(None, 4))`. I am trying to cast this type to a `float64` and pyarrow gives me this error : ``` Traceback (most recent call last): File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearnin...
2023-10-10T20:10:10
2023-10-13T13:45:31
2023-10-13T13:45:31
https://github.com/huggingface/datasets/issues/6291
null
6,291
false
[ "Thanks for reporting! I've opened a PR with a fix" ]
1,935,629,679
Incremental dataset (e.g. `.push_to_hub(..., append=True)`)
open
### Feature request Have the possibility to do `ds.push_to_hub(..., append=True)`. ### Motivation Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675...
2023-10-10T15:18:03
2025-03-12T13:41:26
null
https://github.com/huggingface/datasets/issues/6290
null
6,290
false
[ "Yea I think waiting for #6269 would be best, or branching from it. For reference, this [PR](https://github.com/LAION-AI/Discord-Scrapers/pull/2) is progressing pretty well which will do similar using the hf hub for our LAION dataset bot https://github.com/LAION-AI/Discord-Scrapers/pull/2. ", "Is there any update...
1,935,628,506
testing doc-builder
closed
testing https://github.com/huggingface/doc-builder/pull/426
2023-10-10T15:17:29
2023-10-13T08:57:14
2023-10-13T08:56:48
https://github.com/huggingface/datasets/pull/6289
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6289", "html_url": "https://github.com/huggingface/datasets/pull/6289", "diff_url": "https://github.com/huggingface/datasets/pull/6289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6289.patch", "merged_at": null }
6,289
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,935,005,457
Dataset.from_pandas with a DataFrame of PIL.Images
open
Currently type inference doesn't know what to do with a Pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way
2023-10-10T10:29:16
2024-11-29T16:35:30
null
https://github.com/huggingface/datasets/issues/6288
null
6,288
false
[ "A duplicate of https://github.com/huggingface/datasets/issues/4796.\r\n\r\nWe could get this for free by implementing the `Image` feature as an extension type, as shown in [this](https://colab.research.google.com/drive/1Uzm_tXVpGTwbzleDConWcNjacwO1yxE4?usp=sharing) Colab (example with UUIDs).\r\n", "+1 to this\r...
1,932,758,192
map() not recognizing "text"
closed
### Describe the bug The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads: ` ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)` I have been trying to reproduce it in my code as: `tokenizedData...
2023-10-09T10:27:30
2023-10-11T20:28:45
2023-10-11T20:28:45
https://github.com/huggingface/datasets/issues/6287
null
6,287
false
[ "There is no \"text\" column in the `amazon_reviews_multi`, hence the `KeyError`. You can get the column names by running `dataset.column_names`." ]
1,932,640,128
Create DefunctDatasetError
closed
Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible. See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b
2023-10-09T09:23:23
2023-10-10T07:13:22
2023-10-10T07:03:04
https://github.com/huggingface/datasets/pull/6286
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6286", "html_url": "https://github.com/huggingface/datasets/pull/6286", "diff_url": "https://github.com/huggingface/datasets/pull/6286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6286.patch", "merged_at": "2023-10-10T07:03...
6,286
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,932,306,325
TypeError: expected str, bytes or os.PathLike object, not dict
open
### Describe the bug my dataset is in form : train- image /n -labels and tried the code: ``` from datasets import load_dataset data_files = { "train": "/content/datasets/PotholeDetectionYOLOv8-1/train/", "validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/", "test": "/content/dat...
2023-10-09T04:56:26
2023-10-10T13:17:33
null
https://github.com/huggingface/datasets/issues/6285
null
6,285
false
[ "You should be able to load the images by modifying the `load_dataset` call like this:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_dir=\"/content/datasets/PotholeDetectionYOLOv8-1\")\r\n```\r\n\r\nThe `imagefolder` builder expects the image files to be in `path/label/image_file` (e.g. .`.../train/d...
1,929,551,712
Add Belebele multiple-choice machine reading comprehension (MRC) dataset
closed
### Feature request Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short pass...
2023-10-06T06:58:03
2023-10-06T13:26:51
2023-10-06T13:26:51
https://github.com/huggingface/datasets/issues/6284
null
6,284
false
[ "This dataset is already available on the Hub: https://huggingface.co/datasets/facebook/belebele.\r\n" ]
1,928,552,257
Fix array cast/embed with null values
closed
Fixes issues with casting/embedding PyArrow list arrays with null values. It also bumps the required PyArrow version to 12.0.0 (over 9 months old) to simplify the implementation. Fix #6280, fix #6311, fix #6360 (Also fixes https://github.com/huggingface/datasets/issues/5430 to make Beam compatible with PyArrow>=...
2023-10-05T15:24:05
2024-07-04T07:24:20
2024-02-06T19:24:19
https://github.com/huggingface/datasets/pull/6283
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6283", "html_url": "https://github.com/huggingface/datasets/pull/6283", "diff_url": "https://github.com/huggingface/datasets/pull/6283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6283.patch", "merged_at": "2024-02-06T19:24...
6,283
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,928,473,630
Drop data_files duplicates
closed
I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order close https://github.com/huggingface/datasets/issues/6259 close https://github.com/huggingface/datasets/issues/6272
2023-10-05T14:43:08
2024-09-02T14:08:35
2024-09-02T14:08:35
https://github.com/huggingface/datasets/pull/6282
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6282", "html_url": "https://github.com/huggingface/datasets/pull/6282", "diff_url": "https://github.com/huggingface/datasets/pull/6282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6282.patch", "merged_at": null }
6,282
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
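The deduplication approach described in the PR body ("I used a dict to deduplicate and preserve the order") can be sketched in a few lines. This is a minimal stand-in, not the PR's actual diff: `dict.fromkeys` drops duplicates while keeping insertion order (guaranteed since Python 3.7).

```python
# Minimal sketch of order-preserving deduplication via a dict.
def drop_duplicates(data_files):
    return list(dict.fromkeys(data_files))

files = [
    "train/train.parquet",
    "train/train.parquet",  # duplicate produced by overlapping glob patterns
    "test/test.parquet",
]
print(drop_duplicates(files))  # -> ['train/train.parquet', 'test/test.parquet']
```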
1,928,456,959
Improve documentation of dataset.from_generator
closed
Improve documentation to clarify sharding behavior (#6270)
2023-10-05T14:34:49
2023-10-05T19:09:07
2023-10-05T18:57:41
https://github.com/huggingface/datasets/pull/6281
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6281", "html_url": "https://github.com/huggingface/datasets/pull/6281", "diff_url": "https://github.com/huggingface/datasets/pull/6281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6281.patch", "merged_at": "2023-10-05T18:57...
6,281
true
[ "I have looked at the doc failures, and I do not think that my change caused the doc build failure, but I'm not 100% sure about that.\r\nI have high confidence that the integration test failures are not something I introduced:-)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<sum...
1,928,215,278
Couldn't cast array of type fixed_size_list to Sequence(Value(float64))
closed
### Describe the bug I have a dataset with an embedding column, when I try to map that dataset I get the following exception: ``` Traceback (most recent call last): File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map for rank, done, content...
2023-10-05T12:48:31
2024-02-06T19:24:20
2024-02-06T19:24:20
https://github.com/huggingface/datasets/issues/6280
null
6,280
false
[ "Thanks for reporting! I've opened a PR with a fix.", "Thanks for the quick response @mariosasko! I just installed your branch via `poetry add 'git+https://github.com/huggingface/datasets#fix-array_values'` and I can confirm it works on the example provided.\r\n\r\nFollow up question for you, should `None`s be s...
1,928,028,226
Batched IterableDataset
open
### Feature request Hi, could you add an implementation of a batched `IterableDataset`. It already support an option to do batch iteration via `.iter(batch_size=...)` but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator. ### Motivation The current implementation load...
2023-10-05T11:12:49
2024-11-07T10:01:22
null
https://github.com/huggingface/datasets/issues/6279
null
6,279
false
[ "This is exactly what I was looking for. It would also be very useful for me :-)", "This issue is really smashing the selling point of HF datasets... The only workaround I've found so far is to create a customized IterableDataloader which improves the loading speed to some extent.\r\n\r\nFor example I've a HF dat...
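The batched iteration this feature request asks for can be approximated with the standard library. The sketch below is illustrative only (the function name is made up, and it ignores shuffling/collation that a real `IterableDataset` batch mode would need): it groups any iterator into fixed-size batches, with a smaller final batch.

```python
from itertools import islice

# Stdlib sketch of batch iteration over a plain iterator.
def iter_batches(iterable, batch_size):
    iterator = iter(iterable)
    while batch := list(islice(iterator, batch_size)):
        yield batch

print(list(iter_batches(range(7), 3)))  # -> [[0, 1, 2], [3, 4, 5], [6]]
```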
1,927,957,877
No data files duplicates
closed
I added a new DataFilesSet class to disallow duplicate data files. I also deprecated DataFilesList. EDIT: actually I might just add drop_duplicates=True to `.from_patterns` close https://github.com/huggingface/datasets/issues/6259 close https://github.com/huggingface/datasets/issues/6272 TODO: - [ ] tests ...
2023-10-05T10:31:58
2024-01-11T06:32:49
2023-10-05T14:43:17
https://github.com/huggingface/datasets/pull/6278
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6278", "html_url": "https://github.com/huggingface/datasets/pull/6278", "diff_url": "https://github.com/huggingface/datasets/pull/6278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6278.patch", "merged_at": null }
6,278
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,927,044,546
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
closed
### Describe the bug I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows: FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub eit...
2023-10-04T22:01:25
2023-10-08T17:05:46
2023-10-08T17:05:46
https://github.com/huggingface/datasets/issues/6277
null
6,277
false
[ "`evaluate.load(\"paws-x\", \"es\")` throws the error because there is no such metric in the `evaluate` lib.\r\n\r\nSo, this is unrelated to our lib." ]
1,925,961,878
I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and I keep getting this error
open
### Describe the bug I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and I keep getting this error; I'm following the steps in this blog post https://huggingface.co/blog/fine-tune-whisper I tried Google Colab and it works, but because I'm on the free version the training ...
2023-10-04T11:03:41
2023-11-27T10:39:16
null
https://github.com/huggingface/datasets/issues/6276
null
6,276
false
[ "Since you are using Windows, maybe moving the `map` call inside `if __name__ == \"__main__\"` can fix the issue:\r\n```python\r\nif __name__ == \"__main__\":\r\n common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=4)\r\n```\r\n\r\nOtherwise, the only s...
1,921,354,680
Would like to Contribute a dataset
closed
I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no dataset available online, I made this dataset myself and would now like to contribute it to the community
2023-10-02T07:00:21
2023-10-10T16:27:54
2023-10-10T16:27:54
https://github.com/huggingface/datasets/issues/6275
null
6,275
false
[ "Hi! The process of contributing a dataset is explained here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingface.co/docs/datasets/image_dataset for a more detailed explanation of how to share an image dataset." ]
1,921,036,328
FileNotFoundError for dataset with multiple builder config
closed
### Describe the bug When there is only one config and only the dataset name is entered when using datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and enter the config name when using datasets.load_dataset(), the following error will happen. FileNotFoundError: [Errno 2...
2023-10-01T23:45:56
2024-08-14T04:42:02
2023-10-02T20:09:38
https://github.com/huggingface/datasets/issues/6274
null
6,274
false
[ "Please tell me if the above info is not enough for solving the problem. I will then make my dataset public temporarily so that you can really reproduce the bug. ", "Hi! \r\nCould you share how to solve this problem? \r\nI faced this same error. " ]
1,920,922,260
Broken Link to PubMed Abstracts dataset .
open
### Describe the bug The link provided for the dataset is broken, data_files = [https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url) The ### Steps to reproduce the bug Steps to reproduce: 1) Head over to [https://huggingface.co/learn/nlp-course/chapt...
2023-10-01T19:08:48
2024-04-28T02:30:42
null
https://github.com/huggingface/datasets/issues/6273
null
6,273
false
[ "This has already been reported in the HF Course repo (https://github.com/huggingface/course/issues/623).", "@lhoestq @albertvillanova @lewtun I don't think we are allowed to host these data files on the Hub (due to DMCA), which means the only option is to use a different dataset in the course (and to re-record t...
1,920,831,487
Duplicate `data_files` when named `<split>/<split>.parquet`
closed
e.g. with `u23429/stock_1_minute_ticker` ```ipython In [1]: from datasets import * In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker") Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s] In [3]: b.config.data_files Out[3]: {NamedSplit('train'): ['hf://datasets/...
2023-10-01T15:43:56
2024-03-15T15:22:05
2024-03-15T15:22:05
https://github.com/huggingface/datasets/issues/6272
null
6,272
false
[ "Also reported in https://github.com/huggingface/datasets/issues/6259", "I think it's best to drop duplicates with a `set` (as a temporary fix) and improve the patterns when/if https://github.com/fsspec/filesystem_spec/pull/1382 gets merged. @lhoestq Do you have some other ideas?", "Alternatively we could just...
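How a file named `<split>/<split>.parquet` ends up listed twice can be sketched with `fnmatch`. The two patterns below are illustrative, not the library's real split patterns; the point is that distinct glob patterns can both match the same path, so the resolved file list needs deduplication.

```python
import fnmatch

# Sketch of the duplicate: two overlapping patterns (assumed for illustration)
# both match a file named "<split>/<split>.parquet".
patterns = ["train/*", "*train*"]
path = "train/train.parquet"
matches = [p for p in patterns if fnmatch.fnmatch(path, p)]
print(matches)  # both patterns match, so the file would be listed twice
```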
1,920,420,295
Overwriting Split overwrites data but not metadata, corrupting dataset
closed
### Describe the bug I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do is to manually go into the dataset and delete the split. If I try to overwrite programmatically I end up in an error state and (somewhat) corrupting the dataset. Read below. **Current Behavior** Whe...
2023-09-30T22:37:31
2023-10-16T13:30:50
2023-10-16T13:30:50
https://github.com/huggingface/datasets/issues/6271
null
6,271
false
[]
1,920,329,373
Dataset.from_generator raises with sharded gen_args
closed
### Describe the bug According to the docs of Datasets.from_generator: ``` gen_kwargs(`dict`, *optional*): Keyword arguments to be passed to the `generator` callable. You can define a sharded dataset by passing the list of shards in `gen_kwargs`. ``` So I'd expect that if gen_kwar...
2023-09-30T16:50:06
2023-10-11T20:29:12
2023-10-11T20:29:11
https://github.com/huggingface/datasets/issues/6270
null
6,270
false
[ "`gen_kwargs` should be a `dict`, as stated in the docstring, but you are passing a `list`.\r\n\r\nSo, to fix the error, replace the list of dicts with a dict of lists (and slightly modify the generator function):\r\n```python\r\nfrom pathlib import Path\r\nimport datasets\r\n\r\ndef process_yaml(files):\r\n for...
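The fix explained in the comment above (a dict of lists rather than a list of dicts) can be sketched with a small helper. The round-robin partitioning below is an assumption for illustration, not the library's exact sharding code: the point is that list values inside `gen_kwargs` are what gets split across workers.

```python
# Illustrative sketch: partition the list values of gen_kwargs into one
# slice per worker, leaving non-list values untouched.
def split_gen_kwargs(gen_kwargs, num_workers):
    return [
        {k: v[i::num_workers] if isinstance(v, list) else v
         for k, v in gen_kwargs.items()}
        for i in range(num_workers)
    ]

gen_kwargs = {"files": ["a.yaml", "b.yaml", "c.yaml"]}
print(split_gen_kwargs(gen_kwargs, 2))
# -> [{'files': ['a.yaml', 'c.yaml']}, {'files': ['b.yaml']}]
```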
1,919,572,790
Reduce the number of commits in `push_to_hub`
closed
Reduces the number of commits in `push_to_hub` by using the `preupload` API from https://github.com/huggingface/huggingface_hub/pull/1699. Each commit contains a maximum of 50 uploaded files. A shard's fingerprint no longer needs to be added as a suffix to support resuming an upload, meaning the shards' naming schem...
2023-09-29T16:22:31
2023-10-16T16:03:18
2023-10-16T13:30:46
https://github.com/huggingface/datasets/pull/6269
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6269", "html_url": "https://github.com/huggingface/datasets/pull/6269", "diff_url": "https://github.com/huggingface/datasets/pull/6269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6269.patch", "merged_at": "2023-10-16T13:30...
6,269
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,919,010,645
Add repo_id to DatasetInfo
open
```python from datasets import load_dataset ds = load_dataset("lhoestq/demo1", split="train") ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"]) print(ds.repo_id) # lhoestq/demo1 ``` - repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict - ...
2023-09-29T10:24:55
2023-10-01T15:29:45
null
https://github.com/huggingface/datasets/pull/6268
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6268", "html_url": "https://github.com/huggingface/datasets/pull/6268", "diff_url": "https://github.com/huggingface/datasets/pull/6268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6268.patch", "merged_at": null }
6,268
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6268). All of your documentation changes will be reflected on that endpoint.", "In https://github.com/huggingface/datasets/issues/4129 we want to track the origin of a dataset, e.g. if it comes from multiple datasets.\r\n\r\nI ...
1,916,443,262
Multi label class encoding
open
### Feature request I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels. Here's an example of what I'd like to encode: ``` data = { ...
2023-09-27T22:48:08
2023-10-26T18:46:08
null
https://github.com/huggingface/datasets/issues/6267
null
6,267
false
[ "You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n ...
1,916,334,394
Use LibYAML with PyYAML if available
open
PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the methods `load` and `dump`. To use it, a user would need to first install a PyYAML version that uses LibYAML (not available in PyPI; needs to be manually installed). Then, to actually use them, PyYAML suggests importing the LibY...
2023-09-27T21:13:36
2023-09-28T14:29:24
null
https://github.com/huggingface/datasets/pull/6266
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6266", "html_url": "https://github.com/huggingface/datasets/pull/6266", "diff_url": "https://github.com/huggingface/datasets/pull/6266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6266.patch", "merged_at": null }
6,266
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6266). All of your documentation changes will be reflected on that endpoint.", "On Ubuntu, if `libyaml-dev` is installed, you can install PyYAML 6.0.1 with LibYAML with the following command (as it's automatically detected):\r\...
1,915,651,566
Remove `apache_beam` import in `BeamBasedBuilder._save_info`
closed
... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS) Fix https://github.com/huggingface/datasets/issues/6260
2023-09-27T13:56:34
2023-09-28T18:34:02
2023-09-28T18:23:35
https://github.com/huggingface/datasets/pull/6265
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6265", "html_url": "https://github.com/huggingface/datasets/pull/6265", "diff_url": "https://github.com/huggingface/datasets/pull/6265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6265.patch", "merged_at": "2023-09-28T18:23...
6,265
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,914,958,781
Temporarily pin tensorflow < 2.14.0
closed
Temporarily pin tensorflow < 2.14.0 until permanent solution is found. Hot fix #6263.
2023-09-27T08:16:06
2023-09-27T08:45:24
2023-09-27T08:36:39
https://github.com/huggingface/datasets/pull/6264
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6264", "html_url": "https://github.com/huggingface/datasets/pull/6264", "diff_url": "https://github.com/huggingface/datasets/pull/6264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6264.patch", "merged_at": "2023-09-27T08:36...
6,264
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,914,951,043
CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python'
closed
Python 3.10 CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262 ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/li...
2023-09-27T08:12:05
2023-09-27T08:36:40
2023-09-27T08:36:40
https://github.com/huggingface/datasets/issues/6263
null
6,263
false
[]
1,914,895,459
Fix CI 404 errors
closed
Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884 ``` FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.u...
2023-09-27T07:40:18
2023-09-28T15:39:16
2023-09-28T15:30:40
https://github.com/huggingface/datasets/pull/6262
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6262", "html_url": "https://github.com/huggingface/datasets/pull/6262", "diff_url": "https://github.com/huggingface/datasets/pull/6262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6262.patch", "merged_at": "2023-09-28T15:30...
6,262
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,913,813,178
Can't load a dataset
closed
### Describe the bug Can't seem to load the JourneyDB dataset. It throws the following error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[15], line 2 1 # If the dataset is gated/priv...
2023-09-26T15:46:25
2023-10-05T10:23:23
2023-10-05T10:23:22
https://github.com/huggingface/datasets/issues/6261
null
6,261
false
[ "I believe is due to the fact that doesn't work with .tgz files.", "`JourneyDB/JourneyDB` is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.\r\n\r\n> I believe is due to the fact that d...
1,912,593,466
REUSE_DATASET_IF_EXISTS don't work
closed
### Describe the bug I use the following code to download natural_question dataset. Even though I have completely download it, the next time I run this code, the new download procedure will start and cover the original /data/lxy/NQ config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/da...
2023-09-26T03:02:16
2023-09-28T18:23:36
2023-09-28T18:23:36
https://github.com/huggingface/datasets/issues/6260
null
6,260
false
[ "Hi! Unfortunately, the current behavior is to delete the downloaded data when this error happens. So, I've opened a PR that removes the problematic import to avoid losing data due to `apache_beam` not being installed (we host the preprocessed version of `natual_questions` on the HF GCS, so requiring `apache_beam` ...
1,911,965,758
Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories
closed
### Describe the bug When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets. ### Steps to reproduce the bug...
2023-09-25T17:20:54
2024-03-15T15:22:04
2024-03-15T15:22:04
https://github.com/huggingface/datasets/issues/6259
null
6,259
false
[ "Thanks for reporting this issue! We should be able to avoid this by making our `glob` patterns more precise. In the meantime, you can load the dataset by directly assigning splits to the data files: \r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files={\"train\": \"testin...
1,911,445,373
[DOCS] Fix typo: Elasticsearch
closed
Not ElasticSearch :)
2023-09-25T12:50:59
2023-09-26T14:55:35
2023-09-26T13:36:40
https://github.com/huggingface/datasets/pull/6258
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6258", "html_url": "https://github.com/huggingface/datasets/pull/6258", "diff_url": "https://github.com/huggingface/datasets/pull/6258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6258.patch", "merged_at": "2023-09-26T13:36...
6,258
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,910,741,044
HfHubHTTPError - exceeded our hourly quotas for action: commit
closed
### Describe the bug I try to upload a very large dataset of images, and get the following error: ``` File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo...
2023-09-25T06:11:43
2023-10-16T13:30:49
2023-10-16T13:30:48
https://github.com/huggingface/datasets/issues/6257
null
6,257
false
[ "how is your dataset structured? (file types, how many commits and files are you trying to push, etc)", "I succeeded in uploading it after several attempts with an hour gap between each attempt (inconvenient but worked). The final dataset is [here](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2), code ...
1,910,275,199
load_dataset() function's cache_dir does not seems to work
closed
### Describe the bug datasets version: 2.14.5 when trying to run the following command trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir') I keep getting error saying the command does not have permission to the default cache directory on my macbook pro machine. It seems the cache_...
2023-09-24T15:34:06
2025-05-14T10:08:53
2024-10-08T15:45:18
https://github.com/huggingface/datasets/issues/6256
null
6,256
false
[ "Can you share the error message?\r\n\r\nAlso, it would help if you could check whether `huggingface_hub`'s download behaves the same:\r\n```python\r\nfrom huggingface_hub import snapshot_download\r\nsnapshot_download(\"trec\", repo_type=\"dataset\", cache_dir='/path/to/my/dir)\r\n```\r\n\r\nIn the next major relea...
1,909,842,977
Parallelize builder configs creation
closed
For datasets with lots of configs defined in YAML E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` from >1min to 15sec
2023-09-23T11:56:20
2024-01-11T06:32:34
2023-09-26T15:44:19
https://github.com/huggingface/datasets/pull/6255
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6255", "html_url": "https://github.com/huggingface/datasets/pull/6255", "diff_url": "https://github.com/huggingface/datasets/pull/6255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6255.patch", "merged_at": null }
6,255
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,909,672,104
Dataset.from_generator() cost much more time in vscode debugging mode then running mode
closed
### Describe the bug Hey there, I’m using Dataset.from_generator() to convert a torch_dataset to the Huggingface Dataset. However, when I debug my code on vscode, I find that it runs really slow on Dataset.from_generator() which may even 20 times longer then run the script on terminal. ### Steps to reproduce the bu...
2023-09-23T02:07:26
2023-10-03T14:42:53
2023-10-03T14:42:53
https://github.com/huggingface/datasets/issues/6254
null
6,254
false
[ "Answered on the forum: https://discuss.huggingface.co/t/dataset-from-generator-cost-much-more-time-in-vscode-debugging-mode-then-running-mode/56005/2" ]
1,906,618,910
Check builder cls default config name in inspect
closed
Fix https://github.com/huggingface/datasets-server/issues/1812 this was causing this issue: ```ipython In [1]: from datasets import * In [2]: inspect.get_dataset_config_names("aakanksha/udpos") Out[2]: ['default'] In [3]: load_dataset_builder("aakanksha/udpos").config.name Out[3]: 'en' ```
2023-09-21T10:15:32
2023-09-21T14:16:44
2023-09-21T14:08:00
https://github.com/huggingface/datasets/pull/6253
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6253", "html_url": "https://github.com/huggingface/datasets/pull/6253", "diff_url": "https://github.com/huggingface/datasets/pull/6253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6253.patch", "merged_at": "2023-09-21T14:08...
6,253
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,906,375,378
exif_transpose not done to Image (PIL problem)
closed
### Feature request I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading. Since the dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted) thus for tasks as object detection and layoutLM this ca...
2023-09-21T08:11:46
2024-03-19T15:29:43
2024-03-19T15:29:43
https://github.com/huggingface/datasets/issues/6252
null
6,252
false
[ "Indeed, it makes sense to do this by default. \r\n\r\nIn the meantime, you can use `.with_transform` to transpose the images when accessing them:\r\n\r\n```python\r\nimport PIL.ImageOps\r\n\r\ndef exif_transpose_transform(batch):\r\n batch[\"image\"] = [PIL.ImageOps.exif_transpose(image) for image in batch[\"imag...
1,904,418,426
Support streaming datasets with pyarrow.parquet.read_table
closed
Support streaming datasets with `pyarrow.parquet.read_table`. See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2 CC: @AndreaFrancis
2023-09-20T08:07:02
2023-09-27T06:37:03
2023-09-27T06:26:24
https://github.com/huggingface/datasets/pull/6251
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6251", "html_url": "https://github.com/huggingface/datasets/pull/6251", "diff_url": "https://github.com/huggingface/datasets/pull/6251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6251.patch", "merged_at": "2023-09-27T06:26...
6,251
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "This function reads an entire Arrow table in one go, which is not ideal memory-wise, so I don't think we should encourage using this function, considering we want to keep RAM usage as low as possible in the streaming mode. \r\n\r\n(N...
1,901,390,945
Update create_dataset.mdx
closed
modified , as AudioFolder and ImageFolder not in Dataset Library. ``` from datasets import AudioFolder ``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset``` ``` cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site...
2023-09-18T17:06:29
2023-09-19T18:51:49
2023-09-19T18:40:10
https://github.com/huggingface/datasets/pull/6247
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6247", "html_url": "https://github.com/huggingface/datasets/pull/6247", "diff_url": "https://github.com/huggingface/datasets/pull/6247.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6247.patch", "merged_at": "2023-09-19T18:40...
6,247
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,899,848,414
Add new column to dataset
closed
### Describe the bug ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>() ----> 1 dataset['train']['/workspace/data'] 3 frames [/...
2023-09-17T16:59:48
2023-09-18T16:20:09
2023-09-18T16:20:09
https://github.com/huggingface/datasets/issues/6246
null
6,246
false
[ "I think it's an issue with the code.\r\n\r\nSpecifically:\r\n```python\r\ndataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nNow `dataset` is the train set with a new column. \r\nTo fix this, you can do:\r\n\r\n```python\r\ndataset['train'] = dataset['train'].add_column(\"/workspa...
1,898,861,422
Add support for `fsspec>=2023.9.0`
closed
Fix #6214
2023-09-15T17:58:25
2023-09-26T15:41:38
2023-09-26T15:32:51
https://github.com/huggingface/datasets/pull/6244
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6244", "html_url": "https://github.com/huggingface/datasets/pull/6244", "diff_url": "https://github.com/huggingface/datasets/pull/6244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6244.patch", "merged_at": "2023-09-26T15:32...
6,244
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,898,532,784
Fix cast from fixed size list to variable size list
closed
Fix #6242
2023-09-15T14:23:33
2023-09-19T18:02:21
2023-09-19T17:53:17
https://github.com/huggingface/datasets/pull/6243
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6243", "html_url": "https://github.com/huggingface/datasets/pull/6243", "diff_url": "https://github.com/huggingface/datasets/pull/6243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6243.patch", "merged_at": "2023-09-19T17:53...
6,243
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...