| id (int64, 953M–3.35B) | number (int64, 2.72k–7.75k) | title (string, length 1–290) | state (string, 2 classes) | created_at (timestamp[s], 2021-07-26 12:21:17 – 2025-08-23 00:18:43) | updated_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-23 12:34:39) | closed_at (timestamp[s], nullable, 2021-07-26 13:27:59 – 2025-08-20 16:35:55) | html_url (string, length 49–51) | pull_request (dict) | user_login (string, length 3–26) | is_pull_request (bool, 2 classes) | comments (list, length 0–30) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,557,510,618 | 5,465 | audiofolder creates empty dataset even though the dataset passed in follows the correct structure | closed | 2023-01-26T01:45:45 | 2023-01-26T08:48:45 | 2023-01-26T08:48:45 | https://github.com/huggingface/datasets/issues/5465 | null | jcho19 | false | [] |
1,557,462,104 | 5,464 | NonMatchingChecksumError for hendrycks_test | closed | 2023-01-26T00:43:23 | 2023-01-27T05:44:31 | 2023-01-26T07:41:58 | https://github.com/huggingface/datasets/issues/5464 | null | sarahwie | false | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```",
"Oops, missed that I needed to upgrade. Thanks!"
] |
1,557,021,041 | 5,463 | Imagefolder docs: mention support of CSV and ZIP | closed | 2023-01-25T17:24:01 | 2023-01-25T18:33:35 | 2023-01-25T18:26:15 | https://github.com/huggingface/datasets/pull/5463 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5463",
"html_url": "https://github.com/huggingface/datasets/pull/5463",
"diff_url": "https://github.com/huggingface/datasets/pull/5463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5463.patch",
"merged_at": "2023-01-25T18:26:15"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,556,572,144 | 5,462 | Concatenate on axis=1 with misaligned blocks | closed | 2023-01-25T12:33:22 | 2023-01-26T09:37:00 | 2023-01-26T09:27:19 | https://github.com/huggingface/datasets/pull/5462 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5462",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"merged_at": "2023-01-26T09:27:19"
} | lhoestq | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,555,532,719 | 5,461 | Discrepancy in `nyu_depth_v2` dataset | open | 2023-01-24T19:15:46 | 2023-02-06T20:52:00 | null | https://github.com/huggingface/datasets/issues/5461 | null | awsaf49 | false | [
"Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/ny... |
1,555,387,532 | 5,460 | Document that removing all the columns returns an empty document and the num_row is lost | closed | 2023-01-24T17:33:38 | 2023-01-25T16:11:10 | 2023-01-25T16:04:03 | https://github.com/huggingface/datasets/pull/5460 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5460",
"html_url": "https://github.com/huggingface/datasets/pull/5460",
"diff_url": "https://github.com/huggingface/datasets/pull/5460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5460.patch",
"merged_at": "2023-01-25T16:04:03"
} | thomasw21 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,555,367,504 | 5,459 | Disable aiohttp requoting of redirection URL | closed | 2023-01-24T17:18:59 | 2024-09-01T18:08:31 | 2023-01-31T08:37:54 | https://github.com/huggingface/datasets/pull/5459 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5459",
"html_url": "https://github.com/huggingface/datasets/pull/5459",
"diff_url": "https://github.com/huggingface/datasets/pull/5459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5459.patch",
"merged_at": "2023-01-31T08:37:54"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ",
"The lib `requests` does not perform that requote on redir... |
1,555,054,737 | 5,458 | slice split while streaming | closed | 2023-01-24T14:08:17 | 2023-01-24T15:11:47 | 2023-01-24T15:11:47 | https://github.com/huggingface/datasets/issues/5458 | null | SvenDS9 | false | [
"Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\... |
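The slicing-to-`.skip`/`.take` rewrite described in the comment above can be illustrated with plain iterators standing in for a streaming dataset. This is a semantics sketch only: the real calls are `IterableDataset.skip`/`IterableDataset.take` from `datasets`, and the generator below is a stand-in, not the library API.

```python
from itertools import islice

def take(stream, n):
    """Streaming analogue of split="train[:n]"."""
    return islice(stream, n)

def skip(stream, n):
    """Streaming analogue of split="train[n:]"."""
    return islice(stream, n, None)

def examples():
    # A plain iterator stands in for an IterableDataset here.
    return iter(range(10))

first_three = list(take(examples(), 3))  # like train[:3]
rest = list(skip(examples(), 3))         # like train[3:]
print(first_three)  # [0, 1, 2]
print(rest)         # [3, 4, 5, 6, 7, 8, 9]
```

The point of the rewrite is that a streaming dataset is consumed lazily, so slicing by index is replaced by operations that advance or bound the iterator.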
1,554,171,264 | 5,457 | prebuilt dataset relies on `downloads/extracted` | open | 2023-01-24T02:09:32 | 2024-11-18T07:43:51 | null | https://github.com/huggingface/datasets/issues/5457 | null | stas00 | false | [
"Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to e... |
1,553,905,148 | 5,456 | feat: tqdm for `to_parquet` | closed | 2023-01-23T22:05:38 | 2023-01-24T11:26:47 | 2023-01-24T11:17:12 | https://github.com/huggingface/datasets/pull/5456 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5456",
"html_url": "https://github.com/huggingface/datasets/pull/5456",
"diff_url": "https://github.com/huggingface/datasets/pull/5456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5456.patch",
"merged_at": "2023-01-24T11:17:12"
} | zanussbaum | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,553,040,080 | 5,455 | Single TQDM bar in multi-proc map | closed | 2023-01-23T12:49:40 | 2023-02-13T20:23:34 | 2023-02-13T20:16:38 | https://github.com/huggingface/datasets/pull/5455 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5455",
"html_url": "https://github.com/huggingface/datasets/pull/5455",
"diff_url": "https://github.com/huggingface/datasets/pull/5455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5455.patch",
"merged_at": "2023-02-13T20:16:38"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,552,890,419 | 5,454 | Save and resume the state of a DataLoader | open | 2023-01-23T10:58:54 | 2024-11-27T01:19:21 | null | https://github.com/huggingface/datasets/issues/5454 | null | lhoestq | false | [
"Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.",
"Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra fe... |
1,552,727,425 | 5,453 | Fix base directory while extracting insecure TAR files | closed | 2023-01-23T08:57:40 | 2023-01-24T01:34:20 | 2023-01-23T10:10:42 | https://github.com/huggingface/datasets/pull/5453 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5453",
"html_url": "https://github.com/huggingface/datasets/pull/5453",
"diff_url": "https://github.com/huggingface/datasets/pull/5453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5453.patch",
"merged_at": "2023-01-23T10:10:42"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,552,655,939 | 5,452 | Swap log messages for symbolic/hard links in tar extractor | closed | 2023-01-23T07:53:38 | 2023-01-23T09:40:55 | 2023-01-23T08:31:17 | https://github.com/huggingface/datasets/pull/5452 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5452",
"html_url": "https://github.com/huggingface/datasets/pull/5452",
"diff_url": "https://github.com/huggingface/datasets/pull/5452.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5452.patch",
"merged_at": "2023-01-23T08:31:17"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,552,336,300 | 5,451 | ImageFolder BadZipFile: Bad offset for central directory | closed | 2023-01-22T23:50:12 | 2023-05-23T10:35:48 | 2023-02-10T16:31:36 | https://github.com/huggingface/datasets/issues/5451 | null | hmartiro | false | [
"Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640",
"The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.",
"For others that find ... |
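Since the `BadZipFile` above points at archive corruption rather than at `datasets` itself, a quick stdlib integrity check can confirm whether a downloaded ZIP is readable before retrying ImageFolder. This is a generic sketch, not part of the `datasets` API:

```python
import io
import zipfile

def zip_problem(data: bytes):
    """Return None if the archive opens and all CRCs check out,
    otherwise a short description of what is wrong."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            bad = zf.testzip()  # first member with a CRC mismatch, or None
            return f"corrupt member: {bad}" if bad else None
    except zipfile.BadZipFile as err:  # e.g. "Bad offset for central directory"
        return str(err)

# A valid in-memory archive passes; random bytes do not.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("images/cat.png", b"fake png bytes")
print(zip_problem(buf.getvalue()))  # None
print(zip_problem(b"\x00" * 64))    # File is not a zip file
```

Running this on the failing file on disk (`zip_problem(open(path, "rb").read())`) distinguishes a truncated download from a loader bug.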
1,551,109,365 | 5,450 | to_tf_dataset with a TF collator causes bizarrely persistent slowdown | closed | 2023-01-20T16:08:37 | 2023-02-13T14:13:34 | 2023-02-13T14:13:34 | https://github.com/huggingface/datasets/issues/5450 | null | Rocketknight1 | false | [
"wtf",
"Couldn't find what's causing this, this will need more investigation",
"A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: ... |
1,548,417,594 | 5,441 | resolving a weird tar extract issue | open | 2023-01-19T02:17:21 | 2023-01-20T16:49:22 | null | https://github.com/huggingface/datasets/pull/5441 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5441",
"html_url": "https://github.com/huggingface/datasets/pull/5441",
"diff_url": "https://github.com/huggingface/datasets/pull/5441.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5441.patch",
"merged_at": null
} | stas00 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,538,361,143 | 5,440 | Fix documentation about batch samplers | closed | 2023-01-18T17:04:27 | 2023-01-18T17:57:29 | 2023-01-18T17:50:04 | https://github.com/huggingface/datasets/pull/5440 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5440",
"html_url": "https://github.com/huggingface/datasets/pull/5440",
"diff_url": "https://github.com/huggingface/datasets/pull/5440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5440.patch",
"merged_at": "2023-01-18T17:50:04"
} | thomasw21 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,537,973,564 | 5,439 | [dataset request] Add Common Voice 12.0 | closed | 2023-01-18T13:07:05 | 2023-07-21T14:26:10 | 2023-07-21T14:26:09 | https://github.com/huggingface/datasets/issues/5439 | null | MohammedRakib | false | [
"@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?",
"This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0"
] |
1,537,489,730 | 5,438 | Update actions/checkout in CD Conda release | closed | 2023-01-18T06:53:15 | 2023-01-18T13:49:51 | 2023-01-18T13:42:49 | https://github.com/huggingface/datasets/pull/5438 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5438",
"html_url": "https://github.com/huggingface/datasets/pull/5438",
"diff_url": "https://github.com/huggingface/datasets/pull/5438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5438.patch",
"merged_at": "2023-01-18T13:42:48"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,536,837,144 | 5,437 | Can't load png dataset with 4 channel (RGBA) | closed | 2023-01-17T18:22:27 | 2023-01-18T20:20:15 | 2023-01-18T20:20:15 | https://github.com/huggingface/datasets/issues/5437 | null | WiNE-iNEFF | false | [
"Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n",
"> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode... |
1,536,633,173 | 5,436 | Revert container image pin in CI benchmarks | closed | 2023-01-17T15:59:50 | 2023-01-18T09:05:49 | 2023-01-18T06:29:06 | https://github.com/huggingface/datasets/pull/5436 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5436",
"html_url": "https://github.com/huggingface/datasets/pull/5436",
"diff_url": "https://github.com/huggingface/datasets/pull/5436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5436.patch",
"merged_at": "2023-01-18T06:29:06"
} | 0x2b3bfa0 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,536,099,300 | 5,435 | Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage | closed | 2023-01-17T10:04:16 | 2023-01-19T09:56:03 | 2023-01-19T09:56:03 | https://github.com/huggingface/datasets/issues/5435 | null | DanielYang59 | false | [
"Just for your information, Tensorflow confirmed this issue [here.](https://github.com/tensorflow/tensorflow/issues/59279)",
"Thanks for reporting, @HaoyuYang59.\r\n\r\nPlease note that these are different \"dataset\" objects: our docs refer to Hugging Face `datasets.Dataset` and not to TensorFlow `tf.data.Datase... |
1,536,090,042 | 5,434 | sample_dataset module not found | closed | 2023-01-17T09:57:54 | 2023-01-19T13:52:12 | 2023-01-19T07:55:11 | https://github.com/huggingface/datasets/issues/5434 | null | nickums | false | [
"Hi! Can you describe what the actual error is?",
"working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from t... |
1,536,017,901 | 5,433 | Support latest Docker image in CI benchmarks | closed | 2023-01-17T09:06:08 | 2023-01-18T06:29:08 | 2023-01-18T06:29:08 | https://github.com/huggingface/datasets/issues/5433 | null | albertvillanova | false | [
"Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.",
"Opened htt... |
1,535,893,019 | 5,432 | Fix CI benchmarks by temporarily pinning Docker image version | closed | 2023-01-17T07:15:31 | 2023-01-17T08:58:22 | 2023-01-17T08:51:17 | https://github.com/huggingface/datasets/pull/5432 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5432",
"html_url": "https://github.com/huggingface/datasets/pull/5432",
"diff_url": "https://github.com/huggingface/datasets/pull/5432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5432.patch",
"merged_at": "2023-01-17T08:51:17"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,535,862,621 | 5,431 | CI benchmarks are broken: Unknown arguments: runnerPath, path | closed | 2023-01-17T06:49:57 | 2023-01-18T06:33:24 | 2023-01-17T08:51:18 | https://github.com/huggingface/datasets/issues/5431 | null | albertvillanova | false | [] |
1,535,856,503 | 5,430 | Support Apache Beam >= 2.44.0 | closed | 2023-01-17T06:42:12 | 2024-02-06T19:24:21 | 2024-02-06T19:24:21 | https://github.com/huggingface/datasets/issues/5430 | null | albertvillanova | false | [
"Some of the shard files now have 0 number of rows.\r\n\r\nWe have opened an issue in the Apache Beam repo:\r\n- https://github.com/apache/beam/issues/25041"
] |
1,535,192,687 | 5,429 | Fix CI by temporarily pinning apache-beam < 2.44.0 | closed | 2023-01-16T16:20:09 | 2023-01-16T16:51:42 | 2023-01-16T16:49:03 | https://github.com/huggingface/datasets/pull/5429 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5429",
"html_url": "https://github.com/huggingface/datasets/pull/5429",
"diff_url": "https://github.com/huggingface/datasets/pull/5429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5429.patch",
"merged_at": "2023-01-16T16:49:03"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,535,166,139 | 5,428 | Load/Save FAISS index using fsspec | closed | 2023-01-16T16:08:12 | 2023-03-27T15:18:22 | 2023-03-27T15:18:22 | https://github.com/huggingface/datasets/issues/5428 | null | Dref360 | false | [
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a gr... |
1,535,162,889 | 5,427 | Unable to download dataset id_clickbait | closed | 2023-01-16T16:05:36 | 2023-01-18T09:51:28 | 2023-01-18T09:25:19 | https://github.com/huggingface/datasets/issues/5427 | null | ilos-vigil | false | [
"Thanks for reporting, @ilos-vigil.\r\n\r\nWe have transferred this issue to the corresponding dataset on the Hugging Face Hub: https://huggingface.co/datasets/id_clickbait/discussions/1 "
] |
1,535,158,555 | 5,426 | CI tests are broken: SchemaInferenceError | closed | 2023-01-16T16:02:07 | 2023-06-02T06:40:32 | 2023-01-16T16:49:04 | https://github.com/huggingface/datasets/issues/5426 | null | albertvillanova | false | [] |
1,534,581,850 | 5,425 | Sort on multiple keys with datasets.Dataset.sort() | closed | 2023-01-16T09:22:26 | 2023-02-24T16:15:11 | 2023-02-24T16:15:11 | https://github.com/huggingface/datasets/issues/5425 | null | rocco-fortuna | false | [
"Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multipl... |
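Until a multi-key `Dataset.sort` lands, the ordering requested in that issue can be emulated on small data with a tuple sort key; the column names below are made up for illustration, and this stdlib sketch only mirrors the intended semantics (the proposed implementation would use `pyarrow.compute.sort_indices` to avoid materializing columns in memory):

```python
from operator import itemgetter

rows = [
    {"lang": "en", "score": 0.2},
    {"lang": "de", "score": 0.9},
    {"lang": "en", "score": 0.1},
]

# Sort by "lang" ascending, then by "score" ascending within each language,
# mirroring what a multi-key Dataset.sort(["lang", "score"]) would do.
ordered = sorted(rows, key=itemgetter("lang", "score"))
print([(r["lang"], r["score"]) for r in ordered])
# [('de', 0.9), ('en', 0.1), ('en', 0.2)]
```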
1,534,394,756 | 5,424 | When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset? | closed | 2023-01-16T06:54:28 | 2023-02-24T16:19:00 | 2023-02-24T16:19:00 | https://github.com/huggingface/datasets/issues/5424 | null | macabdul9 | false | [
"Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n R... |
1,533,385,239 | 5,422 | Datasets load error for saved github issues | open | 2023-01-14T17:29:38 | 2023-09-14T11:39:57 | null | https://github.com/huggingface/datasets/issues/5422 | null | folterj | false | [
"I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.p... |
1,532,278,307 | 5,421 | Support case-insensitive Hub dataset name in load_dataset | closed | 2023-01-13T13:07:07 | 2023-01-13T20:12:32 | 2023-01-13T20:12:32 | https://github.com/huggingface/datasets/issues/5421 | null | severo | false | [
"Closing as case-insensitivity should be only for URL redirection on the Hub. In the APIs, we will only support the canonical name (https://github.com/huggingface/moon-landing/pull/2399#issuecomment-1382085611)"
] |
1,532,265,742 | 5,420 | ci: 🎡 remove two obsolete issue templates | closed | 2023-01-13T12:58:43 | 2023-01-13T13:36:00 | 2023-01-13T13:29:01 | https://github.com/huggingface/datasets/pull/5420 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5420",
"html_url": "https://github.com/huggingface/datasets/pull/5420",
"diff_url": "https://github.com/huggingface/datasets/pull/5420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5420.patch",
"merged_at": "2023-01-13T13:29:01"
} | severo | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,531,999,850 | 5,419 | label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator | closed | 2023-01-13T09:40:07 | 2023-07-21T14:27:08 | 2023-07-21T14:27:08 | https://github.com/huggingface/datasets/issues/5419 | null | CreatixEA | false | [
"Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_inde... |
1,530,111,184 | 5,418 | Add ProgressBar for `to_parquet` | closed | 2023-01-12T05:06:20 | 2023-01-24T18:18:24 | 2023-01-24T18:18:24 | https://github.com/huggingface/datasets/issues/5418 | null | zanussbaum | false | [
"Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!",
"@albertvillanova I’m happy to make a quick PR for the feature! let me know ",
"That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review",
... |
1,526,988,113 | 5,416 | Fix RuntimeError: Sharding is ambiguous for this dataset | closed | 2023-01-10T08:43:19 | 2023-01-18T17:12:17 | 2023-01-18T14:09:02 | https://github.com/huggingface/datasets/pull/5416 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5416",
"html_url": "https://github.com/huggingface/datasets/pull/5416",
"diff_url": "https://github.com/huggingface/datasets/pull/5416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5416.patch",
"merged_at": "2023-01-18T14:09:02"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"By the way, do we know how many datasets are impacted by this issue?\r\n\r\nMaybe we should do a patch release with this fix.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated be... |
1,526,904,861 | 5,415 | RuntimeError: Sharding is ambiguous for this dataset | closed | 2023-01-10T07:36:11 | 2023-01-18T14:09:04 | 2023-01-18T14:09:03 | https://github.com/huggingface/datasets/issues/5415 | null | albertvillanova | false | [] |
1,525,733,818 | 5,414 | Sharding error with Multilingual LibriSpeech | closed | 2023-01-09T14:45:31 | 2023-01-18T14:09:04 | 2023-01-18T14:09:04 | https://github.com/huggingface/datasets/issues/5414 | null | Nithin-Holla | false | [
"Thanks for reporting, @Nithin-Holla.\r\n\r\nThis is a known issue for multiple datasets and we are investigating it:\r\n- See e.g.: https://huggingface.co/datasets/ami/discussions/3",
"Main issue:\r\n- #5415",
"@albertvillanova Thanks! As a workaround for now, can I use the dataset in streaming mode?",
"Yes,... |
1,524,591,837 | 5,413 | concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers | closed | 2023-01-08T17:01:52 | 2023-01-26T09:27:21 | 2023-01-26T09:27:21 | https://github.com/huggingface/datasets/issues/5413 | null | ZeguanXiao | false | [
"Hi ! Thanks for reporting :)\r\n\r\nI managed to reproduce the bug using\r\n```python\r\n\r\nfrom datasets import concatenate_datasets, Dataset, load_from_disk\r\n\r\nDataset.from_dict({\"a\": range(9)}).save_to_disk(\"tmp/ds1\")\r\nds1 = load_from_disk(\"tmp/ds1\")\r\nds1 = concatenate_datasets([ds1, ds1])\r\n\r\... |
1,524,250,269 | 5,412 | load_dataset() cannot find dataset_info.json with multiple training runs in parallel | closed | 2023-01-08T00:44:32 | 2023-01-19T20:28:43 | 2023-01-19T20:28:43 | https://github.com/huggingface/datasets/issues/5412 | null | mtoles | false | [
"Hi ! It fails because the dataset is already being prepared by your first run. I'd encourage you to prepare your dataset before using it for multiple trainings.\r\n\r\nYou can also specify another cache directory by passing `cache_dir=` to `load_dataset()`.",
"Thank you! What do you mean by prepare it beforehand... |
1,523,297,786 | 5,411 | Update docs of S3 filesystem with async aiobotocore | closed | 2023-01-06T23:19:17 | 2023-01-18T11:18:59 | 2023-01-18T11:12:04 | https://github.com/huggingface/datasets/pull/5411 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5411",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"merged_at": "2023-01-18T11:12:04"
} | maheshpec | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,521,168,032 | 5,410 | Map-style Dataset to IterableDataset | closed | 2023-01-05T18:12:17 | 2023-02-01T18:11:45 | 2023-02-01T16:36:01 | https://github.com/huggingface/datasets/pull/5410 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5410",
"html_url": "https://github.com/huggingface/datasets/pull/5410",
"diff_url": "https://github.com/huggingface/datasets/pull/5410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5410.patch",
"merged_at": "2023-02-01T16:36:01"
} | lhoestq | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,520,374,219 | 5,409 | Fix deprecation warning when use_auth_token passed to download_and_prepare | closed | 2023-01-05T09:10:58 | 2023-01-06T11:06:16 | 2023-01-06T10:59:13 | https://github.com/huggingface/datasets/pull/5409 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5409",
"html_url": "https://github.com/huggingface/datasets/pull/5409",
"diff_url": "https://github.com/huggingface/datasets/pull/5409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5409.patch",
"merged_at": "2023-01-06T10:59:13"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,519,890,752 | 5,408 | dataset map function could not be hash properly | closed | 2023-01-05T01:59:59 | 2023-01-06T13:22:19 | 2023-01-06T13:22:18 | https://github.com/huggingface/datasets/issues/5408 | null | Tungway1990 | false | [
"Hi ! On macos I tried with\r\n- py 3.9.11\r\n- datasets 2.8.0\r\n- transformers 4.25.1\r\n- dill 0.3.4\r\n\r\nand I was able to hash `prepare_dataset` correctly:\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nHasher.hash(prepare_dataset)\r\n```\r\n\r\nWhat version of transformers do you have ? Can you ... |
1,519,797,345 | 5,407 | Datasets.from_sql() generates deprecation warning | closed | 2023-01-05T00:43:17 | 2023-01-06T10:59:14 | 2023-01-06T10:59:14 | https://github.com/huggingface/datasets/issues/5407 | null | msummerfield | false | [
"Thanks for reporting @msummerfield. We are fixing it."
] |
1,519,140,544 | 5,406 | [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | open | 2023-01-04T15:10:04 | 2023-06-21T18:45:38 | null | https://github.com/huggingface/datasets/issues/5406 | null | lhoestq | false | [
"I still get this error on 2.9.0\r\n<img width=\"1925\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7208470/215597359-2f253c76-c472-4612-8099-d3a74d16eb29.png\">\r\n",
"Hi ! I just tested locally and or colab and it works fine for 2.9 on `sst2`.\r\n\r\nAlso the code that is shown in your stack t... |
1,517,879,386 | 5,405 | size_in_bytes the same for all splits | open | 2023-01-03T20:25:48 | 2023-01-04T09:22:59 | null | https://github.com/huggingface/datasets/issues/5405 | null | Breakend | false | [
"Hi @Breakend,\r\n\r\nIndeed, the attribute `size_in_bytes` refers to the size of the entire dataset configuration, for all splits (size of downloaded files + Arrow files), not the specific split.\r\nThis is also the case for `download_size` (downloaded files) and `dataset_size` (Arrow files).\r\n\r\nThe size of th... |
1,517,566,331 | 5,404 | Better integration of BIG-bench | open | 2023-01-03T15:37:57 | 2023-02-09T20:30:26 | null | https://github.com/huggingface/datasets/issues/5404 | null | albertvillanova | false | [
"Hi, I made my version : https://huggingface.co/datasets/tasksource/bigbench"
] |
1,517,466,492 | 5,403 | Replace one letter import in docs | closed | 2023-01-03T14:26:32 | 2023-01-03T15:06:18 | 2023-01-03T14:59:01 | https://github.com/huggingface/datasets/pull/5403 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5403",
"html_url": "https://github.com/huggingface/datasets/pull/5403",
"diff_url": "https://github.com/huggingface/datasets/pull/5403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5403.patch",
"merged_at": "2023-01-03T14:59:01"
} | MKhalusova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the docs fix for consistency.\r\n> \r\n> Again for consistency, it would be nice to make the same fix across all the docs, e.g.\r\n> \r\n> https://github.com/huggingface/datasets/blob/310cdddd1c43f9658de172b85b6509d07d5e... |
1,517,409,429 | 5,402 | Missing state.json when creating a cloud dataset using a dataset_builder | open | 2023-01-03T13:39:59 | 2023-01-04T17:23:57 | null | https://github.com/huggingface/datasets/issues/5402 | null | danielfleischer | false | [
"`load_from_disk` must be used on datasets saved using `save_to_disk`: they correspond to fully serialized datasets including their state.\r\n\r\nOn the other hand, `download_and_prepare` just downloads the raw data and convert them to arrow (or parquet if you want). We are working on allowing you to reload a datas... |
1,517,160,935 | 5,401 | Support Dataset conversion from/to Spark | open | 2023-01-03T09:57:40 | 2023-01-05T14:21:33 | null | https://github.com/huggingface/datasets/pull/5401 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5401",
"html_url": "https://github.com/huggingface/datasets/pull/5401",
"diff_url": "https://github.com/huggingface/datasets/pull/5401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5401.patch",
"merged_at": null
} | albertvillanova | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5401). All of your documentation changes will be reflected on that endpoint.",
"Cool thanks !\r\n\r\nSpark DataFrame are usually quite big, and I believe here `from_spark` would load everything in the driver node's RAM, which i... |
1,517,032,972 | 5,400 | Support streaming datasets with os.path.exists and Path.exists | closed | 2023-01-03T07:42:37 | 2023-01-06T10:42:44 | 2023-01-06T10:35:44 | https://github.com/huggingface/datasets/pull/5400 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5400",
"html_url": "https://github.com/huggingface/datasets/pull/5400",
"diff_url": "https://github.com/huggingface/datasets/pull/5400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5400.patch",
"merged_at": "2023-01-06T10:35:44"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,515,548,427 | 5,399 | Got disconnected from remote data host. Retrying in 5sec [2/20] | closed | 2023-01-01T13:00:11 | 2023-01-02T07:21:52 | 2023-01-02T07:21:52 | https://github.com/huggingface/datasets/issues/5399 | null | alhuri | false | [] |
1,514,425,231 | 5,398 | Unpin pydantic | closed | 2022-12-30T10:37:31 | 2022-12-30T10:43:41 | 2022-12-30T10:43:41 | https://github.com/huggingface/datasets/issues/5398 | null | albertvillanova | false | [] |
1,514,412,246 | 5,397 | Unpin pydantic test dependency | closed | 2022-12-30T10:22:09 | 2022-12-30T10:53:11 | 2022-12-30T10:43:40 | https://github.com/huggingface/datasets/pull/5397 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5397",
"html_url": "https://github.com/huggingface/datasets/pull/5397",
"diff_url": "https://github.com/huggingface/datasets/pull/5397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5397.patch",
"merged_at": "2022-12-30T10:43:40"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,514,002,934 | 5,396 | Fix checksum verification | closed | 2022-12-29T19:45:17 | 2023-02-13T11:11:22 | 2023-02-13T11:11:22 | https://github.com/huggingface/datasets/pull/5396 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5396",
"html_url": "https://github.com/huggingface/datasets/pull/5396",
"diff_url": "https://github.com/huggingface/datasets/pull/5396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5396.patch",
"merged_at": null
} | daskol | true | [
"Hi ! If I'm not mistaken both `expected_checksums[url]` and `recorded_checksums[url]` are dictionaries with keys \"checksum\" and \"num_bytes\". So we need to check whether `expected_checksums[url] != recorded_checksums[url]` (or simply `expected_checksums[url][\"checksum\"] != recorded_checksums[url][\"checksum\"... |
1,513,997,335 | 5,395 | Temporarily pin pydantic test dependency | closed | 2022-12-29T19:34:19 | 2022-12-30T06:36:57 | 2022-12-29T21:00:26 | https://github.com/huggingface/datasets/pull/5395 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5395",
"html_url": "https://github.com/huggingface/datasets/pull/5395",
"diff_url": "https://github.com/huggingface/datasets/pull/5395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5395.patch",
"merged_at": "2022-12-29T21:00:26"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,513,976,229 | 5,394 | CI error: TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers' | closed | 2022-12-29T18:58:44 | 2022-12-30T10:40:51 | 2022-12-29T21:00:27 | https://github.com/huggingface/datasets/issues/5394 | null | albertvillanova | false | [
"I still getting the same error :\r\n\r\n`python -m spacy download fr_core_news_lg\r\n`.\r\n`import spacy`",
"@MFatnassi, this issue and the corresponding fix only affect our Continuous Integration testing environment.\r\n\r\nNote that `datasets` does not depend on `spacy`."
] |
1,512,908,613 | 5,393 | Finish deprecating the fs argument | closed | 2022-12-28T15:33:17 | 2023-01-18T12:42:33 | 2023-01-18T12:35:32 | https://github.com/huggingface/datasets/pull/5393 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5393",
"html_url": "https://github.com/huggingface/datasets/pull/5393",
"diff_url": "https://github.com/huggingface/datasets/pull/5393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5393.patch",
"merged_at": "2023-01-18T12:35:32"
} | dconathan | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the deprecation. Some minor suggested fixes below...\r\n> \r\n> Also note that the corresponding tests should be updated as well.\r\n\r\nThanks for the suggestions/typo fixes. I updated the failing test - passing locall... |
1,512,712,529 | 5,392 | Fix Colab notebook link | closed | 2022-12-28T11:44:53 | 2023-01-03T15:36:14 | 2023-01-03T15:27:31 | https://github.com/huggingface/datasets/pull/5392 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5392",
"html_url": "https://github.com/huggingface/datasets/pull/5392",
"diff_url": "https://github.com/huggingface/datasets/pull/5392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5392.patch",
"merged_at": "2023-01-03T15:27:31"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,510,350,400 | 5,391 | Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it] | closed | 2022-12-25T15:17:14 | 2023-07-21T14:29:47 | 2023-07-21T14:29:47 | https://github.com/huggingface/datasets/issues/5391 | null | catswithbats | false | [
"Hey @catswithbats! Super sorry for the late reply! This is happening because there is data with label length (504) that exceeds the model's max length (448). \r\n\r\nThere are two options here:\r\n1. Increase the model's `max_length` parameter: \r\n```python\r\nmodel.config.max_length = 512\r\n```\r\n2. Filter dat... |
1,509,357,553 | 5,390 | Error when pushing to the CI hub | closed | 2022-12-23T13:36:37 | 2022-12-23T20:29:02 | 2022-12-23T20:29:02 | https://github.com/huggingface/datasets/issues/5390 | null | severo | false | [
"Hmmm, git bisect tells me that the behavior is the same since https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c (3 Oct), i.e. https://github.com/huggingface/datasets/pull/4926",
"Maybe related to the discussions in https://github.com/huggingface/datasets/pull/5196",
"Maybe... |
1,509,348,626 | 5,389 | Fix link in `load_dataset` docstring | closed | 2022-12-23T13:26:31 | 2023-01-25T19:00:43 | 2023-01-24T16:33:38 | https://github.com/huggingface/datasets/pull/5389 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5389",
"html_url": "https://github.com/huggingface/datasets/pull/5389",
"diff_url": "https://github.com/huggingface/datasets/pull/5389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5389.patch",
"merged_at": "2023-01-24T16:33:38"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,509,042,348 | 5,388 | Getting Value Error while loading a dataset.. | closed | 2022-12-23T08:16:43 | 2022-12-29T08:36:33 | 2022-12-27T17:59:09 | https://github.com/huggingface/datasets/issues/5388 | null | valmetisrinivas | false | [
"Hi! I can't reproduce this error locally (Mac) or in Colab. What version of `datasets` are you using?",
"Hi [mariosasko](https://github.com/mariosasko), the datasets version is '2.8.0'.",
"@valmetisrinivas you get that error because you imported `datasets` (and thus `fsspec`) before installing `zstandard`.\r\n... |
1,508,740,177 | 5,387 | Missing documentation page : improve-performance | closed | 2022-12-23T01:12:57 | 2023-01-24T16:33:40 | 2023-01-24T16:33:40 | https://github.com/huggingface/datasets/issues/5387 | null | astariul | false | [
"Hi! Our documentation builder does not support links to sections, hence the bug. This is the link it should point to https://huggingface.co/docs/datasets/v2.8.0/en/cache#improve-performance."
] |
1,508,592,918 | 5,386 | `max_shard_size` in `datasets.push_to_hub()` breaks with large files | closed | 2022-12-22T21:50:58 | 2022-12-26T23:45:51 | 2022-12-26T23:45:51 | https://github.com/huggingface/datasets/issues/5386 | null | salieri | false | [
"Hi! \r\n\r\nThis behavior stems from the fact that we don't always embed image bytes in the underlying arrow table, which can lead to bad size estimation (we use the first 1000 table rows to [estimate](https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.... |
1,508,535,532 | 5,385 | Is `fs=` deprecated in `load_from_disk()` as well? | closed | 2022-12-22T21:00:45 | 2023-01-23T10:50:05 | 2023-01-23T10:50:04 | https://github.com/huggingface/datasets/issues/5385 | null | dconathan | false | [
"Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR? ",
"> Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR?\r\n\r\nYeah I can do that sometime next week. Should the storage_options be a new arg here? I’ll look around for anywh... |
1,508,152,598 | 5,384 | Handle 0-dim tensors in `cast_to_python_objects` | closed | 2022-12-22T16:15:30 | 2023-01-13T16:10:15 | 2023-01-13T16:00:52 | https://github.com/huggingface/datasets/pull/5384 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5384",
"html_url": "https://github.com/huggingface/datasets/pull/5384",
"diff_url": "https://github.com/huggingface/datasets/pull/5384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5384.patch",
"merged_at": "2023-01-13T16:00:52"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,507,293,968 | 5,383 | IterableDataset missing column_names, differs from Dataset interface | closed | 2022-12-22T05:27:02 | 2023-03-13T19:03:33 | 2023-03-13T19:03:33 | https://github.com/huggingface/datasets/issues/5383 | null | iceboundflame | false | [
"Another example is that `IterableDataset.map` does not have `fn_kwargs`, among other arguments. It makes it harder to convert code from Dataset to IterableDataset.",
"Hi! `fn_kwargs` was added to `IterableDataset.map` in `datasets 2.5.0`, so please update your installation (`pip install -U datasets`) to use it.\... |
1,504,788,691 | 5,382 | Raise from disconnect error in xopen | closed | 2022-12-20T15:52:44 | 2023-01-26T09:51:13 | 2023-01-26T09:42:45 | https://github.com/huggingface/datasets/pull/5382 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5382",
"html_url": "https://github.com/huggingface/datasets/pull/5382",
"diff_url": "https://github.com/huggingface/datasets/pull/5382.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5382.patch",
"merged_at": "2023-01-26T09:42:45"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Could you review this small PR @albertvillanova ? :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric... |
1,504,498,387 | 5,381 | Wrong URL for the_pile dataset | closed | 2022-12-20T12:40:14 | 2023-02-15T16:24:57 | 2023-02-15T16:24:57 | https://github.com/huggingface/datasets/issues/5381 | null | LeoGrin | false | [
"Hi! This error can happen if there is a local file/folder with the same name as the requested dataset. And to avoid it, rename the local file/folder.\r\n\r\nSoon, it will be possible to explicitly request a Hub dataset as follows:https://github.com/huggingface/datasets/issues/5228#issuecomment-1313494020"
] |
1,504,404,043 | 5,380 | Improve dataset `.skip()` speed in streaming mode | open | 2022-12-20T11:25:23 | 2023-03-08T10:47:12 | null | https://github.com/huggingface/datasets/issues/5380 | null | versae | false | [
"Hi! I agree `skip` can be inefficient to use in the current state.\r\n\r\nTo make it fast, we could use \"statistics\" stored in Parquet metadata and read only the chunks needed to form a dataset. \r\n\r\nAnd thanks to the \"datasets-server\" project, which aims to store the Parquet versions of the Hub datasets (o... |
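The comment above explains why `.skip()` is slow in streaming mode: without Parquet metadata to jump ahead, the stream has to be consumed and discarded example by example. A minimal, stdlib-only model of that behavior (hypothetical `stream_examples` generator standing in for a streamed dataset):

```python
from itertools import islice

def stream_examples():
    # Stand-in for a streamed dataset: yields examples one by one.
    for i in range(10_000):
        yield {"id": i}

def skip(iterable, n):
    """Naive skip, as in streaming mode today: the first n examples are
    still produced and thrown away, so cost grows linearly with n."""
    return islice(iterable, n, None)

first = next(iter(skip(stream_examples(), 5_000)))
print(first)  # {'id': 5000}
```

The proposed optimization in the issue would instead use Parquet row-group statistics to seek directly to the needed chunk, avoiding this linear scan.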
1,504,010,639 | 5,379 | feat: depth estimation dataset guide. | closed | 2022-12-20T05:32:11 | 2023-01-13T12:30:31 | 2023-01-13T12:23:34 | https://github.com/huggingface/datasets/pull/5379 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5379",
"html_url": "https://github.com/huggingface/datasets/pull/5379",
"diff_url": "https://github.com/huggingface/datasets/pull/5379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5379.patch",
"merged_at": "2023-01-13T12:23:34"
} | sayakpaul | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the changes, looks good to me!",
"@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review? ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0... |
1,503,887,508 | 5,378 | The dataset "the_pile", subset "enron_emails" , load_dataset() failure | closed | 2022-12-20T02:19:13 | 2022-12-20T07:52:54 | 2022-12-20T07:52:54 | https://github.com/huggingface/datasets/issues/5378 | null | shaoyuta | false | [
"Thanks for reporting @shaoyuta. We are investigating it.\r\n\r\nWe are transferring the issue to \"the_pile\" Community tab on the Hub: https://huggingface.co/datasets/the_pile/discussions/4"
] |
1,503,477,833 | 5,377 | Add a parallel implementation of to_tf_dataset() | closed | 2022-12-19T19:40:27 | 2023-01-25T16:28:44 | 2023-01-25T16:21:40 | https://github.com/huggingface/datasets/pull/5377 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5377",
"html_url": "https://github.com/huggingface/datasets/pull/5377",
"diff_url": "https://github.com/huggingface/datasets/pull/5377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5377.patch",
"merged_at": "2023-01-25T16:21:40"
} | Rocketknight1 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing because the test server uses Py3.7 but the `SharedMemory` features require Py3.8! I forgot we still support 3.7 for another couple of months. I'm not sure exactly how to proceed, whether I should leave this PR until then, or ... |
1,502,730,559 | 5,376 | set dev version | closed | 2022-12-19T10:56:56 | 2022-12-19T11:01:55 | 2022-12-19T10:57:16 | https://github.com/huggingface/datasets/pull/5376 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5376",
"html_url": "https://github.com/huggingface/datasets/pull/5376",
"diff_url": "https://github.com/huggingface/datasets/pull/5376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5376.patch",
"merged_at": "2022-12-19T10:57:16"
} | lhoestq | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5376). All of your documentation changes will be reflected on that endpoint."
] |
1,502,720,404 | 5,375 | Release: 2.8.0 | closed | 2022-12-19T10:48:26 | 2022-12-19T10:55:43 | 2022-12-19T10:53:15 | https://github.com/huggingface/datasets/pull/5375 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5375",
"html_url": "https://github.com/huggingface/datasets/pull/5375",
"diff_url": "https://github.com/huggingface/datasets/pull/5375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5375.patch",
"merged_at": "2022-12-19T10:53:15"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,501,872,945 | 5,374 | Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec | closed | 2022-12-18T11:38:58 | 2023-07-24T15:23:07 | 2023-07-24T15:23:07 | https://github.com/huggingface/datasets/issues/5374 | null | Muennighoff | false | [
"The data files are hosted on HF at https://huggingface.co/datasets/allenai/c4/tree/main\r\n\r\nYou have 200 runs streaming the same files in parallel. So this is probably a Hub limitation. Maybe rate limiting ? cc @julien-c \r\n\r\nMaybe you can also try to reduce the number of HTTP requests by increasing the bloc... |
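The "Retrying in 5sec [2/20]" messages in the issue above come from a retry loop around the HTTP requests. A hedged, stdlib-only sketch of that pattern (function names here are illustrative, not the library's actual internals):

```python
import time

def fetch_with_retries(fetch, max_retries=20, base_delay=5.0, sleep=time.sleep):
    """Call fetch() until it succeeds, waiting base_delay seconds between
    attempts, mirroring the 'Retrying in 5sec [2/20]' pattern in the logs."""
    for attempt in range(1, max_retries + 1):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_retries:
                raise
            sleep(base_delay)

# Simulate a host that drops the first two requests.
attempts = {"n": 0}
def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("got disconnected from remote data host")
    return b"payload"

result = fetch_with_retries(flaky_fetch, sleep=lambda s: None)
print(result)  # b'payload' after two simulated disconnects
```

With 200 runs streaming the same files in parallel, even such retries can be exhausted by server-side rate limiting, which is the failure mode reported here.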
1,501,484,197 | 5,373 | Simplify skipping | closed | 2022-12-17T17:23:52 | 2022-12-18T21:43:31 | 2022-12-18T21:40:21 | https://github.com/huggingface/datasets/pull/5373 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5373",
"html_url": "https://github.com/huggingface/datasets/pull/5373",
"diff_url": "https://github.com/huggingface/datasets/pull/5373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5373.patch",
"merged_at": "2022-12-18T21:40:21"
} | Muennighoff | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,501,377,802 | 5,372 | Fix streaming pandas.read_excel | closed | 2022-12-17T12:58:52 | 2023-01-06T11:50:58 | 2023-01-06T11:43:37 | https://github.com/huggingface/datasets/pull/5372 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5372",
"html_url": "https://github.com/huggingface/datasets/pull/5372",
"diff_url": "https://github.com/huggingface/datasets/pull/5372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5372.patch",
"merged_at": "2023-01-06T11:43:37"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,501,369,036 | 5,371 | Add a robustness benchmark dataset for vision | open | 2022-12-17T12:35:13 | 2022-12-20T06:21:41 | null | https://github.com/huggingface/datasets/issues/5371 | null | sayakpaul | false | [
"Ccing @nazneenrajani @lvwerra @osanseviero "
] |
1,500,622,276 | 5,369 | Distributed support | closed | 2022-12-16T17:43:47 | 2023-07-25T12:00:31 | 2023-01-16T13:33:32 | https://github.com/huggingface/datasets/pull/5369 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5369",
"html_url": "https://github.com/huggingface/datasets/pull/5369",
"diff_url": "https://github.com/huggingface/datasets/pull/5369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5369.patch",
"merged_at": "2023-01-16T13:33:32"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright all the tests are passing - this is ready for review",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n... |
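The PR above added distributed support (the real API is `datasets.distributed.split_dataset_by_node`). As an assumption-labeled, stdlib-only sketch of one way nodes can partition a stream without overlap — round-robin assignment by rank; the actual implementation may instead shard by file:

```python
def split_by_node(examples, rank, world_size):
    """Toy round-robin splitting: node `rank` keeps every world_size-th
    example, so the nodes together cover the data exactly once."""
    return [ex for i, ex in enumerate(examples) if i % world_size == rank]

data = list(range(10))
shards = [split_by_node(data, r, 4) for r in range(4)]
print(shards)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

The key property is that the shards are pairwise disjoint and their union is the full dataset, so each training node sees a distinct slice.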
1,500,322,973 | 5,368 | Align remove columns behavior and input dict mutation in `map` with previous behavior | closed | 2022-12-16T14:28:47 | 2022-12-16T16:28:08 | 2022-12-16T16:25:12 | https://github.com/huggingface/datasets/pull/5368 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5368",
"html_url": "https://github.com/huggingface/datasets/pull/5368",
"diff_url": "https://github.com/huggingface/datasets/pull/5368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5368.patch",
"merged_at": "2022-12-16T16:25:12"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,499,174,749 | 5,367 | Fix remove columns from lazy dict | closed | 2022-12-15T22:04:12 | 2022-12-15T22:27:53 | 2022-12-15T22:24:50 | https://github.com/huggingface/datasets/pull/5367 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5367",
"html_url": "https://github.com/huggingface/datasets/pull/5367",
"diff_url": "https://github.com/huggingface/datasets/pull/5367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5367.patch",
"merged_at": "2022-12-15T22:24:50"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,498,530,851 | 5,366 | ExamplesIterable fixes | closed | 2022-12-15T14:23:05 | 2022-12-15T14:44:47 | 2022-12-15T14:41:45 | https://github.com/huggingface/datasets/pull/5366 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5366",
"html_url": "https://github.com/huggingface/datasets/pull/5366",
"diff_url": "https://github.com/huggingface/datasets/pull/5366.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5366.patch",
"merged_at": "2022-12-15T14:41:45"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,498,422,466 | 5,365 | fix: image array should support other formats than uint8 | closed | 2022-12-15T13:17:50 | 2023-01-26T18:46:45 | 2023-01-26T18:39:36 | https://github.com/huggingface/datasets/pull/5365 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5365",
"html_url": "https://github.com/huggingface/datasets/pull/5365",
"diff_url": "https://github.com/huggingface/datasets/pull/5365.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5365.patch",
"merged_at": "2023-01-26T18:39:36"
} | vigsterkr | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! \r\n\r\nI agree that the current type-casting (always cast to `np.uint8` as Tensorflow Datasets does) is a bit too harsh. However, not all dtypes are supported in `Image.fromarray` (e.g. np.int64), so ... |
1,498,360,628 | 5,364 | Support for writing arrow files directly with BeamWriter | closed | 2022-12-15T12:38:05 | 2024-01-11T14:52:33 | 2024-01-11T14:45:15 | https://github.com/huggingface/datasets/pull/5364 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5364",
"html_url": "https://github.com/huggingface/datasets/pull/5364",
"diff_url": "https://github.com/huggingface/datasets/pull/5364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5364.patch",
"merged_at": null
} | mariosasko | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5364). All of your documentation changes will be reflected on that endpoint.",
"Deleting `BeamPipeline` and `upload_local_to_remote` would break the existing Beam scripts, so I reverted this change.\r\n\r\nFrom what I understan... |
1,498,171,317 | 5,363 | Dataset.from_generator() crashes on simple example | closed | 2022-12-15T10:21:28 | 2022-12-15T11:51:33 | 2022-12-15T11:51:33 | https://github.com/huggingface/datasets/issues/5363 | null | villmow | false | [] |