| id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,771,588,158 | 5,985 | Cannot reuse tokenizer object for dataset map | closed | 2023-06-23T14:45:31 | 2023-07-21T14:09:14 | 2023-07-21T14:09:14 | https://github.com/huggingface/datasets/issues/5985 | null | vikigenius | false | [
"This is a known issue: https://github.com/huggingface/datasets/issues/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with ... |
1,771,571,458 | 5,984 | AutoSharding IterableDataset's when num_workers > 1 | open | 2023-06-23T14:34:20 | 2024-03-22T15:01:14 | null | https://github.com/huggingface/datasets/issues/5984 | null | mathephysicist | false | [
"For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @... |
1,770,578,804 | 5,983 | replaced PathLike as a variable for save_to_disk for dataset_path wit… | closed | 2023-06-23T00:57:05 | 2023-09-11T04:17:17 | 2023-09-11T04:17:17 | https://github.com/huggingface/datasets/pull/5983 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5983",
"html_url": "https://github.com/huggingface/datasets/pull/5983",
"diff_url": "https://github.com/huggingface/datasets/pull/5983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5983.patch",
"merged_at": null
} | benjaminbrown038 | true | [] |
1,770,333,296 | 5,982 | 404 on Datasets Documentation Page | closed | 2023-06-22T20:14:57 | 2023-06-26T15:45:03 | 2023-06-26T15:45:03 | https://github.com/huggingface/datasets/issues/5982 | null | kmulka-bloomberg | false | [
"This wasn’t working for me a bit earlier, but it looks to be back up now",
"We had a minor issue updating the docs after the latest release. It should work now :)."
] |
1,770,310,087 | 5,981 | Only two cores are getting used in sagemaker with pytorch 3.10 kernel | closed | 2023-06-22T19:57:31 | 2023-10-30T06:17:40 | 2023-07-24T11:54:52 | https://github.com/huggingface/datasets/issues/5981 | null | mmr-crexi | false | [
"I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https://github.com/pytorch/pytorch/issues/99625",
"From reading that ticket, it may be down in mkl? Is it worth hotfixing in the ... |
1,770,255,973 | 5,980 | Viewing dataset card returns “502 Bad Gateway” | closed | 2023-06-22T19:14:48 | 2023-06-27T08:38:19 | 2023-06-26T14:42:45 | https://github.com/huggingface/datasets/issues/5980 | null | tbenthompson | false | [
"Can you try again? Maybe there was a minor outage.",
"Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. ",
"we fixed something on the server side, glad it's fixed now"
] |
1,770,198,250 | 5,979 | set dev version | closed | 2023-06-22T18:32:14 | 2023-06-22T18:42:22 | 2023-06-22T18:32:22 | https://github.com/huggingface/datasets/pull/5979 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5979",
"html_url": "https://github.com/huggingface/datasets/pull/5979",
"diff_url": "https://github.com/huggingface/datasets/pull/5979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5979.patch",
"merged_at": "2023-06-22T18:32:22"
} | lhoestq | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5979). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,770,187,053 | 5,978 | Release: 2.13.1 | closed | 2023-06-22T18:23:11 | 2023-06-22T18:40:24 | 2023-06-22T18:30:16 | https://github.com/huggingface/datasets/pull/5978 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"merged_at": "2023-06-22T18:30:16"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,768,503,913 | 5,976 | Avoid stuck map operation when subprocesses crashes | closed | 2023-06-21T21:18:31 | 2023-07-10T09:58:39 | 2023-07-10T09:50:07 | https://github.com/huggingface/datasets/pull/5976 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5976",
"html_url": "https://github.com/huggingface/datasets/pull/5976",
"diff_url": "https://github.com/huggingface/datasets/pull/5976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5976.patch",
"merged_at": "2023-07-10T09:50:07"
} | pappacena | true | [
"Hi ! Do you think this can be fixed at the Pool level ? Ideally it should be the Pool responsibility to handle this, not the `map` code. We could even subclass Pool if needed (at least the one from `multiprocess`)",
"@lhoestq it makes sense to me. Just pushed a refactoring creating a `class ProcessPool(multiproc... |
1,768,271,343 | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | closed | 2023-06-21T19:10:02 | 2023-06-30T05:55:39 | 2023-06-30T05:55:38 | https://github.com/huggingface/datasets/issues/5975 | null | Veluchs | false | [
"Duplicate of #",
"Hi ! can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables",
"Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PR... |
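The fix suggested above (setting the uppercase proxy variables that `aiohttp` reads) can be applied before loading; a minimal sketch, with a hypothetical proxy URL and a plain dict standing in for `os.environ`:

```python
import os

def mirror_proxy_vars(env=os.environ):
    """Copy lowercase proxy settings to the UPPERCASE names that
    aiohttp (used by datasets for streaming) reads."""
    for var in ("http_proxy", "https_proxy"):
        if var in env:
            env[var.upper()] = env[var]
    return env

# Demonstrated on a plain dict so nothing global is mutated:
cfg = mirror_proxy_vars({"http_proxy": "http://proxy.example:3128"})
```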
1,767,981,231 | 5,974 | Deprecate `errors` param in favor of `encoding_errors` in text builder | closed | 2023-06-21T16:31:38 | 2023-06-26T10:34:43 | 2023-06-26T10:27:40 | https://github.com/huggingface/datasets/pull/5974 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5974",
"html_url": "https://github.com/huggingface/datasets/pull/5974",
"diff_url": "https://github.com/huggingface/datasets/pull/5974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5974.patch",
"merged_at": "2023-06-26T10:27:40"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,767,897,485 | 5,972 | Filter unsupported extensions | closed | 2023-06-21T15:43:01 | 2023-06-22T14:23:29 | 2023-06-22T14:16:26 | https://github.com/huggingface/datasets/pull/5972 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5972",
"html_url": "https://github.com/huggingface/datasets/pull/5972",
"diff_url": "https://github.com/huggingface/datasets/pull/5972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5972.patch",
"merged_at": "2023-06-22T14:16:26"
} | lhoestq | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,767,053,635 | 5,971 | Docs: make "repository structure" easier to find | open | 2023-06-21T08:26:44 | 2023-07-05T06:51:38 | null | https://github.com/huggingface/datasets/issues/5971 | null | severo | false | [
"Loading a local dataset also works the same way when `data_files` are not specified, so I agree we should make this info easier to discover \r\n\r\ncc @stevhliu ",
"Is this issue open? If so, I will self assign. ",
"@benjaminbrown038 Yes, it is. Maybe @stevhliu can give some pointers on improving this doc pag... |
1,766,010,356 | 5,970 | description disappearing from Info when Uploading a Dataset Created with `from_dict` | open | 2023-06-20T19:18:26 | 2023-06-22T14:23:56 | null | https://github.com/huggingface/datasets/issues/5970 | null | balisujohn | false | [
"Here's a minimal way to reproduce the bug, for the sake of convenience.\r\n````\r\nfrom datasets import Dataset, DatasetInfo, load_dataset\r\n\r\n\r\nepisodes_dict = {\"test\":[1,2,3],\"test2\": [1,2,4]}\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=\"test_str\... |
1,765,529,905 | 5,969 | Add `encoding` and `errors` params to JSON loader | closed | 2023-06-20T14:28:35 | 2023-06-21T13:39:50 | 2023-06-21T13:32:22 | https://github.com/huggingface/datasets/pull/5969 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5969",
"html_url": "https://github.com/huggingface/datasets/pull/5969",
"diff_url": "https://github.com/huggingface/datasets/pull/5969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5969.patch",
"merged_at": "2023-06-21T13:32:22"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,765,252,561 | 5,968 | Common Voice datasets still need `use_auth_token=True` | closed | 2023-06-20T11:58:37 | 2023-07-29T16:08:59 | 2023-07-29T16:08:58 | https://github.com/huggingface/datasets/issues/5968 | null | patrickvonplaten | false | [
"cc @pcuenca as well. \r\n\r\nNot super urgent btw",
"The issue commes from the dataset itself and is not related to the `datasets` lib\r\n\r\nsee https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1/blob/2c475b3b88e0f2e5828f830a4b91618a25ff20b7/common_voice_6_1.py#L148-L152",
"Let's remove these... |
1,763,926,520 | 5,967 | Config name / split name lost after map with multiproc | open | 2023-06-19T17:27:36 | 2023-06-28T08:55:25 | null | https://github.com/huggingface/datasets/issues/5967 | null | sanchit-gandhi | false | [
"This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_na... |
1,763,885,914 | 5,966 | Fix JSON generation in benchmarks CI | closed | 2023-06-19T16:56:06 | 2023-06-19T17:29:11 | 2023-06-19T17:22:10 | https://github.com/huggingface/datasets/pull/5966 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5966",
"html_url": "https://github.com/huggingface/datasets/pull/5966",
"diff_url": "https://github.com/huggingface/datasets/pull/5966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5966.patch",
"merged_at": "2023-06-19T17:22:10"
} | mariosasko | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,763,648,540 | 5,965 | "Couldn't cast array of type" in complex datasets | closed | 2023-06-19T14:16:14 | 2023-07-26T15:13:53 | 2023-07-26T15:13:53 | https://github.com/huggingface/datasets/issues/5965 | null | piercefreeman | false | [
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datase... |
1,763,513,574 | 5,964 | Always return list in `list_datasets` | closed | 2023-06-19T13:07:08 | 2023-06-19T17:29:37 | 2023-06-19T17:22:41 | https://github.com/huggingface/datasets/pull/5964 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5964",
"html_url": "https://github.com/huggingface/datasets/pull/5964",
"diff_url": "https://github.com/huggingface/datasets/pull/5964.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5964.patch",
"merged_at": "2023-06-19T17:22:41"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,762,774,457 | 5,963 | Got an error _pickle.PicklingError use Dataset.from_spark. | closed | 2023-06-19T05:30:35 | 2023-07-24T11:55:46 | 2023-07-24T11:55:46 | https://github.com/huggingface/datasets/issues/5963 | null | yanzia12138 | false | [
"I got an error using the method from_spark with a multi-node Spark cluster. It seems \"from_spark\" can only be used locally?",
"@lhoestq ",
"cc @maddiedawson it looks like there an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset build... |
1,761,589,882 | 5,962 | Issue with train_test_split maintaining the same underlying PyArrow Table | open | 2023-06-17T02:19:58 | 2023-06-17T02:19:58 | null | https://github.com/huggingface/datasets/issues/5962 | null | Oziel14 | false | [] |
1,758,525,111 | 5,961 | IterableDataset: split by node and map may preprocess samples that will be skipped anyway | open | 2023-06-15T10:29:10 | 2023-09-01T10:35:11 | null | https://github.com/huggingface/datasets/issues/5961 | null | johnchienbronci | false | [
"Does \"number of shards\" refer to the total number of data points?\r\n\r\nMy config:\r\nnproc_per_node=2\r\nds = load_dataset(streaming=True)['train'].take(50000)\r\n\r\nI tested again: in prepare_data(), the data is the same for each GPU\r\n",
"The number of shards is `ds.n_shards`. It corresponds generally to the ... |
1,757,397,507 | 5,959 | read metric glue.py from local file | closed | 2023-06-14T17:59:35 | 2023-06-14T18:04:16 | 2023-06-14T18:04:16 | https://github.com/huggingface/datasets/issues/5959 | null | JiazhaoLi | false | [
"Sorry, I solve this by call `evaluate.load('glue_metric.py','sst-2')`\r\n"
] |
1,757,265,971 | 5,958 | set dev version | closed | 2023-06-14T16:26:34 | 2023-06-14T16:34:55 | 2023-06-14T16:26:51 | https://github.com/huggingface/datasets/pull/5958 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5958",
"html_url": "https://github.com/huggingface/datasets/pull/5958",
"diff_url": "https://github.com/huggingface/datasets/pull/5958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5958.patch",
"merged_at": "2023-06-14T16:26:51"
} | lhoestq | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5958). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,757,252,466 | 5,957 | Release: 2.13.0 | closed | 2023-06-14T16:17:26 | 2023-06-14T16:33:39 | 2023-06-14T16:24:39 | https://github.com/huggingface/datasets/pull/5957 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5957",
"html_url": "https://github.com/huggingface/datasets/pull/5957",
"diff_url": "https://github.com/huggingface/datasets/pull/5957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5957.patch",
"merged_at": "2023-06-14T16:24:39"
} | lhoestq | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,756,959,367 | 5,956 | Fix ArrowExamplesIterable.shard_data_sources | closed | 2023-06-14T13:50:38 | 2023-06-14T14:43:12 | 2023-06-14T14:33:45 | https://github.com/huggingface/datasets/pull/5956 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5956",
"html_url": "https://github.com/huggingface/datasets/pull/5956",
"diff_url": "https://github.com/huggingface/datasets/pull/5956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5956.patch",
"merged_at": "2023-06-14T14:33:45"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,756,827,133 | 5,955 | Strange bug in loading local JSON files, using load_dataset | closed | 2023-06-14T12:46:00 | 2023-06-21T14:42:15 | 2023-06-21T14:42:15 | https://github.com/huggingface/datasets/issues/5955 | null | Night-Quiet | false | [
"This is the actual error:\r\n```\r\nFailed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values\r\n```\r\nWhich means some samples are incorrectly formatted.\r\n\r\nPyArrow, a storage backend that we use under the hoo... |
1,756,572,994 | 5,954 | Better filenotfound for gated | closed | 2023-06-14T10:33:10 | 2023-06-14T12:33:27 | 2023-06-14T12:26:31 | https://github.com/huggingface/datasets/pull/5954 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5954",
"html_url": "https://github.com/huggingface/datasets/pull/5954",
"diff_url": "https://github.com/huggingface/datasets/pull/5954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5954.patch",
"merged_at": "2023-06-14T12:26:31"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,756,520,523 | 5,953 | Bad error message when trying to download gated dataset | closed | 2023-06-14T10:03:39 | 2023-06-14T16:36:51 | 2023-06-14T12:26:32 | https://github.com/huggingface/datasets/issues/5953 | null | patrickvonplaten | false | [
"cc @sanchit-gandhi @Vaibhavs10 @lhoestq - this is mainly for demos that use Common Voice datasets as done here: https://github.com/facebookresearch/fairseq/tree/main/examples/mms#-transformers\r\n",
"Hi ! the error for me is\r\n\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/mozilla-foun... |
1,756,481,591 | 5,952 | Add Arrow builder docs | closed | 2023-06-14T09:42:46 | 2023-06-14T14:42:31 | 2023-06-14T14:34:39 | https://github.com/huggingface/datasets/pull/5952 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5952",
"html_url": "https://github.com/huggingface/datasets/pull/5952",
"diff_url": "https://github.com/huggingface/datasets/pull/5952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5952.patch",
"merged_at": "2023-06-14T14:34:39"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,756,363,546 | 5,951 | What is the Right way to use discofuse dataset?? | closed | 2023-06-14T08:38:39 | 2023-06-14T13:25:06 | 2023-06-14T12:10:16 | https://github.com/huggingface/datasets/issues/5951 | null | akesh1235 | false | [
"Thanks for opening https://huggingface.co/datasets/discofuse/discussions/3, let's continue the discussion over there if you don't mind",
"I have posted there also sir, please check\r\n@lhoestq"
] |
1,755,197,946 | 5,950 | Support for data with instance-wise dictionary as features | open | 2023-06-13T15:49:00 | 2025-04-07T13:20:37 | null | https://github.com/huggingface/datasets/issues/5950 | null | richardwth | false | [
"Hi ! We use the Arrow columnar format under the hood, which doesn't support such dictionaries: each field must have a fixed type and exist in each sample.\r\n\r\nInstead you can restructure your data like\r\n```\r\n{\r\n \"index\": 0,\r\n \"keys\": [\"2 * x + y >= 3\"],\r\n \"values\": [[\"2 * x + y >= 3\... |
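The restructuring suggested in that comment — turning a variable-key dictionary into parallel `keys`/`values` columns with fixed Arrow types — amounts to the following. The field names follow the comment; the sample row is made up:

```python
def flatten_constraints(example):
    # Arrow's columnar format needs every field to have a fixed type and
    # be present in every sample, so the per-instance dict becomes two
    # aligned list columns instead.
    return {
        "index": example["index"],
        "keys": list(example["constraints"].keys()),
        "values": list(example["constraints"].values()),
    }

row = {"index": 0, "constraints": {"2 * x + y >= 3": ["True", "False"]}}
flat = flatten_constraints(row)
```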
1,754,843,717 | 5,949 | Replace metadata utils with `huggingface_hub`'s RepoCard API | closed | 2023-06-13T13:03:19 | 2023-06-27T16:47:51 | 2023-06-27T16:38:32 | https://github.com/huggingface/datasets/pull/5949 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5949",
"html_url": "https://github.com/huggingface/datasets/pull/5949",
"diff_url": "https://github.com/huggingface/datasets/pull/5949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5949.patch",
"merged_at": "2023-06-27T16:38:32"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,754,794,611 | 5,948 | Fix sequence of array support for most dtype | closed | 2023-06-13T12:38:59 | 2023-06-14T15:11:55 | 2023-06-14T15:03:33 | https://github.com/huggingface/datasets/pull/5948 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5948",
"html_url": "https://github.com/huggingface/datasets/pull/5948",
"diff_url": "https://github.com/huggingface/datasets/pull/5948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5948.patch",
"merged_at": "2023-06-14T15:03:33"
} | qgallouedec | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,754,359,316 | 5,947 | Return the audio filename when decoding fails due to corrupt files | open | 2023-06-13T08:44:09 | 2023-06-14T12:45:01 | null | https://github.com/huggingface/datasets/issues/5947 | null | wetdog | false | [
"Hi ! The audio data don't always exist as files on disk - the blobs are often stored in the Arrow files. For now I'd suggest disabling decoding with `.cast_column(\"audio\", Audio(decode=False))` and apply your own decoding that handles corrupted files (maybe to filter them out ?)\r\n\r\ncc @sanchit-gandhi since i... |
1,754,234,469 | 5,946 | IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ?? | open | 2023-06-13T07:34:15 | 2023-07-14T12:04:48 | null | https://github.com/huggingface/datasets/issues/5946 | null | syngokhan | false | [
"https://colab.research.google.com/#scrollTo=AQ_HCYruWIHU&fileId=https%3A//huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb\r\n\r\nI ran the same administration exactly the same but got the same error",
"Looks related to https://discuss.huggingface.co/t/indexer... |
1,754,084,577 | 5,945 | Failing to upload dataset to the hub | closed | 2023-06-13T05:46:46 | 2023-07-24T11:56:40 | 2023-07-24T11:56:40 | https://github.com/huggingface/datasets/issues/5945 | null | Ar770 | false | [
"Hi ! Feel free to re-run your code later, it will resume automatically where you left",
"Tried many times in the last 2 weeks, problem remains.",
"Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\... |
1,752,882,200 | 5,944 | Arrow dataset builder to be able to load and stream Arrow datasets | closed | 2023-06-12T14:21:49 | 2023-06-13T17:36:02 | 2023-06-13T17:29:01 | https://github.com/huggingface/datasets/pull/5944 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5944",
"html_url": "https://github.com/huggingface/datasets/pull/5944",
"diff_url": "https://github.com/huggingface/datasets/pull/5944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5944.patch",
"merged_at": "2023-06-13T17:29:01"
} | mariusz-jachimowicz-83 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq tips applied. Thanks for a review. :smile: It's a lot of fun to improve this project. ",
"Let's add some documentation in a subsequent PR :)\r\n\r\nIn particular @mariosasko and I think it's important to note to users tha... |
1,752,021,681 | 5,942 | Pass datasets-cli additional args as kwargs to DatasetBuilder in `run_beam.py` | open | 2023-06-12T06:50:50 | 2023-06-30T09:15:00 | null | https://github.com/huggingface/datasets/pull/5942 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5942",
"html_url": "https://github.com/huggingface/datasets/pull/5942",
"diff_url": "https://github.com/huggingface/datasets/pull/5942.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5942.patch",
"merged_at": null
} | graelo | true | [] |
1,751,838,897 | 5,941 | Load Data Sets Too Slow In Train Seq2seq Model | closed | 2023-06-12T03:58:43 | 2023-08-15T02:52:22 | 2023-08-15T02:52:22 | https://github.com/huggingface/datasets/issues/5941 | null | xyx361100238 | false | [
"Hi ! you can speed it up using multiprocessing by passing `num_proc=` to `load_dataset()`",
"Already did, but it's not useful for the \"Generating train split\" step; it works for the \"Resolving data files\" & \"Downloading data files\" steps",
"@mariosasko some advice , thanks!",
"I met the same problem, terrible experience... |
1,774,389,854 | 5,990 | Pushing a large dataset on the hub consistently hangs | open | 2023-06-10T14:46:47 | 2025-02-15T09:29:10 | null | https://github.com/huggingface/datasets/issues/5990 | null | AntreasAntoniou | false | [
"Hi @AntreasAntoniou , sorry to know you are facing this issue. To help debugging it, could you tell me:\r\n- What is the total dataset size?\r\n- Is it always failing on the same shard or is the hanging problem happening randomly?\r\n- Were you able to save the dataset as parquet locally? This would help us determ... |
1,749,955,883 | 5,939 | . | closed | 2023-06-09T14:01:34 | 2023-06-12T12:19:34 | 2023-06-12T12:19:19 | https://github.com/huggingface/datasets/issues/5939 | null | flckv | false | [] |
1,749,462,851 | 5,938 | Make get_from_cache use custom temp filename that is locked | closed | 2023-06-09T09:01:13 | 2023-06-14T13:35:38 | 2023-06-14T13:27:24 | https://github.com/huggingface/datasets/pull/5938 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5938",
"html_url": "https://github.com/huggingface/datasets/pull/5938",
"diff_url": "https://github.com/huggingface/datasets/pull/5938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5938.patch",
"merged_at": "2023-06-14T13:27:24"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,749,388,597 | 5,937 | Avoid parallel redownload in cache | closed | 2023-06-09T08:18:36 | 2023-06-14T12:30:59 | 2023-06-14T12:23:57 | https://github.com/huggingface/datasets/pull/5937 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5937",
"html_url": "https://github.com/huggingface/datasets/pull/5937",
"diff_url": "https://github.com/huggingface/datasets/pull/5937.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5937.patch",
"merged_at": "2023-06-14T12:23:57"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,748,424,388 | 5,936 | Sequence of array not supported for most dtype | closed | 2023-06-08T18:18:07 | 2023-06-14T15:03:34 | 2023-06-14T15:03:34 | https://github.com/huggingface/datasets/issues/5936 | null | qgallouedec | false | [
"Related, `float16` is the only dtype not supported by `Array2D` (probably by every `ArrayND`):\r\n\r\n```python\r\nfrom datasets import Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # ok\r\n \"int16\", # ok\r\n \"int32\", # ok\r\n \"i... |
1,748,090,220 | 5,935 | Better row group size in push_to_hub | closed | 2023-06-08T15:01:15 | 2023-06-09T17:47:37 | 2023-06-09T17:40:09 | https://github.com/huggingface/datasets/pull/5935 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5935",
"html_url": "https://github.com/huggingface/datasets/pull/5935",
"diff_url": "https://github.com/huggingface/datasets/pull/5935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5935.patch",
"merged_at": "2023-06-09T17:40:09"
} | lhoestq | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,747,904,840 | 5,934 | Modify levels of some logging messages | closed | 2023-06-08T13:31:44 | 2023-07-12T18:21:03 | 2023-07-12T18:21:02 | https://github.com/huggingface/datasets/pull/5934 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5934",
"html_url": "https://github.com/huggingface/datasets/pull/5934",
"diff_url": "https://github.com/huggingface/datasets/pull/5934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5934.patch",
"merged_at": null
} | Laurent2916 | true | [
"I've addressed this as part of #6019, so feel free to close this PR. ",
"Thanks !"
] |
1,747,382,500 | 5,933 | Fix `to_numpy` when None values in the sequence | closed | 2023-06-08T08:38:56 | 2023-06-09T13:49:41 | 2023-06-09T13:23:48 | https://github.com/huggingface/datasets/pull/5933 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5933",
"html_url": "https://github.com/huggingface/datasets/pull/5933",
"diff_url": "https://github.com/huggingface/datasets/pull/5933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5933.patch",
"merged_at": "2023-06-09T13:23:48"
} | qgallouedec | true | [
"I just added the same test with dynamic shape",
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome ! I'm merging now if you don't mind :)\r\nWe should probably give you permissions to merge your own PRs when you have an approval",
"<details>\n<summary>Show benchmarks</su... |
1,746,249,161 | 5,932 | [doc build] Use secrets | closed | 2023-06-07T16:09:39 | 2023-06-09T10:16:58 | 2023-06-09T09:53:16 | https://github.com/huggingface/datasets/pull/5932 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5932",
"html_url": "https://github.com/huggingface/datasets/pull/5932",
"diff_url": "https://github.com/huggingface/datasets/pull/5932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5932.patch",
"merged_at": "2023-06-09T09:53:16"
} | mishig25 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,745,408,784 | 5,931 | `datasets.map` not reusing cached copy by default | closed | 2023-06-07T09:03:33 | 2023-06-21T16:15:40 | 2023-06-21T16:15:40 | https://github.com/huggingface/datasets/issues/5931 | null | bhavitvyamalik | false | [
"This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on... |
1,745,184,395 | 5,930 | loading private custom dataset script - authentication error | closed | 2023-06-07T06:58:23 | 2023-06-15T14:49:21 | 2023-06-15T14:49:20 | https://github.com/huggingface/datasets/issues/5930 | null | flckv | false | [
"This issue seems to have been resolved, so I'm closing it."
] |
1,744,478,456 | 5,929 | Importing PyTorch reduces multiprocessing performance for map | closed | 2023-06-06T19:42:25 | 2023-06-16T13:09:12 | 2023-06-16T13:09:12 | https://github.com/huggingface/datasets/issues/5929 | null | Maxscha | false | [
"Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.",
"Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigati... |
1,744,098,371 | 5,928 | Fix link to quickstart docs in README.md | closed | 2023-06-06T15:23:01 | 2023-06-06T15:52:34 | 2023-06-06T15:43:53 | https://github.com/huggingface/datasets/pull/5928 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5928",
"html_url": "https://github.com/huggingface/datasets/pull/5928",
"diff_url": "https://github.com/huggingface/datasets/pull/5928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5928.patch",
"merged_at": "2023-06-06T15:43:53"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,744,009,032 | 5,927 | `IndexError` when indexing `Sequence` of `Array2D` with `None` values | closed | 2023-06-06T14:36:22 | 2023-06-13T12:39:39 | 2023-06-09T13:23:50 | https://github.com/huggingface/datasets/issues/5927 | null | qgallouedec | false | [
"Easy fix would be to add:\r\n\r\n```python\r\nnull_indices -= np.arange(len(null_indices))\r\n```\r\n\r\nbefore L279, but I'm not sure it's the most intuitive way to fix it.",
"Same issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7fcbe5b1575c8d162b65b9397b3dfda995a4e048/src/datasets/features/feat... |
1,743,922,028 | 5,926 | Uncaught exception when generating the splits from a dataset that miss data | open | 2023-06-06T13:51:01 | 2023-06-07T07:53:16 | null | https://github.com/huggingface/datasets/issues/5926 | null | severo | false | [
"Thanks for reporting, @severo.\r\n\r\nThis is a known issue with `fsspec`:\r\n- #5862\r\n- https://github.com/fsspec/filesystem_spec/issues/1265"
] |
1,741,941,436 | 5,925 | Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets | closed | 2023-06-05T14:46:04 | 2023-06-19T17:22:43 | 2023-06-19T17:22:43 | https://github.com/huggingface/datasets/issues/5925 | null | mtkinit | false | [] |
1,738,889,236 | 5,924 | Add parallel module using joblib for Spark | closed | 2023-06-02T22:25:25 | 2023-06-14T10:25:10 | 2023-06-14T10:15:46 | https://github.com/huggingface/datasets/pull/5924 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5924",
"html_url": "https://github.com/huggingface/datasets/pull/5924",
"diff_url": "https://github.com/huggingface/datasets/pull/5924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5924.patch",
"merged_at": "2023-06-14T10:15:46"
} | es94129 | true | [
"Hi @lhoestq, I added the `parallel` part according to the discussion we had. Could you take a look to see if this is aligned with your proposal?\r\n\r\nMeanwhile I'm working on adding a `parallel_backend` parameter to `load_datasets` so that it can be used like:\r\n```python\r\nwith parallel_backend('spark', steps... |
1,737,436,227 | 5,923 | Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility | closed | 2023-06-02T04:16:32 | 2024-06-27T10:07:49 | 2024-02-25T16:38:03 | https://github.com/huggingface/datasets/issues/5923 | null | ehuangc | false | [
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; ... |
1,736,898,953 | 5,922 | Length of table does not accurately reflect the split | closed | 2023-06-01T18:56:26 | 2023-06-02T16:13:31 | 2023-06-02T16:13:31 | https://github.com/huggingface/datasets/issues/5922 | null | amogkam | false | [
"As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locate which examples are part of train or test.",
"This is an optimization that w... |
1,736,563,023 | 5,921 | Fix streaming parquet with image feature in schema | closed | 2023-06-01T15:23:10 | 2023-06-02T10:02:54 | 2023-06-02T09:53:11 | https://github.com/huggingface/datasets/pull/5921 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5921",
"html_url": "https://github.com/huggingface/datasets/pull/5921",
"diff_url": "https://github.com/huggingface/datasets/pull/5921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5921.patch",
"merged_at": "2023-06-02T09:53:11"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,736,196,991 | 5,920 | Optimize IterableDataset.from_file using ArrowExamplesIterable | closed | 2023-06-01T12:14:36 | 2023-06-01T12:42:10 | 2023-06-01T12:35:14 | https://github.com/huggingface/datasets/pull/5920 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5920",
"html_url": "https://github.com/huggingface/datasets/pull/5920",
"diff_url": "https://github.com/huggingface/datasets/pull/5920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5920.patch",
"merged_at": "2023-06-01T12:35:14"
} | lhoestq | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,735,519,227 | 5,919 | add support for storage_options for load_dataset API | closed | 2023-06-01T05:52:32 | 2023-07-18T06:14:32 | 2023-07-17T17:02:00 | https://github.com/huggingface/datasets/pull/5919 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5919",
"html_url": "https://github.com/huggingface/datasets/pull/5919",
"diff_url": "https://github.com/huggingface/datasets/pull/5919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5919.patch",
"merged_at": null
} | janineguo | true | [
"hi @lhoestq,\r\nI saw some errors in my test and found all the failed reasons are `FileNotFoundError` about `test_load_streaming_private_dataset_with_zipped_data` and `test_load_dataset_private_zipped_images` in `test_load.py `, I run pytest on my own Wins and Ubuntu system all the test in `test_load.py ` are suc... |
1,735,313,549 | 5,918 | File not found for audio dataset | open | 2023-06-01T02:15:29 | 2023-06-11T06:02:25 | null | https://github.com/huggingface/datasets/issues/5918 | null | RobertBaruch | false | [
"load_dataset () did not work for loading local files either "
] |
1,733,661,588 | 5,917 | Refactor extensions | closed | 2023-05-31T08:33:02 | 2023-05-31T13:34:35 | 2023-05-31T13:25:57 | https://github.com/huggingface/datasets/pull/5917 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5917",
"html_url": "https://github.com/huggingface/datasets/pull/5917",
"diff_url": "https://github.com/huggingface/datasets/pull/5917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5917.patch",
"merged_at": "2023-05-31T13:25:57"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,732,456,392 | 5,916 | Unpin responses | closed | 2023-05-30T14:59:48 | 2023-05-30T18:03:10 | 2023-05-30T17:53:29 | https://github.com/huggingface/datasets/pull/5916 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5916",
"html_url": "https://github.com/huggingface/datasets/pull/5916",
"diff_url": "https://github.com/huggingface/datasets/pull/5916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5916.patch",
"merged_at": "2023-05-30T17:53:29"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,732,389,984 | 5,915 | Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"` | closed | 2023-05-30T14:27:55 | 2023-05-31T13:31:21 | 2023-05-31T13:23:54 | https://github.com/huggingface/datasets/pull/5915 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5915",
"html_url": "https://github.com/huggingface/datasets/pull/5915",
"diff_url": "https://github.com/huggingface/datasets/pull/5915.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5915.patch",
"merged_at": "2023-05-31T13:23:54"
} | mariosasko | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,731,483,996 | 5,914 | array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets | open | 2023-05-30T04:25:00 | 2024-10-27T04:09:18 | null | https://github.com/huggingface/datasets/issues/5914 | null | ravenouse | false | [
"Was a fix for this identified?",
"> Was a fix for this identified?\r\n\r\nHi @pranav-sridhar \r\nHave you encountered a similar issue with this dataset?\r\nI’ve modified the dataset construction script to address the problem. Feel free to use this updated version to avoid the issue.\r\n\r\n[Ericwang/samromur_chi... |
1,731,427,484 | 5,913 | I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred. | closed | 2023-05-30T02:55:26 | 2023-07-24T12:00:38 | 2023-07-24T12:00:38 | https://github.com/huggingface/datasets/issues/5913 | null | cjt222 | false | [
"Thanks for reporting, @cjt222.\r\n\r\nWhat is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead. ",
"> Thanks for reporting, @cjt222.\r\n> \r\n> What is the structure of your JSON files. Please note that it is normally simpler if the data file... |
1,730,299,852 | 5,912 | Missing elements in `map` a batched dataset | closed | 2023-05-29T08:09:19 | 2023-07-26T15:48:15 | 2023-07-26T15:48:15 | https://github.com/huggingface/datasets/issues/5912 | null | sachinruk | false | [
"Hi ! in your code batching is **only used within** `map`, to process examples in batch. The dataset itself however is not batched and returns elements one by one.\r\n\r\nTo iterate on batches, you can do\r\n```python\r\nfor batch in dataset.iter(batch_size=8):\r\n ...\r\n```"
] |
1,728,909,790 | 5,910 | Cannot use both set_format and set_transform | closed | 2023-05-27T19:22:23 | 2023-07-09T21:40:54 | 2023-06-16T14:41:24 | https://github.com/huggingface/datasets/issues/5910 | null | ybouane | false | [
"Currently, it's not possible to chain `set_format`/`set_transform` calls (plus, this is a breaking change if we decide to implement it), so I see two possible solutions:\r\n* using `set_format`/`set_transform` for the 1st transform and then passing the transformed example/batch to the 2nd transform\r\n* implementi... |
1,728,900,068 | 5,909 | Use more efficient and idiomatic way to construct list. | closed | 2023-05-27T18:54:47 | 2023-05-31T15:37:11 | 2023-05-31T13:28:29 | https://github.com/huggingface/datasets/pull/5909 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5909",
"html_url": "https://github.com/huggingface/datasets/pull/5909",
"diff_url": "https://github.com/huggingface/datasets/pull/5909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5909.patch",
"merged_at": "2023-05-31T13:28:28"
} | ttsugriy | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,728,653,935 | 5,908 | Unbearably slow sorting on big mapped datasets | open | 2023-05-27T11:08:32 | 2023-06-13T17:45:10 | null | https://github.com/huggingface/datasets/issues/5908 | null | maximxlss | false | [
"Hi ! `shard` currently returns a slow dataset by default, with examples evenly distributed in the dataset.\r\n\r\nYou can get a fast dataset using `contiguous=True` (which should be the default imo):\r\n\r\n```python\r\ndataset = dataset.shard(10, 0, contiguous=True)\r\n```\r\n\r\nThis way you don't need to flatte... |
1,728,648,560 | 5,907 | Add `flatten_indices` to `DatasetDict` | closed | 2023-05-27T10:55:44 | 2023-06-01T11:46:35 | 2023-06-01T11:39:36 | https://github.com/huggingface/datasets/pull/5907 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5907",
"html_url": "https://github.com/huggingface/datasets/pull/5907",
"diff_url": "https://github.com/huggingface/datasets/pull/5907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5907.patch",
"merged_at": "2023-06-01T11:39:35"
} | maximxlss | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,728,171,113 | 5,906 | Could you unpin responses version? | closed | 2023-05-26T20:02:14 | 2023-05-30T17:53:31 | 2023-05-30T17:53:31 | https://github.com/huggingface/datasets/issues/5906 | null | kenimou | false | [] |
1,727,541,392 | 5,905 | Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently | open | 2023-05-26T12:33:02 | 2023-06-15T13:34:18 | null | https://github.com/huggingface/datasets/issues/5905 | null | bruno-hays | false | [
"We plan to improve this eventually (see https://github.com/huggingface/datasets/issues/5454 and https://github.com/huggingface/datasets/issues/5380).\r\n\r\n> Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script), maybe someth... |
1,727,415,626 | 5,904 | Validate name parameter in make_file_instructions | closed | 2023-05-26T11:12:46 | 2023-05-31T07:43:32 | 2023-05-31T07:34:57 | https://github.com/huggingface/datasets/pull/5904 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5904",
"html_url": "https://github.com/huggingface/datasets/pull/5904",
"diff_url": "https://github.com/huggingface/datasets/pull/5904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5904.patch",
"merged_at": "2023-05-31T07:34:57"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,727,372,549 | 5,903 | Relax `ci.yml` trigger for `pull_request` based on modified paths | open | 2023-05-26T10:46:52 | 2023-09-07T15:52:36 | null | https://github.com/huggingface/datasets/pull/5903 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5903",
"html_url": "https://github.com/huggingface/datasets/pull/5903",
"diff_url": "https://github.com/huggingface/datasets/pull/5903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5903.patch",
"merged_at": null
} | alvarobartt | true | [
"Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! 🤗",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5903). All of your documentation changes will be reflected on that endpoint.",
"Maybe ... |
1,727,342,194 | 5,902 | Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository | closed | 2023-05-26T10:25:01 | 2023-07-25T13:50:06 | 2023-07-25T13:38:33 | https://github.com/huggingface/datasets/pull/5902 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5902",
"html_url": "https://github.com/huggingface/datasets/pull/5902",
"diff_url": "https://github.com/huggingface/datasets/pull/5902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5902.patch",
"merged_at": "2023-07-25T13:38:33"
} | alvarobartt | true | [
"Random fact: previous run was showing that the Hub was hosting 13336 datasets, while the most recent run shows 36662 👀🎉",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! \r\n\r\nHowever, I think we should stop linking this notebook and use the notebook version of the ... |
1,727,179,016 | 5,901 | Make prepare_split more robust if errors in metadata dataset_info splits | closed | 2023-05-26T08:48:22 | 2023-06-02T06:06:38 | 2023-06-01T13:39:40 | https://github.com/huggingface/datasets/pull/5901 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5901",
"html_url": "https://github.com/huggingface/datasets/pull/5901",
"diff_url": "https://github.com/huggingface/datasets/pull/5901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5901.patch",
"merged_at": "2023-06-01T13:39:39"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,727,129,617 | 5,900 | Fix minor typo in docs loading.mdx | closed | 2023-05-26T08:10:54 | 2023-05-26T09:34:15 | 2023-05-26T09:25:12 | https://github.com/huggingface/datasets/pull/5900 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5900",
"html_url": "https://github.com/huggingface/datasets/pull/5900",
"diff_url": "https://github.com/huggingface/datasets/pull/5900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5900.patch",
"merged_at": "2023-05-26T09:25:12"
} | albertvillanova | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,726,279,011 | 5,899 | canonicalize data dir in config ID hash | closed | 2023-05-25T18:17:10 | 2023-06-02T16:02:15 | 2023-06-02T15:52:04 | https://github.com/huggingface/datasets/pull/5899 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5899",
"html_url": "https://github.com/huggingface/datasets/pull/5899",
"diff_url": "https://github.com/huggingface/datasets/pull/5899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5899.patch",
"merged_at": "2023-06-02T15:52:04"
} | kylrth | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,726,190,481 | 5,898 | Loading The flores data set for specific language | closed | 2023-05-25T17:08:55 | 2023-05-25T17:21:38 | 2023-05-25T17:21:37 | https://github.com/huggingface/datasets/issues/5898 | null | 106AbdulBasit | false | [
"got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook/flores\", \"ace_Arab\")"
] |
1,726,135,494 | 5,897 | Fix `FixedSizeListArray` casting | closed | 2023-05-25T16:26:33 | 2023-05-26T12:22:04 | 2023-05-26T11:57:16 | https://github.com/huggingface/datasets/pull/5897 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5897",
"html_url": "https://github.com/huggingface/datasets/pull/5897",
"diff_url": "https://github.com/huggingface/datasets/pull/5897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5897.patch",
"merged_at": "2023-05-26T11:57:16"
} | mariosasko | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,726,022,500 | 5,896 | HuggingFace does not cache downloaded files aggressively/early enough | closed | 2023-05-25T15:14:36 | 2024-03-15T15:36:07 | 2024-03-15T15:36:07 | https://github.com/huggingface/datasets/issues/5896 | null | jack-jjm | false | [
"I also faced this. Any update?",
"We've dropped the `apache-beam` dependency in https://huggingface.co/datasets/wikipedia/discussions/19, so you should no longer get this error."
] |
1,725,467,252 | 5,895 | The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset | closed | 2023-05-25T09:39:06 | 2023-05-29T02:32:12 | 2023-05-29T02:32:12 | https://github.com/huggingface/datasets/issues/5895 | null | DongHande | false | [
"Thanks for reporting, @DongHande.\r\n\r\nI think the issue is caused by the metadata in the dataset card: in the header of the `README.md`, they state that the dataset has 4 splits (\"finetune\", \"reward\", \"rl\", \"evaluation\"). \r\n```yaml\r\n splits:\r\n - name: finetune\r\n num_bytes: 6674567576\r\... |
1,724,774,910 | 5,894 | Force overwrite existing filesystem protocol | closed | 2023-05-24T21:41:53 | 2023-05-25T06:52:08 | 2023-05-25T06:42:33 | https://github.com/huggingface/datasets/pull/5894 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5894",
"html_url": "https://github.com/huggingface/datasets/pull/5894",
"diff_url": "https://github.com/huggingface/datasets/pull/5894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5894.patch",
"merged_at": "2023-05-25T06:42:33"
} | baskrahmer | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,722,519,056 | 5,893 | Load cached dataset as iterable | closed | 2023-05-23T17:40:35 | 2023-06-01T11:58:24 | 2023-06-01T11:51:29 | https://github.com/huggingface/datasets/pull/5893 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5893",
"html_url": "https://github.com/huggingface/datasets/pull/5893",
"diff_url": "https://github.com/huggingface/datasets/pull/5893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5893.patch",
"merged_at": "2023-06-01T11:51:29"
} | mariusz-jachimowicz-83 | true | [
"@lhoestq Could you please look into that and review?",
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I refactored the code. Could you please check is it what you requested?",
"@lhoestq Thanks for a review. Excellent tips. All tips applied. ",
"I think there is j... |
1,722,503,824 | 5,892 | User access requests with manual review do not notify the dataset owner | closed | 2023-05-23T17:27:46 | 2023-07-21T13:55:37 | 2023-07-21T13:55:36 | https://github.com/huggingface/datasets/issues/5892 | null | leondz | false | [
"cc @SBrandeis",
"I think this has been addressed.\r\n\r\nPlease open a new issue if you are still not getting notified."
] |
1,722,384,135 | 5,891 | Make split slicing consistent with list slicing | closed | 2023-05-23T16:04:33 | 2024-01-31T16:00:26 | 2024-01-31T15:54:17 | https://github.com/huggingface/datasets/pull/5891 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"merged_at": "2024-01-31T15:54:17"
} | mariosasko | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,722,373,618 | 5,889 | Token Alignment for input and output data over train and test batch/dataset. | open | 2023-05-23T15:58:55 | 2023-05-23T15:58:55 | null | https://github.com/huggingface/datasets/issues/5889 | null | akesh1235 | false | [] |
1,722,166,382 | 5,887 | HuggingsFace dataset example give error | closed | 2023-05-23T14:09:05 | 2023-07-25T14:01:01 | 2023-07-25T14:01:00 | https://github.com/huggingface/datasets/issues/5887 | null | donhuvy | false | [
"Nice catch @donhuvy, that's because some models don't need the `token_type_ids`, as in this case, as the example is using `distilbert-base-cased`, and according to the DistilBert documentation at https://huggingface.co/transformers/v3.0.2/model_doc/distilbert.html, `DistilBert doesn’t have token_type_ids, you don’... |
1,721,070,225 | 5,886 | Use work-stealing algorithm when parallel computing | open | 2023-05-23T03:08:44 | 2023-05-24T15:30:09 | null | https://github.com/huggingface/datasets/issues/5886 | null | 1014661165 | false | [
"Alternatively we could set the number of shards to be a factor than the number of processes (current they're equal) - this way it will be less likely to end up with a shard that is significantly slower than all the other ones."
] |
1,720,954,440 | 5,885 | Modify `is_remote_filesystem` to return True for FUSE-mounted paths | closed | 2023-05-23T01:04:54 | 2024-01-08T18:31:00 | 2024-01-08T18:31:00 | https://github.com/huggingface/datasets/pull/5885 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5885",
"html_url": "https://github.com/huggingface/datasets/pull/5885",
"diff_url": "https://github.com/huggingface/datasets/pull/5885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5885.patch",
"merged_at": null
} | maddiedawson | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5885). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq would you or another maintainer be able to review please? :)",
"Why you do need to support FUSE mounted paths ?\r\n\r\n`datasets` uses d... |
1,722,290,363 | 5,888 | A way to upload and visualize .mp4 files (millions of them) as part of a dataset | open | 2023-05-22T18:05:26 | 2023-06-23T03:37:16 | null | https://github.com/huggingface/datasets/issues/5888 | null | AntreasAntoniou | false | [
"Hi! \r\n\r\nYou want to use `push_to_hub` (creates Parquet files) instead of `save_to_disk` (creates Arrow files) when creating a Hub dataset. Parquet is designed for long-term storage and takes less space than the Arrow format, and, most importantly, `load_dataset` can parse it, which should fix the viewer. \r\n\... |
1,719,548,172 | 5,884 | `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_` | closed | 2023-05-22T12:03:06 | 2023-06-09T16:04:56 | 2023-06-09T16:04:55 | https://github.com/huggingface/datasets/issues/5884 | null | alvarobartt | false | [
"May eventually be solved in #5883 ",
"#self-assign"
] |
1,719,527,597 | 5,883 | Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset` | closed | 2023-05-22T11:51:07 | 2023-06-08T11:09:03 | 2023-06-06T16:49:15 | https://github.com/huggingface/datasets/pull/5883 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5883",
"html_url": "https://github.com/huggingface/datasets/pull/5883",
"diff_url": "https://github.com/huggingface/datasets/pull/5883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5883.patch",
"merged_at": "2023-06-06T16:49:15"
} | alvarobartt | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n\r... |
1,719,402,643 | 5,881 | Split dataset by node: index error when sharding iterable dataset | open | 2023-05-22T10:36:13 | 2025-01-31T16:36:30 | null | https://github.com/huggingface/datasets/issues/5881 | null | sanchit-gandhi | false | [
"cc @lhoestq in case you have any ideas here! Might need a multi-host set-up to debug (can give you access to a JAX one if you need)",
"I am also facing the same problem. Could you let me know if you found a solution for this?",
"I couldn't reproduce with the latest version of `datasets` 2.16.1, can you update ... |
1,719,090,101 | 5,880 | load_dataset from s3 file system through streaming can't not iterate data | open | 2023-05-22T07:40:27 | 2023-05-26T12:52:08 | null | https://github.com/huggingface/datasets/issues/5880 | null | janineguo | false | [
"This sounds related to #5281.\r\n\r\nCan you try passing `storage_options=s3_client.storage_options` instead passing it to `use_auth_token=` ?",
"I tried `storage_options` before, but it doesn't work, I checked our source code and I found that we even didn't pass this parameter to the following process. if I use... |
1,718,203,843 | 5,878 | Prefetching for IterableDataset | open | 2023-05-20T15:25:40 | 2025-01-24T17:13:55 | null | https://github.com/huggingface/datasets/issues/5878 | null | vyeevani | false | [
"Very cool! Do you have a link to the code that you're using to eagerly fetch the data? Would also be interested in hacking around something here for pre-fetching iterable datasets",
"I ended up just switching back to the pytorch dataloader and using it's multiprocessing functionality to handle this :(. I'm just ... |