Dataset schema (column statistics as reported by the viewer):

| Column | Type | Range / classes |
|---|---|---|
| id | int64 | 953M to 3.35B |
| number | int64 | 2.72k to 7.75k |
| title | string | length 1 to 290 |
| state | string | 2 classes |
| created_at | timestamp[s] | 2021-07-26 12:21:17 to 2025-08-23 00:18:43 |
| updated_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-23 12:34:39 |
| closed_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-20 16:35:55 |
| html_url | string | length 49 to 51 |
| pull_request | dict | |
| user_login | string | length 3 to 26 |
| is_pull_request | bool | 2 classes |
| comments | list | length 0 to 30 |
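
A minimal sketch of loading a dataset with this schema through `datasets.load_dataset`; the repository id below is a hypothetical placeholder, not taken from this dump:

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual dataset repo.
ds = load_dataset("user/github-issues", split="train")

# Columns match the schema above.
print(ds.features)  # id, number, title, state, created_at, ...
print(ds[0]["number"], ds[0]["title"], ds[0]["state"])
```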

**PR #7224 (open): fallback to default feature casting in case custom features not available during dataset loading**
id 2,583,233,980 · author alex-hh · created 2024-10-12T16:13:56 · updated 2024-10-12T16:13:56 · closed null · merged null
https://github.com/huggingface/datasets/pull/7224
comments: none

**Issue #7223 (open): Fallback to arrow defaults when loading dataset with custom features that aren't registered locally**
id 2,583,231,590 · author alex-hh · created 2024-10-12T16:08:20 · updated 2024-10-12T16:08:20 · closed null
https://github.com/huggingface/datasets/issues/7223
comments: none

**Issue #7222 (open): TypeError: Couldn't cast array of type string to null in long json**
id 2,582,678,033 · author nokados · created 2024-10-12T08:14:59 · updated 2025-07-21T03:07:32 · closed null
https://github.com/huggingface/datasets/issues/7222
comments:
[ "I am encountering this same issue. It seems that the library manages to recognise an optional column (but not **exclusively** null) if there is at least one non-null instance within the same file. For example, given a `test_0.jsonl` file:\r\n```json\r\n{\"a\": \"a1\", \"b\": \"b1\", \"c\": null, \"d\": null}\r\n{\"a\": \"a2\", \"b\": null, \"c\": \"c2\", \"d\": null}\r\n```\r\nthe data is correctly loaded, recognising that columns `b` & `c` are optional, while `d` is null.\r\n```python\r\n{'a': ['a1', 'a2'], 'b': ['b1', None], 'c': [None, 'c2'], 'd': [None, None]}\r\n```\r\n\r\nBut if the `config` has another file, say `test_1.jsonl` where `d` now has some non-null values:\r\n```json\r\n{\"a\": null, \"b\": \"b3\", \"c\": \"c3\", \"d\": \"d3\"}\r\n{\"a\": \"a4\", \"b\": \"b4\", \"c\": null, \"d\": null}\r\n```\r\nthen, an error is raised:\r\n```\r\nTypeError Traceback (most recent call last)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\r\n 1869 try:\r\n-> 1870 writer.write_table(table)\r\n 1871 except CastError as cast_error:\r\n\r\n14 frames\r\n\r\nTypeError: Couldn't cast array of type string to null\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\r\n 1895 if isinstance(e, DatasetGenerationError):\r\n 1896 raise\r\n-> 1897 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n 1898 \r\n 1899 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\n---\r\n\r\nI have created a [sample repository](https://huggingface.co/datasets/KurtMica/optional_columns_mutiple_files) if that helps. Interestingly, the dataset viewer correctly shows the data across files, although it still indicates the above error.", " Managed to find a workaround, by [specifying the features explicitly](https://huggingface.co/docs/datasets/main/en/loading#specify-features), which is also possible to do directly using the [YAML file configuration](https://discuss.huggingface.co/t/appropriate-yaml-for-dataset-info-list-float/74418).", "I hit the same issue for `datasets 3.2.0`. 
Given the two jsonl files with the same content but different ordering, `load_dataset` worked for one but did not work for the other.\n\n```\nfrom datasets import load_dataset\n\nissues_dataset = load_dataset(\n \"json\", data_files=\"NeMo-issues-fixed.jsonl\", split=\"train\"\n)\nissues_dataset\n```\n\nFor [NeMo-issues.jsonl](https://github.com/renweizhukov/jupyter-lab-notebook/blob/main/hugging-face-nlp-course/NeMo-issues.jsonl), I got an exception:\n\n```\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py:1870](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py#line=1869), in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1869 try:\n-> 1870 writer.write_table(table)\n 1871 except CastError as cast_error:\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/arrow_writer.py:622](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/arrow_writer.py#line=621), in ArrowWriter.write_table(self, pa_table, writer_batch_size)\n 621 pa_table = pa_table.combine_chunks()\n--> 622 pa_table = table_cast(pa_table, self._schema)\n 623 if self.embed_local_files:\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py:2292](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py#line=2291), in table_cast(table, schema)\n 2291 if table.schema != schema:\n-> 2292 return cast_table_to_schema(table, schema)\n 2293 elif table.schema.metadata != schema.metadata:\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py:2246](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py#line=2245), in cast_table_to_schema(table, schema)\n 2240 raise CastError(\n 2241 f\"Couldn't cast\\n{_short_str(table.schema)}\\nto\\n{_short_str(features)}\\nbecause column names don't match\",\n 2242 table_column_names=table.column_names,\n 2243 requested_column_names=list(features),\n 2244 )\n 2245 arrays = [\n-> 2246 cast_array_to_feature(\n 2247 table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),\n 2248 feature,\n 2249 )\n 2250 for name, feature in features.items()\n 2251 ]\n 2252 return pa.Table.from_arrays(arrays, schema=schema)\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py:1795](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py#line=1794), in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)\n 1794 if isinstance(array, pa.ChunkedArray):\n-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\n 1796 else:\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py:2102](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py#line=2101), in cast_array_to_feature(array, feature, allow_primitive_to_str, allow_decimal_to_str)\n 2101 elif not isinstance(feature, (Sequence, dict, list, tuple)):\n-> 2102 return array_cast(\n 2103 array,\n 2104 feature(),\n 2105 allow_primitive_to_str=allow_primitive_to_str,\n 2106 allow_decimal_to_str=allow_decimal_to_str,\n 2107 )\n 2108 raise TypeError(f\"Couldn't cast array of 
type\\n{_short_str(array.type)}\\nto\\n{_short_str(feature)}\")\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py:1797](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py#line=1796), in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)\n 1796 else:\n-> 1797 return func(array, *args, **kwargs)\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py:1948](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/table.py#line=1947), in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)\n 1947 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\n-> 1948 raise TypeError(f\"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}\")\n 1949 return array.cast(pa_type)\n\nTypeError: Couldn't cast array of type string to null\n\nThe above exception was the direct cause of the following exception:\n\nDatasetGenerationError Traceback (most recent call last)\nCell In[73], line 3\n 1 from datasets import load_dataset\n----> 3 issues_dataset = load_dataset(\n 4 \"json\", data_files=\"NeMo-issues.jsonl\", split=\"train\"\n 5 )\n 6 issues_dataset\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/load.py:2151](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/load.py#line=2150), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\n 2148 return builder_instance.as_streaming_dataset(split=split)\n 2150 # Download and prepare data\n-> 2151 builder_instance.download_and_prepare(\n 2152 download_config=download_config,\n 2153 download_mode=download_mode,\n 2154 verification_mode=verification_mode,\n 2155 num_proc=num_proc,\n 2156 storage_options=storage_options,\n 2157 )\n 2159 # Build dataset for splits\n 2160 keep_in_memory = (\n 2161 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\n 2162 )\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py:924](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py#line=923), in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)\n 922 if num_proc is not None:\n 923 prepare_split_kwargs[\"num_proc\"] = num_proc\n--> 924 self._download_and_prepare(\n 925 dl_manager=dl_manager,\n 926 verification_mode=verification_mode,\n 927 **prepare_split_kwargs,\n 928 **download_and_prepare_kwargs,\n 929 )\n 930 # Sync info\n 931 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py:1000](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py#line=999), in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\n 996 split_dict.add(split_generator.split_info)\n 998 try:\n 999 # Prepare split will record examples associated to the split\n-> 1000 self._prepare_split(split_generator, **prepare_split_kwargs)\n 1001 except OSError as e:\n 1002 raise OSError(\n 
1003 \"Cannot find data file. \"\n 1004 + (self.manual_download_instructions or \"\")\n 1005 + \"\\nOriginal erro[r:\\n](file:///R:/n)\"\n 1006 + str(e)\n 1007 ) from None\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py:1741](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py#line=1740), in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)\n 1739 job_id = 0\n 1740 with pbar:\n-> 1741 for job_id, done, content in self._prepare_split_single(\n 1742 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args\n 1743 ):\n 1744 if done:\n 1745 result = content\n\nFile [~/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py:1897](http://localhost:8888/home/renwei/anaconda3/envs/llm/lib/python3.12/site-packages/datasets/builder.py#line=1896), in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1895 if isinstance(e, DatasetGenerationError):\n 1896 raise\n-> 1897 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\n 1899 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\n\nDatasetGenerationError: An error occurred while generating the dataset\n```\n\nFor [NeMo-issues-fixed.json](https://github.com/renweizhukov/jupyter-lab-notebook/blob/main/hugging-face-nlp-course/NeMo-issues-fixed.jsonl) which consists of the last 1000 lines and then the first 9000 lines of NeMo-issues.jsonl, I could load the data:\n\n```\nDataset({\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'sub_issues_summary', 'active_lock_reason', 'draft', 'pull_request', 'body', 'closed_by', 'reactions', 'timeline_url', 'performed_via_github_app', 'state_reason'],\n num_rows: 10000\n})\n```", "having the same issue as well!", "Is this fixed in the latest version?", "@DronHazra @renweizhukov Is this fixed in the latest version?" ]

**PR #7221 (closed): add CustomFeature base class to support user-defined features with encoding/decoding logic**
id 2,582,114,631 · author alex-hh · created 2024-10-11T20:10:27 · updated 2025-01-28T09:40:29 · closed 2025-01-28T09:40:29 · merged null
https://github.com/huggingface/datasets/pull/7221
comments:
[ "@lhoestq would you be open to supporting this kind of extensibility?", "I suggested a fix in https://github.com/huggingface/datasets/issues/7220 that would not necessarily require a parent class for custom features, lmk what you think" ]

**Issue #7220 (open): Custom features not compatible with special encoding/decoding logic**
id 2,582,036,110 · author alex-hh · created 2024-10-11T19:20:11 · updated 2024-11-08T15:10:58 · closed null
https://github.com/huggingface/datasets/issues/7220
comments:
[ "I think you can fix this simply by replacing the line with hardcoded features with `hastattr(schema, \"encode_example\")` actually", "#7284 " ]

**PR #7219 (closed): bump fsspec**
id 2,581,708,084 · author lhoestq · created 2024-10-11T15:56:36 · updated 2024-10-14T08:21:56 · closed 2024-10-14T08:21:55 · merged 2024-10-14T08:21:55
https://github.com/huggingface/datasets/pull/7219
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7219). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7217 (open): ds.map(f, num_proc=10) is slower than df.apply**
id 2,581,095,098 · author lanlanlanlanlanlan365 · created 2024-10-11T11:04:05 · updated 2025-02-28T21:21:01 · closed null
https://github.com/huggingface/datasets/issues/7217
comments:
[ "Hi ! `map()` reads all the columns and writes the resulting dataset with all the columns as well, while df.column_name.apply only reads and writes one column and does it in memory. So this is speed difference is actually expected.\r\n\r\nMoreover using multiprocessing on a dataset that lives in memory (from_pandas uses the same in-memory data as the pandas DataFrame while load_dataset or from_generator load from disk) requires to copy the data to each subprocess which can also be slow. Data loaded from disk don't need to be copied though since they work as a form of shared memory thanks to memory mapping.\r\n\r\nHowever you can make you map() call much faster by making it read and write only the column you want:\r\n\r\n```python\r\nhas_cover_ds = ds.map(lambda song_name: {'has_cover': has_cover(song_name)}, input_columns=[\"song_name\"], remove_columns=ds.column_names) # outputs a dataset with 1 column\r\nds = ds.concatenate_datasets([ds, has_cover_ds], axis=1)\r\n```\r\n\r\nand if your dataset is loaded from disk you can pass num_proc=10 and get a nice speed up as well (no need to copy the data to subprocesses)", "Isn't there a way to do memory mapping with the in-memory dataset without saving it to disk?", "Maybe saving it to a memory-mapped filesystem? It'd be like a trick to make datasets save to \"disk\" but actually it's memory. But it feels like there should be a better \"automatic\" way provided by `datasets`." ]

**Issue #7215 (open): Iterable dataset map with explicit features causes slowdown for Sequence features**
id 2,579,942,939 · author alex-hh · created 2024-10-10T22:08:20 · updated 2024-10-10T22:10:32 · closed null
https://github.com/huggingface/datasets/issues/7215
comments: none

**Issue #7214 (open): Formatted map + with_format(None) changes array dtype for iterable datasets**
id 2,578,743,713 · author alex-hh · created 2024-10-10T12:45:16 · updated 2024-10-12T16:55:57 · closed null
https://github.com/huggingface/datasets/issues/7214
comments:
[ "possibly due to this logic:\r\n\r\n```python\r\n def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:\r\n if isinstance(pa_array, pa.ChunkedArray):\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and all(\r\n not _is_array_with_nulls(chunk) for chunk in pa_array.chunks\r\n )\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and not _is_array_with_nulls(pa_array)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```" ]

**Issue #7213 (open): Add with_rank to Dataset.from_generator**
id 2,578,675,565 · author muthissar · created 2024-10-10T12:15:29 · updated 2024-10-10T12:17:11 · closed null
https://github.com/huggingface/datasets/issues/7213
comments: none

**Issue #7212 (open): Windows does not support signal.alarm and signal.signal**
id 2,578,641,259 · author TomasJavurek · created 2024-10-10T12:00:19 · updated 2024-10-10T12:00:19 · closed null
https://github.com/huggingface/datasets/issues/7212
comments: none

**Issue #7211 (open): Describe only selected fields in README**
id 2,576,400,502 · author alozowski · created 2024-10-09T16:25:47 · updated 2024-10-09T16:25:47 · closed null
https://github.com/huggingface/datasets/issues/7211
comments: none

**Issue #7210 (open): Convert Array features to numpy arrays rather than lists by default**
id 2,575,883,939 · author alex-hh · created 2024-10-09T13:05:21 · updated 2024-10-09T13:05:21 · closed null
https://github.com/huggingface/datasets/issues/7210
comments: none

**PR #7209 (closed): Preserve features in iterable dataset.filter**
id 2,575,526,651 · author alex-hh · created 2024-10-09T10:42:05 · updated 2024-10-16T11:27:22 · closed 2024-10-09T16:04:07 · merged 2024-10-09T16:04:07
https://github.com/huggingface/datasets/pull/7209
comments:
[ "Yes your assumption on concatenate/interleave is ok imo.\r\n\r\nIt seems the TypedExamplesIterable can slow down things, it should take formatting into account to not convert numpy arrays to python lists\r\n\r\nright now it's slow (unrelatedly to your PR):\r\n\r\n```python\r\n>>> ds = Dataset.from_dict({\"a\": np.zeros((1000, 32, 32))}).to_iterable_dataset().with_format(\"np\")\r\n>>> filtered_ds = ds.filter(lambda x: True)\r\n>>> %time sum(1 for _ in ds)\r\nCPU times: user 175 ms, sys: 8.1 ms, total: 183 ms\r\nWall time: 184 ms\r\n1000\r\n>>> %time sum(1 for _ in filtered_ds)\r\nCPU times: user 4.1 s, sys: 8.41 ms, total: 4.1 s\r\nWall time: 4.12 s\r\n1000\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7209). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> It seems the TypedExamplesIterable can slow down things, it should take formatting into account to not convert numpy arrays to python lists\r\n\r\nShould be fixed by updated #7207 I hope!" ]

**Issue #7208 (closed): Iterable dataset.filter should not override features**
id 2,575,484,256 · author alex-hh · created 2024-10-09T10:23:45 · updated 2024-10-09T16:08:46 · closed 2024-10-09T16:08:45
https://github.com/huggingface/datasets/issues/7208
comments:
[ "closed by https://github.com/huggingface/datasets/pull/7209, thanks @alex-hh !" ]

**PR #7207 (closed): apply formatting after iter_arrow to speed up format -> map, filter for iterable datasets**
id 2,573,582,335 · author alex-hh · created 2024-10-08T15:44:53 · updated 2025-01-14T18:36:03 · closed 2025-01-14T16:59:30 · merged 2025-01-14T16:59:30
https://github.com/huggingface/datasets/pull/7207
comments:
[ "I think the problem is that the underlying ex_iterable will not use iter_arrow unless the formatting type is arrow, which leads to conversion from arrow -> python -> numpy in this case rather than arrow -> numpy.\r\n\r\nIdea of updated fix is to use the ex_iterable's iter_arrow in any case where it's available and any formatting is specified. The formatter then works directly on arrow tables; the outputs of the formatter get passed to the function to be mapped.\r\n\r\nWith updated version:\r\n\r\n```python\r\nimport numpy as np\r\nimport time\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\nfeatures=Features(**{\"array0\": Array3D((None, 10, 10), dtype=\"float32\"), \"array1\": Array3D((None,10,10), dtype=\"float32\")})\r\ndataset = Dataset.from_dict({f\"array{i}\": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)\r\n```\r\n\r\n```python\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").map(lambda x: x, batched=True, batch_size=10)\r\nt0 = time.time()\r\nfor ex in ds:\r\n pass\r\nt1 = time.time()\r\n```\r\nTotal time: < 0.01s (~30s on main)\r\n\r\n```python\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").map(lambda x: x, batched=False)\r\nt0 = time.time()\r\nfor ex in ds:\r\n pass\r\nt1 = time.time()\r\n```\r\nTime: ~0.02 s (~30s on main)\r\n\r\n```python\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\")\r\nt0 = time.time()\r\nfor ex in ds:\r\n pass\r\nt1 = time.time()\r\n```\r\nTime: ~0.02s", "also now working for filter with similar performance improvements:\r\n\r\n```python\r\nfiltered_examples = []\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").filter(lambda x: [arr.shape[0]==2000 for arr in x[\"array0\"]], batch_size=10, batched=True)\r\nt0 = time.time()\r\nfor ex in ds:\r\n filtered_examples.append(ex)\r\nt1 = time.time()\r\nassert len(filtered_examples) == 25\r\n```\r\n0.01s vs 50s on main\r\n\r\n\r\n```python\r\nfiltered_examples = []\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").filter(lambda x: x[\"array0\"].shape[0]==2000, batched=False)\r\nt0 = time.time()\r\nfor ex in ds:\r\n filtered_examples.append(ex)\r\nt1 = time.time()\r\nassert len(filtered_examples) == 25\r\n```\r\n0.04s vs 50s on main\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7207). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "(the distributed tests failing in the CI are unrelated)", "There also appears to be a separate? 
issue with chaining filter and map bc filter iter_arrow only returns _iter_arrow if arrow formatting is applied (and vv presumably)\r\n\r\nI don't have a good minimal example atm", "> issue with chaining filter and map bc filter iter_arrow only returns _iter_arrow if arrow formatting is applied (and vv presumably)\r\n\r\nMaybe related to this issue ?\r\n\r\n```python\r\nds = Dataset.from_dict({\"a\": range(10)}).to_iterable_dataset()\r\nds = ds.with_format(\"arrow\").map(lambda x: x, features=Features({\"a\": Value(\"string\")})).with_format(None)\r\nprint(list(ds)) # yields integers instead of strings\r\n```", "I feel like we could get rid of TypedExampleIterable altogether and apply formatting with feature conversion with `formatted_python_examples_iterator ` and `formatted_arrow_examples_iterator`\r\n\r\nbtw you can pass `features=` in `get_formatter()` to get a formatter that does the feature conversion at the same time as formatting\r\n\r\n(edit:\r\n\r\nexcept maybe the arrow formatter doesn't use `features` yet, we can fix it like this if it's really needed\r\n```diff\r\nclass ArrowFormatter(Formatter[pa.Table, pa.Array, pa.Table]):\r\n def format_row(self, pa_table: pa.Table) -> pa.Table:\r\n- return self.simple_arrow_extractor().extract_row(pa_table)\r\n+ pa_table = self.simple_arrow_extractor().extract_row(pa_table)\r\n+. return cast_table_to_features(pa_table, self.features) if self.features else pa_table\r\n \r\n```\r\n\r\n\r\n)", "> I feel like we could get rid of TypedExampleIterable altogether and apply formatting with feature conversion with formatted_python_examples_iterator and formatted_arrow_examples_iterator\r\n\r\nOh nice didn't know about the feature support in get_formatter. Haven't thought through whether this works but would a FormattedExampleIterable (with feature conversion) be able to solve this and fit the API better?", "> Oh nice didn't know about the feature support in get_formatter. Haven't thought through whether this works but would a FormattedExampleIterable (with feature conversion) be able to solve this and fit the API better?\r\n\r\nYes this is surely the way to go actually !", "ok i've fixed the chaining issue with my last two commits.\r\n\r\nWill see if I can refactor into a FormattedExampleIterable\r\n\r\nThe other issue you posted seems to be unrelated (maybe something to do with feature decoding?)", "updated with FormattedExamplesIterable.\r\n\r\nthere might be a few unnecessary format calls once the data is already formatted - doesn't seem like a big performance bottleneck but could maybe be fixed with e.g. an is_formatted property\r\n\r\nIt also might be possible to do a wider refactor and use FormattedExamplesIterable elsewhere. But I'd personally prefer not to try that rn.", "Thinking about this in the context of #7210 - am wondering if it would make sense for Features to define their own extraction arrow->object logic? e.g. Arrays should *always* be extracted with NumpyArrowExtractor, not only in case with_format is set to numpy (which a user can easily forget or not know to do)\r\n", "> Thinking about this in the context of https://github.com/huggingface/datasets/issues/7210 - am wondering if it would make sense for Features to define their own extraction arrow->object logic? e.g. 
Arrays should always be extracted with NumpyArrowExtractor, not only in case with_format is set to numpy (which a user can easily forget or not know to do)\r\n\r\nFor `ArrayND` they already implement `to_pylist` to decode arrow data and it can be updated to return a numpy array (see the `ArrayExtensionArray` class for more details)", "@lhoestq im no longer sure my specific concern about with_format(None) was well-founded - I didn't appreciate that the python formatter tries to do nothing to python objects including numpy arrays, so the existing with_format(None) should I *think* do what I want. Do you think with_format(None) is ok as is after all? If so think this is hopefully ready for final review!", "@lhoestq I've updated to make compatible with latest changes on main, and think the current with_format None behaviour is probably fine - please let me know if there's anything else I can do!", "Hi Alex, I will be less available from today and for a week. I'll review your PR and play with it once I come back if you don't mind !", "thanks for the reviews and extensions, happy to see this merged :)" ]

**Issue #7206 (open): Slow iteration for iterable dataset with numpy formatting for array data**
id 2,573,567,467 · author alex-hh · created 2024-10-08T15:38:11 · updated 2024-10-17T17:14:52 · closed null
https://github.com/huggingface/datasets/issues/7206
comments:
[ "The below easily eats up 32G of RAM. Leaving it for a while bricked the laptop with 16GB.\r\n\r\n```\r\ndataset = load_dataset(\"Voxel51/OxfordFlowers102\", data_dir=\"data\").with_format(\"numpy\")\r\nprocessed_dataset = dataset.map(lambda x: x)\r\n```\r\n\r\n![image](https://github.com/user-attachments/assets/c1863a69-b18f-4014-89dc-98994336df96)\r\n\r\nSimilar problems occur if using a real transform function in `.map()`." ]

**PR #7205 (closed): fix ci benchmark**
id 2,573,490,859 · author lhoestq · created 2024-10-08T15:06:18 · updated 2024-10-08T15:25:28 · closed 2024-10-08T15:25:25 · merged 2024-10-08T15:25:25
https://github.com/huggingface/datasets/pull/7205
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7205). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**PR #7204 (closed): fix unbatched arrow map for iterable datasets**
id 2,573,289,063 · author alex-hh · created 2024-10-08T13:54:09 · updated 2024-10-08T14:19:47 · closed 2024-10-08T14:19:47 · merged 2024-10-08T14:19:46
https://github.com/huggingface/datasets/pull/7204
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7204). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**PR #7203 (closed): with_format docstring**
id 2,573,154,222 · author lhoestq · created 2024-10-08T13:05:19 · updated 2024-10-08T13:13:12 · closed 2024-10-08T13:13:05 · merged 2024-10-08T13:13:05
https://github.com/huggingface/datasets/pull/7203
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7203). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7202 (open): `from_parquet` return type annotation**
id 2,572,583,798 · author saiden89 · created 2024-10-08T09:08:10 · updated 2024-10-08T09:08:10 · closed null
https://github.com/huggingface/datasets/issues/7202
comments: none

**Issue #7201 (open): `load_dataset()` of images from a single directory where `train.png` image exists**
id 2,569,837,015 · author SagiPolaczek · created 2024-10-07T09:14:17 · updated 2024-10-07T09:14:17 · closed null
https://github.com/huggingface/datasets/issues/7201
comments: none

**PR #7200 (closed): Fix the environment variable for huggingface cache**
id 2,567,921,694 · author torotoki · created 2024-10-05T11:54:35 · updated 2024-10-30T23:10:27 · closed 2024-10-08T15:45:18 · merged 2024-10-08T15:45:17
https://github.com/huggingface/datasets/pull/7200
comments:
[ "Hi ! yes now `datasets` uses `huggingface_hub` to download and cache files from the HF Hub so you need to use `HF_HOME` (or manually `HF_HUB_CACHE` and `HF_DATASETS_CACHE` if you want to separate HF Hub cached files and cached datasets Arrow files)\r\n\r\nSo in your change I guess it needs to be `HF_HOME` instead of `HF_CACHE` ?", "Thank you for your comment. You are right. I am sorry for my mistake, I fixed it.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7200). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I just had this issue, and needed to move the setting the env code in the python file to top, before the import of the lib \r\nie. \r\n```python\r\nimport os\r\nLOCAL_DISK_MOUNT = '/mnt/data'\r\n\r\nos.environ['HF_HOME'] = f'{LOCAL_DISK_MOUNT}/hf_cache/'\r\nos.environ['HF_DATASETS_CACHE'] = f'{LOCAL_DISK_MOUNT}/datasets/'\r\n\r\nfrom datasets import load_dataset\r\nfrom datasets import load_dataset_builder\r\nfrom psutil._common import bytes2human\r\n\r\n\r\n```" ]

**PR #7199 (open): Add with_rank to Dataset.from_generator**
id 2,566,788,225 · author muthissar · created 2024-10-04T16:51:53 · updated 2024-10-04T16:51:53 · closed null · merged null
https://github.com/huggingface/datasets/pull/7199
comments: none

**PR #7198 (closed): Add repeat method to datasets**
id 2,566,064,849 · author alex-hh · created 2024-10-04T10:45:16 · updated 2025-02-05T16:32:31 · closed 2025-02-05T16:32:31 · merged 2025-02-05T16:32:31
https://github.com/huggingface/datasets/pull/7198
comments:
[ "@lhoestq does this look reasonable?", "updated and added test cases!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7198). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "thanks for the fixes!" ]

**Issue #7197 (open): ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError). The dataset won't download; what's going on?**
id 2,565,924,788 · author Mrgengli · created 2024-10-04T09:33:25 · updated 2025-02-26T02:26:16 · closed null
https://github.com/huggingface/datasets/issues/7197
comments:
[ "Also cant download \"allenai/c4\", but with different error reported:\r\n```\r\nTraceback (most recent call last): \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 2074, in load_dataset \r\n builder_instance = load_dataset_builder( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1795, in load_dataset_builder \r\n dataset_module = dataset_module_factory( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1659, in dataset_module_factory \r\n raise e1 from None \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1647, in dataset_module_factory \r\n ).get_module() \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1069, in get_module \r\n module_name, default_builder_kwargs = infer_module_for_data_files( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 594, in infer_module_for_data_files \r\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\")) \r\ndatasets.exceptions.DataFilesNotFoundError: No (supported) data files found in allenai/c4 \r\n```\r\n\r\n## Code to reproduce\r\n```\r\ndataset = load_dataset(\"allenai/c4\", \"en\", split=\"train\", streaming=True,trust_remote_code=True,\r\n cache_dir=\"dataset/en\",\r\n download_mode=\"force_redownload\")\r\n```\r\n\r\n## Environment\r\ndatasets 3.0.1 \r\nhuggingface_hub 0.25.1", "应该是网络问题,无法访问外网?" ]

**Issue #7196 (open): concatenate_datasets does not preserve shuffling state**
id 2,564,218,566 · author alex-hh · created 2024-10-03T14:30:38 · updated 2025-03-18T10:56:47 · closed null
https://github.com/huggingface/datasets/issues/7196
comments:
[ "It also does preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`" ]

**Issue #7195 (open): Add support for 3D datasets**
id 2,564,070,809 · author severo · created 2024-10-03T13:27:44 · updated 2024-10-04T09:23:36 · closed null
https://github.com/huggingface/datasets/issues/7195
comments:
[ "maybe related: https://github.com/huggingface/datasets/issues/6388", "Also look at https://github.com/huggingface/dataset-viewer/blob/f5fd117ceded990a7766e705bba1203fa907d6ad/services/worker/src/worker/job_runners/dataset/modalities.py#L241 which lists the 3D file formats that will assign the 3D modality to a dataset.", "~~we can brainstorm about the UX maybe (i don't expect we should load all models on the page at once – IMO there should be a manual action from user to load + maybe load first couple of row by default) cc @gary149 @cfahlgren1~~\r\n\r\nit's more for the viewer issue (https://github.com/huggingface/dataset-viewer/issues/1003)" ]

**Issue #7194 (closed): datasets.exceptions.DatasetNotFoundError for private dataset**
id 2,563,364,199 · author kdutia · created 2024-10-03T07:49:36 · updated 2024-10-03T10:09:28 · closed 2024-10-03T10:09:28
https://github.com/huggingface/datasets/issues/7194
comments:
[ "Actually there is no such dataset available, that is why you are getting that error.", "Fixed with @kdutia in Slack chat. Generating a new token fixed this issue. " ]

**Issue #7193 (open): Support of num_workers (multiprocessing) in map for IterableDataset**
id 2,562,392,887 · author getao · created 2024-10-02T18:34:04 · updated 2024-10-03T09:54:15 · closed null
https://github.com/huggingface/datasets/issues/7193
comments:
[ "I was curious about the same - since map is applied on the fly I was assuming that setting num_workers>1 in DataLoader would effectively do the map in parallel, have you tried that?" ]

**Issue #7192 (closed): Add repeat() for iterable datasets**
id 2,562,289,642 · author alex-hh · created 2024-10-02T17:48:13 · updated 2025-03-18T10:48:33 · closed 2025-03-18T10:48:32
https://github.com/huggingface/datasets/issues/7192
comments:
[ "perhaps concatenate_datasets can already be used to achieve almost the same effect? ", "`concatenate_datasets` does the job when there is a finite number of repetitions, but in case of `.repeat()` forever we need a new logic in `iterable_dataset.py`", "done in https://github.com/huggingface/datasets/pull/7198" ]

**PR #7191 (closed): Solution to issue: #7080 Modified load_dataset function, so that it prompts the user to select a dataset when subdatasets or splits (train, test) are available**
id 2,562,206,949 · author negativenagesh · created 2024-10-02T17:02:45 · updated 2024-11-10T08:48:21 · closed 2024-11-10T08:48:21 · merged null
https://github.com/huggingface/datasets/pull/7191
comments:
[ "I think the approach presented in https://github.com/huggingface/datasets/pull/6832 is the one we'll be taking.\r\n\r\nAsking user input is not a good idea since `load_dataset` is used a lot in server that don't have someone in front of them to select a split" ]

**Issue #7190 (open): Datasets conflicts with fsspec 2024.9**
id 2,562,162,725 · author cw-igormorgado · created 2024-10-02T16:43:46 · updated 2024-10-10T07:33:18 · closed null
https://github.com/huggingface/datasets/issues/7190
comments:
[ "Yes, I need to use the latest version of fsspec and datasets for my usecase. \r\nhttps://github.com/fsspec/s3fs/pull/888#issuecomment-2404204606\r\nhttps://github.com/apache/arrow/issues/34363#issuecomment-2403553473\r\n\r\nlast version where things install without conflict is: 2.14.4\r\n\r\nSo this issue starts from:\r\nhttps://github.com/huggingface/datasets/releases/tag/2.14.5" ]

**Issue #7189 (open): Audio preview in dataset viewer for audio array data without a path/filename**
id 2,562,152,845 · author Lauler · created 2024-10-02T16:38:38 · updated 2024-10-02T17:01:40 · closed null
https://github.com/huggingface/datasets/issues/7189
comments: none

**PR #7188 (closed): Pin multiprocess<0.70.1 to align with dill<0.3.9**
id 2,560,712,689 · author albertvillanova · created 2024-10-02T05:40:18 · updated 2024-10-02T06:08:25 · closed 2024-10-02T06:08:23 · merged 2024-10-02T06:08:23
https://github.com/huggingface/datasets/pull/7188
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7188). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7187 (open): shard_data_sources() got an unexpected keyword argument 'worker_id'**
id 2,560,501,308 · author Qinghao-Hu · created 2024-10-02T01:26:35 · updated 2024-10-02T01:26:35 · closed null
https://github.com/huggingface/datasets/issues/7187
comments: none

**Issue #7186 (closed): pinning `dill<0.3.9` without pinning `multiprocess`**
id 2,560,323,917 · author shubhbapna · created 2024-10-01T22:29:32 · updated 2024-10-02T06:08:24 · closed 2024-10-02T06:08:24
https://github.com/huggingface/datasets/issues/7186
comments: none

**Issue #7185 (closed): CI benchmarks are broken**
id 2,558,508,748 · author albertvillanova · created 2024-10-01T08:16:08 · updated 2024-10-09T16:07:48 · closed 2024-10-09T16:07:48
https://github.com/huggingface/datasets/issues/7185
comments:
[ "Fixed by #7205" ]

**PR #7184 (closed): Pin dill<0.3.9 to fix CI**
id 2,556,855,150 · author albertvillanova · created 2024-09-30T14:26:25 · updated 2024-09-30T14:38:59 · closed 2024-09-30T14:38:57 · merged 2024-09-30T14:38:57
https://github.com/huggingface/datasets/pull/7184
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7184). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7183 (closed): CI is broken for deps-latest**
id 2,556,789,055 · author albertvillanova · created 2024-09-30T14:02:07 · updated 2024-09-30T14:38:58 · closed 2024-09-30T14:38:58
https://github.com/huggingface/datasets/issues/7183
comments: none

**PR #7182 (closed): Support features in metadata configs**
id 2,556,333,671 · author albertvillanova · created 2024-09-30T11:14:53 · updated 2024-10-09T16:03:57 · closed 2024-10-09T16:03:54 · merged 2024-10-09T16:03:54
https://github.com/huggingface/datasets/pull/7182
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7182). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The CI issue is unrelated:\r\n- #7183" ]

**PR #7181 (closed): Fix datasets export to JSON**
id 2,554,917,019 · author varadhbhatnagar · created 2024-09-29T12:45:20 · updated 2024-11-01T11:55:36 · closed 2024-11-01T11:55:36 · merged null
https://github.com/huggingface/datasets/pull/7181
comments:
[ "Linked Issue: #7037\r\nIdeas: #7039 ", "@albertvillanova / @lhoestq any early feedback?\r\n\r\nAFAIK there is no param `orient` in `load_dataset()`. So for orientations other than \"records\", the loading isn't very accurate. Any thoughts?", "`orient = \"split\"` can also be handled. I will add the changes soon", "Thanks for diving into this ! I don't think we want the JSON export to be that complex though, especially if people can do `ds.to_pandas().to_json(orient=...)`. Maybe we can just raise an error and suggest users to use pandas ? And also note that it loads the full dataset in memory so it's mainly for small scale datasets. The only acceptable option for large scale datasets is probably just JSON Lines anyway since it enables streaming.", "@lhoestq Simply doing `ds.to_pandas().to_json(orient=...)` is not going to give any batching or multiprocessing benefits right? Also, which function are you referring to - when you say that its meant for small scale datasets only?", "Yes indeed. Though I think it's fine since using something else than orient=\"lines\" is only suitable/useful for small datasets. Or you know a case where a big dataset need to be in a format that is not orient=\"lines\" ?", "@lhoestq Let me close this PR and open another one where I will add an error message, as suggested here.\r\n\r\n> Thanks for diving into this ! I don't think we want the JSON export to be that complex though, especially if people can do `ds.to_pandas().to_json(orient=...)`. Maybe we can just raise an error and suggest users to use pandas ? And also note that it loads the full dataset in memory so it's mainly for small scale datasets. The only acceptable option for large scale datasets is probably just JSON Lines anyway since it enables streaming.\r\n\r\n", "Addressed here: #7273 \r\n@lhoestq " ]

**Issue #7180 (closed): Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion**
id 2,554,244,750 · author iamwangyabin · created 2024-09-28T14:00:47 · updated 2024-09-30T12:07:56 · closed 2024-09-30T12:07:56
https://github.com/huggingface/datasets/issues/7180
comments:
[ "> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data will stay in RAM, and get paged out once your system needs RAM for something else (no OOM).\r\n\r\nrelated: https://github.com/huggingface/datasets/issues/4883" ]

**PR #7179 (closed): Support Python 3.11**
id 2,552,387,980 · author albertvillanova · created 2024-09-27T08:55:44 · updated 2024-10-08T16:21:06 · closed 2024-10-08T16:21:03 · merged 2024-10-08T16:21:03
https://github.com/huggingface/datasets/pull/7179
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7179). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7178 (closed): Support Python 3.11**
id 2,552,378,330 · author albertvillanova · created 2024-09-27T08:50:47 · updated 2024-10-08T16:21:04 · closed 2024-10-08T16:21:04
https://github.com/huggingface/datasets/issues/7178
comments: none

**PR #7177 (closed): Fix release instructions**
id 2,552,371,082 · author albertvillanova · created 2024-09-27T08:47:01 · updated 2024-09-27T08:57:35 · closed 2024-09-27T08:57:32 · merged 2024-09-27T08:57:32
https://github.com/huggingface/datasets/pull/7177
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7177). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**PR #7176 (open): fix grammar in fingerprint.py**
id 2,551,025,564 · author jxmorris12 · created 2024-09-26T16:13:42 · updated 2024-09-26T16:13:42 · closed null · merged null
https://github.com/huggingface/datasets/pull/7176
comments: none

**Issue #7175 (closed): [FSTimeoutError] load_dataset**
id 2,550,957,337 · author cosmo3769 · created 2024-09-26T15:42:29 · updated 2025-02-01T09:09:35 · closed 2024-09-30T17:28:35
https://github.com/huggingface/datasets/issues/7175
comments:
[ "Is this `FSTimeoutError` due to download network issue from remote resource (from where it is being accessed)?", "It seems to happen for all datasets, not just a specific one, and especially for versions after 3.0. (3.0.0, 3.0.1 have this problem)\r\n\r\nI had the same error on a different dataset, but after downgrading to datasets==2.21.0, the problem was solved.", "Same as https://github.com/huggingface/datasets/issues/7164\r\n\r\nThis dataset is made of a python script that downloads data from elsewhere than HF, so availability depends on the original host. Ultimately it would be nice to host the files of this dataset on HF\r\n\r\nin `datasets` <3.0 there were lots of mechanisms that got removed after the decision to make datasets with python loading scripts legacy for security and maintenance reasons (we only do very basic support now)", "@lhoestq Thank you for the clarification! Closing the issue.", "I'm getting this too, and also at 5 minutes. But for `CSTR-Edinburgh/vctk`, so it's not just this dataset, it seems to be a timeout that was introduced and needs to be raised. The progress bar was moving along just fine before the timeout, and I get more or less of it depending on how fast the network is.", "You can change the `aiohttp` timeout from 5min to 1h like this:\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```", "@JonasLoos Solution solved a download timeout error I received when downloading `\"HuggingFaceM4/VQAv2\"` 🎉 " ]

**PR #7174 (closed): Set dev version**
id 2,549,892,315 · author albertvillanova · created 2024-09-26T08:30:11 · updated 2024-09-26T08:32:39 · closed 2024-09-26T08:30:21 · merged 2024-09-26T08:30:21
https://github.com/huggingface/datasets/pull/7174
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7174). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**PR #7173 (closed): Release: 3.0.1**
id 2,549,882,529 · author albertvillanova · created 2024-09-26T08:25:54 · updated 2024-09-26T08:28:29 · closed 2024-09-26T08:26:03 · merged 2024-09-26T08:26:03
https://github.com/huggingface/datasets/pull/7173
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7173). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**PR #7172 (closed): Add torchdata as a regular test dependency**
id 2,549,781,691 · author albertvillanova · created 2024-09-26T07:45:55 · updated 2024-09-26T08:12:12 · closed 2024-09-26T08:05:40 · merged 2024-09-26T08:05:40
https://github.com/huggingface/datasets/pull/7172
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7172). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7171 (closed): CI is broken: No solution found when resolving dependencies**
id 2,549,738,919 · author albertvillanova · created 2024-09-26T07:24:58 · updated 2024-09-26T08:05:41 · closed 2024-09-26T08:05:41
https://github.com/huggingface/datasets/issues/7171
comments: none

**PR #7170 (closed): Support JSON lines with missing columns**
id 2,546,944,016 · author albertvillanova · created 2024-09-25T05:08:15 · updated 2024-09-26T06:42:09 · closed 2024-09-26T06:42:07 · merged 2024-09-26T06:42:07
https://github.com/huggingface/datasets/pull/7170
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7170). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**Issue #7169 (closed): JSON lines with missing columns raise CastError**
id 2,546,894,076 · author albertvillanova · created 2024-09-25T04:43:28 · updated 2024-09-26T06:42:08 · closed 2024-09-26T06:42:08
https://github.com/huggingface/datasets/issues/7169
comments: none
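
Issue #7169 and PR #7170 above concern JSON Lines files whose rows do not all share the same keys; a small illustration of the case they address (the file name is hypothetical):

```python
# data.jsonl contains rows with missing columns:
#   {"a": 1, "b": "x"}
#   {"a": 2}
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train")
print(ds[1])  # with the fix, the missing column comes back as None: {'a': 2, 'b': None}
```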

**Issue #7168 (closed): sd1.5 diffusers controlnet training script gives new error**
id 2,546,710,631 · author Night1099 · created 2024-09-25T01:42:49 · updated 2024-09-30T05:24:03 · closed 2024-09-30T05:24:02
https://github.com/huggingface/datasets/issues/7168
comments:
[ "not sure why the issue is formatting oddly", "I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071", "this turned out to be because of a bad image in dataset" ]

**Issue #7167 (closed): Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers**
id 2,546,708,014 · author Night1099 · created 2024-09-25T01:39:51 · updated 2024-09-30T05:28:15 · closed 2024-09-30T05:28:04
https://github.com/huggingface/datasets/issues/7167
comments:
[ "this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, batch_size=16, new_fingerprint=new_fingerprint)\r\n```" ]

**PR #7166 (closed): fix docstring code example for distributed shuffle**
id 2,545,608,736 · author lhoestq · created 2024-09-24T14:39:54 · updated 2024-09-24T14:42:41 · closed 2024-09-24T14:40:14 · merged 2024-09-24T14:40:14
https://github.com/huggingface/datasets/pull/7166
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7166). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]

**PR #7165 (closed): fix increase_load_count**
id 2,544,972,541 · author lhoestq · created 2024-09-24T10:14:40 · updated 2024-09-24T17:31:07 · closed 2024-09-24T13:48:00 · merged 2024-09-24T13:48:00
https://github.com/huggingface/datasets/pull/7165
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I tested a few load_dataset and they do show up in download stats now", "Thanks for having noticed and fixed." ]

**Issue #7164 (closed): fsspec.exceptions.FSTimeoutError when downloading dataset**
id 2,544,757,297 · author timonmerk · created 2024-09-24T08:45:05 · updated 2025-07-28T14:58:49 · closed 2025-07-28T14:58:49
https://github.com/huggingface/datasets/issues/7164
comments:
[ "Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\n\r\nIn the meantime I can only recommend to try again later :/", "Ok, still many thanks!", "I'm also getting this same error but for `CSTR-Edinburgh/vctk`, so I don't think it's the remote host that's timing out, since I also time out at exactly 5 minutes. It seems there is a universal fsspec timeout that's getting hit starting in v3.", "in v3 we cleaned the download parts of the library to make it more robust for HF downloads and to simplify support of script-based datasets. As a side effect it's not the same code that is used for other hosts, maybe time out handling changed. Anyway it should be possible to tweak fsspec to use retries\r\n\r\nFor example using [aiohttp_retry](https://github.com/inyutin/aiohttp_retry) maybe (haven't tried) ?\r\n\r\n```python\r\nimport fsspec\r\nfrom aiohttp_retry import RetryClient\r\n\r\nfsspec.filesystem(\"http\")._session = RetryClient()\r\n```\r\n\r\nrelated topic : https://github.com/huggingface/datasets/issues/7175", "Adding a timeout argument to the `fs.get_file` call in `fsspec_get` in `datasets/utils/file_utils.py` might fix this ([source code](https://github.com/huggingface/datasets/blob/65f6eb54aa0e8bb44cea35deea28e0e8fecc25b9/src/datasets/utils/file_utils.py#L330)):\r\n\r\n```python\r\nfs.get_file(path, temp_file.name, callback=callback, timeout=3600)\r\n```\r\n\r\nSetting `timeout=1` fails after about one second, so setting it to 3600 should give us 1h. Havn't really tested this though. I'm also not sure what implications this has and if it causes errors for other `fs` implementations/configurations.\r\n\r\nThis is using `datasets==3.0.1` and Python 3.11.6.\r\n\r\n---\r\n\r\nEdit: This doesn't seem to change the timeout time, but add a second timeout counter (probably in `fsspec/asyn.py/sync`). So one can reduce the time for downloading like this, but not expand.\r\n\r\n---\r\n\r\nEdit 2: `fs` is of type `fsspec.implementations.http.HTTPFileSystem` which initializes a `aiohttp.ClientSession` using `client_kwargs`. We can pass these when calling `load_dataset`.\r\n\r\n**TLDR; This fixes it:**\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```", "I've handled the issue like this to ensure smoother downloads when using the `datasets` library. \nIf modifying the library is not too inconvenient, this approach could be a good (but tentative) solution.\n\n### Changes Made\n\nModified `datasets.utils.file_utils.fsspec_get` to handle storage options and set a timeout:\n\n```python\ndef fsspec_get(url, temp_file, storage_options=None, desc=None, disable_tqdm=False):\n\n # ---> [ADD]\n if storage_options is None:\n storage_options = {}\n if \"client_kwargs\" not in storage_options:\n storage_options[\"client_kwargs\"] = {}\n storage_options[\"client_kwargs\"][\"timeout\"] = aiohttp.ClientTimeout(total=3600)\n # <---\n\n # The rest of the original code remains unchanged", "Librispeech_asr is now hosted on HF, which fixes this issue 🎉\n\nclosing this one now :)" ]
2,542,361,234
7,163
Set explicit seed in iterable dataset ddp shuffling example
closed
2024-09-23T11:34:06
2024-09-24T14:40:15
2024-09-24T14:40:15
https://github.com/huggingface/datasets/issues/7163
null
alex-hh
false
[ "thanks for reporting !" ]
2,542,323,382
7,162
Support JSON lines with empty struct
closed
2024-09-23T11:16:12
2024-09-23T11:30:08
2024-09-23T11:30:06
https://github.com/huggingface/datasets/pull/7162
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7162", "html_url": "https://github.com/huggingface/datasets/pull/7162", "diff_url": "https://github.com/huggingface/datasets/pull/7162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7162.patch", "merged_at": "2024-09-23T11:30:06" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7162). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,541,971,931
7,161
JSON lines with empty struct raise ArrowTypeError
closed
2024-09-23T08:48:56
2024-09-25T04:43:44
2024-09-23T11:30:07
https://github.com/huggingface/datasets/issues/7161
null
albertvillanova
false
[]
2,541,877,813
7,160
Support JSON lines with missing struct fields
closed
2024-09-23T08:04:09
2024-09-23T11:09:19
2024-09-23T11:09:17
https://github.com/huggingface/datasets/pull/7160
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7160", "html_url": "https://github.com/huggingface/datasets/pull/7160", "diff_url": "https://github.com/huggingface/datasets/pull/7160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7160.patch", "merged_at": "2024-09-23T11:09:17" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7160). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,541,865,613
7,159
JSON lines with missing struct fields raise TypeError: Couldn't cast array
closed
2024-09-23T07:57:58
2024-10-21T08:07:07
2024-09-23T11:09:18
https://github.com/huggingface/datasets/issues/7159
null
albertvillanova
false
[ "Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wikipedia dataset locally but when loading I have the same issue.\r\n\r\n```\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"/gpfsdsdir/dataset/HuggingFace/wikimedia/structured-wikipedia/20240916.fr\")\r\n```\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<content_url: string, width: int64, height: int64, alternative_text: string>\r\nto\r\n{'content_url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n```\r\nMy version of datasets is 3.0.1" ]
2,541,494,765
7,158
google colab ex
closed
2024-09-23T03:29:50
2024-12-20T16:41:07
2024-12-20T16:41:07
https://github.com/huggingface/datasets/pull/7158
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7158", "html_url": "https://github.com/huggingface/datasets/pull/7158", "diff_url": "https://github.com/huggingface/datasets/pull/7158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7158.patch", "merged_at": null }
docfhsp
true
[]
2,540,354,890
7,157
Fix zero proba interleave datasets
closed
2024-09-21T15:19:14
2024-09-24T14:33:54
2024-09-24T14:33:54
https://github.com/huggingface/datasets/pull/7157
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7157", "html_url": "https://github.com/huggingface/datasets/pull/7157", "diff_url": "https://github.com/huggingface/datasets/pull/7157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7157.patch", "merged_at": null }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7157). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,539,360,617
7,156
interleave_datasets resets shuffle state
open
2024-09-20T17:57:54
2025-03-18T10:56:25
null
https://github.com/huggingface/datasets/issues/7156
null
jonathanasdf
false
[ "It also does preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`" ]
2,533,641,870
7,155
Dataset viewer not working! Failure due to more than 32 splits.
closed
2024-09-18T12:43:21
2024-09-18T13:20:03
2024-09-18T13:20:03
https://github.com/huggingface/datasets/issues/7155
null
sleepingcat4
false
[ "I have fixed it! But I would appreciate a new feature wheere I could iterate over and see what each file looks like. " ]
2,532,812,323
7,154
Support ndjson data files
closed
2024-09-18T06:10:10
2024-09-19T11:25:17
2024-09-19T11:25:14
https://github.com/huggingface/datasets/pull/7154
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7154", "html_url": "https://github.com/huggingface/datasets/pull/7154", "diff_url": "https://github.com/huggingface/datasets/pull/7154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7154.patch", "merged_at": "2024-09-19T11:25:14" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7154). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for your review, @severo.\r\n\r\nYes, I was aware of this. From internal conversation:\r\n> Please note that although NDJSON was planned to be submitted as an RFC standard spec, it is no longer maintained:\r\n> - See note from the author: https://github.com/ndjson/ndjson-spec/issues/35#issuecomment-1285673417\r\n> - See that their official website domain has expired: https://ndjson.org/ \r\n\r\nThe purpose of this PR is just supporting datasets with ndjson data files (e.g. Wikimedia Enterprise data files), but it should not imply any recommendation or endorsement of this format from our part." ]
2,532,788,555
7,153
Support data files with .ndjson extension
closed
2024-09-18T05:54:45
2024-09-19T11:25:15
2024-09-19T11:25:15
https://github.com/huggingface/datasets/issues/7153
null
albertvillanova
false
[]
2,527,577,048
7,151
Align filename prefix splitting with WebDataset library
closed
2024-09-16T06:07:39
2024-09-16T15:26:36
2024-09-16T15:26:34
https://github.com/huggingface/datasets/pull/7151
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7151", "html_url": "https://github.com/huggingface/datasets/pull/7151", "diff_url": "https://github.com/huggingface/datasets/pull/7151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7151.patch", "merged_at": "2024-09-16T15:26:34" }
albertvillanova
true
[]
2,527,571,175
7,150
WebDataset loader splits keys differently than WebDataset library
closed
2024-09-16T06:02:47
2024-09-16T15:26:35
2024-09-16T15:26:35
https://github.com/huggingface/datasets/issues/7150
null
albertvillanova
false
[]
2,524,497,448
7,149
Datasets Unknown Keyword Argument Error - task_templates
closed
2024-09-13T10:30:57
2025-03-06T07:11:55
2024-09-13T14:10:48
https://github.com/huggingface/datasets/issues/7149
null
varungupta31
false
[ "Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n", "Hello @albertvillanova \r\n\r\nI got the same error while loading this dataset: https://huggingface.co/datasets/alaleye/aloresb...\r\n\r\nHow can I fix it ? \r\nThanks", "I am getting the same error on the below code, any fix to this ?\n\n```\nfrom datasets import load_dataset\n\nminds = load_dataset(\"PolyAI/minds14\", name=\"en-AU\", split=\"train\")\nminds\n```" ]
2,523,833,413
7,148
Bug: Error when downloading mteb/mtop_domain
closed
2024-09-13T04:09:39
2024-09-14T15:11:35
2024-09-14T15:11:35
https://github.com/huggingface/datasets/issues/7148
null
ZiyiXia
false
[ "Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```", "Seems the error is still there", "I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: data = load_dataset(\"mteb/mtop_domain\", \"en\")\r\n\r\nIn [3]: data\r\nOut[3]: DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 15667\r\n })\r\n validation: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 2235\r\n })\r\n test: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 4386\r\n })\r\n})\r\n```", "Just solved this by reinstall Huggingface Hub and datasets. Thanks for your help!" ]
2,523,129,465
7,147
IterableDataset strange deadlock
closed
2024-09-12T18:59:33
2024-09-23T09:32:27
2024-09-21T17:37:34
https://github.com/huggingface/datasets/issues/7147
null
jonathanasdf
false
[ "Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard to 8 workers, so some workers may not have an example that satisfies your condition `if shard < 25`). It creates an infinite loop, trying to get samples from empty datasets with probability 1.", "Opened https://github.com/huggingface/datasets/issues/7156\r\n\r\nCan the deadlock be fixed somehow? The point of IterableDataset is so we don't need to preload the entire dataset, which loses some meaning if we need to see how many examples are in the dataset in order to set shards correctly.", "~~And it is kinda strange that `Commenting out the final shuffle avoids the issue` since if the infinite loop is inside interleave_datasets you'd expect that to happen regardless of the additional shuffle call?~~\r\n\r\nEdit: oh I guess without the shuffle it's guaranteed every worker gets something, but the shuffle makes it so some workers could have nothing\r\n\r\n~~Edit2: maybe the shuffle can be changed so initially it gives one example to each worker, and only starts the random shuffle after that~~ wait it's not about the workers not getting any shards, it's about a worker getting shards but all of the shards it gets are empty shards\r\n\r\nEdit3: If it's trying to get samples from empty datasets, it should be getting back a StopIteration -- and \"all_exhausted\" should mean it eventually discovers all its datasets are empty, and then it should just raise a StopIteration itself. So it seems like there is a reasonable behavior result for this?", "well the second dataset passed to interleave_datasets is never exhausted, since it's never sampled. But we could also state that the stream of examples from the second dataset is empty if it has probability 0, so I opened https://github.com/huggingface/datasets/pull/7157 to fix the infinite loop issue by ignoring datasets with probability 0, let me know what you think !", "Thanks for taking a look!\r\n\r\nI think you're right that this is ultimately an issue that the user opts into by specifying a dataset with probability 0, because the user is basically saying \"I want to force this `interleave_datasets` call to run forever\" and yet one of the workers can end up having only empty shards to mix...\r\n\r\nThat said it's probably not a good idea to randomly change the behavior of `interleave_datasets` with probability 0, I can't be the only one that uses it to repeat many different datasets (since there is no `datasets.repeat()` function). https://xkcd.com/1172/\r\n\r\nI think just the knowledge that filtering out probability 0 datasets fixes the deadlock is good enough for me. I can filter it out on my side and add a restart loop around the dataloader instead.\r\n\r\nThanks again for investigating.", "Ok I see ! We can also add .repeat() as well" ]
2,519,820,162
7,146
Set dev version
closed
2024-09-11T13:53:27
2024-09-12T04:34:08
2024-09-12T04:34:06
https://github.com/huggingface/datasets/pull/7146
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7146", "html_url": "https://github.com/huggingface/datasets/pull/7146", "diff_url": "https://github.com/huggingface/datasets/pull/7146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7146.patch", "merged_at": "2024-09-12T04:34:06" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,519,789,724
7,145
Release: 3.0.0
closed
2024-09-11T13:41:47
2024-09-11T13:48:42
2024-09-11T13:48:41
https://github.com/huggingface/datasets/pull/7145
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7145", "html_url": "https://github.com/huggingface/datasets/pull/7145", "diff_url": "https://github.com/huggingface/datasets/pull/7145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7145.patch", "merged_at": "2024-09-11T13:48:41" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7145). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,519,393,560
7,144
Fix key error in webdataset
closed
2024-09-11T10:50:17
2025-01-15T10:32:43
2024-09-13T04:31:37
https://github.com/huggingface/datasets/pull/7144
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7144", "html_url": "https://github.com/huggingface/datasets/pull/7144", "diff_url": "https://github.com/huggingface/datasets/pull/7144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7144.patch", "merged_at": null }
ragavsachdeva
true
[ "hi ! What version of `datasets` are you using ? Is this issue also happening with `datasets==3.0.0` ?\r\nAsking because we made sure to replicate the official webdataset logic, which is to use the latest dot as separator between the sample base name and the key", "Hi, yes this is still a problem on `datasets==3.0.0`.\r\n\r\nI was using `datasets=2.20.0` and in that version you get the key error.\r\n\r\nI just upgraded to `datasets==3.0.0` and in this version, you do not get a key error because it sets all keys to none by default in `_generate_examples` function:\r\n\r\n```python\r\nif field_name not in example:\r\n example[field_name] = None\r\n```\r\n\r\nHowever, the behaviour is still incorrect. This `if` condition is triggered because the filename is not split properly and it returns the data as `None` when it shouldn't.\r\n\r\n> we made sure to replicate the official webdataset logic, which is to use the latest dot as separator\r\n \r\nAh, but that's not what `split(, 1)` does though. This is exactly why I'm suggesting to use `rsplit` instead. In general, using `rsplit` should not be a breaking change I believe.", "Hi @ragavsachdeva,\r\n\r\nWe already had this discussion in the issue you have linked:\r\n- #6880\r\n- I even opened a PR with your proposed fix:\r\n - #6888\r\n\r\nHowever, we decided not to implement this feature because it is NOT aligned with the behavior of the `webdataset` library:\r\n> The prefix of a file is all directory components of the file plus the file name component up to the *first* “.” in the file name.\r\n```python\r\nIn [1]: import webdataset as wds\r\n\r\nIn [2]: wds.tariterators.base_plus_ext(\"22.05.png\")\r\nOut[2]: ('22', '05.png')\r\n```\r\n\r\n", "Ah, my apologies I missed https://github.com/huggingface/datasets/pull/6888 (clearly didn't do my due diligence). It's such a weird convention to have though. My keys are `/some/path/22.0/1.1.png` and it splits them at `/some/path/22` and `.0/1.1.png`(!) I'm okay with this PR not being merged though. Thanks for your time.", "Actually `datasets` is not behaving correctly in this case and should not split as `.0/1.1.png` - even webdataset handles this correctly via their regex `^((?:.*/|)[^.]+)[.]([^/]*)$` in `wds.tariterators.base_plus_ext` here:\r\n\r\nhttps://github.com/webdataset/webdataset/blob/87bd5aa41602d57f070f65a670893ee625702f2f/webdataset/tariterators.py#L36", "Oh.. the intention with that regex is to capture \"multi-part\" extensions e.g. `.tar.gz`. Makes sense. So `rsplit` isn't the solution then and neither is `split`. This expression makes so much more sense. Nice find! I'm assuming you'll add a patch?", "Issue addressed by:\r\n- #7151", " Maybe the libray should suggest/warn users about **file naming**, in particular, file naming **with dots**. 
Indeed, otherwise a naive naming approach like mentioned above would lead to\r\n\r\n```python\r\nValueError: The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types.\r\n```\r\n\r\nwith the following code\r\n\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: !tar tvf easy_trouble/001.tar\r\ndrwxr-xr-x phunc20/phunc20 0 2025-01-15 17:57 001/\r\n-rw-r--r-- phunc20/phunc20 20 2025-01-15 17:54 001/1.2.jpg\r\n-rw-r--r-- phunc20/phunc20 20 2025-01-15 17:53 001/1.1.jpg\r\n-rw-r--r-- phunc20/phunc20 20 2025-01-15 17:57 001/2.1.jpg\r\n-rw-r--r-- phunc20/phunc20 20 2025-01-15 17:57 001/2.2.jpg\r\n-rw-r--r-- phunc20/phunc20 20 2025-01-15 17:57 001/2.3.jpg\r\n\r\nIn [29]: dataset = load_dataset(\"webdataset\", data_dir=\"easy_trouble\")\r\n```\r\n\r\nBTW, @ragavsachdeva , why would you find including a `.tar.gz` inside a `.tar` makes sense? I personally find switching `base_plus_ext` into sth similar to the following makes more sense, though. (That is, no multi-part extension.)\r\n\r\n```python\r\nfrom pathlib import Path\r\n\r\ndef base_plus_ext(path):\r\n \"\"\"Split off all file extensions.\r\n\r\n Returns base, allext.\r\n \"\"\"\r\n p = Path(path)\r\n base = (p.parent / p.stem).as_posix()\r\n ext = p.suffix.split(\".\")[-1]\r\n return base, ext\r\n```" ]
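For reference, a quick standalone check of the regex quoted in the thread above, showing how it keeps the directory components intact and splits on the first dot of the file name, so multi-part extensions such as `.tar.gz` land in the extension group:

```python
import re

# regex copied from the webdataset snippet discussed above
PATTERN = re.compile(r"^((?:.*/|)[^.]+)[.]([^/]*)$")

for key in ["22.05.png", "/some/path/22.0/1.1.png", "sample.tar.gz"]:
    match = PATTERN.match(key)
    print(key, "->", match.groups() if match else None)
# 22.05.png -> ('22', '05.png')
# /some/path/22.0/1.1.png -> ('/some/path/22.0/1', '1.png')
# sample.tar.gz -> ('sample', 'tar.gz')
```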
2,512,327,211
7,143
Modify add_column() to optionally accept a FeatureType as param
closed
2024-09-08T10:56:57
2024-09-17T06:01:23
2024-09-16T15:11:01
https://github.com/huggingface/datasets/pull/7143
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7143", "html_url": "https://github.com/huggingface/datasets/pull/7143", "diff_url": "https://github.com/huggingface/datasets/pull/7143.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7143.patch", "merged_at": "2024-09-16T15:11:01" }
varadhbhatnagar
true
[ "Requesting review @lhoestq \r\nI will also update the docs if this looks good.", "Cool ! maybe you can rename the argument `feature` and with type `FeatureType` ? This way it would work the same way as `.cast_column()` ?", "@lhoestq Since there is no way to get a `pyarrow.Schema` from a `FeatureType`, I had to go via `Features`. How does this look?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7143). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq done!", "@lhoestq anything pending on this?" ]
2,512,244,938
7,142
Specifying datatype when adding a column to a dataset.
closed
2024-09-08T07:34:24
2024-09-17T03:46:32
2024-09-17T03:46:32
https://github.com/huggingface/datasets/issues/7142
null
varadhbhatnagar
false
[ "#self-assign" ]
2,510,797,653
7,141
Older datasets throwing safety errors with 2.21.0
closed
2024-09-06T16:26:30
2024-09-06T21:14:14
2024-09-06T19:09:29
https://github.com/huggingface/datasets/issues/7141
null
alvations
false
[ "I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval", "Me too, didn't have this issue few hours ago.", "same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?\r\n", "Not a good idea, but commenting out the whole security block at `/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py` is a temporary workaround:\r\n\r\n```\r\n #security = kwargs.pop(\"security\", None)\r\n #if security is not None:\r\n # security = BlobSecurityInfo(\r\n # safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\n # )\r\n #self.security = security\r\n```\r\n", "Uploading a dataset to Huggingface also results in the following error in the Dataset Preview:\r\n```\r\nThe full dataset viewer is not available (click to read why). Only showing a preview of the rows.\r\n'safe'\r\nError code: UnexpectedError\r\nNeed help to make the dataset viewer work? Make sure to review [how to configure the dataset viewer](link1), and [open a discussion](link2) for direct support.\r\n```\r\nI used jsonl format for the dataset in this case. Same exact dataset worked previously.", "Same issue here. Even reverting to older version of `datasets` (e.g., `2.19.0`) results in same error:\r\n\r\n```python\r\n>>> datasets.load_dataset('allenai/ai2_arc', 'ARC-Easy')\r\n\r\nFile \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3048, in <listcomp>\r\n RepoFile(**path_info) if path_info[\"type\"] == \"file\" else RepoFolder(**path_info)\r\n File \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 534, in __init__\r\n safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\nKeyError: 'safe'\r\n```", "i just had this issue a few minutes ago, crawled the internet and found nothing. came here to open an issue and found this. it is really frustrating. 
anyone found a fix?", "hi, me and my team have the same problem", "Yeah, this just suddenly appeared without client-side code changes, within the last hours.\r\n\r\nHere's a patch to fix the issue temporarily:\r\n```python\r\nimport huggingface_hub\r\ndef patched_repofolder_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.tree_id = kwargs.pop(\"oid\")\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n\r\n\r\ndef patched_repo_file_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.size = kwargs.pop(\"size\")\r\n self.blob_id = kwargs.pop(\"oid\")\r\n lfs = kwargs.pop(\"lfs\", None)\r\n if lfs is not None:\r\n lfs = huggingface_hub.hf_api.BlobLfsInfo(size=lfs[\"size\"], sha256=lfs[\"oid\"], pointer_size=lfs[\"pointerSize\"])\r\n self.lfs = lfs\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n self.security = None\r\n\r\n # backwards compatibility\r\n self.rfilename = self.path\r\n self.lastCommit = self.last_commit\r\n\r\n\r\nhuggingface_hub.hf_api.RepoFile.__init__ = patched_repo_file_init\r\nhuggingface_hub.hf_api.RepoFolder.__init__ = patched_repofolder_init\r\n```\r\n", "Also discussed here:\r\nhttps://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/1", "i'm thinking this should be a server issue, i mean no client code was changed on my end. so weird!", "As far as I can tell, this seems to be happening with **all** datasets that use RepoFolder (probably represents most datasets on huggingface, right?)", "> Here is a temporary fix for the problem: https://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/12?u=mlscientist\r\n\r\nthis doesn't seem to work!", "In case you are using Colab or similar, remember to restart your session after modyfing the hf_api.py file", "No need to modify the file directly, just monkey-patch.\r\n\r\nI'm now more sure that the error appears because the backend expects the api code to look like it does on `main`. If `RepoFile` and `RepoFolder` look about like they look on main, they work again.\r\n\r\nIf not fixed like above, a secondary error that will appear is \r\n```\r\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n \"tree_id\": path_info.tree_id,\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'RepoFolder' object has no attribute 'tree_id'\r\n```\r\n", "We've reverted the deployment, please let us know if the issue still persists!", "thanks @muellerzr!" ]
2,508,078,858
7,139
Use load_dataset to load imagenet-1K But find a empty dataset
open
2024-09-05T15:12:22
2024-10-09T04:02:41
null
https://github.com/huggingface/datasets/issues/7139
null
fscdc
false
[ "Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set = load_dataset('imagenet-1k', split='train', use_auth_token=True)\r\n``` ", "Thanks a lot! It helps me" ]
2,507,738,308
7,138
Cache only changed columns?
open
2024-09-05T12:56:47
2024-09-20T13:27:20
null
https://github.com/huggingface/datasets/issues/7138
null
Modexus
false
[ "so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.", "yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files" ]
2,506,851,048
7,137
[BUG] dataset_info sequence unexpected behavior in README.md YAML
closed
2024-09-05T06:06:06
2025-07-07T09:20:29
2025-07-04T19:50:59
https://github.com/huggingface/datasets/issues/7137
null
ain-soph
false
[ "The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: string\r\n\r\n\r\n# data\r\n{\"answers\": {\"text\": \"ADDRESS\", \"label\": \"abc\"}}\r\n```", "According to https://github.com/huggingface/datasets/issues/7590#issuecomment-3035647354\nreplacing `sequence` to `list` will solve this issue.\n\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n list:\n - name: text\n sequence: string\n - name: label\n sequence: string\n```", "Btw you can use `list` instead of `sequence` everywhere for consistency:\n\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n list:\n - name: text\n list: string\n - name: label\n list: string\n```" ]
2,506,115,857
7,136
Do not consume unnecessary memory during sharding
open
2024-09-04T19:26:06
2024-09-04T19:28:23
null
https://github.com/huggingface/datasets/pull/7136
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7136", "html_url": "https://github.com/huggingface/datasets/pull/7136", "diff_url": "https://github.com/huggingface/datasets/pull/7136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7136.patch", "merged_at": null }
janEbert
true
[]
2,503,318,328
7,135
Bug: Type Mismatch in Dataset Mapping
open
2024-09-03T16:37:01
2024-09-05T14:09:05
null
https://github.com/huggingface/datasets/issues/7135
null
marko1616
false
[ "By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dict(data)\r\n\r\n# Mapping function to convert label to string\r\ndef add_one(example):\r\n example['label'] += 1\r\n return example\r\n\r\n# Applying the mapping function\r\ndataset = dataset.map(add_one)\r\n\r\n# Iterating over the dataset to show results\r\nfor item in dataset:\r\n print(item)\r\n print(type(item['label']))\r\n```", "Hello, thanks for submitting an issue.\r\n\r\nFWIU, the issue is that `datasets` tries to limit casting [ref](https://github.com/huggingface/datasets/blob/ca58154bba185c1916ca5eea4e33b27258642044/src/datasets/arrow_writer.py#L526) and as such will try to convert your strings back to int to preserve the `Features`. \r\n\r\nA quick solution would be to use `dataset.cast` or to supply `features` when calling `dataset.map`.\r\n\r\n\r\n```python\r\n# using Dataset.cast\r\ndataset = dataset.cast_column('label', Value('string'))\r\n\r\n# Alternative, supply features\r\ndataset = dataset.map(add_one, features=Features({**dataset.features, 'label': Value('string')}))\r\n```", "LGTM! Thanks for the review.\r\n\r\nJust to clarify, is this intended behavior, or is it something that might be addressed in a future update?\r\nI'll leave this issue open until it's fixed if this is not the intended behavior." ]
2,499,484,041
7,134
Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown
open
2024-09-01T13:55:41
2024-09-02T10:34:53
null
https://github.com/huggingface/datasets/issues/7134
null
navidmafi
false
[]
2,496,474,495
7,133
remove filecheck to enable symlinks
closed
2024-08-30T07:36:56
2024-12-24T14:25:22
2024-12-24T14:25:22
https://github.com/huggingface/datasets/pull/7133
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7133", "html_url": "https://github.com/huggingface/datasets/pull/7133", "diff_url": "https://github.com/huggingface/datasets/pull/7133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7133.patch", "merged_at": "2024-12-24T14:25:22" }
fschlatt
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The CI is failing, looks like it breaks imagefolder loading.\r\n\r\nI just checked fsspec internals and maybe instead we can detect symlink by checking `islink` and `size` to make sure it's a file\r\n```python\r\nif info[\"type\"] == \"file\" or (info.get(\"islink\") and info[\"size\"])\r\n```\r\n", "hmm actually `size` doesn't seem to filter symlinked directories, we need another way", "Does fsspec perhaps allow resolving symlinks? Something like https://docs.python.org/3/library/pathlib.html#pathlib.Path.resolve", "there is `info[\"destination\"]` in case of a symlink, so maybe\r\n\r\n\r\n```python\r\nif info[\"type\"] == \"file\" or (info.get(\"islink\") and info.get(\"destination\") and os.path.isfile(info[\"destination\"]))\r\n```", "I've added a fix which works with some temporary test files locally \r\n\r\n`(info[\"type\"] == \"file\" or (info.get(\"islink\") and os.path.isfile(os.path.realpath(filepath))))`" ]
2,494,510,464
7,132
Fix data file module inference
open
2024-08-29T13:48:16
2024-09-02T19:52:13
null
https://github.com/huggingface/datasets/pull/7132
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7132", "html_url": "https://github.com/huggingface/datasets/pull/7132", "diff_url": "https://github.com/huggingface/datasets/pull/7132.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7132.patch", "merged_at": null }
HennerM
true
[ "Hi ! datasets saved using `save_to_disk` should be loaded with `load_from_disk` ;)", "It is convienient to just pass in a path to a local dataset or one from the hub and use the same function to load it. Is it not possible to get this fix merged in to allow this? ", "We can modify `save_to_disk` to write the dataset in a structure supported by the Hub in this case, it's kind of a legacy function anyway" ]
2,491,942,650
7,129
Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
closed
2024-08-28T12:27:48
2024-12-06T11:32:02
2024-12-06T11:32:02
https://github.com/huggingface/datasets/issues/7129
null
sergiopaniego
false
[]
2,490,274,775
7,128
Filter Large Dataset Entry by Entry
open
2024-08-27T20:31:09
2024-10-07T23:37:44
null
https://github.com/huggingface/datasets/issues/7128
null
QiyaoWei
false
[ "Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n", "Jumping on this as it seems relevant - when I use the `filter` method, it often results in an OOM (or at least unacceptably high memory usage).\r\n\r\nFor example in the [this notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_), we load an object detection dataset from HF and imagine I want to filter such that I only have images which contain a single annotation class. Each row has a JSON field that contains MS-COCO annotations for the image, so we could load that field and filter on it.\r\n\r\nThe test dataset is only about 440 images, probably less than 1GB, but running the following filter crashes the VM (over 12 GB RAM):\r\n\r\n```python\r\nimport json\r\ndef filter_single_class(example, target_class_id):\r\n \"\"\"Filters examples based on whether they contain annotations from a single class.\r\n\r\n Args:\r\n example: A dictionary representing a single example from the dataset.\r\n target_class_id: The target class ID to filter for.\r\n\r\n Returns:\r\n True if the example contains only annotations from the target class, False otherwise.\r\n \"\"\"\r\n if not example['coco_annotations']:\r\n return False\r\n\r\n annotation_category_ids = set([annotation['category_id'] for annotation in json.loads(example['coco_annotations'])])\r\n\r\n return len(annotation_category_ids) == 1 and target_class_id in annotation_category_ids\r\n\r\ntarget_class_id = 1 \r\nfiltered_dataset = dataset['test'].filter(lambda example: filter_single_class(example, target_class_id))\r\n```\r\n\r\n<img width=\"255\" alt=\"image\" src=\"https://github.com/user-attachments/assets/be475f15-5b6b-4df2-b5b5-a1f60ae2b05c\">\r\n\r\nIterating over the dataset works fine:\r\n\r\n```python\r\nfiltered_dataset = []\r\nfor example in dataset['test']:\r\n if filter_single_class(example, target_class_id):\r\n filtered_dataset.append(example)\r\n```\r\n\r\n<img width=\"129\" alt=\"image\" src=\"https://github.com/user-attachments/assets/34fa5612-0394-4c46-9f34-e94650f05d65\">\r\n\r\nIt would be great if there was guidance in the documentation on how to use filters efficiently, or if this is some performance bug that could be addressed. At the very least I would expect a filter operation to use at most 2x the footprint of the database plus some overhead for the lambda (i.e. worst case would be a duplicate copy with all entries retained). Even if the operation is parallelised, each thread/worker should only take a subset of the dataset - so I'm not sure where this ballooning in memory usage comes from.\r\n\r\nFrom some other comments there seems to be a workaround with `writer_batch_size` or caching to file, but in the [docs](https://huggingface.co/docs/datasets/v3.0.0/en/package_reference/main_classes#datasets.Dataset.filter) at least, `keep_in_memory` defaults to `False`.", "You can try passing input_columns=[\"coco_annotations\"] to only load this column instead of all the columns. In that case your function should take coco_annotations as input instead of example", "If your filter_function is large and computationally intensive, consider using multi-processing or multi-threading with concurrent.futures to filter the dataset. This approach allows you to process multiple tables concurrently, reducing overall processing time, especially for CPU-bound tasks. 
Use ThreadPoolExecutor for I/O-bound operations and ProcessPoolExecutor for CPU-bound operations.\r\n" ]
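To make the `input_columns` suggestion concrete, a sketch reusing names from the discussion above: the filter then only materializes the annotation column per example instead of decoding every image. Here `dataset` is assumed to be the loaded `DatasetDict`:

```python
import json

def keep_single_class(coco_annotations, target_class_id=1):
    # receives just the `coco_annotations` value thanks to input_columns
    if not coco_annotations:
        return False
    category_ids = {a["category_id"] for a in json.loads(coco_annotations)}
    return category_ids == {target_class_id}

filtered = dataset["test"].filter(keep_single_class, input_columns=["coco_annotations"])
```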
2,486,524,966
7,127
Caching shuffles by np.random.Generator results in unintuitive behavior
open
2024-08-26T10:29:48
2025-07-28T11:00:00
null
https://github.com/huggingface/datasets/issues/7127
null
el-hult
false
[ "I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/arrow_dataset.py#L4306-L4316\r\n\r\nbecause the shuffle happens after checking the cache, the rng state won't advance if the cache is used. This is VERY confusing. Also not documented.\r\n\r\nMy proposal is that you remove the API for using a Generator, and only keep the seed-based API since that is functional and cache-compatible.", "Second that, its very confusing." ]
2,485,939,495
7,126
Disable implicit token in CI
closed
2024-08-26T05:29:46
2024-08-26T06:05:01
2024-08-26T05:59:15
https://github.com/huggingface/datasets/pull/7126
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7126", "html_url": "https://github.com/huggingface/datasets/pull/7126", "diff_url": "https://github.com/huggingface/datasets/pull/7126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7126.patch", "merged_at": "2024-08-26T05:59:15" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003428 / 0.011008 (-0.007580) | 0.062673 / 0.038508 (0.024164) | 0.030111 / 0.023109 (0.007002) | 0.238017 / 0.275898 (-0.037881) | 0.262655 / 0.323480 (-0.060825) | 0.003015 / 0.007986 (-0.004971) | 0.002664 / 0.004328 (-0.001665) | 0.050010 / 0.004250 (0.045759) | 0.045620 / 0.037052 (0.008567) | 0.251800 / 0.258489 (-0.006689) | 0.278829 / 0.293841 (-0.015011) | 0.029838 / 0.128546 (-0.098709) | 0.011703 / 0.075646 (-0.063943) | 0.204503 / 0.419271 (-0.214768) | 0.036173 / 0.043533 (-0.007359) | 0.242850 / 0.255139 (-0.012289) | 0.263811 / 0.283200 (-0.019389) | 0.019027 / 0.141683 (-0.122656) | 1.168028 / 1.452155 (-0.284126) | 1.208975 / 1.492716 (-0.283742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091309 / 0.018006 (0.073303) | 0.299583 / 0.000490 (0.299093) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018451 / 0.037411 (-0.018960) | 0.062516 / 0.014526 (0.047991) | 0.073983 / 0.176557 (-0.102573) | 0.120952 / 0.737135 (-0.616184) | 0.075275 / 0.296338 (-0.221063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286870 / 0.215209 (0.071661) | 2.810498 / 2.077655 (0.732843) | 1.490028 / 1.504120 (-0.014092) | 1.362249 / 1.541195 (-0.178946) | 1.368939 / 1.468490 (-0.099551) | 0.736643 / 4.584777 (-3.848134) | 2.414237 / 3.745712 (-1.331475) | 2.898911 / 5.269862 (-2.370951) | 1.840630 / 4.565676 (-2.725047) | 0.077872 / 0.424275 (-0.346403) | 0.005087 / 0.007607 (-0.002520) | 0.337054 / 0.226044 (0.111009) | 3.390734 / 2.268929 (1.121806) | 1.844174 / 55.444624 (-53.600451) | 1.532741 / 6.876477 (-5.343736) | 1.551650 / 2.142072 (-0.590422) | 0.778642 / 4.805227 (-4.026585) | 0.131899 / 6.500664 (-6.368765) | 0.041801 / 0.075469 (-0.033668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958362 / 1.841788 (-0.883425) | 11.323330 / 8.074308 (3.249022) | 9.396199 / 10.191392 (-0.795193) | 0.131154 / 0.680424 (-0.549270) | 0.014705 / 0.534201 (-0.519496) | 0.302424 / 0.579283 (-0.276859) | 0.261870 / 0.434364 (-0.172494) | 0.340788 / 0.540337 (-0.199550) | 0.433360 / 1.386936 (-0.953576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005571 / 0.011353 (-0.005782) | 0.003388 / 0.011008 (-0.007621) | 0.050366 / 0.038508 (0.011858) | 0.032633 / 0.023109 (0.009524) | 0.261847 / 0.275898 (-0.014051) | 0.292197 / 0.323480 (-0.031283) | 0.005070 / 0.007986 (-0.002916) | 0.002753 / 0.004328 (-0.001575) | 0.048613 / 0.004250 (0.044363) | 0.040272 / 0.037052 (0.003219) | 0.275441 / 0.258489 (0.016952) | 0.309175 / 0.293841 (0.015334) | 0.032403 / 0.128546 (-0.096143) | 0.011734 / 0.075646 (-0.063912) | 0.059532 / 0.419271 (-0.359740) | 0.033886 / 0.043533 (-0.009647) | 0.263453 / 0.255139 (0.008314) | 0.281997 / 0.283200 (-0.001203) | 0.018522 / 0.141683 (-0.123161) | 1.150364 / 1.452155 (-0.301791) | 1.204090 / 1.492716 (-0.288627) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093129 / 0.018006 (0.075123) | 0.303691 / 0.000490 (0.303201) | 0.000231 / 0.000200 (0.000031) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022084 / 0.037411 (-0.015327) | 0.076354 / 0.014526 (0.061828) | 0.087710 / 0.176557 (-0.088847) | 0.128907 / 0.737135 (-0.608228) | 0.088603 / 0.296338 (-0.207735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301161 / 0.215209 (0.085952) | 2.954780 / 2.077655 (0.877125) | 1.601366 / 1.504120 (0.097246) | 1.477225 / 1.541195 (-0.063970) | 1.482355 / 1.468490 (0.013865) | 0.722461 / 4.584777 (-3.862315) | 0.981439 / 3.745712 (-2.764273) | 2.927006 / 5.269862 (-2.342856) | 1.884444 / 4.565676 (-2.681233) | 0.079044 / 0.424275 (-0.345231) | 0.005530 / 0.007607 (-0.002077) | 0.347082 / 0.226044 (0.121037) | 3.491984 / 2.268929 (1.223056) | 1.944317 / 55.444624 (-53.500307) | 1.645792 / 6.876477 (-5.230685) | 1.649506 / 2.142072 (-0.492567) | 0.800822 / 4.805227 (-4.004405) | 0.133936 / 6.500664 (-6.366729) | 0.041198 / 0.075469 (-0.034271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029764 / 1.841788 (-0.812024) | 11.928840 / 8.074308 (3.854532) | 10.021390 / 10.191392 (-0.170002) | 0.141608 / 0.680424 (-0.538816) | 0.014921 / 0.534201 (-0.519280) | 0.302050 / 0.579283 (-0.277233) | 0.124151 / 0.434364 (-0.310213) | 0.347143 / 0.540337 (-0.193195) | 0.467649 / 1.386936 (-0.919287) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4c87a6bf57b3aa094c28895c5b89b91b3509c58 \"CML watermark\")\n" ]
2,485,912,246
7,125
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport
closed
2024-08-26T05:09:35
2024-08-26T05:33:15
2024-08-26T05:27:09
https://github.com/huggingface/datasets/pull/7125
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7125", "html_url": "https://github.com/huggingface/datasets/pull/7125", "diff_url": "https://github.com/huggingface/datasets/pull/7125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7125.patch", "merged_at": "2024-08-26T05:27:09" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005741 / 0.011353 (-0.005612) | 0.004011 / 0.011008 (-0.006998) | 0.063962 / 0.038508 (0.025454) | 0.031512 / 0.023109 (0.008403) | 0.242249 / 0.275898 (-0.033649) | 0.269601 / 0.323480 (-0.053879) | 0.004502 / 0.007986 (-0.003483) | 0.002835 / 0.004328 (-0.001494) | 0.049878 / 0.004250 (0.045628) | 0.048012 / 0.037052 (0.010959) | 0.250454 / 0.258489 (-0.008035) | 0.283266 / 0.293841 (-0.010575) | 0.030752 / 0.128546 (-0.097794) | 0.012655 / 0.075646 (-0.062991) | 0.211043 / 0.419271 (-0.208229) | 0.037165 / 0.043533 (-0.006367) | 0.246815 / 0.255139 (-0.008324) | 0.264306 / 0.283200 (-0.018893) | 0.018343 / 0.141683 (-0.123340) | 1.140452 / 1.452155 (-0.311702) | 1.214849 / 1.492716 (-0.277867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098048 / 0.018006 (0.080042) | 0.292201 / 0.000490 (0.291712) | 0.000217 / 0.000200 (0.000017) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018732 / 0.037411 (-0.018679) | 0.062887 / 0.014526 (0.048361) | 0.074353 / 0.176557 (-0.102204) | 0.120794 / 0.737135 (-0.616341) | 0.077066 / 0.296338 (-0.219272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
(auto-generated CML benchmark tables trimmed)\n\n</details>\n</details>\n" ]
2,485,890,442
7,124
Test get_dataset_config_info with non-existing/gated/private dataset
closed
2024-08-26T04:53:59
2024-08-26T06:15:33
2024-08-26T06:09:42
https://github.com/huggingface/datasets/pull/7124
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7124", "html_url": "https://github.com/huggingface/datasets/pull/7124", "diff_url": "https://github.com/huggingface/datasets/pull/7124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7124.patch", "merged_at": "2024-08-26T06:09:42" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005339 / 0.011353 (-0.006014) | 0.003640 / 0.011008 (-0.007368) | 0.064012 / 0.038508 (0.025504) | 0.030424 / 0.023109 (0.007314) | 0.239966 / 0.275898 (-0.035932) | 0.264361 / 0.323480 (-0.059119) | 0.004247 / 0.007986 (-0.003739) | 0.002847 / 0.004328 (-0.001481) | 0.049640 / 0.004250 (0.045390) | 0.044903 / 0.037052 (0.007851) | 0.250174 / 0.258489 (-0.008315) | 0.281423 / 0.293841 (-0.012418) | 0.029419 / 0.128546 (-0.099127) | 0.012221 / 0.075646 (-0.063426) | 0.205907 / 0.419271 (-0.213365) | 0.036654 / 0.043533 (-0.006878) | 0.245805 / 0.255139 (-0.009334) | 0.265029 / 0.283200 (-0.018170) | 0.018081 / 0.141683 (-0.123602) | 1.113831 / 1.452155 (-0.338324) | 1.156443 / 1.492716 (-0.336274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134389 / 0.018006 (0.116383) | 0.300637 / 0.000490 (0.300147) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019111 / 0.037411 (-0.018300) | 0.062585 / 0.014526 (0.048059) | 0.075909 / 0.176557 (-0.100647) | 0.121382 / 0.737135 (-0.615753) | 0.074980 / 0.296338 (-0.221359) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285062 / 0.215209 (0.069853) | 2.850130 / 2.077655 (0.772476) | 1.519877 / 1.504120 (0.015757) | 1.388711 / 1.541195 (-0.152484) | 1.397284 / 1.468490 (-0.071206) | 0.723100 / 4.584777 (-3.861677) | 2.393184 / 3.745712 (-1.352529) | 2.908418 / 5.269862 (-2.361443) | 1.871024 / 4.565676 (-2.694653) | 0.078230 / 0.424275 (-0.346045) | 0.005158 / 0.007607 (-0.002449) | 0.345622 / 0.226044 (0.119577) | 3.357611 / 2.268929 (1.088683) | 1.844492 / 55.444624 (-53.600132) | 1.584237 / 6.876477 (-5.292240) | 1.577158 / 2.142072 (-0.564915) | 0.789702 / 4.805227 (-4.015525) | 0.132045 / 6.500664 (-6.368619) | 0.042304 / 0.075469 (-0.033165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977166 / 1.841788 (-0.864622) | 11.306118 / 8.074308 (3.231810) | 9.490778 / 10.191392 (-0.700614) | 0.143536 / 0.680424 (-0.536888) | 0.015304 / 0.534201 (-0.518897) | 0.313892 / 0.579283 (-0.265391) | 0.267009 / 0.434364 (-0.167355) | 0.345560 / 0.540337 (-0.194778) | 0.435649 / 1.386936 (-0.951287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005700 / 0.011353 (-0.005653) | 0.003490 / 0.011008 (-0.007519) | 0.049990 / 0.038508 (0.011482) | 0.032070 / 0.023109 (0.008961) | 0.272622 / 0.275898 (-0.003276) | 0.298265 / 0.323480 (-0.025215) | 0.004379 / 0.007986 (-0.003606) | 0.002786 / 0.004328 (-0.001543) | 0.048271 / 0.004250 (0.044020) | 0.040102 / 0.037052 (0.003050) | 0.286433 / 0.258489 (0.027944) | 0.319306 / 0.293841 (0.025465) | 0.032872 / 0.128546 (-0.095675) | 0.011870 / 0.075646 (-0.063776) | 0.059886 / 0.419271 (-0.359385) | 0.034281 / 0.043533 (-0.009252) | 0.275588 / 0.255139 (0.020450) | 0.292951 / 0.283200 (0.009751) | 0.018095 / 0.141683 (-0.123588) | 1.130870 / 1.452155 (-0.321285) | 1.190761 / 1.492716 (-0.301955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093346 / 0.018006 (0.075340) | 0.307506 / 0.000490 (0.307016) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022873 / 0.037411 (-0.014538) | 0.077070 / 0.014526 (0.062544) | 0.089152 / 0.176557 (-0.087404) | 0.130186 / 0.737135 (-0.606949) | 0.090244 / 0.296338 (-0.206095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297950 / 0.215209 (0.082740) | 2.942360 / 2.077655 (0.864705) | 1.614324 / 1.504120 (0.110204) | 1.495795 / 1.541195 (-0.045400) | 1.506155 / 1.468490 (0.037665) | 0.730307 / 4.584777 (-3.854470) | 0.966312 / 3.745712 (-2.779400) | 2.928955 / 5.269862 (-2.340906) | 1.940049 / 4.565676 (-2.625627) | 0.079589 / 0.424275 (-0.344686) | 0.006004 / 0.007607 (-0.001604) | 0.356630 / 0.226044 (0.130585) | 3.516652 / 2.268929 (1.247724) | 1.963196 / 55.444624 (-53.481429) | 1.674489 / 6.876477 (-5.201988) | 1.677558 / 2.142072 (-0.464514) | 0.806447 / 4.805227 (-3.998780) | 0.133819 / 6.500664 (-6.366845) | 0.040762 / 0.075469 (-0.034707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038495 / 1.841788 (-0.803293) | 11.829186 / 8.074308 (3.754878) | 10.214158 / 10.191392 (0.022766) | 0.140590 / 0.680424 (-0.539834) | 0.014729 / 0.534201 (-0.519472) | 0.300557 / 0.579283 (-0.278726) | 0.122772 / 0.434364 (-0.311592) | 0.344618 / 0.540337 (-0.195720) | 0.460064 / 1.386936 (-0.926872) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be5cff059a2a5b89d7a97bc04739c4919ab8089f \"CML watermark\")\n" ]
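For context on what this PR tests, here is a minimal sketch of `get_dataset_config_info` against existing and non-existing repos (the dataset names are illustrative, and the exact exception raised for gated/private repos can vary across `datasets` versions):

```python
from datasets import get_dataset_config_info
from datasets.exceptions import DatasetNotFoundError

# Fetch config-level metadata without downloading the data files.
info = get_dataset_config_info("nyu-mll/glue", config_name="sst2")
print(info.splits["train"].num_examples)

# A non-existing repo (or a gated/private one without a valid token)
# should fail with a clear error rather than a generic traceback.
try:
    get_dataset_config_info("non-existing-org/non-existing-dataset")
except DatasetNotFoundError as err:
    print(f"As expected: {err}")
```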
2,484,003,937
7,123
Make dataset viewer more flexible in displaying metadata alongside images
open
2024-08-23T22:56:01
2024-10-17T09:13:47
null
https://github.com/huggingface/datasets/issues/7123
null
egrace479
false
[ "Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\n```\r\n\r\nEDIT: ah maybe it doesn't work because you'd have to provide relative paths from the metadata files to the images", "Yes, that's part of the issue. Also, `metadata.csv` is a very ambiguous name and we generally try to avoid using the same name for different files within a dataset, as this can quickly lead to confusion.", "I think supporting `**/*-metadata.csv` or `**/*_metadata.csv` makes sense to me. If it sounds good to you feel free to open a PR to update the patterns here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d4422cc24a56dc7132ddc3fd6b285c5edbd60b8c/src/datasets/data_files.py#L104-L115" ]
2,482,491,258
7,122
[interleave_dataset] sample batches from a single source at a time
open
2024-08-23T07:21:15
2024-08-23T07:21:15
null
https://github.com/huggingface/datasets/issues/7122
null
memray
false
[]
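A minimal sketch of the behavior this issue asks for, implemented outside the library on top of plain iterable sources (the helper name and the "first exhausted" stopping rule are assumptions, not an existing `datasets` API):

```python
import random
from datasets import Dataset

def interleave_in_batches(sources, batch_size=4, probabilities=None, seed=0):
    """Yield batches where each batch is drawn from a single source,
    instead of switching sources after every individual example."""
    rng = random.Random(seed)
    iterators = [iter(ds) for ds in sources]
    while True:
        # Pick one source for the whole batch (optionally weighted).
        idx = rng.choices(range(len(sources)), weights=probabilities)[0]
        batch = []
        try:
            for _ in range(batch_size):
                batch.append(next(iterators[idx]))
        except StopIteration:  # stop once any source runs out ("first exhausted")
            if batch:
                yield batch
            return
        yield batch

ds_a = Dataset.from_dict({"text": [f"a{i}" for i in range(8)]})
ds_b = Dataset.from_dict({"text": [f"b{i}" for i in range(8)]})
for batch in interleave_in_batches([ds_a, ds_b], batch_size=4):
    print([ex["text"] for ex in batch])
```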
2,480,978,483
7,121
Fix typed examples iterable state dict
closed
2024-08-22T14:45:03
2024-08-22T14:54:56
2024-08-22T14:49:06
https://github.com/huggingface/datasets/pull/7121
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7121", "html_url": "https://github.com/huggingface/datasets/pull/7121", "diff_url": "https://github.com/huggingface/datasets/pull/7121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7121.patch", "merged_at": "2024-08-22T14:49:06" }
lhoestq
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7121). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005273 / 0.011353 (-0.006079) | 0.003789 / 0.011008 (-0.007219) | 0.062811 / 0.038508 (0.024303) | 0.031055 / 0.023109 (0.007946) | 0.238663 / 0.275898 (-0.037235) | 0.269706 / 0.323480 (-0.053774) | 0.004105 / 0.007986 (-0.003881) | 0.002781 / 0.004328 (-0.001547) | 0.048800 / 0.004250 (0.044549) | 0.045759 / 0.037052 (0.008707) | 0.260467 / 0.258489 (0.001978) | 0.288800 / 0.293841 (-0.005041) | 0.029341 / 0.128546 (-0.099205) | 0.012413 / 0.075646 (-0.063233) | 0.203493 / 0.419271 (-0.215778) | 0.037270 / 0.043533 (-0.006263) | 0.246130 / 0.255139 (-0.009009) | 0.269046 / 0.283200 (-0.014154) | 0.017788 / 0.141683 (-0.123895) | 1.175537 / 1.452155 (-0.276617) | 1.197909 / 1.492716 (-0.294808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098258 / 0.018006 (0.080251) | 0.305283 / 0.000490 (0.304794) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019066 / 0.037411 (-0.018345) | 0.062723 / 0.014526 (0.048197) | 0.075827 / 0.176557 (-0.100730) | 0.121371 / 0.737135 (-0.615764) | 0.075167 / 0.296338 (-0.221171) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296650 / 0.215209 (0.081441) | 2.910593 / 2.077655 (0.832939) | 1.510798 / 1.504120 (0.006678) | 1.375461 / 1.541195 (-0.165733) | 1.386423 / 1.468490 (-0.082067) | 0.743818 / 4.584777 (-3.840959) | 2.437848 / 3.745712 (-1.307864) | 2.943661 / 5.269862 (-2.326201) | 1.888977 / 4.565676 (-2.676699) | 0.080126 / 0.424275 (-0.344149) | 0.005168 / 0.007607 (-0.002439) | 0.348699 / 0.226044 (0.122654) | 3.477686 / 2.268929 (1.208758) | 1.901282 / 55.444624 (-53.543343) | 1.574847 / 6.876477 (-5.301629) | 1.594359 / 2.142072 (-0.547714) | 0.793415 / 4.805227 (-4.011812) | 0.133982 / 6.500664 (-6.366682) | 0.042435 / 0.075469 (-0.033034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963057 / 1.841788 (-0.878731) | 11.597217 / 8.074308 (3.522909) | 9.285172 / 10.191392 (-0.906220) | 0.130510 / 0.680424 (-0.549914) | 0.013964 / 0.534201 (-0.520237) | 0.299334 / 0.579283 (-0.279949) | 0.267775 / 0.434364 (-0.166589) | 0.336922 / 0.540337 (-0.203416) | 0.430493 / 1.386936 (-0.956443) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003941 / 0.011008 (-0.007067) | 0.050204 / 0.038508 (0.011696) | 0.032275 / 0.023109 (0.009166) | 0.271076 / 0.275898 (-0.004822) | 0.295565 / 0.323480 (-0.027914) | 0.004393 / 0.007986 (-0.003592) | 0.002881 / 0.004328 (-0.001447) | 0.048032 / 0.004250 (0.043782) | 0.040430 / 0.037052 (0.003378) | 0.281631 / 0.258489 (0.023142) | 0.317964 / 0.293841 (0.024124) | 0.032318 / 0.128546 (-0.096228) | 0.012348 / 0.075646 (-0.063298) | 0.060336 / 0.419271 (-0.358936) | 0.034148 / 0.043533 (-0.009385) | 0.273803 / 0.255139 (0.018664) | 0.292068 / 0.283200 (0.008868) | 0.018693 / 0.141683 (-0.122990) | 1.155704 / 1.452155 (-0.296451) | 1.192245 / 1.492716 (-0.300472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097588 / 0.018006 (0.079582) | 0.311760 / 0.000490 (0.311270) | 0.000232 / 0.000200 (0.000032) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022825 / 0.037411 (-0.014586) | 0.077698 / 0.014526 (0.063172) | 0.088567 / 0.176557 (-0.087989) | 0.129689 / 0.737135 (-0.607446) | 0.090626 / 0.296338 (-0.205712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299791 / 0.215209 (0.084582) | 2.978558 / 2.077655 (0.900903) | 1.594095 / 1.504120 (0.089975) | 1.468476 / 1.541195 (-0.072719) | 1.482880 / 1.468490 (0.014390) | 0.717553 / 4.584777 (-3.867224) | 0.977501 / 3.745712 (-2.768211) | 2.954289 / 5.269862 (-2.315572) | 1.895473 / 4.565676 (-2.670203) | 0.078452 / 0.424275 (-0.345824) | 0.005508 / 0.007607 (-0.002099) | 0.350882 / 0.226044 (0.124837) | 3.480878 / 2.268929 (1.211949) | 1.965240 / 55.444624 (-53.479385) | 1.672448 / 6.876477 (-5.204029) | 1.674319 / 2.142072 (-0.467753) | 0.789049 / 4.805227 (-4.016178) | 0.132715 / 6.500664 (-6.367949) | 0.041081 / 0.075469 (-0.034388) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022953 / 1.841788 (-0.818834) | 12.123349 / 8.074308 (4.049041) | 10.336115 / 10.191392 (0.144723) | 0.142233 / 0.680424 (-0.538191) | 0.015416 / 0.534201 (-0.518785) | 0.303088 / 0.579283 (-0.276195) | 0.124942 / 0.434364 (-0.309422) | 0.338454 / 0.540337 (-0.201883) | 0.460039 / 1.386936 (-0.926897) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3813ce846e52824b38e53895810682f0a496a2e3 \"CML watermark\")\n" ]
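The state dict being fixed here is the one used for checkpointing and resuming iterable datasets; a minimal sketch of that workflow (assuming a `datasets` version recent enough to ship `IterableDataset.state_dict`, and shard-level resumption granularity):

```python
from datasets import Dataset

# A small single-shard iterable dataset.
ds = Dataset.from_dict({"x": list(range(6))}).to_iterable_dataset()

state = None
for idx, example in enumerate(ds):
    if idx == 2:
        state = ds.state_dict()  # checkpoint mid-stream
        break

ds.load_state_dict(state)        # later (e.g. after a crash): restore it
print(next(iter(ds)))            # iteration resumes past the checkpoint
```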
2,480,674,237
7,120
don't mention the script if trust_remote_code=False
closed
2024-08-22T12:32:32
2024-08-22T14:39:52
2024-08-22T14:33:52
https://github.com/huggingface/datasets/pull/7120
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7120", "html_url": "https://github.com/huggingface/datasets/pull/7120", "diff_url": "https://github.com/huggingface/datasets/pull/7120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7120.patch", "merged_at": "2024-08-22T14:33:52" }
severo
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7120). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Note that in this case, we could even expect this kind of message:\r\n\r\n```\r\nDataFilesNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv'\r\n```\r\n\r\nWe generally return `DataFilesNotFoundError` for this case (data files passed as an argument), not sure why it does not occur with this dataset.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005484 / 0.011353 (-0.005869) | 0.003932 / 0.011008 (-0.007077) | 0.063177 / 0.038508 (0.024669) | 0.031311 / 0.023109 (0.008202) | 0.254881 / 0.275898 (-0.021017) | 0.273818 / 0.323480 (-0.049662) | 0.003312 / 0.007986 (-0.004674) | 0.003251 / 0.004328 (-0.001078) | 0.049307 / 0.004250 (0.045057) | 0.046189 / 0.037052 (0.009137) | 0.268182 / 0.258489 (0.009693) | 0.303659 / 0.293841 (0.009818) | 0.029312 / 0.128546 (-0.099234) | 0.013649 / 0.075646 (-0.061997) | 0.204240 / 0.419271 (-0.215032) | 0.036607 / 0.043533 (-0.006926) | 0.252232 / 0.255139 (-0.002907) | 0.271960 / 0.283200 (-0.011239) | 0.018043 / 0.141683 (-0.123640) | 1.148601 / 1.452155 (-0.303553) | 1.212313 / 1.492716 (-0.280403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096354 / 0.018006 (0.078348) | 0.302575 / 0.000490 (0.302085) | 0.000246 / 0.000200 (0.000046) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019023 / 0.037411 (-0.018389) | 0.064821 / 0.014526 (0.050295) | 0.077046 / 0.176557 (-0.099510) | 0.122896 / 0.737135 (-0.614239) | 0.078300 / 0.296338 (-0.218038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | 
read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283681 / 0.215209 (0.068472) | 2.801473 / 2.077655 (0.723818) | 1.505611 / 1.504120 (0.001491) | 1.385832 / 1.541195 (-0.155363) | 1.430284 / 1.468490 (-0.038206) | 0.752041 / 4.584777 (-3.832736) | 2.406138 / 3.745712 (-1.339574) | 2.941370 / 5.269862 (-2.328492) | 1.887681 / 4.565676 (-2.677996) | 0.078693 / 0.424275 (-0.345582) | 0.005266 / 0.007607 (-0.002341) | 0.336484 / 0.226044 (0.110440) | 3.372262 / 2.268929 (1.103334) | 1.861541 / 55.444624 (-53.583084) | 1.572782 / 6.876477 (-5.303694) | 1.592387 / 2.142072 (-0.549685) | 0.796557 / 4.805227 (-4.008670) | 0.134923 / 6.500664 (-6.365741) | 0.043007 / 0.075469 (-0.032462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982690 / 1.841788 (-0.859097) | 11.700213 / 8.074308 (3.625905) | 9.122642 / 10.191392 (-1.068750) | 0.141430 / 0.680424 (-0.538994) | 0.014971 / 0.534201 (-0.519230) | 0.300938 / 0.579283 (-0.278345) | 0.268315 / 0.434364 (-0.166049) | 0.339891 / 0.540337 (-0.200447) | 0.428302 / 1.386936 (-0.958634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005732 / 0.011353 (-0.005621) | 0.003905 / 0.011008 (-0.007103) | 0.049900 / 0.038508 (0.011392) | 0.032255 / 0.023109 (0.009145) | 0.267929 / 0.275898 (-0.007969) | 0.295595 / 0.323480 (-0.027885) | 0.004437 / 0.007986 (-0.003549) | 0.003008 / 0.004328 (-0.001321) | 0.048357 / 0.004250 (0.044107) | 0.040118 / 0.037052 (0.003066) | 0.282859 / 0.258489 (0.024370) | 0.319243 / 0.293841 (0.025402) | 0.032793 / 0.128546 (-0.095754) | 0.012091 / 0.075646 (-0.063555) | 0.060082 / 0.419271 (-0.359189) | 0.034426 / 0.043533 
(-0.009107) | 0.273668 / 0.255139 (0.018529) | 0.292110 / 0.283200 (0.008910) | 0.019002 / 0.141683 (-0.122680) | 1.165850 / 1.452155 (-0.286304) | 1.209195 / 1.492716 (-0.283521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099267 / 0.018006 (0.081261) | 0.316746 / 0.000490 (0.316256) | 0.000267 / 0.000200 (0.000067) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023117 / 0.037411 (-0.014294) | 0.076691 / 0.014526 (0.062165) | 0.092190 / 0.176557 (-0.084367) | 0.130620 / 0.737135 (-0.606515) | 0.091068 / 0.296338 (-0.205271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296419 / 0.215209 (0.081210) | 2.933964 / 2.077655 (0.856309) | 1.595015 / 1.504120 (0.090895) | 1.467610 / 1.541195 (-0.073585) | 1.487386 / 1.468490 (0.018896) | 0.730927 / 4.584777 (-3.853850) | 0.971276 / 3.745712 (-2.774436) | 2.969735 / 5.269862 (-2.300127) | 1.916126 / 4.565676 (-2.649550) | 0.078863 / 0.424275 (-0.345412) | 0.005506 / 0.007607 (-0.002101) | 0.345191 / 0.226044 (0.119147) | 3.407481 / 2.268929 (1.138553) | 1.955966 / 55.444624 (-53.488659) | 1.677365 / 6.876477 (-5.199112) | 1.716052 / 2.142072 (-0.426020) | 0.797208 / 4.805227 (-4.008020) | 0.132853 / 6.500664 (-6.367811) | 0.041691 / 0.075469 (-0.033778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.042331 / 1.841788 (-0.799456) | 12.186080 / 8.074308 (4.111772) | 10.288961 / 10.191392 (0.097569) | 0.141897 / 0.680424 (-0.538526) | 0.015321 / 0.534201 (-0.518880) | 0.308302 / 0.579283 (-0.270981) | 0.123292 / 0.434364 (-0.311072) | 0.348515 / 0.540337 (-0.191823) | 0.473045 / 1.386936 (-0.913891) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cedffa52879ebc5e4df43f0bcf8660ee7229f0dc \"CML watermark\")\n" ]
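A sketch of the error path this PR changes (the repo name and file are placeholders; with `trust_remote_code=False` the message should point at the missing data files rather than suggest running a loading script):

```python
from datasets import load_dataset
from datasets.exceptions import DataFilesNotFoundError

try:
    # Hypothetical repo and data file: only the shape of the error matters here.
    load_dataset("some-org/some-dataset", data_files="data.csv",
                 trust_remote_code=False)
except (DataFilesNotFoundError, FileNotFoundError) as err:
    print(err)  # should not mention a dataset script
```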
2,477,766,493
7,119
Install transformers with numpy-2 CI
closed
2024-08-21T11:14:59
2024-08-21T11:42:35
2024-08-21T11:36:50
https://github.com/huggingface/datasets/pull/7119
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7119", "html_url": "https://github.com/huggingface/datasets/pull/7119", "diff_url": "https://github.com/huggingface/datasets/pull/7119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7119.patch", "merged_at": "2024-08-21T11:36:50" }
albertvillanova
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005156 / 0.011353 (-0.006197) | 0.003365 / 0.011008 (-0.007643) | 0.063451 / 0.038508 (0.024943) | 0.029510 / 0.023109 (0.006401) | 0.244825 / 0.275898 (-0.031074) | 0.265157 / 0.323480 (-0.058323) | 0.004239 / 0.007986 (-0.003747) | 0.002732 / 0.004328 (-0.001596) | 0.050412 / 0.004250 (0.046162) | 0.043608 / 0.037052 (0.006556) | 0.256635 / 0.258489 (-0.001854) | 0.277472 / 0.293841 (-0.016369) | 0.029329 / 0.128546 (-0.099217) | 0.012318 / 0.075646 (-0.063329) | 0.204751 / 0.419271 (-0.214520) | 0.036468 / 0.043533 (-0.007065) | 0.246773 / 0.255139 (-0.008366) | 0.263932 / 0.283200 (-0.019268) | 0.017053 / 0.141683 (-0.124629) | 1.173249 / 1.452155 (-0.278905) | 1.234186 / 1.492716 (-0.258531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092398 / 0.018006 (0.074391) | 0.309473 / 0.000490 (0.308983) | 0.000220 / 0.000200 (0.000020) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018553 / 0.037411 (-0.018858) | 0.062546 / 0.014526 (0.048020) | 0.073943 / 0.176557 (-0.102613) | 0.120498 / 0.737135 (-0.616638) | 0.075185 / 0.296338 (-0.221153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296899 / 0.215209 (0.081690) | 2.919088 / 2.077655 (0.841433) | 1.533146 / 1.504120 (0.029026) | 1.395441 / 1.541195 (-0.145754) | 1.399089 / 1.468490 (-0.069401) | 0.742750 / 4.584777 (-3.842027) | 2.390317 / 3.745712 (-1.355395) | 2.883166 / 5.269862 (-2.386695) | 1.854003 / 4.565676 (-2.711674) | 0.077140 / 0.424275 (-0.347136) | 0.005176 / 0.007607 (-0.002432) | 0.349391 / 0.226044 (0.123347) | 3.466043 / 2.268929 (1.197114) | 1.870619 / 55.444624 (-53.574005) | 1.559173 / 6.876477 (-5.317303) | 1.605480 / 2.142072 (-0.536592) | 0.786753 / 4.805227 (-4.018474) | 0.134869 / 6.500664 (-6.365795) | 0.042176 / 0.075469 (-0.033293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954256 / 1.841788 (-0.887532) | 11.194758 / 8.074308 (3.120449) | 9.129670 / 10.191392 (-1.061722) | 0.138318 / 0.680424 (-0.542106) | 0.014299 / 0.534201 (-0.519902) | 0.303704 / 0.579283 (-0.275579) | 0.262513 / 0.434364 (-0.171851) | 0.346539 / 0.540337 (-0.193798) | 0.429524 / 1.386936 (-0.957412) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005692 / 0.011353 (-0.005661) | 0.003423 / 0.011008 (-0.007586) | 0.050618 / 0.038508 (0.012110) | 0.031053 / 0.023109 (0.007944) | 0.275901 / 0.275898 (0.000003) | 0.294404 / 0.323480 (-0.029076) | 0.004303 / 0.007986 (-0.003682) | 0.002728 / 0.004328 (-0.001600) | 0.049757 / 0.004250 (0.045507) | 0.039997 / 0.037052 (0.002945) | 0.287291 / 0.258489 (0.028802) | 0.319186 / 0.293841 (0.025345) | 0.032558 / 0.128546 (-0.095988) | 0.012088 / 0.075646 (-0.063558) | 0.060746 / 0.419271 (-0.358525) | 0.034046 / 0.043533 (-0.009486) | 0.276170 / 0.255139 (0.021031) | 0.293673 / 0.283200 (0.010474) | 0.018018 / 0.141683 (-0.123665) | 1.158453 / 1.452155 (-0.293701) | 1.198599 / 1.492716 (-0.294118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093134 / 0.018006 (0.075127) | 0.304511 / 0.000490 (0.304021) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022991 / 0.037411 (-0.014421) | 0.077548 / 0.014526 (0.063022) | 0.087887 / 0.176557 (-0.088670) | 0.131786 / 0.737135 (-0.605349) | 0.088747 / 0.296338 (-0.207591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302811 / 0.215209 (0.087602) | 2.959276 / 2.077655 (0.881621) | 1.591348 / 1.504120 (0.087229) | 1.464731 / 1.541195 (-0.076464) | 1.474112 / 1.468490 (0.005622) | 0.741573 / 4.584777 (-3.843204) | 0.959229 / 3.745712 (-2.786483) | 2.895750 / 5.269862 (-2.374111) | 1.896051 / 4.565676 (-2.669625) | 0.079012 / 0.424275 (-0.345264) | 0.005494 / 0.007607 (-0.002113) | 0.355699 / 0.226044 (0.129655) | 3.524833 / 2.268929 (1.255905) | 1.972358 / 55.444624 (-53.472266) | 1.667249 / 6.876477 (-5.209228) | 1.658635 / 2.142072 (-0.483438) | 0.813184 / 4.805227 (-3.992044) | 0.134226 / 6.500664 (-6.366438) | 0.041087 / 0.075469 (-0.034382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038963 / 1.841788 (-0.802824) | 11.785835 / 8.074308 (3.711526) | 10.397027 / 10.191392 (0.205635) | 0.141748 / 0.680424 (-0.538676) | 0.014738 / 0.534201 (-0.519463) | 0.300056 / 0.579283 (-0.279227) | 0.127442 / 0.434364 (-0.306922) | 0.345013 / 0.540337 (-0.195324) | 0.449598 / 1.386936 (-0.937338) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70bac27ef861b2b11f581a291a6b76adeee24f98 \"CML watermark\")\n" ]
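A local approximation of what the updated numpy-2 CI job verifies, assuming `numpy>=2` and `transformers` are both installed:

```python
import numpy as np
import datasets
import transformers  # the dependency this PR adds to the numpy-2 CI

assert np.__version__.startswith("2."), "expected a NumPy 2.x environment"
ds = datasets.Dataset.from_dict({"x": np.arange(4)})
print(ds.with_format("numpy")["x"], transformers.__version__)
```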