Dataset columns:

| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (lengths) | 1 | 290 |
| body | string (lengths, nullable ⌀) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (lengths) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] (nullable ⌀) | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (lengths) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (lengths) | 0 | 0 |
2,425,460,168
7,067
Convert_to_parquet fails for datasets with multiple configs
If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error: ``` Traceback (most recent call last): File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run dataset.push_to_hub( File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f) Bad request: Invalid reference for a branch: refs/pr/1 ```
closed
https://github.com/huggingface/datasets/issues/7067
2024-07-23T15:09:33
2024-07-30T10:51:02
2024-07-30T10:51:02
{ "login": "HuangZhen02", "id": 97585031, "type": "User" }
[]
false
[]
2,425,125,160
7,066
One subset per file in repo ?
Right now we consider all the files of a dataset to be the same data, e.g. ``` single_subset_dataset/ ├── train0.jsonl ├── train1.jsonl └── train2.jsonl ``` but in cases like this, each file is actually a different subset of the dataset and should be loaded separately ``` many_subsets_dataset/ ├── animals.jsonl ├── trees.jsonl └── metadata.jsonl ``` It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits?
open
https://github.com/huggingface/datasets/issues/7066
2024-07-23T12:43:59
2025-06-26T08:24:50
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
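A minimal sketch of the digit-based grouping heuristic proposed in #7066 above; the function name and the exact grouping rule (collapse every digit run into a placeholder) are assumptions for illustration, not the library's actual data-file resolution logic.

```python
import re
from collections import defaultdict

def group_files_into_subsets(filenames):
    """Group files whose names differ only by runs of digits (hypothetical heuristic)."""
    groups = defaultdict(list)
    for name in filenames:
        # Build a grouping key by collapsing every digit run into a placeholder.
        key = re.sub(r"\d+", "*", name)
        groups[key].append(name)
    return dict(groups)

print(group_files_into_subsets(["train0.jsonl", "train1.jsonl", "train2.jsonl"]))
# {'train*.jsonl': ['train0.jsonl', 'train1.jsonl', 'train2.jsonl']} -> one subset
print(group_files_into_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"]))
# three distinct keys -> three subsets
```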
2,424,734,953
7,065
Cannot get item after loading from disk and then converting to iterable.
### Describe the bug The dataset generated from local file works fine. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` But after saving it to disk and then loading it from disk, I cannot get data as expected. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ds.save_to_disk("./train") ds = datasets.load_from_disk("./train") ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` After a long time waiting, an error occurs: ``` Loading dataset from disk: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 165/165 [00:00<00:00, 6422.18it/s] Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data data = self._data_queue.get(timeout=timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get if not self._poll(timeout): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll return self._poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll r = wait([self], timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait ready = selector.select(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select fd_event_list = self._selector.poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module> cli.main() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main run() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module> for batch in dataloader: File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__ data = self._next_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data idx, data = self._get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data success, data = self._try_get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly ``` It seems that streaming is not supported by `laod_from_disk`, so does that mean I cannot convert it to iterable? ### Steps to reproduce the bug 1. Create a `Dataset` from local files with `from_dict` 2. Save it to disk with `save_to_disk` 3. Load it from disk with `load_from_disk` 4. Convert to iterable with `to_iterable_dataset` 5. Loop the dataset ### Expected behavior Get items faster than the original dataset generated from dict. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.23.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
open
https://github.com/huggingface/datasets/issues/7065
2024-07-23T09:37:56
2024-07-23T09:37:56
null
{ "login": "happyTonakai", "id": 21305646, "type": "User" }
[]
false
[]
2,424,613,104
7,064
Add `batch` method to `Dataset` class
This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation likewise uses the existing `map` method for efficient batching of examples. Key changes: - Add `batch` method to `Dataset` class in `arrow_dataset.py` - Utilize `map` method for batching Closes #7063 Once the approach is approved, I will create the tests and update the documentation.
closed
https://github.com/huggingface/datasets/pull/7064
2024-07-23T08:40:43
2024-07-25T13:51:25
2024-07-25T13:45:20
{ "login": "lappemic", "id": 61876623, "type": "User" }
[]
true
[]
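A rough sketch of the map-based batching approach described in #7064 above, using only public `Dataset` APIs; it illustrates the idea rather than reproducing the merged implementation.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

def make_batches(examples):
    # With batched=True, `examples` holds up to `batch_size` rows per call;
    # wrapping each column in an extra list turns that chunk into a single output row.
    return {k: [v] for k, v in examples.items()}

batched = ds.map(make_batches, batched=True, batch_size=4)
print(batched[0]["x"])  # [0, 1, 2, 3]
print(batched[2]["x"])  # [8, 9] (the last, smaller batch)
```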
2,424,488,648
7,063
Add `batch` method to `Dataset`
### Feature request Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054. ### Motivation A batched iteration speeds up data loading significantly (see e.g. #6279) ### Your contribution I plan to open a PR to implement this.
closed
https://github.com/huggingface/datasets/issues/7063
2024-07-23T07:36:59
2024-07-25T13:45:21
2024-07-25T13:45:21
{ "login": "lappemic", "id": 61876623, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,424,467,484
7,062
Avoid calling http_head for non-HTTP URLs
Avoid calling `http_head` for non-HTTP URLs, by adding an `else` statement. Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,... I discovered this while working on an unrelated issue.
closed
https://github.com/huggingface/datasets/pull/7062
2024-07-23T07:25:09
2024-07-23T14:28:27
2024-07-23T14:21:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,423,786,881
7,061
Custom Dataset | Still Raise Error while handling errors in _generate_examples
### Describe the bug I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution. ``` def _generate_examples(self, filepaths): errors=[] id_ = 0 for filepath in filepaths: try: with open(filepath, 'r') as f: for line in f: json_obj = json.loads(line) yield id_, json_obj id_ += 1 except Exception as exc: logger.error(f"error occur at filepath: {filepath}") errors.append(error) ``` seems the logger.error is printed but still exception is raised the the run is exit. ``` Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841 ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl Traceback (most recent call last): File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples json_obj = json.loads(line) File "myenv/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "myenv/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3) Generating train split: 0 examples [00:06, ? examples/s]> RemoteTraceback: """ Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single num_examples, num_bytes = writer.finalize() File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize raise SchemaInferenceError("Please pass `features` or at least one example when writing data") datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: โ”‚ โ”‚ โ”‚ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. 
โ”‚ โ”‚ py:1377 in <listcomp> โ”‚ โ”‚ โ”‚ โ”‚ 1374 โ”‚ โ”‚ โ”‚ โ”‚ if all(async_result.ready() for async_result in async_results) and queue โ”‚ โ”‚ 1375 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ break โ”‚ โ”‚ 1376 โ”‚ โ”‚ # we get the result in case there's an error to raise โ”‚ โ”‚ โฑ 1377 โ”‚ โ”‚ [async_result.get() for async_result in async_results] โ”‚ โ”‚ 1378 โ”‚ โ”‚ โ”‚ โ”‚ โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ locals โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ .0 = <list_iterator object at 0x7f2cc1f0ce20> โ”‚ โ”‚ โ”‚ โ”‚ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ โ”‚ โ”‚ โ”‚ โ”‚ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 โ”‚ โ”‚ in get โ”‚ โ”‚ โ”‚ โ”‚ 768 โ”‚ โ”‚ if self._success: โ”‚ โ”‚ 769 โ”‚ โ”‚ โ”‚ return self._value โ”‚ โ”‚ 770 โ”‚ โ”‚ else: โ”‚ โ”‚ โฑ 771 โ”‚ โ”‚ โ”‚ raise self._value โ”‚ โ”‚ 772 โ”‚ โ”‚ โ”‚ 773 โ”‚ def _set(self, i, obj): โ”‚ โ”‚ 774 โ”‚ โ”‚ self._success, self._value = obj โ”‚ โ”‚ โ”‚ โ”‚ โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ locals โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ โ”‚ โ”‚ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> โ”‚ โ”‚ โ”‚ โ”‚ timeout = None โ”‚ โ”‚ โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ โ”‚ DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug same as above ### Expected behavior should handle error and continue reading remaining files ### Environment info python 3.9
open
https://github.com/huggingface/datasets/issues/7061
2024-07-22T21:18:12
2024-09-09T14:48:07
null
{ "login": "hahmad2008", "id": 68266028, "type": "User" }
[]
false
[]
2,423,188,419
7,060
WebDataset BuilderConfig
This PR adds `WebDatasetConfig`. Closes #7055
closed
https://github.com/huggingface/datasets/pull/7060
2024-07-22T15:41:07
2024-07-23T13:28:44
2024-07-23T13:28:44
{ "login": "hlky", "id": 106811348, "type": "User" }
[]
true
[]
2,422,827,892
7,059
None values are skipped when reading jsonl in subobjects
### Describe the bug I have been fighting against my machine since this morning only to find out this is some kind of a bug. When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around. E.g., let's take this example. Here are two versions of the same dataset: [not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz) [buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz) ### Steps to reproduce the bug 1. Load the `buggy.tar.gz` dataset 2. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines"]` 3. Load the `not-buggy.tar.gz` dataset 4. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines"]` ### Expected behavior Both should have 4 baseline entries: 1. Buggy should have None followed by three lists 2. Non-Buggy should have four lists, and the first one should be an empty list. Case 1 does not work, case 2 works, despite accepting None in a position other than the first one. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
open
https://github.com/huggingface/datasets/issues/7059
2024-07-22T13:02:42
2024-07-22T13:02:53
null
{ "login": "PonteIneptique", "id": 1929830, "type": "User" }
[]
false
[]
2,422,560,355
7,058
New feature type: Document
It would be useful for PDF. https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
open
https://github.com/huggingface/datasets/issues/7058
2024-07-22T10:49:20
2024-07-22T10:49:20
null
{ "login": "severo", "id": 1676121, "type": "User" }
[]
false
[]
2,422,498,520
7,057
Update load_hub.mdx
null
closed
https://github.com/huggingface/datasets/pull/7057
2024-07-22T10:17:46
2024-07-22T10:34:14
2024-07-22T10:28:10
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
2,422,192,257
7,056
Make `BufferShuffledExamplesIterable` resumable
This PR aims to implement a resumable `BufferShuffledExamplesIterable`. Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first example in the buffer dict. The idea is that since the buffer size is limited, even if the entire buffer is discarded, we can rebuild it as long as the state of the oldest example is recorded. For buffer size $B$, the expected distance between when an example is pushed and when it is yielded is $d = \sum_{k=1}^{\infty} k\frac{1}{B} (1 - \frac{1}{B} )^{k-1} =B$. Simulation experiments support these claims: ```py from random import randint BUFFER_SIZE = 1024 dists = [] buffer = [] for i in range(10000000): if i < BUFFER_SIZE: buffer.append(i) else: index = randint(0, BUFFER_SIZE - 1) dists.append(i - buffer[index]) buffer[index] = i print(f"MIN DIST: {min(dists)}\nMAX DIST: {max(dists)}\nAVG DIST: {sum(dists) / len(dists):.2f}\n") ``` which produces the following output: ```py MIN DIST: 1 MAX DIST: 15136 AVG DIST: 1023.95 ``` The overall time for reconstructing the buffer and recovery should not be too long. The following code mimics the cases of resuming online tokenization by `datasets` and `StatefulDataLoader` under distributed scenarios, ```py import pickle import time from itertools import chain from typing import Any, Dict, List import torch from datasets import load_dataset from torchdata.stateful_dataloader import StatefulDataLoader from tqdm import tqdm from transformers import AutoTokenizer, DataCollatorForLanguageModeling tokenizer = AutoTokenizer.from_pretrained('fla-hub/gla-1.3B-100B') tokenizer.pad_token = tokenizer.eos_token data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) torch.manual_seed(42) def tokenize(examples: Dict[str, List[Any]]) -> Dict[str, List[List[int]]]: input_ids = tokenizer(examples['text'])['input_ids'] input_ids = list(chain(*input_ids)) total_length = len(input_ids) chunk_size = 2048 total_length = (total_length // chunk_size) * chunk_size # the last chunk smaller than chunk_size will be discarded return {'input_ids': [input_ids[i: i+chunk_size] for i in range(0, total_length, chunk_size)]} batch_size = 16 num_workers = 5 context_length = 2048 rank = 1 world_size = 32 prefetch_factor = 2 steps = 2048 path = 'fla-hub/slimpajama-test' dataset = load_dataset( path=path, split='train', streaming=True, trust_remote_code=True ) dataset = dataset.map(tokenize, batched=True, remove_columns=next(iter(dataset)).keys()) dataset = dataset.shuffle(seed=42) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) start = time.time() for i, batch in tqdm(enumerate(loader)): if i == 0: print(f'{i}\n{batch["input_ids"]}') if i == steps - 1: print(f'{i}\n{batch["input_ids"]}') state_dict = loader.state_dict() if i == steps: print(f'{i}\n{batch["input_ids"]}') break print(f"{time.time() - start:.2f}s elapsed") print(f"{len(pickle.dumps(state_dict)) / 1024**2:.2f}MB states in total") for worker in state_dict['_snapshot']['_worker_snapshots'].keys(): print(f"{worker} {len(pickle.dumps(state_dict['_snapshot']['_worker_snapshots'][worker])) / 1024**2:.2f}MB") print(state_dict['_snapshot']['_worker_snapshots']['worker_0']['dataset_state']) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, 
collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) print("Loading state dict") loader.load_state_dict(state_dict) start = time.time() for batch in loader: print(batch['input_ids']) break print(f"{time.time() - start:.2f}s elapsed") ``` and the outputs are ```py 0 tensor([[ 909, 395, 19082, ..., 13088, 16232, 395], [ 601, 28705, 28770, ..., 28733, 923, 288], [21753, 15071, 13977, ..., 9369, 28723, 415], ..., [21763, 28751, 20300, ..., 28781, 28734, 4775], [ 354, 396, 10214, ..., 298, 429, 28770], [ 333, 6149, 28768, ..., 2773, 340, 351]]) 2047 tensor([[28723, 415, 3889, ..., 272, 3065, 2609], [ 403, 3214, 3629, ..., 403, 21163, 16434], [28723, 13, 28749, ..., 28705, 28750, 28734], ..., [ 2778, 2251, 28723, ..., 354, 684, 429], [ 5659, 298, 1038, ..., 5290, 297, 22153], [ 938, 28723, 1537, ..., 9123, 28733, 12154]]) 2048 tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 65.97s elapsed 0.00MB states in total worker_0 0.00MB worker_1 0.00MB worker_2 0.00MB worker_3 0.00MB worker_4 0.00MB {'ex_iterable': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 14000}, 'num_examples_since_previous_state': 166, 'previous_state_example_idx': 7394, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 13000}}, 'num_taken': 6560, 'global_example_idx': 7560, 'buffer_state_dict': {'num_taken': 6560, 'global_example_idx': 356, 'index_offset': 0, 'first_state': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 1000}, 'num_examples_since_previous_state': 356, 'previous_state_example_idx': 0, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0}}, 'bit_generator_state': {'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'bit_generator': 'PCG64', 'has_uint32': 0, 'uinteger': 0}}} Loading state dict tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 24.60s elapsed ``` Not sure if this PR complies with the `datasets` code style. Looking for your help @lhoestq, also very willing to further improve the code if any suggestions are given.
closed
https://github.com/huggingface/datasets/pull/7056
2024-07-22T07:50:02
2025-01-31T05:34:20
2025-01-31T05:34:19
{ "login": "yzhangcs", "id": 18402347, "type": "User" }
[]
true
[]
2,421,708,891
7,055
WebDataset with different prefixes are unsupported
### Describe the bug Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k) Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given. ``` The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types. ``` The purpose of this check is unclear because PyArrow supports different keys. Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset. ``` >>> from datasets import load_dataset >>> path = "shards/*.tar" >>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True) Resolving data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 152/152 [00:00<00:00, 56458.93it/s] >>> dataset IterableDataset({ features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'], n_shards: 152 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("bigdata-pw/fashion-150k") ``` ### Expected behavior Dataset loads without error ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.19 - `huggingface_hub` version: 0.23.4 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
closed
https://github.com/huggingface/datasets/issues/7055
2024-07-22T01:14:19
2024-07-24T13:26:30
2024-07-23T13:28:46
{ "login": "hlky", "id": 106811348, "type": "User" }
[]
false
[]
2,418,548,995
7,054
Add batching to `IterableDataset`
I've taken a try at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class. The main changes are: 1. A new `BatchedExamplesIterable` that groups examples into batches. 2. A `.batch()` method for `IterableDataset` to easily create batched versions. 3. Support for shuffling and sharding to work with PyTorch DataLoader and multiple workers. I'm not sure if this is exactly what you had in mind and also have not fully tested it atm, so I'd really appreciate your feedback. Does this seem like it's heading in the right direction? I'm happy to make any changes or explore different approaches if needed. Pinging @lhoestq
closed
https://github.com/huggingface/datasets/pull/7054
2024-07-19T10:11:47
2024-07-23T13:25:13
2024-07-23T10:34:28
{ "login": "lappemic", "id": 61876623, "type": "User" }
[]
true
[]
2,416,423,791
7,053
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
### Describe the bug In data_files.py, line 332, `fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)` If we run the code on AWS, fs.protocol will be a tuple like `('file', 'local')`. So `isinstance(fs.protocol, str) == False`, and `protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise `TypeError: can only concatenate tuple (not "str") to tuple`. ### Steps to reproduce the bug Steps to reproduce: 1. Run on a cloud server like AWS, 2. `import datasets.data_files as datafile` 3. datafile.resolve_pattern('path/to/dataset', '.') 4. `TypeError: can only concatenate tuple (not "str") to tuple` ### Expected behavior Should return the path of the dataset, with fs.protocol at the beginning ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.19 - Huggingface_hub version: 0.23.5 - PyArrow version: 16.1.0 - Pandas version: 1.1.5
closed
https://github.com/huggingface/datasets/issues/7053
2024-07-18T13:42:35
2024-07-18T15:17:42
2024-07-18T15:16:18
{ "login": "MatthewYZhang", "id": 48289218, "type": "User" }
[]
false
[]
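A small sketch of the kind of defensive handling #7053 above asks for, since fsspec filesystems may expose `protocol` as either a string or a tuple; this is illustrative, not the exact patch applied to `data_files.py`.

```python
import fsspec

fs = fsspec.filesystem("file")
# Depending on the fsspec implementation, fs.protocol is a string or a tuple such as ('file', 'local').
protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
protocol_prefix = protocol + "://" if protocol != "file" else ""
print(repr(protocol_prefix))  # '' for the local filesystem, and no TypeError
```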
2,411,682,730
7,052
Adding `Music` feature for symbolic music modality (MIDI, abc)
โš ๏ธ (WIP) โš ๏ธ ### What this PR does This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files. ###ย Motivations These two file formats are widely used in the [Music Information Retrieval (MIR)](https://en.wikipedia.org/wiki/Music_information_retrieval) for tasks such as music generation, music transcription, music synthesis or music transcription. Having a dedicated feature in the datasets library would allow to both encourage researchers to share datasets of this modality as well as making them more easily usable for end users, benefitting from the perks of the library. These file formats are supported by [symusic](https://github.com/Yikai-Liao/symusic), a lightweight Python library with C bindings (using nanobind) allowing to efficiently read, write and manipulate them. The library is actively developed, and can in the future also implement other file formats such as [musicXML](https://en.wikipedia.org/wiki/MusicXML). As such, this PR relies on it. The music data can then easily be tokenized with appropriate tokenizers such as [MidiTok](https://github.com/Natooz/MidiTok) or converted to pianorolls matrices by symusic. **Jul 16th 2024:** * the tests for the `Music` feature are currently failing due to non-supported access to the LazyBatch in `test_dataset_with_music_feature_map` and `test_dataset_with_music_feature_map_resample_music` (see TODOs). I am a beginner with pyArrow, I'll take any advice to make this work; * additional tests including the `Music` feature with parquet and WebDataset should be implemented. As of right now, I am waiting for your feedback before taking further steps; * a `MusicFolder` should also be implemented to comply with the usages of the `Image` and `Audio` features, waiting for your feedback too. CCing @lhoestq and @albertvillanova
closed
https://github.com/huggingface/datasets/pull/7052
2024-07-16T17:26:04
2024-07-29T06:47:55
2024-07-29T06:47:55
{ "login": "Natooz", "id": 56734983, "type": "User" }
[]
true
[]
2,409,353,929
7,051
How to set_epoch with interleave_datasets?
Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples. I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch) Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start. How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset... Something like ``` dataset_a = load_dataset(...) dataset_b = load_dataset(...) def epoch_shuffled_dataset(ds): # How to make this maintain the number of shards in ds?? for epoch in itertools.count(): ds.set_epoch(epoch) yield from iter(ds) shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a}) interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted') ```
closed
https://github.com/huggingface/datasets/issues/7051
2024-07-15T18:24:52
2024-08-05T20:58:04
2024-08-05T20:58:04
{ "login": "jonathanasdf", "id": 511073, "type": "User" }
[]
false
[]
2,409,048,733
7,050
add checkpoint and resume title in docs
(minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata
closed
https://github.com/huggingface/datasets/pull/7050
2024-07-15T15:38:04
2024-07-15T16:06:15
2024-07-15T15:59:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,408,514,366
7,049
Save nparray as list
### Describe the bug When I use the `map` function to convert images into features, datasets saves nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision? ### Steps to reproduce the bug the map function ```python def convert_image_to_features(inst, processor, image_dir): image_file = inst["image_url"] file = image_file.split("/")[-1] image_path = os.path.join(image_dir, file) image = Image.open(image_path) image = image.convert("RGBA") inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"] return inst ``` main function ```python map_fun = partial( convert_image_to_features, processor=processor, image_dir=image_dir ) ds = ds.map(map_fun, batched=False, num_proc=20) print(type(ds[0]["pixel_values"]) ``` ### Expected behavior (type < list>) ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.23.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
closed
https://github.com/huggingface/datasets/issues/7049
2024-07-15T11:36:11
2024-07-18T11:33:34
2024-07-18T11:33:34
{ "login": "Sakurakdx", "id": 48399040, "type": "User" }
[]
false
[]
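For #7049 above, a short sketch of the `set_format` workaround mentioned in the report: Arrow stores the column as nested lists and the NumPy format converts it back on access. The column name and array shapes are taken from the issue purely for illustration.

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict(
    {"pixel_values": [np.random.rand(3, 4).astype("float32") for _ in range(2)]}
)
print(type(ds[0]["pixel_values"]))  # <class 'list'>: Arrow stores nested lists

ds.set_format(type="numpy", columns=["pixel_values"])
print(type(ds[0]["pixel_values"]))  # <class 'numpy.ndarray'>
print(ds[0]["pixel_values"].dtype)  # float32, the stored precision is preserved
```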
2,408,487,547
7,048
ImportError: numpy.core.multiarray when using `filter`
### Describe the bug I can't apply the filter method on my dataset. ### Steps to reproduce the bug The following snippet generates a bug: ```python from datasets import load_dataset ami = load_dataset('kamilakesbi/ami', 'ihm') ami['train'].filter( lambda example: example["file_name"] == 'EN2001a' ) ``` I get the following error: `ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).` ### Expected behavior It should work properly! ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
closed
https://github.com/huggingface/datasets/issues/7048
2024-07-15T11:21:04
2024-07-16T10:11:25
2024-07-16T10:11:25
{ "login": "kamilakesbi", "id": 45195979, "type": "User" }
[]
false
[]
2,406,495,084
7,047
Save Dataset as Sharded Parquet
### Feature request `to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically. ### Motivation This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet. ### Your contribution I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158 to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle.
open
https://github.com/huggingface/datasets/issues/7047
2024-07-12T23:47:51
2024-07-17T12:07:08
null
{ "login": "tom-p-reichel", "id": 43631024, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
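Until sharded output is built into `to_parquet`, the behaviour requested in #7047 can be approximated by writing shard by shard with `Dataset.shard`; a minimal sketch in which the shard count and file name pattern are arbitrary choices.

```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1_000_000))})

os.makedirs("out", exist_ok=True)
num_shards = 8  # pick so each shard stays comfortably below memory limits
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    shard.to_parquet(f"out/data-{index:05d}-of-{num_shards:05d}.parquet")
```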
2,405,485,582
7,046
Support librosa and numpy 2.0 for Python 3.10
Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release: - https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1 - https://github.com/dofuuz/python-soxr/issues/28
closed
https://github.com/huggingface/datasets/pull/7046
2024-07-12T12:42:47
2024-07-12T13:04:40
2024-07-12T12:58:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,405,447,858
7,045
Fix tensorflow min version depending on Python version
Fix tensorflow min version depending on Python version. Related to: - #6991
closed
https://github.com/huggingface/datasets/pull/7045
2024-07-12T12:20:23
2024-07-12T12:38:53
2024-07-12T12:33:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,405,002,987
7,044
Mark tests that require librosa
Mark tests that require `librosa`. Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`: - https://github.com/dofuuz/python-soxr/issues/28
closed
https://github.com/huggingface/datasets/pull/7044
2024-07-12T08:06:59
2024-07-12T09:06:32
2024-07-12T09:00:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,404,951,714
7,043
Add decorator as explicit test dependency
Add decorator as an explicit test dependency. We have used the `decorator` library in our CI tests since PR: - #4845 However, we did not add it as an explicit test requirement, and we depended on it indirectly through other libraries' dependencies. I discovered this while testing Numpy 2.0 and removing incompatible libraries.
closed
https://github.com/huggingface/datasets/pull/7043
2024-07-12T07:35:23
2024-07-12T08:12:55
2024-07-12T08:07:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,404,605,836
7,042
Improved the tutorial by adding a link for loading datasets
Improved the tutorial by letting readers know about loading datasets with common files and including a link. I left the local files section alone because the methods were already listed with code snippets.
closed
https://github.com/huggingface/datasets/pull/7042
2024-07-12T03:49:54
2024-08-15T10:07:44
2024-08-15T10:01:59
{ "login": "AmboThom", "id": 41874659, "type": "User" }
[]
true
[]
2,404,576,038
7,041
`sort` after `filter` unreasonably slow
### Describe the bug as the title says ... ### Steps to reproduce the bug `sort` on its own seems normal. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) print("start sort") ds = ds.sort("k") print("finish sort") ``` but `sort` after `filter` is extremely slow. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) ds = ds.filter(lambda x:x > 100, input_columns="k") print("start sort") ds = ds.sort("k") print("finish sort") ``` ### Expected behavior Is this a bug, or is it a misuse of the `sort` function? ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
closed
https://github.com/huggingface/datasets/issues/7041
2024-07-12T03:29:27
2025-04-29T09:49:25
2025-04-29T09:49:25
{ "login": "Tobin-rgb", "id": 56711045, "type": "User" }
[]
false
[]
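A likely explanation for #7041 above is that `filter` leaves an indices mapping over the original table, which later operations have to resolve row by row; materializing it with `flatten_indices()` before sorting is one possible workaround. A sketch, not a guaranteed fix.

```python
import random
from datasets import Dataset

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")

# Rewrite the filtered rows into a contiguous table so that sort no longer
# has to go through the indices mapping created by filter.
ds = ds.flatten_indices()
ds = ds.sort("k")
```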
2,402,918,335
7,040
load `streaming=True` dataset with downloaded cache
### Describe the bug We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes more disk space. So we hope to try streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 file descriptor. So we use `fsspec` as an interface like below: ```python def _generate_examples(self, filepath, split): for file in filepath: with fsspec.open(file, "rb") as fs: with h5py.File(fs, "r") as fp: # for event_id in sorted(list(fp.keys())): event_ids = list(fp.keys()) ...... ``` ### Steps to reproduce the bug The `fsspec` works, but it takes 10+ min to print the first 10 examples, which is even longer than the downloading time. I'm not sure if it just caches the whole hdf5 file and generates the examples. ### Expected behavior So does the following make sense so far? 1. download the files ```python dataset = datasets.load('path/to/myscripts', split="train", name="event", trust_remote_code=True) ``` 2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`) ```python dataset = datasets.load('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=true) ``` I made some tests, but the code above can't get the expected result. I'm not sure if this is supported. I also find the issue #6327 . It seemed similar to mine, but I couldn't find a solution. ### Environment info - `datasets` = 2.18.0 - `h5py` = 3.10.0 - `fsspec` = 2023.10.0
open
https://github.com/huggingface/datasets/issues/7040
2024-07-11T11:14:13
2024-07-11T14:11:56
null
{ "login": "wanghaoyucn", "id": 39429965, "type": "User" }
[]
false
[]
2,402,403,390
7,039
Fix export to JSON when dataset larger than batch size
Fix export to JSON (`lines=False`) when dataset larger than batch size. Fix #7037.
open
https://github.com/huggingface/datasets/pull/7039
2024-07-11T06:52:22
2024-09-28T06:10:00
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,400,192,419
7,037
A bug of Dataset.to_json() function
### Describe the bug When using the Dataset.to_json() function, an unexpected error occurs if the parameter is set to lines=False. The stored data should be in the form of a list, but it actually turns into multiple lists, which causes an error when reading the data again. The reason is that to_json() writes to the file in several segments based on the batch size. This is not a problem when lines=True, but it is incorrect when lines=False, because writing in several passes produces multiple lists (when len(dataset) > batch_size). ### Steps to reproduce the bug Try this code: ```python from datasets import load_dataset import json train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"] output_path = "./harmless-base_hftojs.json" print(len(train_dataset)) train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2) with open(output_path, encoding="utf-8") as f: data = json.loads(f.read()) ``` It raises an error: json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709) Extra square brackets have appeared here: <img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc"> ### Expected behavior The code runs normally. ### Environment info datasets=2.20.0
open
https://github.com/huggingface/datasets/issues/7037
2024-07-10T09:11:22
2024-09-22T13:16:07
null
{ "login": "LinglingGreat", "id": 26499566, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
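Until the fix proposed in #7039 lands, one way around the bug in #7037 is to dump the rows as a single JSON array yourself; a simple sketch that assumes the dataset fits in memory.

```python
import json
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"example {i}" for i in range(5)]})

with open("out.json", "w", encoding="utf-8") as f:
    json.dump(list(ds), f, ensure_ascii=False, indent=2)  # one valid top-level list

with open("out.json", encoding="utf-8") as f:
    assert len(json.load(f)) == len(ds)  # parses back without "Extra data" errors
```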
2,400,035,672
7,036
Fix doc generation when NamedSplit is used as parameter default value
Fix doc generation when `NamedSplit` is used as parameter default value. Fix #7035.
closed
https://github.com/huggingface/datasets/pull/7036
2024-07-10T07:58:46
2024-07-26T07:58:00
2024-07-26T07:51:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,400,021,225
7,035
Docs are not generated when a parameter defaults to a NamedSplit value
While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like: ```python def call_function(split=Split.TRAIN): ... ``` The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'> See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015 ``` Building the MDX files: 97%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–‹| 58/60 [00:00<00:00, 91.94it/s] Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files content, new_anchors, source_files, errors = resolve_autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc doc = autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc method_doc, check = document_object( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object signature = format_signature(obj) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature if param.default != inspect._empty: File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__ return not self.__eq__(other) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__ raise ValueError(f"Equality not supported between split {self} and {other}") ValueError: Equality not supported between split train and <class 'inspect._empty'> The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command build_doc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc anchors_mapping, source_files_mapping = build_mdx_files( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Equality not supported between split train and <class 'inspect._empty'> ```
closed
https://github.com/huggingface/datasets/issues/7035
2024-07-10T07:51:24
2024-07-26T07:51:53
2024-07-26T07:51:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,397,525,974
7,034
chore: fix typos in docs
null
closed
https://github.com/huggingface/datasets/pull/7034
2024-07-09T08:35:05
2024-08-13T08:22:25
2024-08-13T08:16:22
{ "login": "hattizai", "id": 150505746, "type": "User" }
[]
true
[]
2,397,419,768
7,033
`from_generator` does not allow to specify the split name
### Describe the bug I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:` It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py ### Steps to reproduce the bug ``` In [1]: from datasets import Dataset In [2]: def gen(): ...: yield {"pokemon": "bulbasaur", "type": "grass"} ...: In [3]: ds = Dataset.from_generator(gen) Generating train split: 1 examples [00:00, 133.89 examples/s] ``` ### Expected behavior It should be possible to specify any split name ### Environment info - `datasets` version: 2.19.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - `huggingface_hub` version: 0.23.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
closed
https://github.com/huggingface/datasets/issues/7033
2024-07-09T07:47:58
2024-07-26T12:56:16
2024-07-26T09:31:56
{ "login": "pminervini", "id": 227357, "type": "User" }
[]
false
[]
2,395,531,699
7,032
Register `.zstd` extension for zstd-compressed files
For example, https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have `.zstd` extension which is currently ignored (only `.zst` is registered).
closed
https://github.com/huggingface/datasets/pull/7032
2024-07-08T12:39:50
2024-07-12T15:07:03
2024-07-12T15:07:03
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
2,395,401,692
7,031
CI quality is broken: use ruff check instead
CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027 ``` error: `ruff <path>` has been removed. Use `ruff check <path>` instead. ```
closed
https://github.com/huggingface/datasets/issues/7031
2024-07-08T11:42:24
2024-07-08T11:47:29
2024-07-08T11:47:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,393,411,631
7,030
Add option to disable progress bar when reading a dataset ("Loading dataset from disk")
### Feature request Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16. ### Motivation I am reading a lot of datasets that it creates lots of logs. <img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a"> ### Your contribution Seems like an easy fix to make. I can create a PR if necessary.
closed
https://github.com/huggingface/datasets/issues/7030
2024-07-06T05:43:37
2024-07-13T14:35:59
2024-07-13T14:35:59
{ "login": "yuvalkirstain", "id": 57996478, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
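The behaviour requested in #7030 can be approximated by turning progress bars off globally; a sketch assuming a recent `datasets` release that exposes `disable_progress_bars`/`enable_progress_bars` at the top level, with a placeholder path.

```python
import datasets

datasets.disable_progress_bars()                 # hides "Loading dataset from disk" bars
ds = datasets.load_from_disk("path/to/dataset")  # hypothetical path
datasets.enable_progress_bars()                  # restore the default behaviour afterwards
```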
2,391,366,696
7,029
load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error
### Describe the bug I'm using AWS lambda to run a Python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF env variables to point to the /tmp dir but the issue still persists. I can confirm that I can write to the /tmp directory. ### Steps to reproduce the bug ```python d = load_dataset( path=hugging_face_link, split=split, token=token, cache_dir="/tmp/hugging_face_cache", ) ``` ### Expected behavior Everything written to the file system as part of the load_dataset function should be in the /tmp directory. ### Environment info datasets version: 2.16.1 Platform: Linux-5.10.216-225.855.amzn2.x86_64-x86_64-with-glibc2.26 Python version: 3.11.9 huggingface_hub version: 0.19.4 PyArrow version: 16.1.0 Pandas version: 2.2.2 fsspec version: 2023.10.0
open
https://github.com/huggingface/datasets/issues/7029
2024-07-04T19:15:16
2024-07-17T12:44:03
null
{ "login": "sugam-nexusflow", "id": 171606538, "type": "User" }
[]
false
[]
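For the read-only filesystem error in #7029, a common pattern on AWS Lambda is to point every Hugging Face cache location at /tmp before importing the libraries; a sketch under the assumption that /tmp is the only writable path, with a placeholder dataset name.

```python
import os

# Must be set before importing datasets / huggingface_hub so the paths are picked up.
os.environ["HF_HOME"] = "/tmp/hf"
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf/datasets"

from datasets import load_dataset

ds = load_dataset("some/dataset", split="train", cache_dir="/tmp/hf/datasets")
```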
2,391,077,531
7,028
Fix ci
...after last pr errors
closed
https://github.com/huggingface/datasets/pull/7028
2024-07-04T15:11:08
2024-07-04T15:26:35
2024-07-04T15:19:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,391,013,330
7,027
Missing line from previous pr
null
closed
https://github.com/huggingface/datasets/pull/7027
2024-07-04T14:34:29
2024-07-04T14:40:46
2024-07-04T14:34:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,390,983,889
7,026
Fix check_library_imports
Move it to after the `trust_remote_code` check. Note that it only affects local datasets that already exist on disk, not datasets loaded from HF directly.
closed
https://github.com/huggingface/datasets/pull/7026
2024-07-04T14:18:38
2024-07-04T14:28:36
2024-07-04T14:20:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,390,488,546
7,025
feat: support non streamable arrow file binary format
Support Arrow files (`.arrow`) that are in non streamable binary file formats.
closed
https://github.com/huggingface/datasets/pull/7025
2024-07-04T10:11:12
2024-07-31T06:15:50
2024-07-31T06:09:31
{ "login": "kmehant", "id": 15800200, "type": "User" }
[]
true
[]
2,390,141,626
7,024
Streaming dataset not returning data
### Describe the bug I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets wrongly. I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset. However, I'm doing some data preprocessing steps (filtering out entries), when I try to swap out the dataset for mine, it fails to train. However, I eventually fixed this by simply setting `stream=False` in `load_dataset`. Coud this be some sort of network / firewall issue I'm facing? ### Steps to reproduce the bug I made a post with greater description about how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551 Here is the problematic dataset snippet, which works when streaming=False (and with buffer keyword removed from shuffle) ``` commitpackft = load_dataset( "chargoddard/commitpack-ft-instruct", split="train", streaming=True ).filter(lambda example: example["language"] == "Python") def form_template(example): """Forms a template for each example following the alpaca format for CommitPack""" example["content"] = ( "### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"] ) return example dataset = commitpackft.map( form_template, remove_columns=["id", "language", "license", "instruction", "input", "output"], ).shuffle( seed=42, buffer_size=10000 ) # remove everything since its all inside "content" now validation_data = dataset.take(4000) train_data = dataset.skip(4000) ``` The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation. ### Expected behavior The expected behavior is that I should be able to get something from the iterator when called instead of getting nothing / stuck in a loop somewhere. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.11.7 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
open
https://github.com/huggingface/datasets/issues/7024
2024-07-04T07:21:47
2024-07-04T07:21:47
null
{ "login": "johnwee1", "id": 91670254, "type": "User" }
[]
false
[]
2,388,090,424
7,023
Remove dead code for pyarrow < 15.0.0
Remove dead code for pyarrow < 15.0.0. Code is dead since the merge of: - #6892 Fix #7022.
closed
https://github.com/huggingface/datasets/pull/7023
2024-07-03T09:05:03
2024-07-03T09:24:46
2024-07-03T09:17:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,388,064,650
7,022
There is dead code after we require pyarrow >= 15.0.0
There are code lines specific for pyarrow versions < 15.0.0. However, we require pyarrow >= 15.0.0 since the merge of PR: - #6892 Those code lines are now dead code and should be removed.
closed
https://github.com/huggingface/datasets/issues/7022
2024-07-03T08:52:57
2024-07-03T09:17:36
2024-07-03T09:17:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,387,948,935
7,021
Fix casting list array to fixed size list
Fix casting list array to fixed size list. This bug was introduced in [datasets-2.17.0](https://github.com/huggingface/datasets/releases/tag/2.17.0) by PR: https://github.com/huggingface/datasets/pull/6283/files#diff-1cb2b66aa9311d729cfd83013dad56cf5afcda35b39dfd0bfe9c3813a049eab0R1899 - #6283 Fix #7020.
closed
https://github.com/huggingface/datasets/pull/7021
2024-07-03T07:58:57
2024-07-03T08:47:49
2024-07-03T08:41:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,387,940,990
7,020
Casting list array to fixed size list raises error
When trying to cast a list array to fixed size list, an AttributeError is raised: > AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' Steps to reproduce the bug: ```python import pyarrow as pa from datasets.table import array_cast arr = pa.array([[0, 1]]) array_cast(arr, pa.list_(pa.int64(), 2)) ``` Stack trace: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-6cb90a1d8216> in <module> 3 4 arr = pa.array([[0, 1]]) ----> 5 array_cast(arr, pa.list_(pa.int64(), 2)) ~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1803 else: -> 1804 return func(array, *args, **kwargs) 1805 1806 return wrapper ~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str) 1920 else: 1921 array_values = array.values[ -> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length 1923 ] 1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size) AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' ```
closed
https://github.com/huggingface/datasets/issues/7020
2024-07-03T07:54:49
2024-07-03T08:41:56
2024-07-03T08:41:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,385,793,897
7,019
Support pyarrow large_list
Allow Polars round trip by supporting pyarrow large list. Fix #6834, fix #6984. Supersede and close #4800, close #6835, close #6986.
closed
https://github.com/huggingface/datasets/pull/7019
2024-07-02T09:52:52
2024-08-12T14:49:45
2024-08-12T14:43:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,383,700,286
7,018
`load_dataset` fails to load dataset saved by `save_to_disk`
### Describe the bug This code fails to load the dataset it just saved: ```python from datasets import load_dataset from transformers import AutoTokenizer MODEL = "google-bert/bert-base-cased" tokenizer = AutoTokenizer.from_pretrained(MODEL) dataset = load_dataset("yelp_review_full") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets.save_to_disk("dataset") tokenized_datasets = load_dataset("dataset/") # raises ``` It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`. I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON: ```shell $ ls -l dataset/test -rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow -rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json -rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json ``` ### Steps to reproduce the bug Execute the code above. ### Expected behavior The dataset is loaded successfully. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
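A minimal workaround sketch: datasets written with `save_to_disk` are meant to be read back with `load_from_disk`, which uses the saved `state.json` / `dataset_info.json` rather than inferring a file format from file extensions.

```python
from datasets import load_from_disk

# Reads back the Dataset/DatasetDict exactly as it was saved with save_to_disk,
# without any format inference on the .arrow/.json files inside the folder.
tokenized_datasets = load_from_disk("dataset")
```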
open
https://github.com/huggingface/datasets/issues/7018
2024-07-01T12:19:19
2025-05-24T05:21:12
null
{ "login": "sliedes", "id": 2307997, "type": "User" }
[]
false
[]
2,383,647,419
7,017
Support fsspec 2024.6.1
Support fsspec 2024.6.1.
closed
https://github.com/huggingface/datasets/pull/7017
2024-07-01T11:57:15
2024-07-01T12:12:32
2024-07-01T12:06:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,383,262,608
7,016
`drop_duplicates` method
### Feature request A `drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one) ### Motivation Ease of use ### Your contribution I don't think I am good enough to help
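Until such a method exists, a minimal workaround sketch (assuming a single column can act as the deduplication key and that the filter runs in a single process so the closure state is shared):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c"], "label": [0, 1, 0, 2]})

seen = set()

def first_occurrence(example):
    # Keep only the first row observed for each value of the key column.
    key = example["text"]
    if key in seen:
        return False
    seen.add(key)
    return True

deduplicated = ds.filter(first_occurrence)  # num_proc left at the default (single process)
print(deduplicated["text"])  # ['a', 'b', 'c']
```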
open
https://github.com/huggingface/datasets/issues/7016
2024-07-01T09:01:06
2024-07-20T06:51:58
null
{ "login": "MohamedAliRashad", "id": 26205298, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,383,151,220
7,015
add split argument to Generator
## Actual When creating a multi-split dataset using generators like ```python datasets.DatasetDict({ "val": datasets.Dataset.from_generator( generator=generator_val, features=features ), "test": datasets.Dataset.from_generator( generator=generator_test, features=features, ) }) ``` It displays (for both test and val) ``` Generating train split ``` ## Expected I would like to be able to improve this behavior by doing ```python datasets.DatasetDict({ "val": datasets.Dataset.from_generator( generator=generator_val, features=features, split="val" ), "test": datasets.Dataset.from_generator( generator=generator_test, features=features, split="test" ) }) ``` It would display ``` Generating val split ``` and ``` Generating test split ``` ## Proposal This PR adds an explicit `split` argument and replaces the implicit "train" split in the following classes/functions: * Generator * from_generator * AbstractDatasetInputStream * GeneratorDatasetInputStream Please share your feedback
closed
https://github.com/huggingface/datasets/pull/7015
2024-07-01T08:09:25
2024-07-26T09:37:51
2024-07-26T09:31:56
{ "login": "piercus", "id": 156736, "type": "User" }
[]
true
[]
2,382,985,847
7,014
Skip faiss tests on Windows to avoid running CI for 360 minutes
Skip faiss tests on Windows to avoid running CI for 360 minutes. Fix #7013. Revert once the underlying issue is fixed.
closed
https://github.com/huggingface/datasets/pull/7014
2024-07-01T06:45:35
2024-07-01T07:16:36
2024-07-01T07:10:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,382,976,738
7,013
CI is broken for faiss tests on Windows: node down: Not properly terminated
Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached. See: https://github.com/huggingface/datasets/actions/runs/9712659783 ``` test (integration, windows-latest, deps-minimum) The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes. test (integration, windows-latest, deps-latest) The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes. ``` ``` ____________________________ tests/test_search.py _____________________________ [gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ____________________________ tests/test_search.py _____________________________ [gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ``` ``` tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw0] node down: Not properly terminated [gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw0 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw1] node down: Not properly terminated [gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw1 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw2] node down: Not properly terminated [gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw2 ```
closed
https://github.com/huggingface/datasets/issues/7013
2024-07-01T06:40:03
2024-07-01T07:10:28
2024-07-01T07:10:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,380,934,047
7,012
Raise an error when a nested object is expected to be a mapping that displays the object
null
closed
https://github.com/huggingface/datasets/pull/7012
2024-06-28T18:10:59
2024-07-11T02:06:16
2024-07-11T02:06:16
{ "login": "sebbyjp", "id": 22511797, "type": "User" }
[]
true
[]
2,379,785,262
7,011
Re-enable raising error from huggingface-hub FutureWarning in CI
Re-enable raising error from huggingface-hub FutureWarning in tests, now that the fix in transformers - https://github.com/huggingface/transformers/pull/31007 - was released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0 Fix #7010.
closed
https://github.com/huggingface/datasets/pull/7011
2024-06-28T07:28:32
2024-06-28T12:25:25
2024-06-28T12:19:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,379,777,480
7,010
Re-enable raising error from huggingface-hub FutureWarning in CI
Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR: - #6876 Note that this can only be done once transformers releases the fix: - https://github.com/huggingface/transformers/pull/31007
closed
https://github.com/huggingface/datasets/issues/7010
2024-06-28T07:23:40
2024-06-28T12:19:30
2024-06-28T12:19:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,379,619,132
7,009
Support ruff 0.5.0 in CI
Support ruff 0.5.0 in CI and revert: - #7007 Fix #7008.
closed
https://github.com/huggingface/datasets/pull/7009
2024-06-28T05:37:36
2024-06-28T07:17:26
2024-06-28T07:11:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,379,591,141
7,008
Support ruff 0.5.0 in CI
Support ruff 0.5.0 in CI. Also revert: - #7007
closed
https://github.com/huggingface/datasets/issues/7008
2024-06-28T05:11:26
2024-06-28T07:11:18
2024-06-28T07:11:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,379,588,676
7,007
Fix CI by temporarily pinning ruff < 0.5.0
As a hotfix for CI, temporarily pin ruff upper version < 0.5.0. Fix #7006. Revert once root cause is fixed.
closed
https://github.com/huggingface/datasets/pull/7007
2024-06-28T05:09:17
2024-06-28T05:31:21
2024-06-28T05:25:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,379,581,543
7,006
CI is broken after ruff-0.5.0: E721
After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule. See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983 > src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
closed
https://github.com/huggingface/datasets/issues/7006
2024-06-28T05:03:28
2024-06-28T05:25:18
2024-06-28T05:25:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,378,424,349
7,005
EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files
### Describe the bug while trying to load custom dataset from jsonl file, I get the error: "metadata.jsonl doesn't contain any data files" ### Steps to reproduce the bug This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all images mentioned in that json(l) file. Through below mentioned command I am trying to load_dataset so that I can upload it as mentioned here on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub). ```` from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl") ```` error: ```` EmptyDatasetError Traceback (most recent call last) Cell In[18], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("imagefolder", 4 data_dir="path/to/jsonl/file/metadata.jsonl") 5 dataset[0]["objects"] File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2589 verification_mode = VerificationMode( 2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2591 ) 2593 # Create a dataset builder -> 2594 builder_instance = load_dataset_builder( 2595 path=path, 2596 name=name, 2597 data_dir=data_dir, 2598 data_files=data_files, 2599 cache_dir=cache_dir, 2600 features=features, 2601 download_config=download_config, 2602 download_mode=download_mode, 2603 revision=revision, 2604 token=token, 2605 storage_options=storage_options, 2606 trust_remote_code=trust_remote_code, 2607 _require_default_config_name=name is None, 2608 **config_kwargs, 2609 ) 2611 # Return iterable dataset in case of streaming 2612 if streaming: File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2264 download_config = download_config.copy() if download_config else DownloadConfig() 2265 download_config.storage_options.update(storage_options) -> 2266 dataset_module = dataset_module_factory( 2267 path, 2268 revision=revision, 2269 download_config=download_config, 2270 download_mode=download_mode, 2271 data_dir=data_dir, 2272 data_files=data_files, 2273 cache_dir=cache_dir, 2274 trust_remote_code=trust_remote_code, 2275 _require_default_config_name=_require_default_config_name, 2276 _require_custom_configs=bool(config_kwargs), 2277 ) 2278 # Get dataset builder class from the processing script 2279 builder_kwargs = dataset_module.builder_kwargs File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1782 # We have several ways to get a dataset builder: 1783 # 1784 # - if path is the name of a packaged dataset module (...) 
1796 1797 # Try packaged 1798 if path in _PACKAGED_DATASETS_MODULES: 1799 return PackagedDatasetModuleFactory( 1800 path, 1801 data_dir=data_dir, 1802 data_files=data_files, 1803 download_config=download_config, 1804 download_mode=download_mode, -> 1805 ).get_module() 1806 # Try locally 1807 elif path.endswith(filename): File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self) 1135 def get_module(self) -> DatasetModule: 1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix() 1137 patterns = ( 1138 sanitize_patterns(self.data_files) 1139 if self.data_files is not None -> 1140 else get_data_patterns(base_path, download_config=self.download_config) 1141 ) 1142 data_files = DataFilesDict.from_patterns( 1143 patterns, 1144 download_config=self.download_config, 1145 base_path=base_path, 1146 ) 1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config) 501 return _get_data_files_patterns(resolver) 502 except FileNotFoundError: --> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files` ``` ### Expected behavior It should be able load the whole file in a format of "dataset" inside the dataset variable. But it gives error "The directory at "path/to/jsonl/metadata.jsonl" doesn't contain any data files." ### Environment info I am using conda environment.
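A hedged sketch of the layout the `imagefolder` builder expects (the folder path below is a placeholder): `data_dir` should point to the directory that contains the images together with a file named exactly `metadata.jsonl`, not to the JSONL file itself.

```python
from datasets import load_dataset

# Assumed layout:
#   my_images/
#     metadata.jsonl   <- one JSON object per line, each with a "file_name" field
#     img_001.jpg
#     ...
dataset = load_dataset("imagefolder", data_dir="path/to/my_images")
```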
closed
https://github.com/huggingface/datasets/issues/7005
2024-06-27T15:08:26
2024-06-28T09:56:19
2024-06-28T09:56:19
{ "login": "Aki1991", "id": 117731544, "type": "User" }
[]
false
[]
2,376,064,264
7,004
Fix WebDatasets KeyError for user-defined Features when a field is missing in an example
Fixes: https://github.com/huggingface/datasets/issues/6900 Not sure if this needs any additional work before merging
closed
https://github.com/huggingface/datasets/pull/7004
2024-06-26T18:58:05
2024-06-29T00:15:49
2024-06-28T09:30:12
{ "login": "ProGamerGov", "id": 10626398, "type": "User" }
[]
true
[]
2,373,084,132
7,003
minor fix for bfloat16
null
closed
https://github.com/huggingface/datasets/pull/7003
2024-06-25T16:10:04
2024-06-25T16:16:11
2024-06-25T16:10:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,373,010,351
7,002
Fix dump of bfloat16 torch tensor
close https://github.com/huggingface/datasets/issues/7000
closed
https://github.com/huggingface/datasets/pull/7002
2024-06-25T15:38:09
2024-06-25T16:10:16
2024-06-25T15:51:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,372,930,879
7,001
Datasetbuilder Local Download FileNotFoundError
### Describe the bug I was trying to download a dataset and save it as parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Huggingface. However, during the execution I get a FileNotFoundError. I debugged the code and it seems there is a bug there: first it creates a .incomplete folder, and before moving its contents the following code deletes the directory [Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984), hence as a result I get: ``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '``` ### Steps to reproduce the bug ``` from datasets import load_dataset_builder from pathlib import Path parquet_dir = "~/data/Parquet/" Path(parquet_dir).mkdir(parents=True, exist_ok=True) builder = load_dataset_builder( "rotten_tomatoes", ) builder.download_and_prepare(parquet_dir, file_format="parquet") ``` ### Expected behavior Downloads the files and saves them as parquet ### Environment info Ubuntu, Python 3.10 ``` datasets 2.19.1 ```
open
https://github.com/huggingface/datasets/issues/7001
2024-06-25T15:02:34
2024-06-25T15:21:19
null
{ "login": "purefall", "id": 12601271, "type": "User" }
[]
false
[]
2,372,887,585
7,000
IterableDataset: Unsupported ScalarType BFloat16
### Describe the bug `IterableDataset.from_generator` crashes when using BFloat16: ``` File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor args = (obj.detach().cpu().numpy(),) ^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Got unsupported ScalarType BFloat16 ``` ### Steps to reproduce the bug ```python import torch from datasets import IterableDataset def demo(x): yield {"x": x} x = torch.tensor([1.], dtype=torch.bfloat16) dataset = IterableDataset.from_generator( demo, gen_kwargs=dict(x=x), ) example = next(iter(dataset)) print(example) ``` ### Expected behavior Code sample should print: ```python {'x': tensor([1.], dtype=torch.bfloat16)} ``` ### Environment info ``` datasets==2.20.0 torch==2.2.2 ```
closed
https://github.com/huggingface/datasets/issues/7000
2024-06-25T14:43:26
2024-06-25T16:04:00
2024-06-25T15:51:53
{ "login": "stoical07", "id": 170015089, "type": "User" }
[]
false
[]
2,372,124,589
6,999
Remove tasks
Remove tasks, as part of the 3.0 release.
closed
https://github.com/huggingface/datasets/pull/6999
2024-06-25T09:06:16
2024-08-21T09:07:07
2024-08-21T09:01:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,371,973,926
6,998
Fix tests using hf-internal-testing/librispeech_asr_dummy
Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet. Fix #6997.
closed
https://github.com/huggingface/datasets/pull/6998
2024-06-25T07:59:44
2024-06-25T08:22:38
2024-06-25T08:13:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,371,966,127
6,997
CI is broken for tests using hf-internal-testing/librispeech_asr_dummy
CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996 ``` FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other'] Right contains one more item: 'other' Full diff: [ 'clean', - 'other', ] FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None ``` Note that repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a
closed
https://github.com/huggingface/datasets/issues/6997
2024-06-25T07:55:44
2024-06-25T08:13:43
2024-06-25T08:13:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,371,841,671
6,996
Remove deprecated code
Remove deprecated code, as part of the 3.0 release. First merge: - [x] #6983 - [x] #6987 - [x] #6999
closed
https://github.com/huggingface/datasets/pull/6996
2024-06-25T06:54:40
2024-08-21T09:42:52
2024-08-21T09:35:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,370,713,475
6,995
ImportError when importing datasets.load_dataset
### Describe the bug I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'. ### Steps to reproduce the bug 1. pip install git+https://github.com/huggingface/datasets 2. from datasets import load_dataset ### Expected behavior ImportError Traceback (most recent call last) Cell In[7], [line 1](vscode-notebook-cell:?execution_count=7&line=1) ----> [1](vscode-notebook-cell:?execution_count=7&line=1) from datasets import load_dataset [3](vscode-notebook-cell:?execution_count=7&line=3) train_set = load_dataset("mispeech/speechocean762", split="train") [4](vscode-notebook-cell:?execution_count=7&line=4) test_set = load_dataset("mispeech/speechocean762", split="test") File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:[1](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:1)7 1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. [2](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:2) # [3](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:3) # Licensed under the Apache License, Version 2.0 (the "License"); (...) [12](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:12) # See the License for the specific language governing permissions and [13](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:13) # limitations under the License. [15](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:15) __version__ = "2.20.1.dev0" ---> [17](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:17) from .arrow_dataset import Dataset [18](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:18) from .arrow_reader import ReadInstruction [19](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:19) from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63 [61](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:61) import pyarrow.compute as pc [62](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:62) from fsspec.core import url_to_fs ---> [63](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:63) from huggingface_hub import ( [64](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:64) CommitInfo, [65](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:65) CommitOperationAdd, ... [70](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:70) ) [71](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:71) from huggingface_hub.hf_api import RepoFile [72](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:72) from multiprocess import Pool ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?580889ab-0f61-4f37-9214-eaa2b3807f85) or open in a [text editor](command:workbench.action.openLargeOutput?580889ab-0f61-4f37-9214-eaa2b3807f85). 
### Environment info Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub $ datasets-cli env Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module> File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module> from .arrow_dataset import Dataset File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) (CS224S)
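A small diagnostic sketch (the cause suggested here is an assumption): `CommitInfo` is imported from the top level of `huggingface_hub` by recent `datasets` versions, so checking the installed hub version and whether the symbol imports helps tell an outdated install from a broken one.

```python
import huggingface_hub

print(huggingface_hub.__version__)  # compare against the minimum version pinned by datasets
from huggingface_hub import CommitInfo  # noqa: F401  -- fails on old or broken installs
# If this import fails, upgrading huggingface_hub in the same environment usually resolves it.
```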
closed
https://github.com/huggingface/datasets/issues/6995
2024-06-24T17:07:22
2024-11-14T01:42:09
2024-06-25T06:11:37
{ "login": "Leo-Lsc", "id": 124846947, "type": "User" }
[]
false
[]
2,370,491,689
6,994
Fix incorrect rank value in data splitting
Fix #6990.
closed
https://github.com/huggingface/datasets/pull/6994
2024-06-24T15:07:47
2024-06-26T04:37:35
2024-06-25T16:19:17
{ "login": "yzhangcs", "id": 18402347, "type": "User" }
[]
true
[]
2,370,444,104
6,993
less script docs
+ mark as legacy in some parts of the docs since we'll not build new features for script datasets
closed
https://github.com/huggingface/datasets/pull/6993
2024-06-24T14:45:28
2024-07-08T13:10:53
2024-06-27T09:31:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,367,890,622
6,992
Dataset with streaming doesn't work with proxy
### Describe the bug I'm currently trying to stream data using dataset since the dataset is too big but it hangs indefinitely without loading the first batch. I use AIMOS which is a supercomputer that uses proxy to connect to the internet. I assume it has to do with the network configurations. I've already set up both HTTP_PROXY and HTTPS_PROXY. streaming = False works fine. ### Steps to reproduce the bug use load_dataset with streaming = True in AIMOS ### Expected behavior does not hang indefinitely and loads batches to start training run ### Environment info _libgcc_mutex 0.1 conda_forge conda-forge _openmp_mutex 4.5 2_gnu conda-forge _pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 abseil-cpp 20220623.0 h9888cd1_6 conda-forge absl-py 1.0.0 py311h399429b_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 aiofiles 23.2.1 pyhd8ed1ab_0 conda-forge aiohttp 3.8.6 py311hf118e41_0 aiosignal 1.2.0 pyhd3eb1b0_0 archspec 0.2.3 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 ha3edaa6_5_cpu conda-forge async-timeout 4.0.2 py311h6ffa863_0 attrs 23.1.0 py311h6ffa863_0 av 10.0.0 py311he6153ed_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 aws-c-auth 0.6.24 hb81f6d7_5 conda-forge aws-c-cal 0.5.20 h3c2b4d9_6 conda-forge aws-c-common 0.8.11 h4194056_0 conda-forge aws-c-compression 0.2.16 ha19333d_3 conda-forge aws-c-event-stream 0.2.18 h12a9399_6 conda-forge aws-c-http 0.7.4 ha2cde00_2 conda-forge aws-c-io 0.13.17 h9189062_2 conda-forge aws-c-mqtt 0.8.6 h40d1a04_6 conda-forge aws-c-s3 0.2.4 hbdbe4f0_3 conda-forge aws-c-sdkutils 0.1.7 ha19333d_3 conda-forge aws-checksums 0.1.14 ha19333d_3 conda-forge aws-crt-cpp 0.19.7 hd018011_7 conda-forge aws-sdk-cpp 1.10.57 hb9575ba_4 conda-forge blas 1.0 openblas blinker 1.8.2 pyhd8ed1ab_0 conda-forge boltons 23.0.0 py311h6ffa863_0 boost-cpp 1.82.0 h25e6d66_2 bottleneck 1.3.5 py311h34f6284_0 brotli 1.0.9 hf118e41_7 brotli-bin 1.0.9 hf118e41_7 brotli-python 1.0.9 py311h4a02239_7 bzip2 1.0.8 h7b6447c_0 c-ares 1.19.1 hf118e41_0 ca-certificates 2024.6.2 h0f6029e_0 conda-forge cachetools 5.3.3 pyhd8ed1ab_0 conda-forge certifi 2024.6.2 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311hf118e41_3 charset-normalizer 2.0.4 pyhd3eb1b0_0 click 8.1.7 unix_pyh707e725_0 conda-forge conda 24.5.0 py311h1af927a_0 conda-forge conda-content-trust 0.2.0 py311h6ffa863_0 conda-libmamba-solver 23.11.1 py311h6ffa863_0 conda-package-handling 2.2.0 py311h6ffa863_0 conda-package-streaming 0.9.0 py311h6ffa863_0 contourpy 1.0.5 py311h25e6d66_0 cryptography 41.0.3 py311hb0e80e7_0 cudatoolkit 11.8.0 hedcfb66_13 conda-forge cudnn 8.9.2_11.8 h9ceb136_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 cycler 0.11.0 pyhd3eb1b0_0 datasets 2.12.0 py311h6ffa863_0 dill 0.3.6 py311h6ffa863_0 distro 1.9.0 pyhd8ed1ab_0 conda-forge ffmpeg 4.2.2 opence_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 filelock 3.9.0 py311h6ffa863_0 fmt 9.1.0 h25e6d66_0 fonttools 4.25.0 pyhd3eb1b0_0 freetype 2.12.1 hd23a775_0 frozendict 2.4.4 py311hb02d432_0 conda-forge frozenlist 1.4.0 py311hf118e41_0 fsspec 2023.9.2 py311h6ffa863_0 gflags 2.2.2 he6710b0_0 giflib 5.2.1 hf118e41_3 glog 0.6.0 hbe088e0_0 conda-forge gmp 6.3.0 h46f38da_0 conda-forge gmpy2 2.1.5 py311h2758da7_1 conda-forge google-auth 2.30.0 pyhff2d567_0 conda-forge google-auth-oauthlib 0.5.3 pyhd8ed1ab_0 conda-forge grpc-cpp 1.51.1 h8ba971d_1 conda-forge grpcio 1.54.3 py311h414e0d3_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 huggingface_hub 0.17.3 py311h6ffa863_0 icu 73.1 h4a02239_0 idna 3.4 py311h6ffa863_0 importlib-metadata 6.0.0 py311h6ffa863_0 jinja2 3.1.4 pyhd8ed1ab_0 
conda-forge jpeg 9e hf118e41_1 jsonpatch 1.32 pyhd3eb1b0_0 jsonpointer 2.1 pyhd3eb1b0_0 kiwisolver 1.4.4 py311h4a02239_0 krb5 1.20.1 hc019ccd_1 lame 3.100 hb283c62_1003 conda-forge lcms2 2.12 h2045e0b_0 ld_impl_linux-ppc64le 2.38 hec883e6_1 lerc 3.0 h29c3540_0 leveldb 1.23 h24532b4_1 conda-forge libabseil 20220623.0 cxx17_h9235812_6 conda-forge libarchive 3.6.2 hd8ab008_2 libarrow 11.0.0 h837770b_5_cpu conda-forge libboost 1.82.0 haf51a6a_2 libbrotlicommon 1.0.9 hf118e41_7 libbrotlidec 1.0.9 hf118e41_7 libbrotlienc 1.0.9 hf118e41_7 libcrc32c 1.1.2 h3b9df90_0 conda-forge libcurl 8.4.0 h4d62439_0 libdeflate 1.17 hf118e41_1 libedit 3.1.20221030 hf118e41_0 libev 4.33 h140841e_1 libevent 2.1.10 h19c23f1_4 conda-forge libexpat 2.6.2 h46f38da_0 conda-forge libffi 3.4.4 h4a02239_0 libgcc-ng 13.2.0 h31e42bb_10 conda-forge libgfortran-ng 11.2.0 hb3889a9_1 libgfortran5 11.2.0 h1234567_1 libgomp 13.2.0 h31e42bb_10 conda-forge libgoogle-cloud 2.7.0 h11140b6_1 conda-forge libgrpc 1.51.1 h4d29a31_1 conda-forge libmamba 1.5.3 h7c6fafd_0 libmambapy 1.5.3 py311h828bf7b_0 libnghttp2 1.57.0 h44e5816_0 libnsl 2.0.1 ha17a0cc_0 conda-forge libopenblas 0.3.23 hc5a31fb_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 libopus 1.3.1 h4e0d66e_1 conda-forge libpng 1.6.39 hf118e41_0 libprotobuf 3.21.12 h1776448_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 libsolv 0.7.24 h0f529ac_0 libsqlite 3.45.3 hd4bbf49_0 conda-forge libssh2 1.10.0 h50fa78f_2 libstdcxx-ng 13.2.0 h262982c_10 conda-forge libthrift 0.18.0 h82f1162_0 conda-forge libtiff 4.5.1 h4a02239_0 libutf8proc 2.8.0 hb283c62_0 conda-forge libuuid 2.38.1 h4194056_0 conda-forge libvpx 1.13.1 h46f38da_0 conda-forge libwebp 1.3.2 h0f96ee2_0 libwebp-base 1.3.2 hf118e41_0 libxcrypt 4.4.36 ha17a0cc_1 conda-forge libxml2 2.10.4 h18e3229_1 libzlib 1.2.13 h1f2b957_6 conda-forge llvm-openmp 14.0.6 hc028133_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 lmdb 0.9.31 ha17a0cc_1 conda-forge lz4-c 1.9.4 h4a02239_0 markdown 3.4.4 pyhd8ed1ab_0 conda-forge markupsafe 2.1.5 py311h32d8acf_0 conda-forge matplotlib 3.8.0 py311h6ffa863_0 matplotlib-base 3.8.0 py311h52e1fcc_0 menuinst 2.1.1 py311h1af927a_0 conda-forge mpc 1.3.1 heaf1863_0 conda-forge mpfr 4.2.1 haad2271_1 conda-forge mpmath 1.3.0 pyhd8ed1ab_0 conda-forge multidict 6.0.2 py311hf118e41_0 multiprocess 0.70.14 py311h6ffa863_0 munkres 1.1.4 py_0 mypy_extensions 1.0.0 pyha770c72_0 conda-forge nccl 2.18.3 cuda11.8_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 ncurses 6.4 h4a02239_0 nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge networkx 2.8.8 pyhd8ed1ab_0 conda-forge nomkl 3.0 0 https://ftp.osuosl.org/pub/open-ce/1.10.0 numactl 2.0.16 hba61f60_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 numexpr 2.8.7 py311hc46fc55_0 numpy 1.24.3 py311h148a09e_0 numpy-base 1.24.3 py311h06b82f6_0 oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge openjpeg 2.4.0 hfe35807_0 openssl 3.3.1 h1f2b957_0 conda-forge orc 1.8.2 h341c9a4_2 conda-forge packaging 23.1 py311h6ffa863_0 pandas 2.1.1 py311h52e1fcc_0 pcre2 10.42 h280155c_0 pillow 10.0.1 py311he33076b_0 pip 23.3 py311h6ffa863_0 platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge pluggy 1.0.0 py311h6ffa863_1 pooch 1.8.2 pyhd8ed1ab_0 conda-forge protobuf 4.21.12 py311ha7baec7_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 psutil 5.9.8 py311hd26027c_0 conda-forge pyarrow 11.0.0 py311h04a18d5_1 pyasn1 0.6.0 pyhd8ed1ab_0 conda-forge pyasn1-modules 0.4.0 pyhd8ed1ab_0 conda-forge pybind11-abi 4 hd3eb1b0_1 pycosat 0.6.6 py311hf118e41_0 pycparser 2.21 pyhd3eb1b0_0 pyjwt 2.8.0 pyhd8ed1ab_1 conda-forge pyopenssl 23.2.0 py311h6ffa863_0 pyparsing 
3.0.9 py311h6ffa863_0 pyre-extensions 0.0.30 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 py311h6ffa863_0 python 3.11.8 h3332dee_0_cpython conda-forge python-dateutil 2.8.2 pyhd3eb1b0_0 python-tzdata 2023.3 pyhd3eb1b0_0 python-xxhash 2.0.2 py311hf118e41_1 python_abi 3.11 4_cp311 conda-forge pytorch 2.0.1 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytorch-base 2.0.1 cuda11.8_py311_pb4.21.12_4 https://ftp.osuosl.org/pub/open-ce/1.10.0 pytz 2023.3.post1 py311h6ffa863_0 pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge pyyaml 6.0.1 py311hf118e41_0 re2 2023.02.01 h883269e_0 conda-forge readline 8.2 hf118e41_0 regex 2023.10.3 py311hf118e41_0 reproc 14.2.4 h29c3540_1 reproc-cpp 14.2.4 h29c3540_1 requests 2.31.0 py311h6ffa863_0 requests-oauthlib 2.0.0 pyhd8ed1ab_0 conda-forge responses 0.13.3 pyhd3eb1b0_0 rsa 4.9 pyhd8ed1ab_0 conda-forge ruamel.yaml 0.17.21 py311hf118e41_0 s2n 1.3.37 h5e47323_0 conda-forge safetensors 0.4.0 py311hda16d9e_0 scipy 1.11.1 py311hd69e9bb_0 https://ftp.osuosl.org/pub/open-ce/1.10.0 sentencepiece 0.1.97 h1e74c73_py311_pb4.21.12_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 setuptools 68.0.0 py311h6ffa863_0 six 1.16.0 pyhd3eb1b0_1 snappy 1.1.9 h29c3540_0 sqlite 3.41.2 hf118e41_0 sympy 1.12.1 pypyh2585a3b_103 conda-forge tabulate 0.8.10 pyhd8ed1ab_0 conda-forge tensorboard 2.13.0 pyhab0730d_pb4.21.12_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-data-server 0.7.0 pyh6f84499_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tensorboard-plugin-wit 1.6.0 pyh9f0ad1d_0 conda-forge tk 8.6.13 hd4bbf49_0 conda-forge tokenizers 0.13.3 py311h3d4f45a_0 torchdata 0.6.0 py311_2 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchsnapshot 0.1.0 pyhd8ed1ab_0 conda-forge torchtext-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 torchtnt 0.2.4 pyhd8ed1ab_0 conda-forge torchvision-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0 tornado 6.3.3 py311hf118e41_0 tqdm 4.65.0 py311h7837921_0 transformers 4.32.1 py311h6ffa863_0 truststore 0.8.0 py311h6ffa863_0 typing-extensions 4.7.1 py311h6ffa863_0 typing_extensions 4.7.1 py311h6ffa863_0 typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge tzdata 2023c h04d1e81_0 urllib3 1.26.18 py311h6ffa863_0 utf8proc 2.6.1 h140841e_0 werkzeug 2.3.8 pyhd8ed1ab_0 conda-forge wheel 0.41.2 py311h6ffa863_0 xxhash 0.8.0 h140841e_3 xz 5.4.2 hf118e41_0 yaml 0.2.5 h7b6447c_0 yaml-cpp 0.8.0 h4a02239_0 yarl 1.8.1 py311hf118e41_0 zipp 3.11.0 py311h6ffa863_0 zlib 1.2.13 h1f2b957_6 conda-forge zstandard 0.19.0 py311hf118e41_0 zstd 1.5.5 h57e4825_0
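A hedged sketch of one thing worth trying (an assumption, not a confirmed fix): with this `datasets` version, streaming goes through fsspec's HTTP filesystem, whose `aiohttp` session ignores `HTTP_PROXY`/`HTTPS_PROXY` unless it is created with `trust_env=True`, and that flag can be forwarded via `storage_options`.

```python
from datasets import load_dataset

# client_kwargs are handed to aiohttp.ClientSession by fsspec's HTTPFileSystem;
# trust_env=True makes aiohttp read the proxy environment variables.
dataset = load_dataset(
    "user/some-dataset",  # placeholder repo id
    split="train",
    streaming=True,
    storage_options={"client_kwargs": {"trust_env": True}},
)
print(next(iter(dataset)))
```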
open
https://github.com/huggingface/datasets/issues/6992
2024-06-22T16:12:08
2024-06-25T15:43:05
null
{ "login": "YHL04", "id": 57779173, "type": "User" }
[]
false
[]
2,367,711,094
6,991
Unblock NumPy 2.0
Fixes https://github.com/huggingface/datasets/issues/6980
closed
https://github.com/huggingface/datasets/pull/6991
2024-06-22T09:19:53
2024-12-25T17:57:34
2024-07-12T12:04:53
{ "login": "NeilGirdhar", "id": 730137, "type": "User" }
[]
true
[]
2,366,660,785
6,990
Problematic rank after calling `split_dataset_by_node` twice
### Describe the bug I'm trying to split `IterableDataset` by `split_dataset_by_node`. But when splitting an already split dataset, the resulting `rank` is greater than `world_size`. ### Steps to reproduce the bug Here is the minimal code for reproduction: ```py >>> from datasets import load_dataset >>> from datasets.distributed import split_dataset_by_node >>> dataset = load_dataset('fla-hub/slimpajama-test', split='train', streaming=True) >>> dataset = split_dataset_by_node(dataset, 1, 32) >>> dataset._distributed DistributedConfig(rank=1, world_size=32) >>> dataset = split_dataset_by_node(dataset, 1, 15) >>> dataset._distributed DistributedConfig(rank=481, world_size=480) ``` As you can see, the second rank 481 > 480, which is problematic. ### Expected behavior I think this error comes from this line @lhoestq https://github.com/huggingface/datasets/blob/a6ccf944e42c1a84de81bf326accab9999b86c90/src/datasets/iterable_dataset.py#L2943-L2944 We may need to obtain the rank first. Then the above code gives ```py >>> dataset._distributed DistributedConfig(rank=16, world_size=480) ``` ### Environment info datasets==2.20.0
closed
https://github.com/huggingface/datasets/issues/6990
2024-06-21T14:25:26
2024-06-25T16:19:19
2024-06-25T16:19:19
{ "login": "yzhangcs", "id": 18402347, "type": "User" }
[]
false
[]
2,365,556,449
6,989
cache in nfs error
### Describe the bug - When reading dataset, a cache will be generated to the ~/. cache/huggingface/datasets directory - When using .map and .filter operations, runtime cache will be generated to the /tmp/hf_datasets-* directory - The default is to use the path of tempfile.tempdir - If I modify this path to the NFS disk, an error will be reported, but the program will continue to run - https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L257 ``` Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs000000038330a012000030b4' Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server server.serve_forever() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers finalizer() File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__ res = self._callback(*self._args, **self._kwargs) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir rmtree(tempdir) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, 
dir_fd=topfd) OSError: [Errno 16] Device or resource busy: '.nfs0000000400064d4a000030e5' ``` ### Steps to reproduce the bug ``` import os import time import tempfile from datasets import load_dataset def add_column(sample): # print(type(sample)) # time.sleep(0.1) sample['__ds__stats__'] = {'data': 123} return sample def filt_column(sample): # print(type(sample)) if len(sample['content']) > 10: return True else: return False if __name__ == '__main__': input_dir = '/mnt/temp/CN/small' # some json dataset dataset = load_dataset('json', data_dir=input_dir) temp_dir = '/media/release/release/temp/temp' # a nfs folder os.makedirs(temp_dir, exist_ok=True) # change huggingface-datasets runtime cache in nfs๏ผˆdefault in /tmp๏ผ‰ tempfile.tempdir = temp_dir aa = dataset.map(add_column, num_proc=64) aa = aa.filter(filt_column, num_proc=64) print(aa) ``` ### Expected behavior no error occur ### Environment info datasets==2.18.0 ubuntu 20.04
open
https://github.com/huggingface/datasets/issues/6989
2024-06-21T02:09:22
2025-01-29T11:44:04
null
{ "login": "simplew2011", "id": 66729924, "type": "User" }
[]
false
[]
2,364,129,918
6,988
[`feat`] Move dataset card creation to method for easier overriding
Hello! ## Pull Request overview * Move dataset card creation to method for easier overriding ## Details It's common for me to fully automatically download, reformat, and upload a dataset (e.g. see https://huggingface.co/datasets?other=sentence-transformers), but one aspect that I cannot easily automate is the dataset card generation. This is because during `push_to_hub`, the dataset card is created in 3 lines of code in a much larger method. To automatically generate a dataset card, I need to either: 1. Subclass `Dataset`/`DatasetDict`, copy the entire `push_to_hub` method to override the ~3 lines used to generate the dataset card. This is not viable as the method is likely to change over time. 2. Use `push_to_hub` normally, then separately download the pushed (but empty) dataset card, update it, and reupload the modified dataset. This works fine, but prevents me from being able to return a `Dataset` to my users which will automatically use a nice dataset card. So, in this PR I'm proposing to move the dataset generation into another method so that it can be overridden more easily. For example, imagine the following use case: ````python import json from typing import Any, Dict, Optional from datasets import Dataset, load_dataset from datasets.info import DatasetInfosDict, DatasetInfo from datasets.utils.metadata import MetadataConfigs from huggingface_hub import DatasetCardData, DatasetCard TEMPLATE = r"""--- {dataset_card_data} --- # Dataset Card for {source_dataset_name} with mined hard negatives This dataset is a collection of {column_one}-{column_two}-negative triplets from the {source_dataset_name} dataset. See [{source_dataset_name}](https://huggingface.co/datasets/{source_dataset_id}) for additional information. This dataset can be used directly with Sentence Transformers to train embedding models. ## Mining Parameters The negative samples have been mined using the following parameters: - `range_min`: {range_min}, i.e. we skip the {range_min} most similar samples - `range_max`: {range_max}, i.e. we only look at the top {range_max} most similar samples - `margin`: {margin}, i.e. we require negative similarity + margin < positive similarity, so negative samples can't be more similar than the known true answer - `sampling_strategy`: {sampling_strategy}, i.e. whether to randomly sample from the candidate negatives or take the "top" negatives - `num_negatives`: {num_negatives}, i.e. 
we mine {num_negatives} negatives per question-answer pair ## Dataset Format - Columns: {column_one}, {column_two}, negative - Column types: str, str, str - Example: ```python {example} ``` """ class HNMDataset(Dataset): @classmethod def from_dict(cls, *args, mining_kwargs: Dict[str, Any], **kwargs) -> "HNMDataset": dataset = super().from_dict(*args, **kwargs) dataset.mining_kwargs = mining_kwargs return dataset def _create_dataset_card( self, dataset_card_data: DatasetCardData, dataset_card: Optional[DatasetCard], config_name: str, info_to_dump: DatasetInfo, metadata_config_to_dump: MetadataConfigs, ) -> DatasetCard: if dataset_card: return dataset_card DatasetInfosDict({config_name: info_to_dump}).to_dataset_card_data(dataset_card_data) MetadataConfigs({config_name: metadata_config_to_dump}).to_dataset_card_data(dataset_card_data) dataset_card_data.tags = ["sentence-transformers"] dataset_name = self.mining_kwargs["source_dataset"].info.dataset_name # Very messy, just as an example: dataset_id = list(self.mining_kwargs["source_dataset"].info.download_checksums.keys())[0].removeprefix("hf://datasets/").split("@")[0] content = TEMPLATE.format(**{ "dataset_card_data": str(dataset_card_data), "source_dataset_name": dataset_name, "source_dataset_id": dataset_id, "range_min": self.mining_kwargs["range_min"], "range_max": self.mining_kwargs["range_max"], "margin": self.mining_kwargs["margin"], "sampling_strategy": self.mining_kwargs["sampling_strategy"], "num_negatives": self.mining_kwargs["num_negatives"], "column_one": self.column_names[0], "column_two": self.column_names[1], "example": json.dumps(self[0], indent=4), }) return DatasetCard(content) source_dataset = load_dataset("sentence-transformers/gooaq", split="train[:100]") dataset = HNMDataset.from_dict({ "query": source_dataset["question"], "answer": source_dataset["answer"], # "negative": ... <- In my case, this column would be 'mined' automatically with these parameters }, mining_kwargs={ "range_min": 10, "range_max": 20, "max_score": 0.9, "margin": 0.1, "sampling_strategy": "random", "num_negatives": 3, "source_dataset": source_dataset, }) dataset.push_to_hub("tomaarsen/mining_demo", private=True) ```` In this script, I've created a subclass which stores some additional information about how the dataset was generated. It's a bit hacky (e.g. setting a `mining_kwargs` parameter in `from_dict` that wasn't created in `__init__`, but that's just a consequence of how the `from_...` methods don't accept kwargs), but it allows me to create a "hard negatives mining" function that returns a dataset which people can use locally like normal, but if they choose to upload it, then it'll automatically include some information, e.g.: https://huggingface.co/datasets/tomaarsen/mining_demo This allows others to actually find this dataset (e.g. via the `sentence-transformers` tag) and get an idea of the quality, source, etc. by looking at the model card. ## Note I'm not fixed on this solution whatsoever: I am also completely fine with other solutions, e.g. a `dataset.set_dataset_card_creator` method that allows me to provide a function without even having to subclass anything. I'm open to all ideas :) cc @albertvillanova @lhoestq cc @LysandreJik - Tom Aarsen
open
https://github.com/huggingface/datasets/pull/6988
2024-06-20T10:47:57
2024-06-21T16:04:58
null
{ "login": "tomaarsen", "id": 37621491, "type": "User" }
[]
true
[]
2,363,728,190
6,987
Remove beam
Remove beam, as part of the 3.0 release.
closed
https://github.com/huggingface/datasets/pull/6987
2024-06-20T07:27:14
2024-06-26T19:41:55
2024-06-26T19:35:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,362,584,179
6,986
Add large_list type support in string_to_arrow
add large_list type support in string_to_arrow() and _arrow_to_datasets_dtype() in features.py Fix #6984
closed
https://github.com/huggingface/datasets/pull/6986
2024-06-19T14:54:25
2024-08-12T14:43:48
2024-08-12T14:43:47
{ "login": "arthasking123", "id": 16257131, "type": "User" }
[]
true
[]
2,362,378,276
6,985
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
### Describe the bug I have been struggling with this for two days, any help would be appreciated. Python 3.10 ``` from setfit import SetFitModel from huggingface_hub import login access_token_read = "cccxxxccc" # Authenticate with the Hugging Face Hub login(token=access_token_read) # Load the models from the Hugging Face Hub trainer_relv = SetFitModel.from_pretrained("snowdere/trainer_relevance") trainer_trust = SetFitModel.from_pretrained("snowdere/trainer_trust") trainer_sent = SetFitModel.from_pretrained("snowdere/trainer_sent") trainer_topic = SetFitModel.from_pretrained("snowdere/trainer_topic") ``` ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 from setfit import SetFitModel 2 from huggingface_hub import login 4 access_token_read = "ccsddsds" File /opt/conda/lib/python3.10/site-packages/setfit/__init__.py:7 4 import os 5 import warnings ----> 7 from .data import get_templated_dataset, sample_dataset 8 from .model_card import SetFitModelCardData 9 from .modeling import SetFitHead, SetFitModel File /opt/conda/lib/python3.10/site-packages/setfit/data.py:5 3 import pandas as pd 4 import torch ----> 5 from datasets import Dataset, DatasetDict, load_dataset 6 from torch.utils.data import Dataset as TorchDataset 8 from . import logging File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18 1 # ruff: noqa 2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. 3 # (...) 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 16 __version__ = "2.19.0" ---> 18 from .arrow_dataset import Dataset 19 from .arrow_reader import ReadInstruction 20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:76 73 from tqdm.contrib.concurrent import thread_map 75 from . import config ---> 76 from .arrow_reader import ArrowReader 77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 78 from .data_files import sanitize_patterns File /opt/conda/lib/python3.10/site-packages/datasets/arrow_reader.py:29 26 from typing import TYPE_CHECKING, List, Optional, Union 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 32 from .download.download_config import DownloadConfig File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20 1 # Licensed to the Apache Software Foundation (ASF) under one 2 # or more contributor license agreements. See the NOTICE file 3 # distributed with this work for additional information (...) 
17 18 # flake8: noqa ---> 20 from .core import * File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33 30 import pyarrow as pa 32 try: ---> 33 import pyarrow._parquet as _parquet 34 except ImportError as exc: 35 raise ImportError( 36 "The pyarrow installation is not built with support " 37 f"for the Parquet file format ({str(exc)})" 38 ) from None File /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet() AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' ``` setfit: 1.0.3 transformers: 4.41.2 lingua-language-detector: 2.0.2 polars: 0.20.31 lightning: None google-cloud-bigquery: 3.24.0 shapely: 2.0.4 pyarrow: 16.0.0 ### Steps to reproduce the bug I have tried all version combinations for Dataset and Pyarrow; they all have the same error since a few days ago. This is across multiple scripts I have. ### Expected behavior Just run normally. ### Environment info 3.10
closed
https://github.com/huggingface/datasets/issues/6985
2024-06-19T13:22:28
2025-03-14T18:47:53
2024-06-25T05:40:51
{ "login": "firmai", "id": 26666267, "type": "User" }
[]
false
[]
2,362,143,554
6,984
Convert polars DataFrame back to datasets
### Feature request This returns an error. ```python from datasets import Dataset dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]}) Dataset.from_polars(dsdf.to_polars()) ``` ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent. ### Motivation When datasets contain the Sequence data type, it will be converted to the Arrow type large_list. However, the reverse (from large_list to Sequence) does not work. ### Your contribution No
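A possible workaround sketch until the conversion is supported (assuming pandas is installed): route the round trip through pandas, where list columns are re-inferred as a regular (non-large) list type.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})

# polars -> pandas -> Dataset sidesteps the large_list arrow type that from_polars rejects.
round_tripped = Dataset.from_pandas(ds.to_polars().to_pandas())
print(round_tripped.features)
```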
closed
https://github.com/huggingface/datasets/issues/6984
2024-06-19T11:38:48
2024-08-12T14:43:46
2024-08-12T14:43:46
{ "login": "ljw20180420", "id": 38550511, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,361,806,201
6,983
Remove metrics
Remove all metrics, as part of the 3.0 release. Note they are deprecated since 2.5.0 version.
closed
https://github.com/huggingface/datasets/pull/6983
2024-06-19T09:08:55
2024-06-28T06:57:38
2024-06-28T06:51:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,361,661,469
6,982
cannot split dataset when using load_dataset
### Describe the bug when I use load_dataset methods to load mozilla-foundation/common_voice_7_0, it can successfully download and extracted the dataset but It cannot generating the arrow document, This bug happened in my server, my laptop, so as #6906 , but it won't happen in the google colab. I work for it for days, even I load the datasets from local path, it can Generatingโ€‡trainโ€‡split and validation split but bug happen again in testโ€‡split. ### Steps to reproduce the bug from datasets import load_dataset, load_metric, Audio common_voice_train = load_dataset("mozilla-foundation/common_voice_7_0", "ja", split="train", token=selftoken, trust_remote_code=True) ### Expected behavior ``` { "name": "ValueError", "message": "Instruction \"train\" corresponds to no data!", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[2], line 3 1 from datasets import load_dataset, load_metric, Audio ----> 3 common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\",token='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')#,trust_remote_code=True)#,streaming=True) 4 common_voice_test = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"test\",token='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')#,trust_remote_code=True)#,streaming=True) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\load.py:2626, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2622 # Build dataset for splits 2623 keep_in_memory = ( 2624 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2625 ) -> 2626 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2627 # Rename and cast features to match task schema 2628 if task is not None: 2629 # To avoid issuing the same warning twice File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1266, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1263 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1265 # Create a dataset for each of the given splits -> 1266 datasets = map_nested( 1267 partial( 1268 self._build_single_dataset, 1269 run_post_process=run_post_process, 1270 verification_mode=verification_mode, 1271 in_memory=in_memory, 1272 ), 1273 split, 1274 map_tuple=True, 1275 disable_tqdm=True, 1276 ) 1277 if isinstance(datasets, dict): 1278 datasets = DatasetDict(datasets) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\utils\\py_utils.py:484, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 482 if batched: 483 data_struct = [data_struct] --> 484 mapped = function(data_struct) 485 if batched: 486 mapped = mapped[0] File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1296, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1293 split = Split(split) 1295 # Build base dataset -> 1296 ds = self._as_dataset( 1297 split=split, 1298 in_memory=in_memory, 1299 
) 1300 if run_post_process: 1301 for resource_file_name in self._post_processing_resources(split).values(): File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1370, in DatasetBuilder._as_dataset(self, split, in_memory) 1368 if self._check_legacy_cache(): 1369 dataset_name = self.name -> 1370 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1371 name=dataset_name, 1372 instructions=split, 1373 split_infos=self.info.splits.values(), 1374 in_memory=in_memory, 1375 ) 1376 fingerprint = self._get_dataset_fingerprint(split) 1377 return Dataset(fingerprint=fingerprint, **dataset_kwargs) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\arrow_reader.py:256, in BaseReader.read(self, name, instructions, split_infos, in_memory) 254 msg = f'Instruction \"{instructions}\" corresponds to no data!' 255 #msg = f'Instruction \"{self._path}\",\"{name}\",\"{instructions}\",\"{split_infos}\" corresponds to no data!' --> 256 raise ValueError(msg) 257 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ValueError: Instruction \"train\" corresponds to no data!" } ``` ### Environment info Environment: python 3.9 windows 11 pro VScode+jupyter
closed
https://github.com/huggingface/datasets/issues/6982
2024-06-19T08:07:16
2024-07-08T06:20:16
2024-07-08T06:20:16
{ "login": "cybest0608", "id": 17721894, "type": "User" }
[]
false
[]
2,361,520,022
6,981
Update docs on trust_remote_code defaults to False
Update docs on trust_remote_code defaults to False. The docs needed to be updated due to this PR: - #6954
closed
https://github.com/huggingface/datasets/pull/6981
2024-06-19T07:12:21
2024-06-19T14:32:59
2024-06-19T14:26:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,360,909,930
6,980
Support NumPy 2.0
### Feature request Support NumPy 2.0. ### Motivation NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API. Besides that, NumPy 2 provides a cleaner interface than NumPy 1. ### Tasks NumPy 2.0 was released for testing so that libraries could ensure compatibility [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755). What needs to be done for HuggingFace to support NumPy 2? - [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976 - [ ] Remove [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991
closed
https://github.com/huggingface/datasets/issues/6980
2024-06-18T23:30:22
2024-07-12T12:04:54
2024-07-12T12:04:53
{ "login": "NeilGirdhar", "id": 730137, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,360,175,363
6,979
How can I load partial parquet files only?
I have a HUGE dataset of about 14TB, and I am unable to download all the parquet files; I only want about 100 of them. dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet") How can I load only shards 000 - 100 out of all 00314 of them? I searched the whole net and didn't find a solution, **this is stupid if they don't support it, and I swear I won't use stupid parquet any more**
closed
https://github.com/huggingface/datasets/issues/6979
2024-06-18T15:44:16
2024-06-21T17:09:32
2024-06-21T13:32:50
{ "login": "lucasjinreal", "id": 21303438, "type": "User" }
[]
false
[]
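For the shard-selection question above, `data_files` accepts explicit lists of files (or glob patterns) relative to the repository root, so one option is to enumerate only the shards you want. A minimal sketch, assuming the repo follows the `data/train-XXXXX-of-00314.parquet` naming hinted at in the issue; the repo id below is a placeholder:

```python
from datasets import load_dataset

# Only the first 100 shards out of 314 are listed, so only those are downloaded.
shards = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]
dataset = load_dataset("user/huge-dataset", data_files={"train": shards}, split="train")
```

A glob such as `data_files="data/train-000*-of-00314.parquet"` selects the same first 100 shards, and adding `streaming=True` avoids downloading anything up front.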
2,359,511,469
6,978
Fix regression for pandas < 2.0.0 in JSON loader
A regression was introduced for pandas < 2.0.0 in PR: - #6914 As described in pandas docs, the `dtype_backend` parameter was first added in pandas 2.0.0: https://pandas.pydata.org/docs/reference/api/pandas.read_json.html This PR fixes the regression by passing (or not) the `dtype_backend` parameter depending on pandas version. Maybe, in a future 3.0 `datasets` release, we could just require pandas > 2.0. Reported by: - #6977
closed
https://github.com/huggingface/datasets/pull/6978
2024-06-18T10:26:34
2024-06-19T06:23:24
2024-06-19T05:50:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
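The fix described above amounts to forwarding `dtype_backend` only when the installed pandas supports it. A hedged sketch of that version gate (not the actual PR diff; the file path is a placeholder):

```python
import pandas as pd
from packaging import version

# `dtype_backend` only exists in pandas >= 2.0, so older versions must not receive it.
read_json_kwargs = {}
if version.parse(pd.__version__) >= version.parse("2.0.0"):
    read_json_kwargs["dtype_backend"] = "pyarrow"

with open("data.json", "r", encoding="utf-8") as f:
    df = pd.read_json(f, **read_json_kwargs)
```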
2,359,295,045
6,977
load json file error with v2.20.0
### Describe the bug ``` load_dataset(path="json", data_files="./test.json") ``` ``` Generating train split: 0 examples [00:00, ? examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables pa_table = paj.read_json( File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1997, in _prepare_split_single for _, table in generator: File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 155, in _generate_tables df = pd.read_json(f, dtype_backend="pyarrow") File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) TypeError: read_json() got an unexpected keyword argument 'dtype_backend' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/app/t1.py", line 11, in <module> load_dataset(path=data_path, data_files="./t2.json") File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2616, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1029, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1124, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1884, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 2040, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` ``` import pandas as pd with open("./test.json", "r") as f: df = pd.read_json(f, dtype_backend="pyarrow") ``` ``` Traceback (most recent call last): File "/app/t3.py", line 3, in <module> df = pd.read_json(f, dtype_backend="pyarrow") File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) TypeError: read_json() got an unexpected keyword argument 'dtype_backend' ``` ### Steps to reproduce the bug . ### Expected behavior . ### Environment info ``` datasets 2.20.0 pandas 1.5.3 ```
closed
https://github.com/huggingface/datasets/issues/6977
2024-06-18T08:41:01
2024-06-18T10:06:10
2024-06-18T10:06:09
{ "login": "xiaoyaolangzhi", "id": 15037766, "type": "User" }
[]
false
[]
2,357,107,203
6,976
Ensure compatibility with numpy 2.0.0
Following the migration guide, `copy=False` is no longer required and will result in an error: https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword. The following fix should resolve the issue. The error was found during testing on the MTEB repository, e.g. [here](https://github.com/embeddings-benchmark/mteb/pull/938)
closed
https://github.com/huggingface/datasets/pull/6976
2024-06-17T11:29:22
2024-06-19T14:30:32
2024-06-19T14:04:34
{ "login": "KennethEnevoldsen", "id": 23721977, "type": "User" }
[]
true
[]
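For context on the `copy` keyword change above: under NumPy 2, `np.array(..., copy=False)` raises whenever a copy cannot be avoided, and the migration guide recommends `np.asarray` instead. A minimal illustration of the pattern (not the actual diff in this PR):

```python
import numpy as np

x = [1, 2, 3]

# Old idiom: "avoid a copy if possible". Under NumPy 2 this raises a ValueError
# because converting a Python list always requires a copy.
# arr = np.array(x, copy=False)

# NumPy-2-compatible equivalent: copies only when needed, never raises for that reason.
arr = np.asarray(x)
```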
2,357,003,959
6,975
Set temporary numpy upper version < 2.0.0 to fix CI
Set temporary numpy upper version < 2.0.0 to fix CI. See: https://github.com/huggingface/datasets/actions/runs/9546031216/job/26308072017 ``` A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2. ```
closed
https://github.com/huggingface/datasets/pull/6975
2024-06-17T10:36:54
2024-06-17T12:49:53
2024-06-17T12:43:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,355,517,362
6,973
IndexError during training with Squad dataset and T5-small model
### Describe the bug I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility. ### Steps to reproduce the bug 1.Install the required libraries: !pip install transformers datasets 2.Run the following code: !pip install transformers datasets import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, TrainingArguments, Trainer, DataCollatorWithPadding # Load a small, publicly available dataset from datasets import load_dataset dataset = load_dataset("squad", split="train[:100]") # Use a small subset for testing # Load a pre-trained model and tokenizer model_name = "t5-small" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) # Define a basic data collator data_collator = DataCollatorWithPadding(tokenizer=tokenizer) # Define training arguments training_args = TrainingArguments( output_dir="./results", per_device_train_batch_size=2, num_train_epochs=1, ) # Create a trainer trainer = Trainer( model=model, args=training_args, train_dataset=dataset, data_collator=data_collator, ) # Train the model trainer.train() ### Expected behavior --------------------------------------------------------------------------- IndexError Traceback (most recent call last) [<ipython-input-23-f13a4b23c001>](https://localhost:8080/#) in <cell line: 34>() 32 33 # Train the model ---> 34 trainer.train() 10 frames [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size) 427 if isinstance(key, int): 428 if (key < 0 and key + size < 0) or (key >= size): --> 429 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") 430 return 431 elif isinstance(key, slice): IndexError: Invalid key: 42 is out of bounds for size 0 ### Environment info transformers version:4.41.2 datasets version:1.18.4 Python version:3.10.12
closed
https://github.com/huggingface/datasets/issues/6973
2024-06-16T07:53:54
2024-07-01T11:25:40
2024-07-01T11:25:40
{ "login": "ramtunguturi36", "id": 151521233, "type": "User" }
[]
false
[]
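The `Invalid key: 42 is out of bounds for size 0` error above is the usual symptom of handing `Trainer` a raw, untokenized dataset: none of the Squad columns match the model's forward arguments, so the default `remove_unused_columns=True` drops every column and the table ends up empty. A hedged sketch of the missing preprocessing step, reusing `dataset` and `model` from the snippet above (the prompt format and max lengths are assumptions):

```python
from transformers import AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")

def preprocess(example):
    # Turn one Squad row into a text-to-text pair for T5.
    model_inputs = tokenizer(
        "question: " + example["question"] + " context: " + example["context"],
        truncation=True, max_length=512,
    )
    answer = example["answers"]["text"][0] if example["answers"]["text"] else ""
    model_inputs["labels"] = tokenizer(text_target=answer, truncation=True, max_length=32)["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
# Pass `tokenized` and this collator to Trainer instead of the raw dataset.
```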
2,353,531,912
6,972
Fix webdataset pickling
...by making tracked iterables picklable. This is important to make streaming datasets compatible with multiprocessing, e.g. for parallel data loading.
closed
https://github.com/huggingface/datasets/pull/6972
2024-06-14T14:43:02
2024-06-14T15:43:43
2024-06-14T15:37:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
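To see what the pickling fix above enables: a streaming dataset has to be pickled so it can be shipped to each `DataLoader` worker process, which only works once its tracked iterables are picklable. A small usage sketch (the dataset name is a placeholder):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ids = load_dataset("user/some-webdataset", split="train", streaming=True)
# Each of the 4 workers receives a pickled copy of the streaming dataset.
loader = DataLoader(ids, batch_size=8, num_workers=4)
for batch in loader:
    break
```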
2,351,830,856
6,971
packaging: Remove useless dependencies
Revert changes in #6396 and #6404. CVE-2023-47248 has been fixed since PyArrow v14.0.1. Meanwhile, the Python requirements already require `pyarrow>=15.0.0`.
closed
https://github.com/huggingface/datasets/pull/6971
2024-06-13T18:43:43
2024-06-14T14:03:34
2024-06-14T13:57:24
{ "login": "daskol", "id": 9336514, "type": "User" }
[]
true
[]
2,351,380,029
6,970
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/6970
2024-06-13T14:59:45
2024-06-13T15:06:18
2024-06-13T14:59:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,351,351,436
6,969
Release: 2.20.0
null
closed
https://github.com/huggingface/datasets/pull/6969
2024-06-13T14:48:20
2024-06-13T15:04:39
2024-06-13T14:55:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,351,331,417
6,968
Use `HF_HUB_OFFLINE` instead of `HF_DATASETS_OFFLINE`
To use `datasets` offline, one can use the `HF_DATASETS_OFFLINE` environment variable. This PR makes `HF_HUB_OFFLINE` the recommended environment variable for offline training. Goal is to be more consistent with the rest of HF ecosystem and have a single config value to set. The changes are backward-compatible meaning that: - `HF_DATASETS_OFFLINE` environment is still taken into account, though not documented - `datasets.config.HF_DATASETS_OFFLINE` still exists, though it is not used anymore (in favor of `datasets.config.HF_HUB_OFFLINE`) **Note:** it might break things in downstream libraries if they were monkeypatching `datasets.config.HF_DATASETS_OFFLINE` in their CI tests (for instance). Not much of a problem IMO.
closed
https://github.com/huggingface/datasets/pull/6968
2024-06-13T14:39:40
2024-06-13T17:31:37
2024-06-13T17:25:37
{ "login": "Wauplin", "id": 11801849, "type": "User" }
[]
true
[]
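Following the change described above, a single variable now switches the whole HF stack to offline mode. A minimal sketch, assuming the dataset is already in the local cache:

```python
import os

# Must be set before importing datasets / huggingface_hub so the config picks it up;
# equivalent to exporting HF_HUB_OFFLINE=1 in the shell before launching the script.
os.environ["HF_HUB_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("squad", split="train")  # served from the cache, no Hub requests
```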
2,349,146,398
6,967
Method to load Laion400m
### Feature request Large datasets like Laion400m are provided as embeddings. The methods provided by load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99 ### Motivation Trial and experimentation are the key pivot of HF. It would be great if HF could load embedding files seamlessly. ### Your contribution I can write the loader with some help.
open
https://github.com/huggingface/datasets/issues/6967
2024-06-12T16:04:04
2024-06-12T16:04:04
null
{ "login": "humanely", "id": 6862868, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
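Nothing in `load_dataset` targets `.npy` shards directly, but the embeddings asked for above can be pulled in with a small generator. A hedged sketch, assuming local files named `img_emb_0.npy` to `img_emb_99.npy` that each hold a 2-D float array:

```python
import numpy as np
from datasets import Dataset

def gen():
    for i in range(100):
        embeddings = np.load(f"img_emb_{i}.npy")  # shape: (num_rows, dim)
        for row in embeddings:
            yield {"embedding": row}

ds = Dataset.from_generator(gen)
```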
2,348,934,466
6,966
Remove underlines between badges
## Before: <img width="935" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/93666e72-059b-4180-9e1d-ff176a3d9dac"> ## After: <img width="956" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/75df7c3e-f473-44f0-a872-eeecf6a85fe2">
closed
https://github.com/huggingface/datasets/pull/6966
2024-06-12T14:32:11
2024-06-19T14:16:21
2024-06-19T14:10:11
{ "login": "andrewhong04", "id": 35881688, "type": "User" }
[]
true
[]