| id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string, 2 classes) | comments (int64) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | user_login (string) | labels (list) | body (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,818,703,725 | https://api.github.com/repos/huggingface/datasets/issues/6064 | https://github.com/huggingface/datasets/pull/6064 | 6,064 | set dev version | closed | 3 | 2023-07-24T15:56:00 | 2023-07-24T16:05:19 | 2023-07-24T15:56:10 | lhoestq | [] | null | true |
1,818,679,485 | https://api.github.com/repos/huggingface/datasets/issues/6063 | https://github.com/huggingface/datasets/pull/6063 | 6,063 | Release: 2.14.0 | closed | 4 | 2023-07-24T15:41:19 | 2023-07-24T16:05:16 | 2023-07-24T15:47:51 | lhoestq | [] | null | true |
1,818,341,584 | https://api.github.com/repos/huggingface/datasets/issues/6062 | https://github.com/huggingface/datasets/pull/6062 | 6,062 | Improve `Dataset.from_list` docstring | closed | 4 | 2023-07-24T12:36:38 | 2023-07-24T14:43:48 | 2023-07-24T14:34:43 | mariosasko | [] | null | true |
1,818,337,136 | https://api.github.com/repos/huggingface/datasets/issues/6061 | https://github.com/huggingface/datasets/pull/6061 | 6,061 | Dill 3.7 support | closed | 5 | 2023-07-24T12:33:58 | 2023-07-24T14:13:20 | 2023-07-24T14:04:36 | mariosasko | [] | Adds support for dill 3.7. | true |
1,816,614,120 | https://api.github.com/repos/huggingface/datasets/issues/6060 | https://github.com/huggingface/datasets/issues/6060 | 6,060 | Dataset.map() execute twice when in PyTorch DDP mode | closed | 4 | 2023-07-22T05:06:43 | 2024-01-22T18:35:12 | 2024-01-22T18:35:12 | wanghaoyucn | [] | ### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training. And write the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick about using `torch.distributed.barrier()` to only execute map at the main process doesn't always work. W... | false |
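The barrier trick referenced in issue 6060 above can be sketched as follows. This is a minimal illustration of the pattern described in the linked docs, with `rank` and `barrier` injected as plain parameters for clarity (in a real DDP run they would come from `torch.distributed`); it is not the library's internals, and the issue reports it does not always prevent a second execution.

```python
# Sketch of the documented pattern: only the main process runs the expensive
# map; the other ranks wait at a barrier and then call map again, which should
# hit the cache written by rank 0. `rank` and `barrier` are stand-ins here;
# in a real run use torch.distributed.get_rank() and torch.distributed.barrier().

def map_on_main_first(dataset, fn, rank, barrier):
    if rank != 0:
        barrier()             # non-main ranks wait until rank 0 has mapped
    mapped = dataset.map(fn)  # rank 0 computes; other ranks load the cache
    if rank == 0:
        barrier()             # rank 0 releases the waiting ranks
    return mapped
```

If the cache fingerprint differs across ranks (as the issue suggests can happen), every rank ends up recomputing despite the barrier.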
1,816,537,176 | https://api.github.com/repos/huggingface/datasets/issues/6059 | https://github.com/huggingface/datasets/issues/6059 | 6,059 | Provide ability to load label mappings from file | open | 3 | 2023-07-22T02:04:19 | 2024-04-16T08:07:55 | null | david-waterworth | ["enhancement"] | ### Feature request
My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification of a handful of labels but ideally there would be... | false |
1,815,131,397 | https://api.github.com/repos/huggingface/datasets/issues/6058 | https://github.com/huggingface/datasets/issues/6058 | 6,058 | laion-coco download error | closed | 1 | 2023-07-21T04:24:15 | 2023-07-22T01:42:06 | 2023-07-22T01:42:06 | yangyijune | [] | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was de
precated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no... | false |
1,815,100,151 | https://api.github.com/repos/huggingface/datasets/issues/6057 | https://github.com/huggingface/datasets/issues/6057 | 6,057 | Why is the speed difference of gen example so big? | closed | 1 | 2023-07-21T03:34:49 | 2023-10-04T18:06:16 | 2023-10-04T18:06:15 | pixeli99 | [] | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('tex... | false |
1,815,086,963 | https://api.github.com/repos/huggingface/datasets/issues/6056 | https://github.com/huggingface/datasets/pull/6056 | 6,056 | Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded | open | 6 | 2023-07-21T03:13:21 | 2023-08-17T08:26:53 | null | AntreasAntoniou | [] | Context: issue #5990
In order to implement the checkpointing, I introduce a metadata folder that keeps one yaml file for each set that one is uploading. This yaml keeps track of what shards have already been uploaded, and which one the idx of the latest one was. Using this information I am then able to easily get th... | true |
1,813,524,145 | https://api.github.com/repos/huggingface/datasets/issues/6055 | https://github.com/huggingface/datasets/issues/6055 | 6,055 | Fix host URL in The Pile datasets | open | 0 | 2023-07-20T09:08:52 | 2023-07-20T09:09:37 | null | nickovchinnikov | [] | ### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets. But both URLs are not working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSCo... | false |
1,813,271,304 | https://api.github.com/repos/huggingface/datasets/issues/6054 | https://github.com/huggingface/datasets/issues/6054 | 6,054 | Multi-processed `Dataset.map` slows down a lot when `import torch` | closed | 1 | 2023-07-20T06:36:14 | 2023-07-21T15:19:37 | 2023-07-21T15:19:37 | ShinoharaHare | ["duplicate"] | ### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the speed slows down much if I add `import torch` to the start of the script even though I don't use it.
I'm not sure if it's `torch` only or if any other package that is "large" will also cause the same result.
BTW, `import lightning` also slows i... | false |
1,812,635,902 | https://api.github.com/repos/huggingface/datasets/issues/6053 | https://github.com/huggingface/datasets/issues/6053 | 6,053 | Change package name from "datasets" to something less generic | closed | 2 | 2023-07-19T19:53:28 | 2024-11-20T21:22:36 | 2023-10-03T16:04:09 | jack-jjm | ["enhancement"] | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have n... | false |
1,812,145,100 | https://api.github.com/repos/huggingface/datasets/issues/6052 | https://github.com/huggingface/datasets/pull/6052 | 6,052 | Remove `HfFileSystem` and deprecate `S3FileSystem` | closed | 10 | 2023-07-19T15:00:01 | 2023-07-19T17:39:11 | 2023-07-19T17:27:17 | mariosasko | [] | Remove the legacy `HfFileSystem` and deprecate `S3FileSystem`
cc @philschmid for the SageMaker scripts/notebooks that still use `datasets`' `S3FileSystem` | true |
1,811,549,650 | https://api.github.com/repos/huggingface/datasets/issues/6051 | https://github.com/huggingface/datasets/issues/6051 | 6,051 | Skipping shard in the remote repo and resume upload | closed | 2 | 2023-07-19T09:25:26 | 2023-07-20T18:16:01 | 2023-07-20T18:16:00 | rs9000 | [] | ### Describe the bug
For some reason when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume the uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enume... | false |
1,810,378,706 | https://api.github.com/repos/huggingface/datasets/issues/6049 | https://github.com/huggingface/datasets/pull/6049 | 6,049 | Update `ruff` version in pre-commit config | closed | 2 | 2023-07-18T17:13:50 | 2023-12-01T14:26:19 | 2023-12-01T14:26:19 | polinaeterna | [] | so that it corresponds to the one that is being run in CI | true |
1,809,629,346 | https://api.github.com/repos/huggingface/datasets/issues/6048 | https://github.com/huggingface/datasets/issues/6048 | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | closed | 1 | 2023-07-18T10:16:34 | 2023-07-18T16:18:39 | 2023-07-18T16:18:39 | yangy1992 | [] | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
When I run the code above, I get the error below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/... | false |
1,809,627,947 | https://api.github.com/repos/huggingface/datasets/issues/6047 | https://github.com/huggingface/datasets/pull/6047 | 6,047 | Bump dev version | closed | 3 | 2023-07-18T10:15:39 | 2023-07-18T10:28:01 | 2023-07-18T10:15:52 | lhoestq | [] | workaround to fix an issue with transformers CI
https://github.com/huggingface/transformers/pull/24867#discussion_r1266519626 | true |
1,808,154,414 | https://api.github.com/repos/huggingface/datasets/issues/6046 | https://github.com/huggingface/datasets/issues/6046 | 6,046 | Support proxy and user-agent in fsspec calls | open | 10 | 2023-07-17T16:39:26 | 2025-06-26T18:26:27 | null | lhoestq | ["enhancement", "good second issue"] | Since we switched to the new HfFileSystem we no longer apply user's proxy and user-agent.
Using the HTTP_PROXY and HTTPS_PROXY environment variables works though since we use aiohttp to call the HF Hub.
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
Though ideally the `HfFileSystem`... | false |
1,808,072,270 | https://api.github.com/repos/huggingface/datasets/issues/6045 | https://github.com/huggingface/datasets/pull/6045 | 6,045 | Check if column names match in Parquet loader only when config `features` are specified | closed | 8 | 2023-07-17T15:50:15 | 2023-07-24T14:45:56 | 2023-07-24T14:35:03 | mariosasko | [] | Fix #6039 | true |
1,808,057,906 | https://api.github.com/repos/huggingface/datasets/issues/6044 | https://github.com/huggingface/datasets/pull/6044 | 6,044 | Rename "pattern" to "path" in YAML data_files configs | closed | 10 | 2023-07-17T15:41:16 | 2023-07-19T16:59:55 | 2023-07-19T16:48:06 | lhoestq | [] | To make it easier to understand for users.
They can use "path" to specify a single path, <s>or "paths" to use a list of paths.</s>
Glob patterns are still supported though
| true |
1,807,771,750 | https://api.github.com/repos/huggingface/datasets/issues/6043 | https://github.com/huggingface/datasets/issues/6043 | 6,043 | Compression kwargs have no effect when saving datasets as csv | open | 3 | 2023-07-17T13:19:21 | 2023-07-22T17:34:18 | null | exs-avianello | [] | ### Describe the bug
Attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()` that get piped to pandas' `pandas.DataFrame.to_csv` do not have any effect, resulting in the dataset not getting compressed.
A warning is raised if explicitly providing a `compression` kwarg, ... | false |
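While the bug in issue 6043 above stands, one hedged workaround sketch is to go through pandas directly, where the `compression` kwarg is honored. The frame below is stand-in data; in the issue's setting it would come from `Dataset.to_pandas()`.

```python
import os
import tempfile

import pandas as pd

# Stand-in frame; with datasets this would be ds.to_pandas().
df = pd.DataFrame({"text": ["a", "b"]})

path = os.path.join(tempfile.mkdtemp(), "out.csv.gz")
df.to_csv(path, index=False, compression="gzip")  # honored by pandas

# The resulting file starts with the gzip magic bytes, confirming that
# compression was actually applied.
with open(path, "rb") as f:
    magic = f.read(2)
```

Reading it back with `pd.read_csv(path)` decompresses transparently, since pandas infers the codec from the `.gz` extension.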
1,807,516,762 | https://api.github.com/repos/huggingface/datasets/issues/6042 | https://github.com/huggingface/datasets/pull/6042 | 6,042 | Fix unused DatasetInfosDict code in push_to_hub | closed | 3 | 2023-07-17T11:03:09 | 2023-07-18T16:17:52 | 2023-07-18T16:08:42 | lhoestq | [] | null | true |
1,807,441,055 | https://api.github.com/repos/huggingface/datasets/issues/6041 | https://github.com/huggingface/datasets/pull/6041 | 6,041 | Flatten repository_structure docs on yaml | closed | 3 | 2023-07-17T10:15:10 | 2023-07-17T10:24:51 | 2023-07-17T10:16:22 | lhoestq | [] | To have Splits, Configurations and Builder parameters at the same doc level | true |
1,807,410,238 | https://api.github.com/repos/huggingface/datasets/issues/6040 | https://github.com/huggingface/datasets/pull/6040 | 6,040 | Fix legacy_dataset_infos | closed | 3 | 2023-07-17T09:56:21 | 2023-07-17T10:24:34 | 2023-07-17T10:16:03 | lhoestq | [] | was causing transformers CI to fail
https://circleci.com/gh/huggingface/transformers/855105 | true |
1,806,508,451 | https://api.github.com/repos/huggingface/datasets/issues/6039 | https://github.com/huggingface/datasets/issues/6039 | 6,039 | Loading column subset from parquet file produces error since version 2.13 | closed | 0 | 2023-07-16T09:13:07 | 2023-07-24T14:35:04 | 2023-07-24T14:35:04 | kklemon | [] | ### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in ... | false |
1,805,960,244 | https://api.github.com/repos/huggingface/datasets/issues/6038 | https://github.com/huggingface/datasets/issues/6038 | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | closed | 1 | 2023-07-15T07:58:08 | 2023-07-24T11:54:15 | 2023-07-24T11:54:15 | BaiMeiyingxue | [] | Hi, I use the code below to load local file
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configurati... | false |
1,805,887,184 | https://api.github.com/repos/huggingface/datasets/issues/6037 | https://github.com/huggingface/datasets/issues/6037 | 6,037 | Documentation links to examples are broken | closed | 2 | 2023-07-15T04:54:50 | 2023-07-17T22:35:14 | 2023-07-17T15:10:32 | david-waterworth | [] | ### Describe the bug
The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data ... | false |
1,805,138,898 | https://api.github.com/repos/huggingface/datasets/issues/6036 | https://github.com/huggingface/datasets/pull/6036 | 6,036 | Deprecate search API | open | 9 | 2023-07-14T16:22:09 | 2023-09-07T16:44:32 | null | mariosasko | [] | The Search API only supports Faiss and ElasticSearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support ElasticSeach 8.0, difficult testing, ...), does not have the best design (adds a bunch of methods to the `Dataset` class that are only useful after creating an index), the usage doesn... | true |
1,805,087,687 | https://api.github.com/repos/huggingface/datasets/issues/6035 | https://github.com/huggingface/datasets/pull/6035 | 6,035 | Dataset representation | open | 1 | 2023-07-14T15:42:37 | 2023-07-19T19:41:35 | null | Ganryuu | [] | __repr__ and _repr_html_ now both are similar to that of Polars | true |
1,804,501,361 | https://api.github.com/repos/huggingface/datasets/issues/6034 | https://github.com/huggingface/datasets/issues/6034 | 6,034 | load_dataset hangs on WSL | closed | 3 | 2023-07-14T09:03:10 | 2023-07-14T14:48:29 | 2023-07-14T14:48:29 | Andy-Zhou2 | [] | ### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available so I am not... | false |
1,804,482,051 | https://api.github.com/repos/huggingface/datasets/issues/6033 | https://github.com/huggingface/datasets/issues/6033 | 6,033 | `map` function doesn't fully utilize `input_columns`. | closed | 0 | 2023-07-14T08:49:28 | 2023-07-14T09:16:04 | 2023-07-14T09:16:04 | kwonmha | [] | ### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select co... | false |
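For readers hitting the same confusion as issue 6033 above: `input_columns` only restricts what the mapped function *receives*; the returned dataset still carries every original column unless `remove_columns` is also passed. A pure-Python toy of those semantics (not the `datasets` internals, just an illustration):

```python
# Toy model of Dataset.map semantics: input_columns selects the arguments fed
# to fn; the original columns are still kept in the output row, and only
# remove_columns actually drops columns.

def toy_map(rows, fn, input_columns, remove_columns=()):
    out = []
    for row in rows:
        update = fn(*(row[c] for c in input_columns))  # fn sees only these
        merged = {**row, **update}                     # originals are kept
        for c in remove_columns:                       # explicit drop needed
            merged.pop(c, None)
        out.append(merged)
    return out
```

So to end up with only `["a", "d"]`-derived data, the other columns have to be listed in `remove_columns` in addition to setting `input_columns`.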
1,804,358,679 | https://api.github.com/repos/huggingface/datasets/issues/6032 | https://github.com/huggingface/datasets/issues/6032 | 6,032 | DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info | open | 5 | 2023-07-14T07:22:55 | 2023-09-11T13:50:41 | null | codingl2k1 | [] | ### Describe the bug
```python
download_config = DownloadConfig(proxies={'https': '<my proxy>'})
builder = load_dataset_builder(..., download_config=download_config)
```
But, when getting the dataset_info from HfApi, the http requests not using the proxies.
### Steps to reproduce the bug
1. Setup proxies i... | false |
1,804,183,858 | https://api.github.com/repos/huggingface/datasets/issues/6031 | https://github.com/huggingface/datasets/issues/6031 | 6,031 | Argument type for map function changes when using `input_columns` for `IterableDataset` | closed | 1 | 2023-07-14T05:11:14 | 2023-07-14T14:44:15 | 2023-07-14T14:44:15 | kwonmha | [] | ### Describe the bug
I wrote `tokenize(examples)` function as an argument for `map` function for `IterableDataset`.
It processes a dictionary-type `examples` parameter.
It is used in `train_dataset = train_dataset.map(tokenize, batched=True)`
No error is raised.
And then, I found some unnecessary keys and val... | false |
1,803,864,744 | https://api.github.com/repos/huggingface/datasets/issues/6030 | https://github.com/huggingface/datasets/pull/6030 | 6,030 | fixed typo in comment | closed | 2 | 2023-07-13T22:49:57 | 2023-07-14T14:21:58 | 2023-07-14T14:13:38 | NightMachinery | [] | This mistake was a bit confusing, so I thought it was worth sending a PR over. | true |
1,803,460,046 | https://api.github.com/repos/huggingface/datasets/issues/6029 | https://github.com/huggingface/datasets/pull/6029 | 6,029 | [docs] Fix link | closed | 3 | 2023-07-13T17:24:12 | 2023-07-13T17:47:41 | 2023-07-13T17:38:59 | stevhliu | [] | Fixes link to the builder classes :) | true |
1,803,294,981 | https://api.github.com/repos/huggingface/datasets/issues/6028 | https://github.com/huggingface/datasets/pull/6028 | 6,028 | Use new hffs | closed | 13 | 2023-07-13T15:41:44 | 2023-07-17T17:09:39 | 2023-07-17T17:01:00 | lhoestq | [] | Thanks to @janineguo 's work in https://github.com/huggingface/datasets/pull/5919 which was needed to support HfFileSystem.
Switching to `HfFileSystem` will help implementing optimization in data files resolution
## Implementation details
I replaced all the from_hf_repo and from_local_or_remote in data_files.p... | true |
1,803,008,486 | https://api.github.com/repos/huggingface/datasets/issues/6027 | https://github.com/huggingface/datasets/pull/6027 | 6,027 | Delete `task_templates` in `IterableDataset` when they are no longer valid | closed | 3 | 2023-07-13T13:16:17 | 2023-07-13T14:06:20 | 2023-07-13T13:57:35 | mariosasko | [] | Fix #6025 | true |
1,802,929,222 | https://api.github.com/repos/huggingface/datasets/issues/6026 | https://github.com/huggingface/datasets/pull/6026 | 6,026 | Fix style with ruff 0.0.278 | closed | 3 | 2023-07-13T12:34:24 | 2023-07-13T12:46:26 | 2023-07-13T12:37:01 | lhoestq | [] | null | true |
1,801,852,601 | https://api.github.com/repos/huggingface/datasets/issues/6025 | https://github.com/huggingface/datasets/issues/6025 | 6,025 | Using a dataset for a use other than it was intended for. | closed | 1 | 2023-07-12T22:33:17 | 2023-07-13T13:57:36 | 2023-07-13T13:57:36 | surya-narayanan | [] | ### Describe the bug
Hi, I want to use the rotten tomatoes dataset but for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be there in the dataset for some reason?
Here is the full stacktra... | false |
1,801,708,808 | https://api.github.com/repos/huggingface/datasets/issues/6024 | https://github.com/huggingface/datasets/pull/6024 | 6,024 | Don't reference self in Spark._validate_cache_dir | closed | 4 | 2023-07-12T20:31:16 | 2023-07-13T16:58:32 | 2023-07-13T12:37:09 | maddiedawson | [] | Fix for https://github.com/huggingface/datasets/issues/5963 | true |
1,801,272,420 | https://api.github.com/repos/huggingface/datasets/issues/6023 | https://github.com/huggingface/datasets/pull/6023 | 6,023 | Fix `ClassLabel` min max check for `None` values | closed | 3 | 2023-07-12T15:46:12 | 2023-07-12T16:29:26 | 2023-07-12T16:18:04 | mariosasko | [] | Fix #6022 | true |
1,800,092,589 | https://api.github.com/repos/huggingface/datasets/issues/6022 | https://github.com/huggingface/datasets/issues/6022 | 6,022 | Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int' | closed | 1 | 2023-07-12T03:20:17 | 2023-07-12T16:18:06 | 2023-07-12T16:18:05 | codingl2k1 | [] | ### Describe the bug
When mapping some datasets with `batched=True`, datasets may raise an exception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
... | false |
1,799,785,904 | https://api.github.com/repos/huggingface/datasets/issues/6021 | https://github.com/huggingface/datasets/pull/6021 | 6,021 | [docs] Update return statement of index search | closed | 2 | 2023-07-11T21:33:32 | 2023-07-12T17:13:02 | 2023-07-12T17:03:00 | stevhliu | [] | Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https://github.com/huggingface/transformers/issues/24739) and internal Slack [convo](https://huggingface.slack.com/archives/C01229B19EX/p1689105179711689)), and fixes the formatting because multiple return ... | true |
1,799,720,536 | https://api.github.com/repos/huggingface/datasets/issues/6020 | https://github.com/huggingface/datasets/issues/6020 | 6,020 | Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs | open | 4 | 2023-07-11T20:40:38 | 2024-10-27T06:30:13 | null | kheyer | [] | ### Describe the bug
I'm using a dataset with map and multiprocessing to run a function that returned a variable length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dat... | false |
1,799,532,822 | https://api.github.com/repos/huggingface/datasets/issues/6019 | https://github.com/huggingface/datasets/pull/6019 | 6,019 | Improve logging | closed | 13 | 2023-07-11T18:30:23 | 2023-07-12T19:34:14 | 2023-07-12T17:19:28 | mariosasko | [] | Adds the StreamHandler (as `hfh` and `transformers` do) to the library's logger to log INFO messages and logs the messages about "loading a cached result" (and some other warnings) as INFO
(Also removes the `leave=False` arg in the progress bars to be consistent with `hfh` and `transformers` - progress bars serve as... | true |
1,799,411,999 | https://api.github.com/repos/huggingface/datasets/issues/6018 | https://github.com/huggingface/datasets/pull/6018 | 6,018 | test1 | closed | 1 | 2023-07-11T17:25:49 | 2023-07-20T10:11:41 | 2023-07-20T10:11:41 | ognjenovicj | [] | null | true |
1,799,309,132 | https://api.github.com/repos/huggingface/datasets/issues/6017 | https://github.com/huggingface/datasets/issues/6017 | 6,017 | Switch to huggingface_hub's HfFileSystem | closed | 0 | 2023-07-11T16:24:40 | 2023-07-17T17:01:01 | 2023-07-17T17:01:01 | lhoestq | ["enhancement"] | instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919 | false |
1,798,968,033 | https://api.github.com/repos/huggingface/datasets/issues/6016 | https://github.com/huggingface/datasets/pull/6016 | 6,016 | Dataset string representation enhancement | open | 2 | 2023-07-11T13:38:25 | 2023-07-16T10:26:18 | null | Ganryuu | [] | my attempt at #6010
not sure if this is the right way to go about it, I will wait for your feedback | true |
1,798,807,893 | https://api.github.com/repos/huggingface/datasets/issues/6015 | https://github.com/huggingface/datasets/pull/6015 | 6,015 | Add metadata ui screenshot in docs | closed | 3 | 2023-07-11T12:16:29 | 2023-07-11T16:07:28 | 2023-07-11T15:56:46 | lhoestq | [] | null | true |
1,798,213,816 | https://api.github.com/repos/huggingface/datasets/issues/6014 | https://github.com/huggingface/datasets/issues/6014 | 6,014 | Request to Share/Update Dataset Viewer Code | closed | 10 | 2023-07-11T06:36:09 | 2024-07-20T07:29:08 | 2023-09-25T12:01:17 | lilyorlilypad | ["duplicate"] |
Overview:
The repository (huggingface/datasets-viewer) was recently archived and when I tried to run the code, there was the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to lack of documentation of that attribute.
Request:
I k... | false |
1,796,083,437 | https://api.github.com/repos/huggingface/datasets/issues/6013 | https://github.com/huggingface/datasets/issues/6013 | 6,013 | [FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage | open | 2 | 2023-07-10T06:42:20 | 2025-06-19T06:30:38 | null | NightMachinery | ["enhancement", "good second issue"] | ### Feature request
Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns.
### Motivation
This allows having datasets with different columns but sharing some basic columns. Currently, these datasets wou... | false |
1,795,575,432 | https://api.github.com/repos/huggingface/datasets/issues/6012 | https://github.com/huggingface/datasets/issues/6012 | 6,012 | [FR] Transform Chaining, Lazy Mapping | open | 9 | 2023-07-09T21:40:21 | 2025-01-20T14:06:28 | null | NightMachinery | ["enhancement"] | ### Feature request
Currently using a `map` call processes and duplicates the whole dataset, which takes both time and disk space.
The solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested.
The API should look ... | false |
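The lazy mapping requested in issue 6012 above can be sketched in a few lines: record the chain of transforms and apply it only on access. This is a toy model of the *requested* behavior, not an existing `datasets` API; the closest existing mechanism in the library is `Dataset.with_transform`, which applies a single format transform on the fly.

```python
# Toy sketch of the feature request: map() stores transforms instead of
# materializing them, and the chain runs only when a row is accessed.

class LazyMapped:
    def __init__(self, rows, transforms=()):
        self.rows = list(rows)
        self.transforms = list(transforms)

    def map(self, fn):
        # No work happens here; we just remember the transform.
        return LazyMapped(self.rows, self.transforms + [fn])

    def __getitem__(self, i):
        row = self.rows[i]
        for fn in self.transforms:  # applied on the fly, in order
            row = fn(row)
        return row

    def __len__(self):
        return len(self.rows)
```

Chained `map` calls then cost nothing until indexing, and nothing is duplicated on disk.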
1,795,296,568 | https://api.github.com/repos/huggingface/datasets/issues/6011 | https://github.com/huggingface/datasets/issues/6011 | 6,011 | Documentation: wiki_dpr Dataset has no metric_type for Faiss Index | closed | 2 | 2023-07-09T08:30:19 | 2023-07-11T03:02:36 | 2023-07-11T03:02:36 | YichiRockyZhang | [] | ### Describe the bug
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because ... | false |
1,793,838,152 | https://api.github.com/repos/huggingface/datasets/issues/6010 | https://github.com/huggingface/datasets/issues/6010 | 6,010 | Improve `Dataset`'s string representation | open | 3 | 2023-07-07T16:38:03 | 2023-09-01T03:45:07 | null | mariosasko | ["enhancement"] | Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit. | false |
1,792,059,808 | https://api.github.com/repos/huggingface/datasets/issues/6009 | https://github.com/huggingface/datasets/pull/6009 | 6,009 | Fix cast for dictionaries with no keys | closed | 3 | 2023-07-06T18:48:14 | 2023-07-07T14:13:00 | 2023-07-07T14:01:13 | mariosasko | [] | Fix #5677 | true |
1,789,869,344 | https://api.github.com/repos/huggingface/datasets/issues/6008 | https://github.com/huggingface/datasets/issues/6008 | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | closed | 3 | 2023-07-05T16:06:48 | 2023-07-10T13:46:39 | 2023-07-10T13:46:39 | andreemic | [] | ### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. I
Somehow it worked a few times but mostly this makes the datasets library much more ... | false |
1,789,782,693 | https://api.github.com/repos/huggingface/datasets/issues/6007 | https://github.com/huggingface/datasets/issues/6007 | 6,007 | Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset | open | 8 | 2023-07-05T15:16:50 | 2024-02-07T22:22:35 | null | silverriver | ["arrow"] | ### Describe the bug
When load a large dataset with the following code
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
We encountered the error: "OverflowError: Python int too large to convert to C long"
The error look something like... | false |
1,788,855,582 | https://api.github.com/repos/huggingface/datasets/issues/6006 | https://github.com/huggingface/datasets/issues/6006 | 6,006 | NotADirectoryError when loading gigawords | closed | 1 | 2023-07-05T06:23:41 | 2023-07-05T06:31:02 | 2023-07-05T06:31:01 | xipq | [] | ### Describe the bug
got `NotADirectoryError` when loading the gigaword dataset
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last): ... | false |
1,788,103,576 | https://api.github.com/repos/huggingface/datasets/issues/6005 | https://github.com/huggingface/datasets/pull/6005 | 6,005 | Drop Python 3.7 support | closed | 7 | 2023-07-04T15:02:37 | 2023-07-06T15:32:41 | 2023-07-06T15:22:43 | mariosasko | [] | `hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :).
(Based on the stats, it seems less than 10% of the users use `datasets` with Python 3.7) | true |
1,786,636,368 | https://api.github.com/repos/huggingface/datasets/issues/6004 | https://github.com/huggingface/datasets/pull/6004 | 6,004 | Misc improvements | closed | 4 | 2023-07-03T18:29:14 | 2023-07-06T17:04:11 | 2023-07-06T16:55:25 | mariosasko | [] | Contains the following improvements:
* fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section
* updates `Makefile` to also run the style checks on `utils` and `setup.py`
* deletes a test for GH-hosted datasets (no longer supported)
* deletes `convert_dataset.sh` (outdated... | true |
1,786,554,110 | https://api.github.com/repos/huggingface/datasets/issues/6003 | https://github.com/huggingface/datasets/issues/6003 | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | open | 0 | 2023-07-03T17:15:31 | 2023-07-03T17:15:31 | null | PonteIneptique | [] | ### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:
- `tokenize()` runs fine
- `group_text()` runs fine
... | false |
1,786,053,060 | https://api.github.com/repos/huggingface/datasets/issues/6002 | https://github.com/huggingface/datasets/pull/6002 | 6,002 | Add KLUE-MRC metrics | closed | 1 | 2023-07-03T12:11:10 | 2023-07-09T11:57:20 | 2023-07-09T11:57:20 | ingyuseong | [] | ## Metrics for KLUE-MRC (Korean Language Understanding Evaluation – Machine Reading Comprehension)
Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue).
KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.
Specifically, in the case of... | true |
1,782,516,627 | https://api.github.com/repos/huggingface/datasets/issues/6001 | https://github.com/huggingface/datasets/pull/6001 | 6,001 | Align `column_names` type check with type hint in `sort` | closed | 3 | 2023-06-30T13:15:50 | 2023-06-30T14:18:32 | 2023-06-30T14:11:24 | mariosasko | [] | Fix #5998 | true |
1,782,456,878 | https://api.github.com/repos/huggingface/datasets/issues/6000 | https://github.com/huggingface/datasets/pull/6000 | 6,000 | Pin `joblib` to avoid `joblibspark` test failures | closed | 4 | 2023-06-30T12:36:54 | 2023-06-30T13:17:05 | 2023-06-30T13:08:27 | mariosasko | [] | `joblibspark` doesn't support the latest `joblib` release.
See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors | true |
1,781,851,513 | https://api.github.com/repos/huggingface/datasets/issues/5999 | https://github.com/huggingface/datasets/issues/5999 | 5,999 | Getting a 409 error while loading xglue dataset | closed | 1 | 2023-06-30T04:13:54 | 2023-06-30T05:57:23 | 2023-06-30T05:57:22 | Praful932 | [] | ### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
### Expected behavior
Expected the... | false |
1,781,805,018 | https://api.github.com/repos/huggingface/datasets/issues/5998 | https://github.com/huggingface/datasets/issues/5998 | 5,998 | The current implementation has a potential bug in the sort method | closed | 1 | 2023-06-30T03:16:57 | 2023-06-30T14:21:03 | 2023-06-30T14:11:25 | wangyuxinwhy | [] | ### Describe the bug
In the sort method, here's a piece of code
```python
# column_names: Union[str, Sequence[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
I get an error when I pass in a tuple based on the ... | false |
1,781,582,818 | https://api.github.com/repos/huggingface/datasets/issues/5997 | https://github.com/huggingface/datasets/issues/5997 | 5,997 | extend the map function so it can wrap around long text that does not fit in the context window | open | 2 | 2023-06-29T22:15:21 | 2023-07-03T17:58:52 | null | siddhsql | [
"enhancement"
] | ### Feature request
I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a models's con... | false |
1,779,294,374 | https://api.github.com/repos/huggingface/datasets/issues/5996 | https://github.com/huggingface/datasets/pull/5996 | 5,996 | Deprecate `use_auth_token` in favor of `token` | closed | 9 | 2023-06-28T16:26:38 | 2023-07-05T15:22:20 | 2023-07-03T16:03:33 | mariosasko | [] | ... to be consistent with `transformers` and `huggingface_hub`. | true |
1,777,088,925 | https://api.github.com/repos/huggingface/datasets/issues/5995 | https://github.com/huggingface/datasets/pull/5995 | 5,995 | Support returning dataframe in map transform | closed | 4 | 2023-06-27T14:15:08 | 2023-06-28T13:56:02 | 2023-06-28T13:46:33 | mariosasko | [] | Allow returning Pandas DataFrames in `map` transforms.
(Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
| true |
1,776,829,004 | https://api.github.com/repos/huggingface/datasets/issues/5994 | https://github.com/huggingface/datasets/pull/5994 | 5,994 | Fix select_columns columns order | closed | 4 | 2023-06-27T12:32:46 | 2023-06-27T15:40:47 | 2023-06-27T15:32:43 | lhoestq | [] | Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`.
I also fixed the same issue for `dataset.flatten()`
Close https://github.com/huggingface/datasets/issues/5993 | true |
1,776,643,555 | https://api.github.com/repos/huggingface/datasets/issues/5993 | https://github.com/huggingface/datasets/issues/5993 | 5,993 | ValueError: Table schema does not match schema used to create file | closed | 2 | 2023-06-27T10:54:07 | 2023-06-27T15:36:42 | 2023-06-27T15:32:44 | exs-avianello | [] | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset... | false |
1,776,460,964 | https://api.github.com/repos/huggingface/datasets/issues/5992 | https://github.com/huggingface/datasets/pull/5992 | 5,992 | speedup | closed | 1 | 2023-06-27T09:17:58 | 2023-06-27T09:23:07 | 2023-06-27T09:18:04 | qgallouedec | [] | null | true |
1,774,456,518 | https://api.github.com/repos/huggingface/datasets/issues/5991 | https://github.com/huggingface/datasets/issues/5991 | 5,991 | `map` with any joblib backend | open | 1 | 2023-06-26T10:33:42 | 2025-06-26T18:32:56 | null | lhoestq | [
"enhancement"
] | We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.
Right now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main proces... | false |
1,774,134,091 | https://api.github.com/repos/huggingface/datasets/issues/5989 | https://github.com/huggingface/datasets/issues/5989 | 5,989 | Set a rule on the config and split names | open | 3 | 2023-06-26T07:34:14 | 2023-07-19T14:22:54 | null | severo | [] | > should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
| false |
1,773,257,828 | https://api.github.com/repos/huggingface/datasets/issues/5988 | https://github.com/huggingface/datasets/issues/5988 | 5,988 | ConnectionError: Couldn't reach dataset_infos.json | closed | 1 | 2023-06-25T12:39:31 | 2023-07-07T13:20:57 | 2023-07-07T13:20:57 | yulingao | [] | ### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'C... | false |
1,773,047,909 | https://api.github.com/repos/huggingface/datasets/issues/5987 | https://github.com/huggingface/datasets/issues/5987 | 5,987 | Why max_shard_size is not supported in load_dataset and passed to download_and_prepare | closed | 5 | 2023-06-25T04:19:13 | 2023-06-29T16:06:08 | 2023-06-29T16:06:08 | npuichigo | [] | ### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can to is break the `load_dataset` and use `load_datset_builder` + `download_and_prepare` instead.
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blo... | false |
1,772,233,111 | https://api.github.com/repos/huggingface/datasets/issues/5986 | https://github.com/huggingface/datasets/pull/5986 | 5,986 | Make IterableDataset.from_spark more efficient | closed | 6 | 2023-06-23T22:18:20 | 2023-07-07T10:05:58 | 2023-07-07T09:56:09 | mathewjacob1002 | [] | Moved the code from using collect() to using toLocalIterator, which allows for prefetching partitions that will be selected next, thus allowing for better performance when iterating. | true |
1,771,588,158 | https://api.github.com/repos/huggingface/datasets/issues/5985 | https://github.com/huggingface/datasets/issues/5985 | 5,985 | Cannot reuse tokenizer object for dataset map | closed | 2 | 2023-06-23T14:45:31 | 2023-07-21T14:09:14 | 2023-07-21T14:09:14 | vikigenius | [
"duplicate"
] | ### Describe the bug
Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.
Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. After calling the tokenizer with arguments like pad... | false |
1,771,571,458 | https://api.github.com/repos/huggingface/datasets/issues/5984 | https://github.com/huggingface/datasets/issues/5984 | 5,984 | AutoSharding IterableDataset's when num_workers > 1 | open | 8 | 2023-06-23T14:34:20 | 2024-03-22T15:01:14 | null | mathephysicist | [
"enhancement"
] | ### Feature request
Minimal Example
```
import torch
from datasets import IterableDataset
d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.dataloader.DataLoader(d,num_workers=3)
for sample in dl:
print(sample)
```
Warning:
Too many dataloader workers: 2 (max is dataset.n_shard... | false |
1,770,578,804 | https://api.github.com/repos/huggingface/datasets/issues/5983 | https://github.com/huggingface/datasets/pull/5983 | 5,983 | replaced PathLike as a variable for save_to_disk for dataset_path wit… | closed | 0 | 2023-06-23T00:57:05 | 2023-09-11T04:17:17 | 2023-09-11T04:17:17 | benjaminbrown038 | [] | …h str like that of load_from_disk | true |
1,770,333,296 | https://api.github.com/repos/huggingface/datasets/issues/5982 | https://github.com/huggingface/datasets/issues/5982 | 5,982 | 404 on Datasets Documentation Page | closed | 2 | 2023-06-22T20:14:57 | 2023-06-26T15:45:03 | 2023-06-26T15:45:03 | kmulka-bloomberg | [] | ### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
#... | false |
1,770,310,087 | https://api.github.com/repos/huggingface/datasets/issues/5981 | https://github.com/huggingface/datasets/issues/5981 | 5,981 | Only two cores are getting used in sagemaker with pytorch 3.10 kernel | closed | 4 | 2023-06-22T19:57:31 | 2023-10-30T06:17:40 | 2023-07-24T11:54:52 | mmr-crexi | [] | ### Describe the bug
When using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field.
We have solved this in our own code by placing the following snippet in the code that is called insi... | false |
1,770,255,973 | https://api.github.com/repos/huggingface/datasets/issues/5980 | https://github.com/huggingface/datasets/issues/5980 | 5,980 | Viewing dataset card returns "502 Bad Gateway" | closed | 3 | 2023-06-22T19:14:48 | 2023-06-27T08:38:19 | 2023-06-26T14:42:45 | tbenthompson | [] | The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the "Files and versions" tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is ... | false |
1,770,198,250 | https://api.github.com/repos/huggingface/datasets/issues/5979 | https://github.com/huggingface/datasets/pull/5979 | 5,979 | set dev version | closed | 3 | 2023-06-22T18:32:14 | 2023-06-22T18:42:22 | 2023-06-22T18:32:22 | lhoestq | [] | null | true |
1,770,187,053 | https://api.github.com/repos/huggingface/datasets/issues/5978 | https://github.com/huggingface/datasets/pull/5978 | 5,978 | Release: 2.13.1 | closed | 4 | 2023-06-22T18:23:11 | 2023-06-22T18:40:24 | 2023-06-22T18:30:16 | lhoestq | [] | null | true |
1,768,503,913 | https://api.github.com/repos/huggingface/datasets/issues/5976 | https://github.com/huggingface/datasets/pull/5976 | 5,976 | Avoid stuck map operation when subprocesses crashes | closed | 11 | 2023-06-21T21:18:31 | 2023-07-10T09:58:39 | 2023-07-10T09:50:07 | pappacena | [] | I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc), the main process keeps waiting for the async task ... | true |
1,768,271,343 | https://api.github.com/repos/huggingface/datasets/issues/5975 | https://github.com/huggingface/datasets/issues/5975 | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | closed | 9 | 2023-06-21T19:10:02 | 2023-06-30T05:55:39 | 2023-06-30T05:55:38 | Veluchs | [] | ### Describe the bug
When trying to stream a dataset i get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I hav... | false |
1,767,981,231 | https://api.github.com/repos/huggingface/datasets/issues/5974 | https://github.com/huggingface/datasets/pull/5974 | 5,974 | Deprecate `errors` param in favor of `encoding_errors` in text builder | closed | 3 | 2023-06-21T16:31:38 | 2023-06-26T10:34:43 | 2023-06-26T10:27:40 | mariosasko | [] | For consistency with the JSON builder and Pandas | true |
1,767,897,485 | https://api.github.com/repos/huggingface/datasets/issues/5972 | https://github.com/huggingface/datasets/pull/5972 | 5,972 | Filter unsupported extensions | closed | 5 | 2023-06-21T15:43:01 | 2023-06-22T14:23:29 | 2023-06-22T14:16:26 | lhoestq | [] | I used a regex to filter the data files based on their extension for packaged builders.
I tried, and a regex is 10x faster than using `in` to check if the extension is in the list of supported extensions.
Supersedes https://github.com/huggingface/datasets/pull/5850
Close https://github.com/huggingface/datasets/... | true |
1,767,053,635 | https://api.github.com/repos/huggingface/datasets/issues/5971 | https://github.com/huggingface/datasets/issues/5971 | 5,971 | Docs: make "repository structure" easier to find | open | 5 | 2023-06-21T08:26:44 | 2023-07-05T06:51:38 | null | severo | [
"documentation"
] | The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages. | false |
1,766,010,356 | https://api.github.com/repos/huggingface/datasets/issues/5970 | https://github.com/huggingface/datasets/issues/5970 | 5,970 | description disappearing from Info when Uploading a Dataset Created with `from_dict` | open | 2 | 2023-06-20T19:18:26 | 2023-06-22T14:23:56 | null | balisujohn | [] | ### Describe the bug
When uploading a dataset created locally using `from_dict` with a specified `description` field, the description appears before upload, but is missing after upload and re-download.
### Steps to reproduce the bug
I think the most relevant pattern in the code might be the following lines:
```
descr... | false |
1,765,529,905 | https://api.github.com/repos/huggingface/datasets/issues/5969 | https://github.com/huggingface/datasets/pull/5969 | 5,969 | Add `encoding` and `errors` params to JSON loader | closed | 4 | 2023-06-20T14:28:35 | 2023-06-21T13:39:50 | 2023-06-21T13:32:22 | mariosasko | [] | "Requested" in https://discuss.huggingface.co/t/utf-16-for-datasets/43828/3.
`pd.read_json` also has these parameters, so it makes sense to be consistent. | true |
1,765,252,561 | https://api.github.com/repos/huggingface/datasets/issues/5968 | https://github.com/huggingface/datasets/issues/5968 | 5,968 | Common Voice datasets still need `use_auth_token=True` | closed | 4 | 2023-06-20T11:58:37 | 2023-07-29T16:08:59 | 2023-07-29T16:08:58 | patrickvonplaten | [] | ### Describe the bug
We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
However it throw... | false |
1,763,926,520 | https://api.github.com/repos/huggingface/datasets/issues/5967 | https://github.com/huggingface/datasets/issues/5967 | 5,967 | Config name / split name lost after map with multiproc | open | 2 | 2023-06-19T17:27:36 | 2023-06-28T08:55:25 | null | sanchit-gandhi | [] | ### Describe the bug
Performing a `.map` method on a dataset loses its config name / split name only if run with multiproc
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor
import numpy as np
# load dummy dataset
libri = load_datas... | false |
1,763,885,914 | https://api.github.com/repos/huggingface/datasets/issues/5966 | https://github.com/huggingface/datasets/pull/5966 | 5,966 | Fix JSON generation in benchmarks CI | closed | 3 | 2023-06-19T16:56:06 | 2023-06-19T17:29:11 | 2023-06-19T17:22:10 | mariosasko | [] | Related to changes made in https://github.com/iterative/dvc/pull/9475 | true |
1,763,648,540 | https://api.github.com/repos/huggingface/datasets/issues/5965 | https://github.com/huggingface/datasets/issues/5965 | 5,965 | "Couldn't cast array of type" in complex datasets | closed | 4 | 2023-06-19T14:16:14 | 2023-07-26T15:13:53 | 2023-07-26T15:13:53 | piercefreeman | [] | ### Describe the bug
When doing a map of a dataset with complex types, sometimes `datasets` is unable to interpret the valid schema of a returned datasets.map() function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value.
This is prone to hap... | false |
1,763,513,574 | https://api.github.com/repos/huggingface/datasets/issues/5964 | https://github.com/huggingface/datasets/pull/5964 | 5,964 | Always return list in `list_datasets` | closed | 2 | 2023-06-19T13:07:08 | 2023-06-19T17:29:37 | 2023-06-19T17:22:41 | mariosasko | [] | Fix #5925
Plus, deprecate `list_datasets`/`inspect_dataset` in favor of `huggingface_hub.list_datasets`/"git clone workflow" (downloads data files) | true |
1,762,774,457 | https://api.github.com/repos/huggingface/datasets/issues/5963 | https://github.com/huggingface/datasets/issues/5963 | 5,963 | Got an error _pickle.PicklingError use Dataset.from_spark. | closed | 5 | 2023-06-19T05:30:35 | 2023-07-24T11:55:46 | 2023-07-24T11:55:46 | yanzia12138 | [] | python 3.9.2
Got an error _pickle.PicklingError when using Dataset.from_spark.
Did the dataset import load data from spark dataframe using multi-node Spark cluster
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(df, keep_in_memory=True,
cache_dir="... | false |
1,761,589,882 | https://api.github.com/repos/huggingface/datasets/issues/5962 | https://github.com/huggingface/datasets/issues/5962 | 5,962 | Issue with train_test_split maintaining the same underlying PyArrow Table | open | 0 | 2023-06-17T02:19:58 | 2023-06-17T02:19:58 | null | Oziel14 | [] | ### Describe the bug
I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.
### Steps to reproduce the bug
... | false |
1,758,525,111 | https://api.github.com/repos/huggingface/datasets/issues/5961 | https://github.com/huggingface/datasets/issues/5961 | 5,961 | IterableDataset: split by node and map may preprocess samples that will be skipped anyway | open | 9 | 2023-06-15T10:29:10 | 2023-09-01T10:35:11 | null | johnchienbronci | [] | There are two ways an iterable dataset can be split by node:
1. if the number of shards is a factor of number of GPUs: in that case the shards are evenly distributed per GPU
2. otherwise, each GPU iterates on the data and at the end keeps 1 sample out of n(GPUs) - skipping the others.
In case 2. it's ... | false |