| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,075,645,042 | https://api.github.com/repos/huggingface/datasets/issues/6580 | https://github.com/huggingface/datasets/issues/6580 | 6,580 | dataset cache only stores one config of the dataset in parquet dir, and uses that for all other configs resulting in showing same data in all configs. | closed | 0 | 2024-01-11T03:14:18 | 2024-01-20T12:46:16 | 2024-01-20T12:46:16 | kartikgupta321 | [] | ### Describe the bug
ds = load_dataset("ai2_arc", "ARC-Easy"). I have tried forcing a redownload, deleting the cache, and changing the cache dir.
### Steps to reproduce the bug
dataset = []
dataset_name = "ai2_arc"
possible_configs = [
    'ARC-Challenge',
    'ARC-Easy'
]
for config in possible_configs:
    data... | false |
2,075,407,473 | https://api.github.com/repos/huggingface/datasets/issues/6579 | https://github.com/huggingface/datasets/issues/6579 | 6,579 | Unable to load `eli5` dataset with streaming | closed | 1 | 2024-01-10T23:44:20 | 2024-01-11T09:19:18 | 2024-01-11T09:19:17 | haok1402 | [] | ### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This works correctly.
```
from datasets import lo... | false |
2,074,923,321 | https://api.github.com/repos/huggingface/datasets/issues/6578 | https://github.com/huggingface/datasets/pull/6578 | 6,578 | Faster webdataset streaming | closed | 3 | 2024-01-10T18:18:09 | 2024-01-30T18:46:02 | 2024-01-30T18:39:51 | lhoestq | [] | requests.get(..., stream=True) is faster than using HTTP range requests when streaming large TAR files
it can be enabled using block_size=0 in fsspec
cc @rwightman | true |
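A minimal sketch of the technique this PR describes, assuming a hypothetical shard URL: with fsspec's HTTP filesystem, `block_size=0` switches from per-read range requests to sequentially streaming the response.

```python
import fsspec

# Hypothetical TAR shard URL; block_size=0 makes fsspec's HTTP filesystem
# stream the response sequentially instead of issuing HTTP range requests.
url = "https://example.com/shard-000000.tar"
with fsspec.open(url, block_size=0) as f:
    header = f.read(512)  # a TAR header block, read from the stream
```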
2,074,790,848 | https://api.github.com/repos/huggingface/datasets/issues/6577 | https://github.com/huggingface/datasets/issues/6577 | 6,577 | 502 Server Errors when streaming large dataset | closed | 6 | 2024-01-10T16:59:36 | 2024-02-12T11:46:03 | 2024-01-15T16:05:44 | sanchit-gandhi | [
"streaming"
] | ### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB), I often encounter 502 Server Errors, seemingly at random, during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: http... | false |
2,073,710,124 | https://api.github.com/repos/huggingface/datasets/issues/6576 | https://github.com/huggingface/datasets/issues/6576 | 6,576 | document page 404 not found after redirection | closed | 1 | 2024-01-10T06:48:14 | 2024-01-17T14:01:31 | 2024-01-17T14:01:31 | annahung31 | [] | ### Describe the bug
The redirected page returns a 404 Not Found error.
### Steps to reproduce the bug
1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt
original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49
`... | false |
2,072,617,406 | https://api.github.com/repos/huggingface/datasets/issues/6575 | https://github.com/huggingface/datasets/pull/6575 | 6,575 | [IterableDataset] Fix `drop_last_batch`in map after shuffling or sharding | closed | 2 | 2024-01-09T15:35:31 | 2024-01-11T16:16:54 | 2024-01-11T16:10:30 | lhoestq | [] | It was not taken into account e.g. when passing to a DataLoader with num_workers>0
Fix https://github.com/huggingface/datasets/issues/6565 | true |
2,072,579,549 | https://api.github.com/repos/huggingface/datasets/issues/6574 | https://github.com/huggingface/datasets/pull/6574 | 6,574 | Fix tests based on datasets that used to have scripts | closed | 2 | 2024-01-09T15:16:16 | 2024-01-09T16:11:33 | 2024-01-09T16:05:13 | lhoestq | [] | ...now that `squad` and `paws` don't have a script anymore | true |
2,072,553,951 | https://api.github.com/repos/huggingface/datasets/issues/6573 | https://github.com/huggingface/datasets/pull/6573 | 6,573 | [WebDataset] Audio support and bug fixes | closed | 2 | 2024-01-09T15:03:04 | 2024-01-11T16:17:28 | 2024-01-11T16:11:04 | lhoestq | [] | - Add audio support
- Fix an issue where user-provided features with additional fields are not taken into account
Close https://github.com/huggingface/datasets/issues/6569 | true |
2,072,384,281 | https://api.github.com/repos/huggingface/datasets/issues/6572 | https://github.com/huggingface/datasets/pull/6572 | 6,572 | Adding option for multipart archive download | closed | 1 | 2024-01-09T13:35:44 | 2024-02-25T08:13:01 | 2024-02-25T08:13:01 | jpodivin | [] | Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts.
With the new `multi_part` field of the `DownloadConfig` set, the downloa... | true |
2,072,111,000 | https://api.github.com/repos/huggingface/datasets/issues/6571 | https://github.com/huggingface/datasets/issues/6571 | 6,571 | Make DatasetDict.column_names return a list instead of dict | open | 0 | 2024-01-09T10:45:17 | 2024-01-09T10:45:17 | null | albertvillanova | [
"enhancement"
] | Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values.
However, by construction, all splits have the same column names.
I think it makes more sense to return a single list with the column names, which is the same for all the split k... | false |
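A small self-contained illustration of the current behavior described above (the splits and columns here are made up):

```python
from datasets import Dataset, DatasetDict

dd = DatasetDict({
    "train": Dataset.from_dict({"text": ["a"], "label": [0]}),
    "test": Dataset.from_dict({"text": ["b"], "label": [1]}),
})

# Today this returns a dict keyed by split, even though by construction
# every split has the same columns:
print(dd.column_names)  # {'train': ['text', 'label'], 'test': ['text', 'label']}
# The proposal is to return ['text', 'label'] once instead.
```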
2,071,805,265 | https://api.github.com/repos/huggingface/datasets/issues/6570 | https://github.com/huggingface/datasets/issues/6570 | 6,570 | No online docs for 2.16 release | closed | 7 | 2024-01-09T07:43:30 | 2024-01-09T16:45:50 | 2024-01-09T16:45:50 | albertvillanova | [
"bug",
"documentation"
We do not have the online docs for the latest minor release 2.16 (neither 2.16.0 nor 2.16.1).
In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index
 . But a new issue came up :( | false |
2,069,808,842 | https://api.github.com/repos/huggingface/datasets/issues/6567 | https://github.com/huggingface/datasets/issues/6567 | 6,567 | AttributeError: 'str' object has no attribute 'to' | closed | 3 | 2024-01-08T06:40:21 | 2024-01-08T11:56:19 | 2024-01-08T10:03:17 | andysingal | [] | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer =... | false |
2,069,495,429 | https://api.github.com/repos/huggingface/datasets/issues/6566 | https://github.com/huggingface/datasets/issues/6566 | 6,566 | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets | closed | 1 | 2024-01-08T02:37:03 | 2024-06-02T14:24:39 | 2024-05-17T09:40:14 | HelloWorldBeginner | [
"bug"
] | ### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
File "/home/mini... | false |
2,068,939,670 | https://api.github.com/repos/huggingface/datasets/issues/6565 | https://github.com/huggingface/datasets/issues/6565 | 6,565 | `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader | closed | 2 | 2024-01-07T02:46:50 | 2025-03-08T09:46:05 | 2024-01-11T16:10:31 | naba89 | [] | ### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't ha... | false |
2,068,893,194 | https://api.github.com/repos/huggingface/datasets/issues/6564 | https://github.com/huggingface/datasets/issues/6564 | 6,564 | `Dataset.filter` missing `with_rank` parameter | closed | 2 | 2024-01-06T23:48:13 | 2024-01-29T16:36:55 | 2024-01-29T16:36:54 | kopyl | [] | ### Describe the bug
The following issue should be reopened: https://github.com/huggingface/datasets/issues/6435
When i try to pass `with_rank` to `Dataset.filter()`, i get this:
`Dataset.filter() got an unexpected keyword argument 'with_rank'`
### Steps to reproduce the bug
Run notebook:
https://colab.research.google.com... | false |
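For contrast, a minimal sketch of the `with_rank` usage that `map` already supported at the time (toy data; the failing `filter` call is shown only as a comment):

```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(8))})

# map() accepts with_rank: the callable receives the process rank as an
# extra argument, e.g. to pin each worker process to one GPU.
ds = ds.map(lambda example, rank: {"rank": rank}, with_rank=True, num_proc=2)

# At the time of this issue, the same keyword on filter() raised:
#   TypeError: Dataset.filter() got an unexpected keyword argument 'with_rank'
```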
2,068,302,402 | https://api.github.com/repos/huggingface/datasets/issues/6563 | https://github.com/huggingface/datasets/issues/6563 | 6,563 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | closed | 7 | 2024-01-06T02:28:54 | 2024-03-14T02:59:42 | 2024-01-06T16:13:27 | wasertech | [] | ### Describe the bug
Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_... | false |
2,067,904,504 | https://api.github.com/repos/huggingface/datasets/issues/6562 | https://github.com/huggingface/datasets/issues/6562 | 6,562 | datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function | open | 0 | 2024-01-05T19:10:25 | 2024-01-05T19:10:25 | null | LsTam91 | [] | ### Describe the bug
I have updated my dataset by adding a new feature and pushed it to the Hub. When I want to download it on my machine, which contains the old version, by using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below).
Seems that... | false |
2,067,404,951 | https://api.github.com/repos/huggingface/datasets/issues/6561 | https://github.com/huggingface/datasets/issues/6561 | 6,561 | Document YAML configuration with "data_dir" | open | 2 | 2024-01-05T14:03:33 | 2025-08-05T07:50:17 | null | severo | [
"documentation"
] | See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference | false |
2,065,637,625 | https://api.github.com/repos/huggingface/datasets/issues/6560 | https://github.com/huggingface/datasets/issues/6560 | 6,560 | Support Video | closed | 1 | 2024-01-04T13:10:58 | 2024-08-23T09:51:27 | 2024-08-23T09:51:27 | yuvalkirstain | [
"duplicate",
"enhancement"
] | ### Feature request
HF datasets are awesome in supporting text and images. It would be great to see such support for videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;) | false |
2,065,118,332 | https://api.github.com/repos/huggingface/datasets/issues/6559 | https://github.com/huggingface/datasets/issues/6559 | 6,559 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | closed | 8 | 2024-01-04T07:04:48 | 2024-04-03T10:40:53 | 2024-01-05T01:26:25 | zhulinJulia24 | [] | ### Describe the bug
python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
the script su... | false |
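The usual fix here, stated as an assumption based on this issue's resolution, is to drop the legacy `allenai--c4` config name and let `data_files` select the shard:

```python
from datasets import load_dataset

# Assumption: with datasets>=2.16 only the repo's own configs exist, so the
# legacy "allenai--c4" second argument must be dropped.
dataset = load_dataset(
    "allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
)
```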
2,064,885,984 | https://api.github.com/repos/huggingface/datasets/issues/6558 | https://github.com/huggingface/datasets/issues/6558 | 6,558 | OSError: image file is truncated (1 bytes not processed) #28323 | closed | 1 | 2024-01-04T02:15:13 | 2024-02-21T00:38:12 | 2024-02-21T00:38:12 | andysingal | [] | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number... | false |
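```

The commonly suggested Pillow-side workaround for this error (general PIL advice, not specific to `datasets`) is to let Pillow tolerate truncated files:

```python
from PIL import ImageFile

# Let Pillow load whatever it can from truncated files instead of raising
# "OSError: image file is truncated".
ImageFile.LOAD_TRUNCATED_IMAGES = True
```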
2,064,341,965 | https://api.github.com/repos/huggingface/datasets/issues/6557 | https://github.com/huggingface/datasets/pull/6557 | 6,557 | Support standalone yaml | closed | 4 | 2024-01-03T16:47:35 | 2024-01-11T17:59:51 | 2024-01-11T17:53:42 | lhoestq | [] | see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679 | true |
2,064,018,208 | https://api.github.com/repos/huggingface/datasets/issues/6556 | https://github.com/huggingface/datasets/pull/6556 | 6,556 | Fix imagefolder with one image | closed | 2 | 2024-01-03T13:13:02 | 2024-02-12T21:57:34 | 2024-01-09T13:06:30 | lhoestq | [] | A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case.
e.g. for https://huggingface.co/datasets/mu... | true |
2,063,841,286 | https://api.github.com/repos/huggingface/datasets/issues/6555 | https://github.com/huggingface/datasets/pull/6555 | 6,555 | Do not use Parquet exports if revision is passed | closed | 4 | 2024-01-03T11:33:10 | 2024-02-02T10:41:33 | 2024-02-02T10:35:28 | albertvillanova | [] | Fix #6554. | true |
2,063,839,916 | https://api.github.com/repos/huggingface/datasets/issues/6554 | https://github.com/huggingface/datasets/issues/6554 | 6,554 | Parquet exports are used even if revision is passed | closed | 1 | 2024-01-03T11:32:26 | 2024-02-02T10:35:29 | 2024-02-02T10:35:29 | albertvillanova | [
"bug"
] | We should not use Parquet exports if `revision` is passed.
I think this is a regression. | false |
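A sketch of the call this bug affects, with a hypothetical repo id and revision: passing `revision` should force building from that ref instead of reusing the Parquet export of `main`.

```python
from datasets import load_dataset

# Hypothetical repo and revision; the expectation is that this builds from
# the given ref rather than from the pre-built Parquet export.
ds = load_dataset("user/dataset", revision="refs/pr/2")
```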
2,063,474,183 | https://api.github.com/repos/huggingface/datasets/issues/6553 | https://github.com/huggingface/datasets/issues/6553 | 6,553 | Cannot import name 'load_dataset' from .... module ‘datasets’ | closed | 2 | 2024-01-03T08:18:21 | 2024-02-21T00:38:24 | 2024-02-21T00:38:24 | ciaoyizhen | [] | ### Describe the bug
Use `python -m pip install datasets` to install.
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
it doesn't work
### Environment info
datasets version==2.15.0
python == 3.10.12
linux version I don't know?? | false |
2,063,157,187 | https://api.github.com/repos/huggingface/datasets/issues/6552 | https://github.com/huggingface/datasets/issues/6552 | 6,552 | Loading a dataset from Google Colab hangs at "Resolving data files". | closed | 2 | 2024-01-03T02:18:17 | 2024-01-08T10:09:04 | 2024-01-08T10:09:04 | KelSolaar | [] | ### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:

It is happening when the `_get_origin_metadata` definition is invoked:
```python
d... | false |
2,062,768,400 | https://api.github.com/repos/huggingface/datasets/issues/6551 | https://github.com/huggingface/datasets/pull/6551 | 6,551 | Fix parallel downloads for datasets without scripts | closed | 4 | 2024-01-02T18:06:18 | 2024-01-06T20:14:57 | 2024-01-03T13:19:48 | lhoestq | [] | Enable parallel downloads using multiprocessing when `num_proc` is passed to `load_dataset`.
It was enabled for datasets with scripts already (if they passed lists to `dl_manager.download`) but not for no-script datasets (we pass dicts {split: [list of files]} to `dl_manager.download` for those ones).
I fixed thi... | true |
2,062,556,493 | https://api.github.com/repos/huggingface/datasets/issues/6550 | https://github.com/huggingface/datasets/pull/6550 | 6,550 | Multi gpu docs | closed | 4 | 2024-01-02T15:11:58 | 2024-01-31T13:45:15 | 2024-01-31T13:38:59 | lhoestq | [] | after discussions in https://github.com/huggingface/datasets/pull/6415 | true |
2,062,420,259 | https://api.github.com/repos/huggingface/datasets/issues/6549 | https://github.com/huggingface/datasets/issues/6549 | 6,549 | Loading from hf hub with clearer error message | open | 1 | 2024-01-02T13:26:34 | 2024-01-02T14:06:49 | null | thomwolf | [
"enhancement"
] | ### Feature request
Shouldn't this kinda work?
```
Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json")
```
I got an error
```
File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, al... | false |
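```

A workaround sketch, under the assumption that `hf://` paths go through `HfFileSystem` and take the in-repo file path directly (no `/resolve/main` segment, which belongs to `https://` URLs):

```python
from datasets import load_dataset

# Assumption: hf:// paths address files by their in-repo path.
ds = load_dataset(
    "json",
    data_files="hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json",
    split="train",
)
```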
2,061,047,984 | https://api.github.com/repos/huggingface/datasets/issues/6548 | https://github.com/huggingface/datasets/issues/6548 | 6,548 | Skip if a dataset has issues | open | 1 | 2023-12-31T12:41:26 | 2024-01-02T10:33:17 | null | hadianasliwa | [] | ### Describe the bug
Hello everyone,
I'm using **load_dataset** from **huggingface** to download the datasets and I'm facing an issue: the download starts but at some point fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10... | false |
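Until there is built-in support for skipping, a user-side loop like the following sketch (the dataset names are placeholders) treats each download independently:

```python
from datasets import load_dataset

names = ["user/dataset_a", "user/dataset_b"]  # hypothetical dataset ids
loaded = {}
for name in names:
    try:
        loaded[name] = load_dataset(name, split="train")
    except Exception as err:  # e.g. ConnectionError when a resolve URL is unreachable
        print(f"Skipping {name}: {err}")
```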
2,060,796,927 | https://api.github.com/repos/huggingface/datasets/issues/6547 | https://github.com/huggingface/datasets/pull/6547 | 6,547 | set dev version | closed | 2 | 2023-12-30T16:47:17 | 2023-12-30T16:53:38 | 2023-12-30T16:47:27 | lhoestq | [] | null | true |
2,060,796,369 | https://api.github.com/repos/huggingface/datasets/issues/6546 | https://github.com/huggingface/datasets/pull/6546 | 6,546 | Release: 2.16.1 | closed | 2 | 2023-12-30T16:44:51 | 2023-12-30T16:52:07 | 2023-12-30T16:45:52 | lhoestq | [] | null | true |
2,060,789,507 | https://api.github.com/repos/huggingface/datasets/issues/6545 | https://github.com/huggingface/datasets/issues/6545 | 6,545 | `image` column not automatically inferred if image dataset only contains 1 image | closed | 0 | 2023-12-30T16:17:29 | 2024-01-09T13:06:31 | 2024-01-09T13:06:31 | apolinario | [] | ### Describe the bug
By default, the standard Image Dataset maps out `file_name` to `image` when loading an Image Dataset.
However, if the dataset contains only 1 image, this does not take place
### Steps to reproduce the bug
Input
(dataset with one image `multimodalart/repro_1_image`)
```py
from data... | false |
2,060,782,594 | https://api.github.com/repos/huggingface/datasets/issues/6544 | https://github.com/huggingface/datasets/pull/6544 | 6,544 | Fix custom configs from script | closed | 3 | 2023-12-30T15:51:25 | 2024-01-02T11:02:39 | 2023-12-30T16:09:49 | lhoestq | [] | We should not use the parquet export when the user is passing config_kwargs
I also fixed a regression that would disallow creating a custom config when a dataset has multiple predefined configs
fix https://github.com/huggingface/datasets/issues/6533 | true |
2,060,776,174 | https://api.github.com/repos/huggingface/datasets/issues/6543 | https://github.com/huggingface/datasets/pull/6543 | 6,543 | Fix dl_manager.extract returning FileNotFoundError | closed | 2 | 2023-12-30T15:24:50 | 2023-12-30T16:00:06 | 2023-12-30T15:53:59 | lhoestq | [] | The dl_manager base path is remote (e.g. a hf:// path), so local cached paths should be passed as absolute paths.
This could happen if users provide a relative path as `cache_dir`
fix https://github.com/huggingface/datasets/issues/6536 | true |
2,059,198,575 | https://api.github.com/repos/huggingface/datasets/issues/6542 | https://github.com/huggingface/datasets/issues/6542 | 6,542 | Datasets : wikipedia 20220301.en error | closed | 2 | 2023-12-29T08:34:51 | 2024-01-02T13:21:06 | 2024-01-02T13:20:30 | ppx666 | [] | ### Describe the bug
When I used load_dataset to download this dataset, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1. I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurre... | false |
2,058,983,826 | https://api.github.com/repos/huggingface/datasets/issues/6541 | https://github.com/huggingface/datasets/issues/6541 | 6,541 | Dataset not loading successfully. | closed | 4 | 2023-12-29T01:35:47 | 2024-01-17T00:40:46 | 2024-01-17T00:40:45 | hisushanta | [] | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also filed this issue in the transformers library; please check it out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
... | false |
2,058,965,157 | https://api.github.com/repos/huggingface/datasets/issues/6540 | https://github.com/huggingface/datasets/issues/6540 | 6,540 | Extreme inefficiency for `save_to_disk` when merging datasets | open | 1 | 2023-12-29T00:44:35 | 2023-12-30T15:05:48 | null | KatarinaYuan | [] | ### Describe the bug
Hi, I tried to merge in total 22M sequences of data, where each sequence is of maximum length 2000. I found that merging these datasets and then `save_to_disk` is extremely slow because of flattening the indices. Wondering if you have any suggestions or guidance on this. Thank you very much!
###... | false |
2,058,493,960 | https://api.github.com/repos/huggingface/datasets/issues/6539 | https://github.com/huggingface/datasets/issues/6539 | 6,539 | 'Repo card metadata block was not found' when loading a pragmeval dataset | open | 0 | 2023-12-28T14:18:25 | 2023-12-28T14:18:37 | null | lambdaofgod | [] | ### Describe the bug
I can't load dataset subsets of 'pragmeval'.
The funny thing is I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on colab usi... | false |
2,057,377,630 | https://api.github.com/repos/huggingface/datasets/issues/6538 | https://github.com/huggingface/datasets/issues/6538 | 6,538 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | closed | 15 | 2023-12-27T13:31:16 | 2024-01-03T10:06:47 | 2024-01-03T10:04:58 | Sonali-Behera-TRT | [] | ### Describe the bug
While importing from the packages, I get the following error.
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    pipeline,
... | false |
2,057,132,173 | https://api.github.com/repos/huggingface/datasets/issues/6537 | https://github.com/huggingface/datasets/issues/6537 | 6,537 | Adding support for netCDF (*.nc) files | open | 3 | 2023-12-27T09:27:29 | 2023-12-27T20:46:53 | null | shermansiu | [
"enhancement"
] | ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When uploading *.nc files onto Huggingface Hub throu... | false |
2,056,863,239 | https://api.github.com/repos/huggingface/datasets/issues/6536 | https://github.com/huggingface/datasets/issues/6536 | 6,536 | datasets.load_dataset raises FileNotFoundError for datasets==2.16.0 | closed | 2 | 2023-12-27T03:15:48 | 2023-12-30T18:58:04 | 2023-12-30T15:54:00 | ArvinZhuang | [] | ### Describe the bug
Seems `datasets.load_dataset` raises FileNotFoundError for some hub datasets with the latest `datasets==2.16.0`
### Steps to reproduce the bug
For example `pip install datasets==2.16.0`
then
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_di... | false |
2,056,264,339 | https://api.github.com/repos/huggingface/datasets/issues/6535 | https://github.com/huggingface/datasets/issues/6535 | 6,535 | IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT | open | 3 | 2023-12-26T10:14:33 | 2024-02-05T08:42:31 | null | MahavirDabas18 | [] | ### Describe the bug
I am trying to fine-tune the t5 model on the paraphrasing task. While running the same code without
model = get_peft_model(model, config)
the model trains without any issues. However, using the model returned from get_peft_model raises the following error due to datasets:
IndexError: Inv... | false |
2,056,002,548 | https://api.github.com/repos/huggingface/datasets/issues/6534 | https://github.com/huggingface/datasets/issues/6534 | 6,534 | How to configure multiple folders in the same zip package | open | 1 | 2023-12-26T03:56:20 | 2023-12-26T06:31:16 | null | d710055071 | [] | How should I write "config" in readme when all the data, such as train test, is in a zip file
train floder and test floder in data.zip | false |
2,055,929,101 | https://api.github.com/repos/huggingface/datasets/issues/6533 | https://github.com/huggingface/datasets/issues/6533 | 6,533 | ted_talks_iwslt | Error: Config name is missing | closed | 2 | 2023-12-26T00:38:18 | 2023-12-30T18:58:21 | 2023-12-30T16:09:50 | rayliuca | [] | ### Describe the bug
Running `load_dataset` like below with the newest `datasets` library on ted_talks_iwslt with year-pair data throws the error "Config name is missing"
see also:
https://huggingface.co/datasets/ted_talks_iwslt/discussions/3
likely caused by #6493, where the `and not config_kwargs` part... | false |
2,055,631,201 | https://api.github.com/repos/huggingface/datasets/issues/6532 | https://github.com/huggingface/datasets/issues/6532 | 6,532 | [Feature request] Indexing datasets by a customly-defined id field to enable random access dataset items via the id | open | 10 | 2023-12-25T11:37:10 | 2025-05-05T13:25:24 | null | Yu-Shi | [
"enhancement"
] | ### Feature request
Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access via r... | false |
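Until such indexing exists, a one-off Python-side mapping from the id column to row position is the usual workaround; a toy sketch:

```python
from datasets import Dataset

ds = Dataset.from_dict({"_id": ["q1", "q2", "q3"], "text": ["a", "b", "c"]})

# Build the id -> row index once, then reuse it for O(1) lookups on top of
# the dataset's efficient random row access.
id_to_row = {doc_id: i for i, doc_id in enumerate(ds["_id"])}
print(ds[id_to_row["q2"]])  # {'_id': 'q2', 'text': 'b'}
```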
2,055,201,605 | https://api.github.com/repos/huggingface/datasets/issues/6531 | https://github.com/huggingface/datasets/pull/6531 | 6,531 | Add polars compatibility | closed | 7 | 2023-12-24T20:03:23 | 2024-03-08T19:29:25 | 2024-03-08T15:22:58 | psmyth94 | [] | Hey there,
I've just finished adding support to convert and format to `polars.DataFrame`. This was in response to the open issue about integrating Polars [#3334](https://github.com/huggingface/datasets/issues/3334). Datasets can be switched to Polars format via `Dataset.set_format("polars")`. I've also included `to_... | true |
2,054,817,609 | https://api.github.com/repos/huggingface/datasets/issues/6530 | https://github.com/huggingface/datasets/issues/6530 | 6,530 | Impossible to save a mapped dataset to disk | open | 1 | 2023-12-23T15:18:27 | 2023-12-24T09:40:30 | null | kopyl | [] | ### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After... | false |
2,054,209,449 | https://api.github.com/repos/huggingface/datasets/issues/6529 | https://github.com/huggingface/datasets/issues/6529 | 6,529 | Impossible to only download a test split | open | 2 | 2023-12-22T16:56:32 | 2024-02-02T00:05:04 | null | ysig | [] | I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.
Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed b... | false |
2,053,996,494 | https://api.github.com/repos/huggingface/datasets/issues/6528 | https://github.com/huggingface/datasets/pull/6528 | 6,528 | set dev version | closed | 2 | 2023-12-22T14:23:18 | 2023-12-22T14:31:42 | 2023-12-22T14:25:34 | lhoestq | [] | null | true |
2,053,966,748 | https://api.github.com/repos/huggingface/datasets/issues/6527 | https://github.com/huggingface/datasets/pull/6527 | 6,527 | Release: 2.16.0 | closed | 2 | 2023-12-22T13:59:56 | 2023-12-22T14:24:12 | 2023-12-22T14:17:55 | lhoestq | [] | null | true |
2,053,726,451 | https://api.github.com/repos/huggingface/datasets/issues/6526 | https://github.com/huggingface/datasets/pull/6526 | 6,526 | Preserve order of configs and splits when using Parquet exports | closed | 2 | 2023-12-22T10:35:56 | 2023-12-22T11:42:22 | 2023-12-22T11:36:14 | albertvillanova | [] | Preserve order of configs and splits, as defined in dataset infos.
Fix #6521. | true |
2,053,119,357 | https://api.github.com/repos/huggingface/datasets/issues/6525 | https://github.com/huggingface/datasets/pull/6525 | 6,525 | BBox type | closed | 2 | 2023-12-21T22:13:27 | 2024-01-11T06:34:51 | 2023-12-21T22:39:27 | lhoestq | [] | see [internal discussion](https://huggingface.slack.com/archives/C02EK7C3SHW/p1703097195609209)
Draft to get some feedback on a possible `BBox` feature type that can be used to get object detection bounding boxes data in one format or another.
```python
>>> from datasets import load_dataset, BBox
>>> ds = load_... | true |
2,053,076,311 | https://api.github.com/repos/huggingface/datasets/issues/6524 | https://github.com/huggingface/datasets/issues/6524 | 6,524 | Streaming the Pile: Missing Files | closed | 1 | 2023-12-21T21:25:09 | 2023-12-22T09:17:05 | 2023-12-22T09:17:05 | FelixLabelle | [] | ### Describe the bug
The Pile does not stream; a "File not found" error is returned. It looks like the Pile's files have been moved.
### Steps to reproduce the bug
To reproduce run the following code:
```
from datasets import load_dataset
dataset = load_dataset('EleutherAI/pile', 'en', split='train', streamin... | false |
2,052,643,484 | https://api.github.com/repos/huggingface/datasets/issues/6523 | https://github.com/huggingface/datasets/pull/6523 | 6,523 | fix tests | closed | 2 | 2023-12-21T15:36:21 | 2023-12-21T15:56:54 | 2023-12-21T15:50:38 | lhoestq | [] | null | true |
2,052,332,528 | https://api.github.com/repos/huggingface/datasets/issues/6522 | https://github.com/huggingface/datasets/issues/6522 | 6,522 | Loading HF Hub Dataset (private org repo) fails to load all features | open | 0 | 2023-12-21T12:26:35 | 2023-12-21T13:24:31 | null | versipellis | [] | ### Describe the bug
When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default?
### Steps to reproduce the ... | false |
2,052,229,538 | https://api.github.com/repos/huggingface/datasets/issues/6521 | https://github.com/huggingface/datasets/issues/6521 | 6,521 | The order of the splits is not preserved | closed | 1 | 2023-12-21T11:17:27 | 2023-12-22T11:36:15 | 2023-12-22T11:36:15 | albertvillanova | [
"bug"
] | We had a regression and the order of the splits is not preserved. They are alphabetically sorted, instead of preserving original "train", "validation", "test" order.
Check: In branch "main"
```python
In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA")
In [10]: dataset
Out[10]:
DatasetDict({
... | false |
2,052,059,078 | https://api.github.com/repos/huggingface/datasets/issues/6520 | https://github.com/huggingface/datasets/pull/6520 | 6,520 | Support commit_description parameter in push_to_hub | closed | 2 | 2023-12-21T09:36:11 | 2023-12-21T14:49:47 | 2023-12-21T14:43:35 | albertvillanova | [] | Support `commit_description` parameter in `push_to_hub`.
CC: @Wauplin | true |
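Usage sketch for the new parameter (toy dataset, hypothetical repo id, requires being logged in to the Hub):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2]})
ds.push_to_hub(
    "user/my-dataset",                # hypothetical repo id
    commit_message="Add toy split",
    commit_description="Free-form details shown under the commit title.",
)
```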
2,050,759,824 | https://api.github.com/repos/huggingface/datasets/issues/6519 | https://github.com/huggingface/datasets/pull/6519 | 6,519 | Support push_to_hub canonical datasets | closed | 4 | 2023-12-20T15:16:45 | 2023-12-21T14:48:20 | 2023-12-21T14:40:57 | albertvillanova | [] | Support `push_to_hub` canonical datasets.
This is necessary in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet
Note that before this PR, the `repo_id` "dataset_name" was transformed to "user/dataset_name". This behavior was introduced by:
... | true |
2,050,137,038 | https://api.github.com/repos/huggingface/datasets/issues/6518 | https://github.com/huggingface/datasets/pull/6518 | 6,518 | fix get_metadata_patterns function args error | closed | 3 | 2023-12-20T09:06:22 | 2023-12-21T15:14:17 | 2023-12-21T15:07:57 | d710055071 | [] | Bug get_metadata_patterns arg error https://github.com/huggingface/datasets/issues/6517 | true |
2,050,121,588 | https://api.github.com/repos/huggingface/datasets/issues/6517 | https://github.com/huggingface/datasets/issues/6517 | 6,517 | Bug get_metadata_patterns arg error | closed | 0 | 2023-12-20T08:56:44 | 2023-12-22T00:24:23 | 2023-12-22T00:24:23 | d710055071 | [] | https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69
metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config) | false |
2,050,033,322 | https://api.github.com/repos/huggingface/datasets/issues/6516 | https://github.com/huggingface/datasets/pull/6516 | 6,516 | Support huggingface-hub pre-releases | closed | 2 | 2023-12-20T07:52:29 | 2023-12-20T08:51:34 | 2023-12-20T08:44:44 | albertvillanova | [] | Support `huggingface-hub` pre-releases.
This way we will have our CI green when testing `huggingface-hub` release candidates. See: https://github.com/huggingface/datasets/tree/ci-test-huggingface-hub-v0.20.0.rc1
Close #6513. | true |
2,049,724,251 | https://api.github.com/repos/huggingface/datasets/issues/6515 | https://github.com/huggingface/datasets/issues/6515 | 6,515 | Why call http_head() when fsspec_head() succeeds | closed | 0 | 2023-12-20T02:25:51 | 2023-12-26T05:35:46 | 2023-12-26T05:35:46 | d710055071 | [] | https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14 | false |
2,049,600,663 | https://api.github.com/repos/huggingface/datasets/issues/6514 | https://github.com/huggingface/datasets/pull/6514 | 6,514 | Cache backward compatibility with 2.15.0 | closed | 4 | 2023-12-19T23:52:25 | 2023-12-21T21:14:11 | 2023-12-21T21:07:55 | lhoestq | [] | ...for datasets without scripts
It takes into account the changes in cache from
- https://github.com/huggingface/datasets/pull/6493: switch to `config/version/commit_sha` schema
- https://github.com/huggingface/datasets/pull/6454: fix `DataFilesDict` keys ordering when hashing
requires https://github.com/huggin... | true |
2,048,869,151 | https://api.github.com/repos/huggingface/datasets/issues/6513 | https://github.com/huggingface/datasets/issues/6513 | 6,513 | Support huggingface-hub 0.20.0 | closed | 0 | 2023-12-19T15:15:46 | 2023-12-20T08:44:45 | 2023-12-20T08:44:45 | albertvillanova | [] | CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1
We need to merge:
- #6510
- #6512
- #6516 | false |
2,048,795,819 | https://api.github.com/repos/huggingface/datasets/issues/6512 | https://github.com/huggingface/datasets/pull/6512 | 6,512 | Remove deprecated HfFolder | closed | 2 | 2023-12-19T14:40:49 | 2023-12-19T20:21:13 | 2023-12-19T20:14:30 | lhoestq | [] | ...and use `huggingface_hub.get_token()` instead | true |
2,048,465,958 | https://api.github.com/repos/huggingface/datasets/issues/6511 | https://github.com/huggingface/datasets/pull/6511 | 6,511 | Implement get dataset default config name | closed | 3 | 2023-12-19T11:26:19 | 2023-12-21T14:48:57 | 2023-12-21T14:42:41 | albertvillanova | [] | Implement `get_dataset_default_config_name`.
Now that we support setting a configuration as default in `push_to_hub` (see #6500), we need a programmatically way to know in advance which is the default configuration. This will be used in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/a... | true |
2,046,928,742 | https://api.github.com/repos/huggingface/datasets/issues/6510 | https://github.com/huggingface/datasets/pull/6510 | 6,510 | Replace `list_files_info` with `list_repo_tree` in `push_to_hub` | closed | 3 | 2023-12-18T15:34:19 | 2023-12-19T18:05:47 | 2023-12-19T17:58:34 | mariosasko | [] | Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910) | true |
2,046,720,869 | https://api.github.com/repos/huggingface/datasets/issues/6509 | https://github.com/huggingface/datasets/pull/6509 | 6,509 | Better cast error when generating dataset | closed | 3 | 2023-12-18T13:57:24 | 2023-12-19T09:37:12 | 2023-12-19T09:31:03 | lhoestq | [] | I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA
Cc @albertvillanova @severo is this new error ok ? Or should I use a dedicated error class ?
New:
```python
Traceback (most recent call last):
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py... | true |
2,045,733,273 | https://api.github.com/repos/huggingface/datasets/issues/6508 | https://github.com/huggingface/datasets/pull/6508 | 6,508 | Read GeoParquet files using parquet reader | closed | 13 | 2023-12-18T04:50:37 | 2024-01-26T18:22:35 | 2024-01-26T16:18:41 | weiji14 | [] | Let GeoParquet files with the file extension `*.geoparquet` or `*.gpq` be readable by the default parquet reader.
Those two file extensions are the ones most commonly used for GeoParquet files, and is included in the `gpq` validator tool at https://github.com/planetlabs/gpq/blob/e5576b4ee7306b4d2259d56c879465a9364da... | true |
2,045,152,928 | https://api.github.com/repos/huggingface/datasets/issues/6507 | https://github.com/huggingface/datasets/issues/6507 | 6,507 | where is glue_metric.py> @Frankie123421 what was the resolution to this? | closed | 0 | 2023-12-17T09:58:25 | 2023-12-18T11:42:49 | 2023-12-18T11:42:49 | Mcccccc1024 | [] | > @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
| false |
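For reference, the canonical-name form the quoted comment is about; `load_metric` (since deprecated in favor of the `evaluate` library) needed a config name for GLUE:

```python
from datasets import load_metric

# "glue" requires a config, e.g. the task name:
metric = load_metric("glue", "mrpc")
```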
2,044,975,038 | https://api.github.com/repos/huggingface/datasets/issues/6506 | https://github.com/huggingface/datasets/issues/6506 | 6,506 | Incorrect test set labels for RTE and CoLA datasets via load_dataset | closed | 1 | 2023-12-16T22:06:08 | 2023-12-21T09:57:57 | 2023-12-21T09:57:57 | emreonal11 | [] | ### Describe the bug
The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1.
Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the t... | false |
2,044,721,288 | https://api.github.com/repos/huggingface/datasets/issues/6505 | https://github.com/huggingface/datasets/issues/6505 | 6,505 | Got stuck when I trying to load a dataset | open | 7 | 2023-12-16T11:51:07 | 2024-12-24T16:45:52 | null | yirenpingsheng | [] | ### Describe the bug
Hello, everyone. I met a problem when trying to load a data file using the load_dataset method on a Debian 10 system. The data file is not very large, only 1.63MB with 600 records.
Here is my code:
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_r... | false |
2,044,541,154 | https://api.github.com/repos/huggingface/datasets/issues/6504 | https://github.com/huggingface/datasets/issues/6504 | 6,504 | Error Pushing to Hub | closed | 0 | 2023-12-16T01:05:22 | 2023-12-16T06:20:53 | 2023-12-16T06:20:53 | Jiayi-Pan | [] | ### Describe the bug
Error when trying to push a dataset in a special format to hub
### Steps to reproduce the bug
```
import datasets
from datasets import Dataset
dataset_dict = {
    "filename": ["apple", "banana"],
    "token": [[[1,2],[3,4]],[[1,2],[3,4]]],
    "label": [0, 1],
}
dataset = Dataset.from_d... | false |
2,043,847,591 | https://api.github.com/repos/huggingface/datasets/issues/6503 | https://github.com/huggingface/datasets/pull/6503 | 6,503 | Fix streaming xnli | closed | 2 | 2023-12-15T14:40:57 | 2023-12-15T14:51:06 | 2023-12-15T14:44:47 | lhoestq | [] | This code was failing
```python
In [1]: from datasets import load_dataset
In [2]:
...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
...:
...: sample_data = next(iter(ds))["premise"] # pick up one data
...: input_text = list(sample_data.valu... | true |
2,043,771,731 | https://api.github.com/repos/huggingface/datasets/issues/6502 | https://github.com/huggingface/datasets/pull/6502 | 6,502 | Pickle support for `torch.Generator` objects | closed | 2 | 2023-12-15T13:55:12 | 2023-12-15T15:04:33 | 2023-12-15T14:58:22 | mariosasko | [] | Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616 | true |
2,043,377,240 | https://api.github.com/repos/huggingface/datasets/issues/6501 | https://github.com/huggingface/datasets/issues/6501 | 6,501 | OverflowError: value too large to convert to int32_t | open | 1 | 2023-12-15T10:10:21 | 2025-06-27T04:27:14 | null | zhangfan-algo | [] | ### Describe the bug

### Steps to reproduce the bug
just loading datasets
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3... | false |
2,043,258,633 | https://api.github.com/repos/huggingface/datasets/issues/6500 | https://github.com/huggingface/datasets/pull/6500 | 6,500 | Enable setting config as default when push_to_hub | closed | 8 | 2023-12-15T09:17:41 | 2023-12-18T11:56:11 | 2023-12-18T11:50:03 | albertvillanova | [] | Fix #6497. | true |
2,043,166,976 | https://api.github.com/repos/huggingface/datasets/issues/6499 | https://github.com/huggingface/datasets/pull/6499 | 6,499 | docs: add reference Git over SSH | closed | 2 | 2023-12-15T08:38:31 | 2023-12-15T11:48:47 | 2023-12-15T11:42:38 | severo | [] | see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893 | true |
2,042,075,969 | https://api.github.com/repos/huggingface/datasets/issues/6498 | https://github.com/huggingface/datasets/pull/6498 | 6,498 | Fallback on dataset script if user wants to load default config | closed | 8 | 2023-12-14T16:46:01 | 2023-12-15T13:16:56 | 2023-12-15T13:10:48 | lhoestq | [] | Right now this code is failing on `main`:
```python
load_dataset("openbookqa")
```
This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one.
I fixed this by simply falling back on using th... | true |
2,041,994,274 | https://api.github.com/repos/huggingface/datasets/issues/6497 | https://github.com/huggingface/datasets/issues/6497 | 6,497 | Support setting a default config name in push_to_hub | closed | 0 | 2023-12-14T15:59:03 | 2023-12-18T11:50:04 | 2023-12-18T11:50:04 | albertvillanova | [
"enhancement"
] | In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one. | false |
2,041,589,386 | https://api.github.com/repos/huggingface/datasets/issues/6496 | https://github.com/huggingface/datasets/issues/6496 | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | open | 1 | 2023-12-14T11:24:54 | 2023-12-14T12:22:21 | null | GeorgesLorre | [] | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92b... | false |
2,039,684,839 | https://api.github.com/repos/huggingface/datasets/issues/6494 | https://github.com/huggingface/datasets/issues/6494 | 6,494 | Image Data loaded Twice | open | 0 | 2023-12-13T13:11:42 | 2023-12-13T13:11:42 | null | ArcaneLex | [] | ### Describe the bug

When I learn from https://huggingface.co/docs/datasets/image_load and try to load image data from a folder. I noticed that the image was read twice in the returned data. As you can see i... | false |
2,039,708,529 | https://api.github.com/repos/huggingface/datasets/issues/6495 | https://github.com/huggingface/datasets/issues/6495 | 6,495 | Newline characters don't behave as expected when calling dataset.info | open | 0 | 2023-12-12T23:07:51 | 2023-12-13T13:24:22 | null | gerald-wrona | [] | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU... | false |
2,038,221,490 | https://api.github.com/repos/huggingface/datasets/issues/6493 | https://github.com/huggingface/datasets/pull/6493 | 6,493 | Lazy data files resolution and offline cache reload | closed | 8 | 2023-12-12T17:15:17 | 2023-12-21T15:19:20 | 2023-12-21T15:13:11 | lhoestq | [] | Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459
This PR should be merged instead of the two individually, since they are conflicting
## Offline cache reload
it can reload datasets that were pushed to hub if they exist in the cache.
examp... | true |
2,037,987,267 | https://api.github.com/repos/huggingface/datasets/issues/6492 | https://github.com/huggingface/datasets/pull/6492 | 6,492 | Make push_to_hub return CommitInfo | closed | 3 | 2023-12-12T15:18:16 | 2023-12-13T14:29:01 | 2023-12-13T14:22:41 | albertvillanova | [] | Make `push_to_hub` return `CommitInfo`.
This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID.
CC: @severo for the use case in https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/discussions/4 | true |
2,037,690,643 | https://api.github.com/repos/huggingface/datasets/issues/6491 | https://github.com/huggingface/datasets/pull/6491 | 6,491 | Fix metrics dead link | closed | 2 | 2023-12-12T12:51:49 | 2023-12-21T15:15:08 | 2023-12-21T15:08:53 | qgallouedec | [] | null | true |
2,037,204,892 | https://api.github.com/repos/huggingface/datasets/issues/6490 | https://github.com/huggingface/datasets/issues/6490 | 6,490 | `load_dataset(...,save_infos=True)` not working without loading script | open | 1 | 2023-12-12T08:09:18 | 2023-12-12T08:36:22 | null | morganveyret | [] | ### Describe the bug
It seems that saving a dataset's infos back into the card file is not working for datasets without a loading script.
After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory.
Internally this is a call to `inspect.getfil... | false |
2,036,743,777 | https://api.github.com/repos/huggingface/datasets/issues/6489 | https://github.com/huggingface/datasets/issues/6489 | 6,489 | load_dataset imageflder for aws s3 path | open | 0 | 2023-12-12T00:08:43 | 2023-12-12T00:09:27 | null | segalinc | [
"enhancement"
] | ### Feature request
I would like to load a dataset from S3 using the imagefolder option
something like
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
### Motivation
no need of data_files
### Your contribution
no experience... | false |
2,035,899,898 | https://api.github.com/repos/huggingface/datasets/issues/6488 | https://github.com/huggingface/datasets/issues/6488 | 6,488 | 429 Client Error | open | 2 | 2023-12-11T15:06:01 | 2024-06-20T05:55:45 | null | sasaadi | [] | Hello, I was downloading the following dataset and after 20% of the data was downloaded, I started getting error 429. It has not been resolved for a few days. How should I resolve it?
Thanks
Dataset:
https://huggingface.co/datasets/cerebras/SlimPajama-627B
Error:
`requests.exceptions.HTTPError: 429 Client Error: Too M... | false |
2,035,424,254 | https://api.github.com/repos/huggingface/datasets/issues/6487 | https://github.com/huggingface/datasets/pull/6487 | 6,487 | Update builder hash with info | closed | 2 | 2023-12-11T11:09:16 | 2024-01-11T06:35:07 | 2023-12-11T11:41:34 | lhoestq | [] | Currently if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change.
This is problematic because you want to regenerate a dataset if you change the features or the split sizes for example (e.g. after push_to_hub)
Ideally we should take the resolved files... | true |
2,035,206,206 | https://api.github.com/repos/huggingface/datasets/issues/6486 | https://github.com/huggingface/datasets/pull/6486 | 6,486 | Fix docs phrasing about supported formats when sharing a dataset | closed | 2 | 2023-12-11T09:21:22 | 2023-12-13T14:21:29 | 2023-12-13T14:15:21 | albertvillanova | [] | Fix docs phrasing. | true |
2,035,141,884 | https://api.github.com/repos/huggingface/datasets/issues/6485 | https://github.com/huggingface/datasets/issues/6485 | 6,485 | FileNotFoundError: [Errno 2] No such file or directory: 'nul' | closed | 1 | 2023-12-11T08:52:13 | 2023-12-14T08:09:08 | 2023-12-14T08:09:08 | amanyara | [] | ### Describe the bug
It seems something is wrong with my terribly bug-prone setup. When I run this code, `import datasets`,
I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'nul'
(screenshot omitted)
![image](htt... | false |
2,032,946,981 | https://api.github.com/repos/huggingface/datasets/issues/6483 | https://github.com/huggingface/datasets/issues/6483 | 6,483 | Iterable Dataset: rename column clashes with remove column | closed | 4 | 2023-12-08T16:11:30 | 2023-12-08T16:27:16 | 2023-12-08T16:27:04 | sanchit-gandhi | [
"streaming"
] | ### Describe the bug
Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typic... | false |
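A sketch of the column-unification step the report describes, with hypothetical dataset ids; at the time of this issue, chaining `rename_column` and `remove_columns` on streaming datasets misbehaved:

```python
from datasets import load_dataset, interleave_datasets

ds_a = load_dataset("user/asr_a", split="train", streaming=True)  # audio, text, column_a
ds_b = load_dataset("user/asr_b", split="train", streaming=True)  # audio, sentence, column_b

ds_a = ds_a.remove_columns(["column_a"])
ds_b = ds_b.rename_column("sentence", "text").remove_columns(["column_b"])

mixed = interleave_datasets([ds_a, ds_b], stopping_strategy="all_exhausted")
```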
2,033,333,294 | https://api.github.com/repos/huggingface/datasets/issues/6484 | https://github.com/huggingface/datasets/issues/6484 | 6,484 | [Feature Request] Dataset versioning | open | 2 | 2023-12-08T16:01:35 | 2023-12-11T19:13:46 | null | kenfus | [] | **Is your feature request related to a problem? Please describe.**
I am working on a project, where I would like to test different preprocessing methods for my ML-data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was n... | false |
2,032,675,918 | https://api.github.com/repos/huggingface/datasets/issues/6482 | https://github.com/huggingface/datasets/pull/6482 | 6,482 | Fix max lock length on unix | closed | 3 | 2023-12-08T13:39:30 | 2023-12-12T11:53:32 | 2023-12-12T11:47:27 | lhoestq | [] | reported in https://github.com/huggingface/datasets/pull/6482 | true |
2,032,650,003 | https://api.github.com/repos/huggingface/datasets/issues/6481 | https://github.com/huggingface/datasets/issues/6481 | 6,481 | using torchrun, save_to_disk suddenly shows SIGTERM | open | 0 | 2023-12-08T13:22:03 | 2023-12-08T13:22:03 | null | Ariya12138 | [] | ### Describe the bug
When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages:
Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reac... | false |