id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,973,937,612 | https://api.github.com/repos/huggingface/datasets/issues/6377 | https://github.com/huggingface/datasets/issues/6377 | 6,377 | Support pyarrow 14.0.0 | closed | 0 | 2023-11-02T10:22:08 | 2023-11-02T15:15:45 | 2023-11-02T15:15:45 | albertvillanova | [] | Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375 | false |
1,973,927,468 | https://api.github.com/repos/huggingface/datasets/issues/6376 | https://github.com/huggingface/datasets/issues/6376 | 6,376 | Caching problem when deleting a dataset | closed | 3 | 2023-11-02T10:15:58 | 2023-12-04T16:53:34 | 2023-12-04T16:53:33 | clefourrier | [] | ### Describe the bug
Pushing a dataset with n + m features to a repo that contained n features and was then deleted will fail.
### Steps to reproduce the bug
1. Create a dataset with n features per row
2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)`
3. Go on the hub, delete the repo at `YOUR_PATH`
4. Update... | false |
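A minimal stdlib sketch of the failure mode described in this issue (hypothetical illustration, not the `datasets` implementation): a schema cached per repo path keeps serving the old n-feature schema unless deleting the repo also invalidates the cache entry:

```python
class NaiveSchemaCache:
    """Hypothetical client-side cache keyed by repo path."""

    def __init__(self):
        self._schemas = {}

    def push(self, path, features):
        # A stale entry from before the repo was deleted makes a push
        # with extra features fail, as in the reported bug.
        cached = self._schemas.get(path)
        if cached is not None and set(cached) != set(features):
            raise ValueError(f"feature mismatch with cached schema for {path}")
        self._schemas[path] = list(features)

    def on_repo_deleted(self, path):
        # Invalidating on deletion lets the next push define a new schema.
        self._schemas.pop(path, None)
```

With the cache entry invalidated on deletion, the second push (with n + m features) succeeds.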
1,973,877,879 | https://api.github.com/repos/huggingface/datasets/issues/6375 | https://github.com/huggingface/datasets/pull/6375 | 6,375 | Temporarily pin pyarrow < 14.0.0 | closed | 3 | 2023-11-02T09:48:58 | 2023-11-02T10:22:33 | 2023-11-02T10:11:19 | albertvillanova | [] | Temporarily pin `pyarrow` < 14.0.0 until permanent solution is found.
Hot fix #6374. | true |
1,973,857,428 | https://api.github.com/repos/huggingface/datasets/issues/6374 | https://github.com/huggingface/datasets/issues/6374 | 6,374 | CI is broken: TypeError: Couldn't cast array | closed | 0 | 2023-11-02T09:37:06 | 2023-11-02T10:11:20 | 2023-11-02T10:11:20 | albertvillanova | [] | See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
``` | false |
1,973,349,695 | https://api.github.com/repos/huggingface/datasets/issues/6373 | https://github.com/huggingface/datasets/pull/6373 | 6,373 | Fix typo in `Dataset.map` docstring | closed | 2 | 2023-11-02T01:36:49 | 2023-11-02T15:18:22 | 2023-11-02T10:11:38 | bryant1410 | [] | null | true |
1,972,837,794 | https://api.github.com/repos/huggingface/datasets/issues/6372 | https://github.com/huggingface/datasets/pull/6372 | 6,372 | do not try to download from HF GCS for generator | closed | 2 | 2023-11-01T17:57:11 | 2023-11-02T16:02:52 | 2023-11-02T15:52:09 | yundai424 | [] | attempt to fix https://github.com/huggingface/datasets/issues/6371 | true |
1,972,807,579 | https://api.github.com/repos/huggingface/datasets/issues/6371 | https://github.com/huggingface/datasets/issues/6371 | 6,371 | `Dataset.from_generator` should not try to download from HF GCS | closed | 1 | 2023-11-01T17:36:17 | 2023-11-02T15:52:10 | 2023-11-02T15:52:10 | yundai424 | [] | ### Describe the bug
When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic will call [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datas... | false |
1,972,073,909 | https://api.github.com/repos/huggingface/datasets/issues/6370 | https://github.com/huggingface/datasets/issues/6370 | 6,370 | TensorDataset format does not work with Trainer from transformers | closed | 2 | 2023-11-01T10:09:54 | 2023-11-29T16:31:08 | 2023-11-29T16:31:08 | jinzzasol | [] | ### Describe the bug
The model was built to do fine-tuning of a BERT model for relation extraction.
`trainer.train()` returns an error message ```TypeError: vars() argument must have __dict__ attribute``` when its `train_dataset` is generated from `torch.utils.data.TensorDataset`.
However, in the document, the req... | false |
1,971,794,108 | https://api.github.com/repos/huggingface/datasets/issues/6369 | https://github.com/huggingface/datasets/issues/6369 | 6,369 | Multi process map did not load cache file correctly | closed | 3 | 2023-11-01T06:36:54 | 2023-11-30T16:04:46 | 2023-11-30T16:04:45 | enze5088 | [] | ### Describe the bug
When I was training a model on multiple GPUs with DDP, the dataset was tokenized multiple times after the main process.

 function returns bytes instead of PIL images even when image column is not part of "columns" | closed | 1 | 2023-10-31T11:10:48 | 2023-11-02T14:21:17 | 2023-11-02T14:21:17 | leot13 | [] | ### Describe the bug
When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes.
Here is a minimal reproduction of the bug:
https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJU... | false |
1,970,140,392 | https://api.github.com/repos/huggingface/datasets/issues/6365 | https://github.com/huggingface/datasets/issues/6365 | 6,365 | Parquet size grows exponential for categorical data | closed | 1 | 2023-10-31T10:29:02 | 2023-10-31T10:49:17 | 2023-10-31T10:49:17 | aseganti | [] | ### Describe the bug
It seems that when saving a data frame with a categorical column, the size can grow exponentially.
This seems to happen because when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories ar... | false |
1,969,136,106 | https://api.github.com/repos/huggingface/datasets/issues/6364 | https://github.com/huggingface/datasets/issues/6364 | 6,364 | ArrowNotImplementedError: Unsupported cast from string to list using function cast_list | closed | 2 | 2023-10-30T20:14:01 | 2023-10-31T19:21:23 | 2023-10-31T19:21:23 | divyakrishna-devisetty | [] | Hi,
I am trying to load a local CSV dataset (similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue.
CSV Data sample(golden_dataset.csv):
Question | Context | answer | groundtruth
"what is abc?"... | false |
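A stdlib-only workaround sketch (hypothetical helper, not the `datasets` API): CSV cells always arrive as strings, so a column that is meant to hold lists has to be parsed before it can be cast to a `Sequence` feature:

```python
import ast

def parse_list_cell(cell):
    # A CSV cell like '["chunk one", "chunk two"]' is a plain string;
    # parse it into an actual Python list before casting the column.
    value = ast.literal_eval(cell)
    if not isinstance(value, list):
        raise TypeError(f"expected a list, got {type(value).__name__}")
    return value
```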
1,968,891,277 | https://api.github.com/repos/huggingface/datasets/issues/6363 | https://github.com/huggingface/datasets/issues/6363 | 6,363 | dataset.transform() hangs indefinitely while finetuning the stable diffusion XL | closed | 7 | 2023-10-30T17:34:05 | 2023-11-22T00:29:21 | 2023-11-22T00:29:21 | bhosalems | [] | ### Describe the bug
Multi-GPU fine-tuning of Stable Diffusion XL by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely.
### Steps to reproduce the bug
accelerate launch train_text_to_image_sdxl.py --pretrained_model_name_or_path=$MODEL_NAME --... | false |
1,965,794,569 | https://api.github.com/repos/huggingface/datasets/issues/6362 | https://github.com/huggingface/datasets/pull/6362 | 6,362 | Simplify filesystem logic | closed | 13 | 2023-10-27T15:54:18 | 2023-11-15T14:08:29 | 2023-11-15T14:02:02 | mariosasko | [] | Simplifies the existing filesystem logic (e.g., to avoid unnecessary if-else as mentioned in https://github.com/huggingface/datasets/pull/6098#issue-1827655071) | true |
1,965,672,950 | https://api.github.com/repos/huggingface/datasets/issues/6360 | https://github.com/huggingface/datasets/issues/6360 | 6,360 | Add support for `Sequence(Audio/Image)` feature in `push_to_hub` | closed | 1 | 2023-10-27T14:39:57 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | Laurent2916 | ["enhancement"] | ### Feature request
Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards.
### Motivation
Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead ... | false |
1,965,378,583 | https://api.github.com/repos/huggingface/datasets/issues/6359 | https://github.com/huggingface/datasets/issues/6359 | 6,359 | Stuck in "Resolving data files..." | open | 5 | 2023-10-27T12:01:51 | 2025-03-09T02:18:19 | null | Luciennnnnnn | [] | ### Describe the bug
I have an image dataset with 300k images; the size of each image is 768 * 768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` for the second time, it takes 50 minutes to finish the "Resolving data files" part. What's going on in this part?
From my understa... | false |
1,965,014,595 | https://api.github.com/repos/huggingface/datasets/issues/6358 | https://github.com/huggingface/datasets/issues/6358 | 6,358 | Mounting datasets cache fails due to absolute paths. | closed | 5 | 2023-10-27T08:20:27 | 2024-04-10T08:50:06 | 2023-11-28T14:47:12 | charliebudd | [] | ### Describe the bug
Creating a datasets cache and mounting this into, for example, a docker container, renders the data unreadable due to absolute paths written into the cache.
### Steps to reproduce the bug
1. Create a datasets cache by downloading some data
2. Mount the dataset folder into a docker contain... | false |
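The fix direction described in this thread can be sketched with the stdlib (illustrative only): store paths relative to the cache root instead of absolute host paths, so a mounted cache stays readable at a different mount point:

```python
from pathlib import PurePosixPath

def relativize_cache_path(path, cache_root):
    # Storing paths relative to the cache root keeps the cache readable
    # when it is mounted at a different absolute path inside a container.
    return str(PurePosixPath(path).relative_to(cache_root))
```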
1,964,653,995 | https://api.github.com/repos/huggingface/datasets/issues/6357 | https://github.com/huggingface/datasets/issues/6357 | 6,357 | Allow passing a multiprocessing context to functions that support `num_proc` | open | 0 | 2023-10-27T02:31:16 | 2023-10-27T02:31:16 | null | bryant1410 | ["enhancement"] | ### Feature request
Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done:
```python
dataset = dataset.map(_func, num_proc=2, mp_cont... | false |
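# Stdlib sketch of the requested behaviour (the `mp_context` parameter
# above is a proposal, not an existing `datasets` argument): a
# multiprocessing context bundles a start method such as "spawn" or
# "fork" with the usual Pool/Queue constructors.
import multiprocessing

def run_with_context(func, items, method="spawn", processes=2):
    # Callers pick the start method explicitly instead of relying on
    # the platform default.
    ctx = multiprocessing.get_context(method)
    with ctx.Pool(processes) as pool:
        return pool.map(func, items)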
1,964,015,802 | https://api.github.com/repos/huggingface/datasets/issues/6356 | https://github.com/huggingface/datasets/pull/6356 | 6,356 | Add `fsspec` version to the `datasets-cli env` command output | closed | 3 | 2023-10-26T17:19:25 | 2023-10-26T18:42:56 | 2023-10-26T18:32:21 | mariosasko | [] | ... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes. | true |
1,963,979,896 | https://api.github.com/repos/huggingface/datasets/issues/6355 | https://github.com/huggingface/datasets/pull/6355 | 6,355 | More hub centric docs | closed | 3 | 2023-10-26T16:54:46 | 2024-01-11T06:34:16 | 2023-10-30T17:32:57 | lhoestq | [] | Let's have more hub-centric documentation in the datasets docs
Tutorials
- Add “Configure the dataset viewer” page
- Change order:
- Overview
- and more focused on the Hub rather than the library
- Then all the hub related things
- and mention how to read/write with other tools like pandas
- The... | true |
1,963,483,324 | https://api.github.com/repos/huggingface/datasets/issues/6354 | https://github.com/huggingface/datasets/issues/6354 | 6,354 | `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` | open | 3 | 2023-10-26T12:43:36 | 2024-12-10T14:06:06 | null | NazyS | [] | ### Describe the bug
Looks like `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` if I'm not missing anything.
Also, it returns inconsistent error messages, which probably depend on the nondeterministic order of worker execution.
Some examples I've encountered:
```
File "/l... | false |
1,962,646,450 | https://api.github.com/repos/huggingface/datasets/issues/6353 | https://github.com/huggingface/datasets/issues/6353 | 6,353 | load_dataset save_to_disk load_from_disk error | closed | 5 | 2023-10-26T03:47:06 | 2024-04-03T05:31:01 | 2023-10-26T10:18:04 | brisker | [] | ### Describe the bug
datasets version: 2.10.1
I `load_dataset` and `save_to_disk` successfully on Windows 10 (**and I `load_from_disk(/LLM/data/wiki)` successfully on Windows 10**), and I copy the dataset `/LLM/data/wiki`
into a ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on ubuntu, something weird ha... | false |
1,962,296,057 | https://api.github.com/repos/huggingface/datasets/issues/6352 | https://github.com/huggingface/datasets/issues/6352 | 6,352 | Error loading wikitext data raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") | closed | 13 | 2023-10-25T21:55:31 | 2024-03-19T16:46:22 | 2023-11-07T07:26:54 | Ahmed-Roushdy | [] | I was trying to load the wiki dataset, but i got this error
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset
ds = builder_instance.as_dataset(split=split, verific... | false |
1,961,982,988 | https://api.github.com/repos/huggingface/datasets/issues/6351 | https://github.com/huggingface/datasets/pull/6351 | 6,351 | Fix use_dataset.mdx | closed | 2 | 2023-10-25T18:21:08 | 2023-10-26T17:19:49 | 2023-10-26T17:10:27 | angel-luis | [] | The current example isn't working because it can't find `labels` inside the Dataset object. So I've added an extra step to the process. Tested and working in Colab. | true |
1,961,869,203 | https://api.github.com/repos/huggingface/datasets/issues/6350 | https://github.com/huggingface/datasets/issues/6350 | 6,350 | Different objects are returned from calls that should be returning the same kind of object. | open | 2 | 2023-10-25T17:08:39 | 2023-10-26T21:03:06 | null | phalexo | [] | ### Describe the bug
1. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]')
2. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir)
The only difference I would expect these cal... | false |
1,961,435,673 | https://api.github.com/repos/huggingface/datasets/issues/6349 | https://github.com/huggingface/datasets/issues/6349 | 6,349 | Can't load ds = load_dataset("imdb") | closed | 4 | 2023-10-25T13:29:51 | 2024-03-20T15:09:53 | 2023-10-31T19:59:35 | vivianc2 | [] | ### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error:
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as we... | false |
1,961,268,504 | https://api.github.com/repos/huggingface/datasets/issues/6348 | https://github.com/huggingface/datasets/issues/6348 | 6,348 | Parquet stream-conversion fails to embed images/audio files from gated repos | open | 1 | 2023-10-25T12:12:44 | 2025-04-17T12:21:43 | null | severo | ["bug"] | It seems to be an issue with datasets not passing the token to embed_table_storage when generating a dataset.
See https://github.com/huggingface/datasets-server/issues/2010 | false |
1,959,004,835 | https://api.github.com/repos/huggingface/datasets/issues/6347 | https://github.com/huggingface/datasets/issues/6347 | 6,347 | Incorrect example code in 'Create a dataset' docs | closed | 2 | 2023-10-24T11:01:21 | 2023-10-25T13:05:21 | 2023-10-25T13:05:21 | rwood-97 | [] | ### Describe the bug
On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect.
Currently, examples are:
``` python
from datasets import ImageFolder
dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
```
and
``` python... | false |
1,958,777,076 | https://api.github.com/repos/huggingface/datasets/issues/6346 | https://github.com/huggingface/datasets/pull/6346 | 6,346 | Fix UnboundLocalError if preprocessing returns an empty list | closed | 2 | 2023-10-24T08:38:43 | 2023-10-25T17:39:17 | 2023-10-25T16:36:38 | cwallenwein | [] | If this tokenization function is used with IterableDatasets and no sample is as big as the context length, `input_batch` will be an empty list.
```
def tokenize(batch, tokenizer, context_length):
outputs = tokenizer(
batch["text"],
truncation=True,
max_length=context_length,
r... | true |
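# The bug pattern, reduced to a stdlib-only sketch (hypothetical
# simplification of the tokenize function above): a name that is only
# ever assigned inside a loop is unbound when the loop body never runs,
# so it must be initialized before the loop.
def collect_full_length(batches, context_length):
    input_batch = []  # initializing up front avoids the UnboundLocalError
    for ids in batches:
        if len(ids) == context_length:
            input_batch.append(ids)
    return {"input_ids": input_batch}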
1,957,707,870 | https://api.github.com/repos/huggingface/datasets/issues/6345 | https://github.com/huggingface/datasets/issues/6345 | 6,345 | support squad structure datasets using a YAML parameter | open | 0 | 2023-10-23T17:55:37 | 2023-10-23T17:55:37 | null | MajdTannous1 | ["enhancement"] | ### Feature request
Since the SQuAD structure is widely used, I think it could be beneficial to support it using a YAML parameter.
Could you implement automatic data loading of SQuAD-like data in the SQuAD JSON format, reading it from JSON files and viewing it in the correct SQuAD structure?
The dataset structure should... | false |
1,957,412,169 | https://api.github.com/repos/huggingface/datasets/issues/6344 | https://github.com/huggingface/datasets/pull/6344 | 6,344 | set dev version | closed | 3 | 2023-10-23T15:13:28 | 2023-10-23T15:24:31 | 2023-10-23T15:13:38 | lhoestq | [] | null | true |
1,957,370,711 | https://api.github.com/repos/huggingface/datasets/issues/6343 | https://github.com/huggingface/datasets/pull/6343 | 6,343 | Remove unused argument in `_get_data_files_patterns` | closed | 3 | 2023-10-23T14:54:18 | 2023-11-16T09:09:42 | 2023-11-16T09:03:39 | lhoestq | [] | null | true |
1,957,344,445 | https://api.github.com/repos/huggingface/datasets/issues/6342 | https://github.com/huggingface/datasets/pull/6342 | 6,342 | Release: 2.14.6 | closed | 5 | 2023-10-23T14:43:26 | 2023-10-23T15:21:54 | 2023-10-23T15:07:25 | lhoestq | [] | null | true |
1,956,917,893 | https://api.github.com/repos/huggingface/datasets/issues/6340 | https://github.com/huggingface/datasets/pull/6340 | 6,340 | Release 2.14.5 | closed | 1 | 2023-10-23T11:10:22 | 2023-10-23T14:20:46 | 2023-10-23T11:12:40 | lhoestq | [] | (wrong release number - I was continuing the 2.14 branch but 2.14.5 was released from `main`) | true |
1,956,912,627 | https://api.github.com/repos/huggingface/datasets/issues/6339 | https://github.com/huggingface/datasets/pull/6339 | 6,339 | minor release step improvement | closed | 3 | 2023-10-23T11:07:04 | 2023-11-07T10:38:54 | 2023-11-07T10:32:41 | lhoestq | [] | null | true |
1,956,886,072 | https://api.github.com/repos/huggingface/datasets/issues/6338 | https://github.com/huggingface/datasets/pull/6338 | 6,338 | pin fsspec before it switches to glob.glob | closed | 2 | 2023-10-23T10:50:54 | 2024-01-11T06:32:56 | 2023-10-23T10:51:52 | lhoestq | [] | null | true |
1,956,875,259 | https://api.github.com/repos/huggingface/datasets/issues/6337 | https://github.com/huggingface/datasets/pull/6337 | 6,337 | Pin supported upper version of fsspec | closed | 6 | 2023-10-23T10:44:16 | 2023-10-23T12:13:20 | 2023-10-23T12:04:36 | albertvillanova | [] | Pin upper version of `fsspec` to avoid disruptions introduced by breaking changes (and the need of urgent patch releases with hotfixes) on each release on their side. See:
- #6331
- #6210
- #5731
- #5617
- #5447
I propose that we explicitly test, introduce fixes and support each new `fsspec` version release.
... | true |
1,956,827,232 | https://api.github.com/repos/huggingface/datasets/issues/6336 | https://github.com/huggingface/datasets/pull/6336 | 6,336 | unpin-fsspec | closed | 3 | 2023-10-23T10:16:46 | 2024-02-07T12:41:35 | 2023-10-23T10:17:48 | lhoestq | [] | Close #6333. | true |
1,956,740,818 | https://api.github.com/repos/huggingface/datasets/issues/6335 | https://github.com/huggingface/datasets/pull/6335 | 6,335 | Support fsspec 2023.10.0 | closed | 7 | 2023-10-23T09:29:17 | 2024-01-11T06:33:35 | 2023-11-14T14:17:40 | albertvillanova | [] | Fix #6333. | true |
1,956,719,774 | https://api.github.com/repos/huggingface/datasets/issues/6334 | https://github.com/huggingface/datasets/pull/6334 | 6,334 | datasets.filesystems: fix is_remote_filesystems | closed | 3 | 2023-10-23T09:17:54 | 2024-02-07T12:41:15 | 2023-10-23T10:14:10 | ap-- | [] | Close #6330, close #6333.
`fsspec.implementations.LocalFilesystem.protocol`
was changed from `str` "file" to `tuple[str,...]` ("file", "local") in `fsspec>=2023.10.0`
This commit supports both styles. | true |
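The compatibility shim this commit describes can be sketched in plain Python (illustrative, not the exact `datasets.filesystems` code):

```python
def is_local_protocol(protocol):
    # fsspec >= 2023.10.0 exposes LocalFileSystem.protocol as the tuple
    # ("file", "local"); earlier releases used the bare string "file".
    # Normalizing to a tuple handles both styles.
    protocols = (protocol,) if isinstance(protocol, str) else tuple(protocol)
    return "file" in protocols or "local" in protocols
```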
1,956,714,423 | https://api.github.com/repos/huggingface/datasets/issues/6333 | https://github.com/huggingface/datasets/issues/6333 | 6,333 | Support fsspec 2023.10.0 | closed | 4 | 2023-10-23T09:14:53 | 2024-02-07T12:39:58 | 2024-02-07T12:39:58 | albertvillanova | [] | Once root issue is fixed, remove temporary pin of fsspec < 2023.10.0 introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381 | false |
1,956,697,328 | https://api.github.com/repos/huggingface/datasets/issues/6332 | https://github.com/huggingface/datasets/pull/6332 | 6,332 | Replace deprecated license_file in setup.cfg | closed | 4 | 2023-10-23T09:05:26 | 2023-11-07T08:23:10 | 2023-11-07T08:09:06 | albertvillanova | [] | Replace deprecated license_file in `setup.cfg`.
See: https://github.com/huggingface/datasets/actions/runs/6610930650/job/17953825724?pr=6331
```
/tmp/pip-build-env-a51hls20/overlay/lib/python3.8/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
... | true |
1,956,671,256 | https://api.github.com/repos/huggingface/datasets/issues/6331 | https://github.com/huggingface/datasets/pull/6331 | 6,331 | Temporarily pin fsspec < 2023.10.0 | closed | 3 | 2023-10-23T08:51:50 | 2023-10-23T09:26:42 | 2023-10-23T09:17:55 | albertvillanova | [] | Temporarily pin fsspec < 2023.10.0 until permanent solution is found.
Hot fix #6330.
See: https://github.com/huggingface/datasets/actions/runs/6610904287/job/17953774987
```
...
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NotImplementedError: Loading a dataset cached in a LocalFileS... | true |
1,956,053,294 | https://api.github.com/repos/huggingface/datasets/issues/6330 | https://github.com/huggingface/datasets/issues/6330 | 6,330 | Latest fsspec==2023.10.0 issue with streaming datasets | closed | 9 | 2023-10-22T20:57:10 | 2025-06-09T22:00:16 | 2023-10-23T09:17:56 | ZachNagengast | [] | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps ... | false |
1,955,858,020 | https://api.github.com/repos/huggingface/datasets/issues/6329 | https://github.com/huggingface/datasets/issues/6329 | 6,329 | Text-to-speech networks first convert the given text into an intermediate representation | closed | 0 | 2023-10-22T11:07:46 | 2023-10-23T09:22:58 | 2023-10-23T09:22:58 | shabnam706 | [] | Text-to-speech networks first convert the given text into an intermediate representation | false |
1,955,857,904 | https://api.github.com/repos/huggingface/datasets/issues/6328 | https://github.com/huggingface/datasets/issues/6328 | 6,328 | Text-to-speech networks first convert the given text into an intermediate representation | closed | 1 | 2023-10-22T11:07:21 | 2023-10-23T09:22:38 | 2023-10-23T09:22:38 | shabnam706 | [] | null | false |
1,955,470,755 | https://api.github.com/repos/huggingface/datasets/issues/6327 | https://github.com/huggingface/datasets/issues/6327 | 6,327 | FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)` | closed | 3 | 2023-10-21T12:27:03 | 2023-10-23T18:50:07 | 2023-10-23T18:50:07 | yzhangcs | [] | ### Describe the bug
Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but `FileNotFoundError` occurs.
### Steps to reproduce the bug
I've downloaded the dataset and save it to the cache dir in advance. My hope is loadi... | false |
1,955,420,536 | https://api.github.com/repos/huggingface/datasets/issues/6326 | https://github.com/huggingface/datasets/pull/6326 | 6,326 | Create battery_analysis.py | closed | 0 | 2023-10-21T10:07:48 | 2023-10-23T14:56:20 | 2023-10-23T14:56:20 | vinitkm | [] | null | true |
1,955,420,178 | https://api.github.com/repos/huggingface/datasets/issues/6325 | https://github.com/huggingface/datasets/pull/6325 | 6,325 | Create battery_analysis.py | closed | 0 | 2023-10-21T10:06:37 | 2023-10-23T14:55:58 | 2023-10-23T14:55:58 | vinitkm | [] | null | true |
1,955,126,687 | https://api.github.com/repos/huggingface/datasets/issues/6324 | https://github.com/huggingface/datasets/issues/6324 | 6,324 | Conversion to Arrow fails due to wrong type heuristic | closed | 2 | 2023-10-20T23:20:58 | 2023-10-23T20:52:57 | 2023-10-23T20:52:57 | jphme | [] | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b', and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowI... | false |
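A stdlib-only workaround sketch (hypothetical helper, not part of `datasets`): scan the column first and widen it to `str` when values are mixed, so Arrow's type inference sees one consistent type instead of failing mid-conversion:

```python
def widen_mixed_column(rows, key):
    # If the column mixes strings (e.g. '1a', '2b') with plain numbers,
    # cast every non-null value to str before building the dataset.
    values = [row.get(key) for row in rows]
    has_str = any(isinstance(v, str) for v in values)
    all_str = all(isinstance(v, str) for v in values if v is not None)
    if has_str and not all_str:
        for row in rows:
            if row.get(key) is not None:
                row[key] = str(row[key])
    return rows
```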
1,954,245,980 | https://api.github.com/repos/huggingface/datasets/issues/6323 | https://github.com/huggingface/datasets/issues/6323 | 6,323 | Loading dataset from large GCS bucket very slow since 2.14 | open | 1 | 2023-10-20T12:59:55 | 2024-09-03T18:42:33 | null | jbcdnr | [] | ### Describe the bug
Since updating to >2.14 we have very slow access to our parquet files on GCS when loading a dataset (>30 min vs 3s). Our GCS bucket has many objects and resolving globs is very slow. I could track down the problem to this change:
https://github.com/huggingface/datasets/blame/bade7af74437347a76083... | false |
1,952,947,461 | https://api.github.com/repos/huggingface/datasets/issues/6322 | https://github.com/huggingface/datasets/pull/6322 | 6,322 | Fix regex `get_data_files` formatting for base paths | closed | 4 | 2023-10-19T19:45:10 | 2023-10-23T14:40:45 | 2023-10-23T14:31:21 | ZachNagengast | [] | With this pr https://github.com/huggingface/datasets/pull/6309, it is formatting the entire base path into regex, which results in the undesired formatting error `doesn't match the pattern` because of the line in `glob_pattern_to_regex`: `.replace("//", "/")`:
- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`... | true |
1,952,643,483 | https://api.github.com/repos/huggingface/datasets/issues/6321 | https://github.com/huggingface/datasets/pull/6321 | 6,321 | Fix typos | closed | 2 | 2023-10-19T16:24:35 | 2023-10-19T17:18:00 | 2023-10-19T17:07:35 | python273 | [] | null | true |
1,952,618,316 | https://api.github.com/repos/huggingface/datasets/issues/6320 | https://github.com/huggingface/datasets/issues/6320 | 6,320 | Dataset slice splits can't load training and validation at the same time | closed | 1 | 2023-10-19T16:09:22 | 2023-11-30T16:21:15 | 2023-11-30T16:21:15 | timlac | [] | ### Describe the bug
According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) is should be possible to run the following command:
`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`
to load the train and test sets from the dataset.
However ex... | false |
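The documented `split="train+test"` syntax concatenates several splits; a small stdlib sketch of the resolution step (illustrative only) shows why every named split must exist for the combined spec to load:

```python
def resolve_split_spec(spec, available):
    # "train+test" names two splits that are concatenated in order;
    # any name missing from the dataset makes the whole spec fail.
    parts = [part.strip() for part in spec.split("+")]
    missing = [p for p in parts if p not in available]
    if missing:
        raise ValueError(f"unknown split(s): {missing}")
    return parts
```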
1,952,101,717 | https://api.github.com/repos/huggingface/datasets/issues/6319 | https://github.com/huggingface/datasets/issues/6319 | 6,319 | Datasets.map is severely broken | open | 15 | 2023-10-19T12:19:33 | 2024-08-08T17:05:08 | null | phalexo | [] | ### Describe the bug
Regardless of how many cores I use (I have 16 or 32 threads), map slows down to a crawl at around 80% done, lingers maybe until 97% extremely slowly, and NEVER finishes the job. It just hangs.
After watching this for 27 hours I control-C out of it. Until the end one process appears to be doing s... | false |
1,952,100,706 | https://api.github.com/repos/huggingface/datasets/issues/6318 | https://github.com/huggingface/datasets/pull/6318 | 6,318 | Deterministic set hash | closed | 3 | 2023-10-19T12:19:13 | 2023-10-19T16:27:20 | 2023-10-19T16:16:31 | lhoestq | [] | Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.
This is useful to get deterministic hashes of tokenizers that use a trie based on python sets.
reported in https://github.com/huggingface/datasets/issues/3847 | true |
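The idea behind this PR can be sketched with `hashlib` (illustrative — `datasets` uses its own `Hasher`): hash each element, sort the digests, and hash the sorted concatenation, so the result no longer depends on set iteration order:

```python
import hashlib

def deterministic_set_hash(items):
    # Sorting the per-element digests removes the dependence on
    # Python's arbitrary (per-process) set iteration order.
    digests = sorted(
        hashlib.sha256(repr(item).encode()).hexdigest() for item in items
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()
```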
1,951,965,668 | https://api.github.com/repos/huggingface/datasets/issues/6317 | https://github.com/huggingface/datasets/issues/6317 | 6,317 | sentiment140 dataset unavailable | closed | 2 | 2023-10-19T11:25:21 | 2023-10-19T13:04:56 | 2023-10-19T13:04:56 | AndreasKarasenko | [] | ### Describe the bug
Loading the dataset using `load_dataset("sentiment140")` returns the following error:
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
### Steps to reproduce the bug
Run the following code (version should not matter).
```
from data... | false |
1,951,819,869 | https://api.github.com/repos/huggingface/datasets/issues/6316 | https://github.com/huggingface/datasets/pull/6316 | 6,316 | Fix loading Hub datasets with CSV metadata file | closed | 4 | 2023-10-19T10:21:34 | 2023-10-20T06:23:21 | 2023-10-20T06:14:09 | albertvillanova | [] | Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- correspon... | true |
1,951,800,819 | https://api.github.com/repos/huggingface/datasets/issues/6315 | https://github.com/huggingface/datasets/issues/6315 | 6,315 | Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0 | closed | 0 | 2023-10-19T10:11:29 | 2023-10-20T06:14:10 | 2023-10-20T06:14:10 | albertvillanova | ["bug"] | When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error:
```
E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
pyarrow/error.pxi:100: ArrowInvalid
```
See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1 | false |
1,951,684,763 | https://api.github.com/repos/huggingface/datasets/issues/6314 | https://github.com/huggingface/datasets/pull/6314 | 6,314 | Support creating new branch in push_to_hub | closed | 0 | 2023-10-19T09:12:39 | 2023-10-19T09:20:06 | 2023-10-19T09:19:48 | jmif | [] | This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created. | true |
1,951,527,712 | https://api.github.com/repos/huggingface/datasets/issues/6313 | https://github.com/huggingface/datasets/pull/6313 | 6,313 | Fix commit message formatting in multi-commit uploads | closed | 2 | 2023-10-19T07:53:56 | 2023-10-20T14:06:13 | 2023-10-20T13:57:39 | qgallouedec | [] | Currently, the commit message keeps on adding:
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00000-of-00002) (part 00001-of-00002)`
Introduced in https://github.com/huggingface/datasets/pull/6269
This PR fixes this issue to have
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset... | true |
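The intended behaviour can be sketched as deriving each part's message from the untouched base string rather than from the previous commit's message (illustrative helper, not the actual `datasets` code):

```python
def part_commit_message(base, part_index, num_parts):
    # Always build the message from the base string, so part suffixes
    # do not accumulate across commits in a multi-commit upload.
    return f"{base} (part {part_index:05d}-of-{num_parts:05d})"
```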
1,950,128,416 | https://api.github.com/repos/huggingface/datasets/issues/6312 | https://github.com/huggingface/datasets/pull/6312 | 6,312 | docs: resolving namespace conflict, refactored variable | closed | 1 | 2023-10-18T16:10:59 | 2023-10-19T16:31:59 | 2023-10-19T16:23:07 | smty2018 | [] | In docs of about_arrow.md, in the below example code

The variable name 'time' was being used in a way that could potentially lead to a namespace conflict with Python's built-in 'time' module. It is not a good conven... | true |
1,949,304,993 | https://api.github.com/repos/huggingface/datasets/issues/6311 | https://github.com/huggingface/datasets/issues/6311 | 6,311 | cast_column to Sequence with length=4 occur exception raise in datasets/table.py:2146 | closed | 4 | 2023-10-18T09:38:05 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | neiblegy | [] | ### Describe the bug
i load a dataset from local csv file which has 187383612 examples, then use `map` to generate new columns for test.
here is my code :
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value
def add_new_path(example):
example["ais_bbox"] =... | false |
1,947,457,988 | https://api.github.com/repos/huggingface/datasets/issues/6310 | https://github.com/huggingface/datasets/pull/6310 | 6,310 | Add return_file_name in load_dataset | closed | 7 | 2023-10-17T13:36:57 | 2024-08-09T11:51:55 | 2024-07-31T13:56:50 | juliendenize | [] | Proposition to fix #5806.
Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output.
There is a difference between arrow-based and folder-based datasets to return the file name:
- fo... | true |
1,946,916,969 | https://api.github.com/repos/huggingface/datasets/issues/6309 | https://github.com/huggingface/datasets/pull/6309 | 6,309 | Fix get_data_patterns for directories with the word data twice | closed | 7 | 2023-10-17T09:00:39 | 2023-10-18T14:01:52 | 2023-10-18T13:50:35 | albertvillanova | [] | Before the fix, `get_data_patterns` inferred wrongly the split name for paths with the word "data" twice:
- For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`)
- The in... | true |
1,946,810,625 | https://api.github.com/repos/huggingface/datasets/issues/6308 | https://github.com/huggingface/datasets/issues/6308 | 6,308 | module 'resource' has no attribute 'error' | closed | 4 | 2023-10-17T08:08:54 | 2023-10-25T17:09:22 | 2023-10-25T17:09:22 | NeoWang9999 | [] | ### Describe the bug
just run import:
`from datasets import load_dataset`
and then:
```
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow... | false |
1,946,414,808 | https://api.github.com/repos/huggingface/datasets/issues/6307 | https://github.com/huggingface/datasets/pull/6307 | 6,307 | Fix typo in code example in docs | closed | 2 | 2023-10-17T02:28:50 | 2023-10-17T12:59:26 | 2023-10-17T06:36:19 | bryant1410 | [] | null | true |
1,946,363,452 | https://api.github.com/repos/huggingface/datasets/issues/6306 | https://github.com/huggingface/datasets/issues/6306 | 6,306 | pyinstaller : OSError: could not get source code | closed | 5 | 2023-10-17T01:41:51 | 2023-11-02T07:24:51 | 2023-10-18T14:03:42 | dusk877647949 | [] | ### Describe the bug
I ran a package with pyinstaller and got the following error:
### Steps to reproduce the bug
```
...
File "datasets\__init__.py", line 52, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_an... | false |
1,946,010,912 | https://api.github.com/repos/huggingface/datasets/issues/6305 | https://github.com/huggingface/datasets/issues/6305 | 6,305 | Cannot load dataset with `2.14.5`: `FileNotFound` error | closed | 2 | 2023-10-16T20:11:27 | 2023-10-18T13:50:36 | 2023-10-18T13:50:36 | finiteautomata | [] | ### Describe the bug
I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`.
### Steps to reproduce the bug
[Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg)
```python
D... | false |
1,945,913,521 | https://api.github.com/repos/huggingface/datasets/issues/6304 | https://github.com/huggingface/datasets/pull/6304 | 6,304 | Update README.md | closed | 1 | 2023-10-16T19:10:39 | 2023-10-17T15:13:37 | 2023-10-17T15:04:52 | smty2018 | [] | Fixed typos in ReadMe and added punctuation marks
Tensorflow --> TensorFlow
| true |
1,943,466,532 | https://api.github.com/repos/huggingface/datasets/issues/6303 | https://github.com/huggingface/datasets/issues/6303 | 6,303 | Parquet uploads off-by-one naming scheme | open | 4 | 2023-10-14T18:31:03 | 2023-10-16T16:33:21 | null | ZachNagengast | [] | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71... | false |
1,942,096,078 | https://api.github.com/repos/huggingface/datasets/issues/6302 | https://github.com/huggingface/datasets/issues/6302 | 6,302 | ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size` | closed | 2 | 2023-10-13T14:43:36 | 2023-10-17T06:52:12 | 2023-10-17T06:52:11 | Rassibassi | [] | ### Describe the bug
An example from [1], does not work when limiting shards with `max_shard_size`.
Try the following example with low `max_shard_size`, such as:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
The reason f... | false |
1,940,183,999 | https://api.github.com/repos/huggingface/datasets/issues/6301 | https://github.com/huggingface/datasets/pull/6301 | 6,301 | Unpin `tensorflow` maximum version | closed | 3 | 2023-10-12T14:58:07 | 2023-10-12T15:58:20 | 2023-10-12T15:49:54 | mariosasko | [] | Removes the temporary pin introduced in #6264 | true |
1,940,153,432 | https://api.github.com/repos/huggingface/datasets/issues/6300 | https://github.com/huggingface/datasets/pull/6300 | 6,300 | Unpin `jax` maximum version | closed | 6 | 2023-10-12T14:42:40 | 2023-10-12T16:37:55 | 2023-10-12T16:28:57 | mariosasko | [] | fix #6299
fix #6202 | true |
1,939,649,238 | https://api.github.com/repos/huggingface/datasets/issues/6299 | https://github.com/huggingface/datasets/issues/6299 | 6,299 | Support for newer versions of JAX | closed | 0 | 2023-10-12T10:03:46 | 2023-10-12T16:28:59 | 2023-10-12T16:28:59 | ddrous | [
"enhancement"
] | ### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome !
What is the rationale for such a lim... | false |
1,938,797,389 | https://api.github.com/repos/huggingface/datasets/issues/6298 | https://github.com/huggingface/datasets/pull/6298 | 6,298 | Doc readme improvements | closed | 2 | 2023-10-11T21:51:12 | 2023-10-12T12:47:15 | 2023-10-12T12:38:19 | mariosasko | [] | Changes in the doc READMe:
* adds two new sections (to be aligned with `transformers` and `hfh`): "Previewing the documentation" and "Writing documentation examples"
* replaces the mentions of `transformers` with `datasets`
* fixes some dead links | true |
1,938,752,707 | https://api.github.com/repos/huggingface/datasets/issues/6297 | https://github.com/huggingface/datasets/pull/6297 | 6,297 | Fix ArrayXD cast | closed | 2 | 2023-10-11T21:14:59 | 2023-10-13T13:54:00 | 2023-10-13T13:45:30 | mariosasko | [] | Fix #6291 | true |
1,938,453,845 | https://api.github.com/repos/huggingface/datasets/issues/6296 | https://github.com/huggingface/datasets/pull/6296 | 6,296 | Move `exceptions.py` to `utils/exceptions.py` | closed | 6 | 2023-10-11T18:28:00 | 2024-09-03T16:00:04 | 2024-09-03T16:00:03 | mariosasko | [] | I didn't notice the path while reviewing the PR yesterday :( | true |
1,937,362,102 | https://api.github.com/repos/huggingface/datasets/issues/6295 | https://github.com/huggingface/datasets/pull/6295 | 6,295 | Fix parquet columns argument in streaming mode | closed | 3 | 2023-10-11T10:01:01 | 2023-10-11T16:30:24 | 2023-10-11T16:21:36 | lhoestq | [] | It was failing when there's a DatasetInfo with non-None info.features from the YAML (therefore containing columns that should be ignored)
Fix https://github.com/huggingface/datasets/issues/6293 | true |
1,937,359,605 | https://api.github.com/repos/huggingface/datasets/issues/6294 | https://github.com/huggingface/datasets/issues/6294 | 6,294 | IndexError: Invalid key is out of bounds for size 0 despite having a populated dataset | closed | 1 | 2023-10-11T09:59:38 | 2023-10-17T11:24:06 | 2023-10-17T11:24:06 | ZYM66 | [] | ### Describe the bug
I am encountering an `IndexError` when trying to access data from a DataLoader which wraps around a dataset I've loaded using the `datasets` library. The error suggests that the dataset size is `0`, but when I check the length and print the dataset, it's clear that it has `1166` entries.
### Step... | false |
1,937,238,047 | https://api.github.com/repos/huggingface/datasets/issues/6293 | https://github.com/huggingface/datasets/issues/6293 | 6,293 | Choose columns to stream parquet data in streaming mode | closed | 0 | 2023-10-11T08:59:36 | 2023-10-11T16:21:38 | 2023-10-11T16:21:38 | lhoestq | [
"bug"
] | Currently passing columns= to load_dataset in streaming mode fails
```
Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=... | false |
1,937,050,470 | https://api.github.com/repos/huggingface/datasets/issues/6292 | https://github.com/huggingface/datasets/issues/6292 | 6,292 | how to load the image of dtype float32 or float64 | open | 1 | 2023-10-11T07:27:16 | 2023-10-11T13:19:11 | null | wanglaofei | [] | _FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
The datasets builder seems only support the unit8 data. How to load the float dtype data? | false |
1,936,129,871 | https://api.github.com/repos/huggingface/datasets/issues/6291 | https://github.com/huggingface/datasets/issues/6291 | 6,291 | Casting type from Array2D int to Array2D float crashes | closed | 1 | 2023-10-10T20:10:10 | 2023-10-13T13:45:31 | 2023-10-13T13:45:31 | AlanBlanchet | [] | ### Describe the bug
I am on a school project and the initial type for feature annotations are `Array2D(shape=(None, 4))`. I am trying to cast this type to a `float64` and pyarrow gives me this error :
```
Traceback (most recent call last):
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearnin... | false |
1,935,629,679 | https://api.github.com/repos/huggingface/datasets/issues/6290 | https://github.com/huggingface/datasets/issues/6290 | 6,290 | Incremental dataset (e.g. `.push_to_hub(..., append=True)`) | open | 6 | 2023-10-10T15:18:03 | 2025-03-12T13:41:26 | null | Wauplin | [
"enhancement"
] | ### Feature request
Have the possibility to do `ds.push_to_hub(..., append=True)`.
### Motivation
Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and
this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675... | false |
1,935,628,506 | https://api.github.com/repos/huggingface/datasets/issues/6289 | https://github.com/huggingface/datasets/pull/6289 | 6,289 | testing doc-builder | closed | 2 | 2023-10-10T15:17:29 | 2023-10-13T08:57:14 | 2023-10-13T08:56:48 | mishig25 | [] | testing https://github.com/huggingface/doc-builder/pull/426 | true |
1,935,005,457 | https://api.github.com/repos/huggingface/datasets/issues/6288 | https://github.com/huggingface/datasets/issues/6288 | 6,288 | Dataset.from_pandas with a DataFrame of PIL.Images | open | 3 | 2023-10-10T10:29:16 | 2024-11-29T16:35:30 | null | lhoestq | [
"enhancement"
] | Currently type inference doesn't know what to do with a Pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way | false |
1,932,758,192 | https://api.github.com/repos/huggingface/datasets/issues/6287 | https://github.com/huggingface/datasets/issues/6287 | 6,287 | map() not recognizing "text" | closed | 1 | 2023-10-09T10:27:30 | 2023-10-11T20:28:45 | 2023-10-11T20:28:45 | EngineerKhan | [] | ### Describe the bug
The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:
`
ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`
I have been trying to reproduce it in my code as:
`tokenizedData... | false |
1,932,640,128 | https://api.github.com/repos/huggingface/datasets/issues/6286 | https://github.com/huggingface/datasets/pull/6286 | 6,286 | Create DefunctDatasetError | closed | 2 | 2023-10-09T09:23:23 | 2023-10-10T07:13:22 | 2023-10-10T07:03:04 | albertvillanova | [] | Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible.
See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b | true |
1,932,306,325 | https://api.github.com/repos/huggingface/datasets/issues/6285 | https://github.com/huggingface/datasets/issues/6285 | 6,285 | TypeError: expected str, bytes or os.PathLike object, not dict | open | 4 | 2023-10-09T04:56:26 | 2023-10-10T13:17:33 | null | andysingal | [] | ### Describe the bug
my dataset is in form : train- image /n -labels
and tried the code:
```
from datasets import load_dataset
data_files = {
"train": "/content/datasets/PotholeDetectionYOLOv8-1/train/",
"validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/",
"test": "/content/dat... | false |
1,929,551,712 | https://api.github.com/repos/huggingface/datasets/issues/6284 | https://github.com/huggingface/datasets/issues/6284 | 6,284 | Add Belebele multiple-choice machine reading comprehension (MRC) dataset | closed | 1 | 2023-10-06T06:58:03 | 2023-10-06T13:26:51 | 2023-10-06T13:26:51 | rajveer43 | [
"enhancement"
] | ### Feature request
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short pass... | false |
1,928,552,257 | https://api.github.com/repos/huggingface/datasets/issues/6283 | https://github.com/huggingface/datasets/pull/6283 | 6,283 | Fix array cast/embed with null values | closed | 10 | 2023-10-05T15:24:05 | 2024-07-04T07:24:20 | 2024-02-06T19:24:19 | mariosasko | [] | Fixes issues with casting/embedding PyArrow list arrays with null values. It also bumps the required PyArrow version to 12.0.0 (over 9 months old) to simplify the implementation.
Fix #6280, fix #6311, fix #6360
(Also fixes https://github.com/huggingface/datasets/issues/5430 to make Beam compatible with PyArrow>=... | true |
1,928,473,630 | https://api.github.com/repos/huggingface/datasets/issues/6282 | https://github.com/huggingface/datasets/pull/6282 | 6,282 | Drop data_files duplicates | closed | 5 | 2023-10-05T14:43:08 | 2024-09-02T14:08:35 | 2024-09-02T14:08:35 | lhoestq | [] | I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
| true |
1,928,456,959 | https://api.github.com/repos/huggingface/datasets/issues/6281 | https://github.com/huggingface/datasets/pull/6281 | 6,281 | Improve documentation of dataset.from_generator | closed | 2 | 2023-10-05T14:34:49 | 2023-10-05T19:09:07 | 2023-10-05T18:57:41 | hartmans | [] | Improve documentation to clarify sharding behavior (#6270) | true |
1,928,215,278 | https://api.github.com/repos/huggingface/datasets/issues/6280 | https://github.com/huggingface/datasets/issues/6280 | 6,280 | Couldn't cast array of type fixed_size_list to Sequence(Value(float64)) | closed | 4 | 2023-10-05T12:48:31 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | jmif | [] | ### Describe the bug
I have a dataset with an embedding column, when I try to map that dataset I get the following exception:
```
Traceback (most recent call last):
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map
for rank, done, content... | false |
1,928,028,226 | https://api.github.com/repos/huggingface/datasets/issues/6279 | https://github.com/huggingface/datasets/issues/6279 | 6,279 | Batched IterableDataset | open | 9 | 2023-10-05T11:12:49 | 2024-11-07T10:01:22 | null | lneukom | [
"enhancement"
] | ### Feature request
Hi,
could you add an implementation of a batched `IterableDataset`. It already support an option to do batch iteration via `.iter(batch_size=...)` but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator.
### Motivation
The current implementation load... | false |
1,927,957,877 | https://api.github.com/repos/huggingface/datasets/issues/6278 | https://github.com/huggingface/datasets/pull/6278 | 6,278 | No data files duplicates | closed | 4 | 2023-10-05T10:31:58 | 2024-01-11T06:32:49 | 2023-10-05T14:43:17 | lhoestq | [] | I added a new DataFilesSet class to disallow duplicate data files.
I also deprecated DataFilesList.
EDIT: actually I might just add drop_duplicates=True to `.from_patterns`
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
TODO:
- [ ] tests
... | true |
1,927,044,546 | https://api.github.com/repos/huggingface/datasets/issues/6277 | https://github.com/huggingface/datasets/issues/6277 | 6,277 | FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either. | closed | 1 | 2023-10-04T22:01:25 | 2023-10-08T17:05:46 | 2023-10-08T17:05:46 | diegogonzalezc | [] | ### Describe the bug
I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows:
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub eit... | false |
1,925,961,878 | https://api.github.com/repos/huggingface/datasets/issues/6276 | https://github.com/huggingface/datasets/issues/6276 | 6,276 | I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error | open | 3 | 2023-10-04T11:03:41 | 2023-11-27T10:39:16 | null | valaofficial | [] | ### Describe the bug
I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error, i'm following the steps in this blog post
https://huggingface.co/blog/fine-tune-whisper
I tried google collab and it works but because I'm on the free version the training ... | false |