url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | draft (null) | pull_request (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5717/comments | https://api.github.com/repos/huggingface/datasets/issues/5717/events | https://github.com/huggingface/datasets/issues/5717 | 1,658,729,866 | I_kwDODunzps5i3jWK | 5,717 | Error when saving to disk a dataset of images | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https:... | null | [
"Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately... | 2023-04-07T11:59:17 | 2024-03-12T14:15:59 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Hello!
I have an issue when I try to save my dataset of images to disk. The error I get is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5717/timeline | null | null |
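A minimal sketch of the sharding workaround described in the comment above; the dataset path and shard count are illustrative, not taken from the report:
```python
from datasets import load_from_disk

ds = load_from_disk("my_image_dataset")  # hypothetical 40K-image Dataset
# More shards means fewer images per written batch: per the comment,
# num_shards=40 (~1000 images/shard) crashed while num_shards=50 (~800) worked.
ds.save_to_disk("my_image_dataset_saved", num_shards=50)
```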
https://api.github.com/repos/huggingface/datasets/issues/5716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5716/comments | https://api.github.com/repos/huggingface/datasets/issues/5716/events | https://github.com/huggingface/datasets/issues/5716 | 1,658,613,092 | I_kwDODunzps5i3G1k | 5,716 | Handle empty audio | {
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-y... | [] | closed | false | null | [] | null | [
"Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example(... | 2023-04-07T09:51:40 | 2023-09-27T17:47:08 | 2023-09-27T17:47:08 | NONE | null | null | null | Some audio paths exist, but they are empty, and an error will be reported when reading the audio path.How to use the filter function to avoid the empty audio path?
when a audio is empty, when do resample , it will break:
`array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5716/timeline | null | completed |
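A hedged sketch of one way to answer the question in the report: drop zero-frame files before decoding and resampling. The column name and sampling rates are illustrative:
```python
import numpy as np
import soundfile as sf
from datasets import Audio, Dataset

sf.write("empty.wav", np.array([]), 16000)   # zero-frame file, as in the repro
sf.write("ok.wav", np.zeros(16000), 16000)   # one second of silence

ds = Dataset.from_dict({"audio": ["empty.wav", "ok.wav"]})
ds = ds.cast_column("audio", Audio(decode=False))                    # keep paths, don't decode yet
ds = ds.filter(lambda ex: sf.info(ex["audio"]["path"]).frames > 0)   # drop empty files
ds = ds.cast_column("audio", Audio(sampling_rate=24000))             # now safe to decode/resample
print(len(ds))  # 1
```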
https://api.github.com/repos/huggingface/datasets/issues/5715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5715/comments | https://api.github.com/repos/huggingface/datasets/issues/5715/events | https://github.com/huggingface/datasets/issues/5715 | 1,657,479,788 | I_kwDODunzps5iyyJs | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | {
"login": "jungbaepark",
"id": 34066771,
"node_id": "MDQ6VXNlcjM0MDY2Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungbaepark",
"html_url": "https://github.com/jungbaepark",
"followers_url": "https://api.github.com/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] | 2023-04-06T13:57:48 | 2023-04-20T17:16:26 | 2023-04-20T17:16:26 | NONE | null | null | null | ### Feature request
There are old, well-known but easily forgotten issues in multiprocessing with the PyTorch DataLoader:
RAM or shared-memory usage in PyTorch becomes too high when we set num_workers > 1 and the return type of the dataset or dataloader is "List" or "Dict".
https://github.com/pytorch/pytorch... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5715/timeline | null | completed |
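The suggestion from the comment above, as a runnable toy example (the data is made up):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})
ds.set_format("np")          # or "torch" for PyTorch tensors
print(type(ds[0]["x"]))      # <class 'numpy.ndarray'>
print(ds[:2]["x"].shape)     # (2, 2): equal-length rows stack into one array
```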
https://api.github.com/repos/huggingface/datasets/issues/5713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5713/comments | https://api.github.com/repos/huggingface/datasets/issues/5713/events | https://github.com/huggingface/datasets/issues/5713 | 1,657,141,251 | I_kwDODunzps5ixfgD | 5,713 | ArrowNotImplementedError when loading dataset from the hub | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | closed | false | null | [] | null | [
"Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. ... | 2023-04-06T10:27:22 | 2023-04-06T13:06:22 | 2023-04-06T13:06:21 | CONTRIBUTOR | null | null | null | ### Describe the bug
Hello,
I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5713/timeline | null | completed |
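A hedged sketch of the remedy pointed to in the comment: cap the shard size explicitly so no shard approaches the 2GB limit. The repo id and data directory are hypothetical:
```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="images", split="train")  # as in the report's setup
ds.push_to_hub("username/my-image-dataset", max_shard_size="500MB")
```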
https://api.github.com/repos/huggingface/datasets/issues/5712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5712/comments | https://api.github.com/repos/huggingface/datasets/issues/5712/events | https://github.com/huggingface/datasets/issues/5712 | 1,655,972,106 | I_kwDODunzps5itCEK | 5,712 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | {
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/... | [] | closed | false | null | [] | null | [
"Closing since this is a duplicate of #5711",
"> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate"
] | 2023-04-05T16:47:10 | 2023-04-06T08:32:37 | 2023-04-05T17:17:44 | NONE | null | null | null | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5712/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5711/comments | https://api.github.com/repos/huggingface/datasets/issues/5711/events | https://github.com/huggingface/datasets/issues/5711 | 1,655,971,647 | I_kwDODunzps5itB8_ | 5,711 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | {
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```pyt... | 2023-04-05T16:46:49 | 2023-04-07T09:16:59 | 2023-04-07T09:16:59 | NONE | null | null | null | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5711/timeline | null | completed |
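A minimal illustration (not the library's actual code) of the failure mode behind "seek of closed file", and the buffer-copy pattern a fix can rely on:
```python
import io
import numpy as np

np.save("arr.npy", np.arange(5))

with open("arr.npy", "rb") as f:
    pass                        # the handle is closed once the with-block exits
try:
    np.load(f)                  # np.load must read/seek -> ValueError on a closed file
except ValueError as e:
    print(e)

with open("arr.npy", "rb") as f:
    buf = io.BytesIO(f.read())  # copy the bytes while the file is still open
print(np.load(buf))             # works: the BytesIO stays open and seekable
```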
https://api.github.com/repos/huggingface/datasets/issues/5710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5710/comments | https://api.github.com/repos/huggingface/datasets/issues/5710/events | https://github.com/huggingface/datasets/issues/5710 | 1,655,703,534 | I_kwDODunzps5isAfu | 5,710 | OSError: Memory mapping file failed: Cannot allocate memory | {
"login": "Saibo-creator",
"id": 53392976,
"node_id": "MDQ6VXNlcjUzMzkyOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saibo-creator",
"html_url": "https://github.com/Saibo-creator",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they a... | 2023-04-05T14:11:26 | 2023-04-20T17:16:40 | 2023-04-20T17:16:40 | NONE | null | null | null | ### Describe the bug
Hello, I have a series of 600 datasets of 5 GB each, about 3 TB in total.
When I try to load all 600 datasets into memory, I get the above error message.
Is this normal because I'm hitting the OS's limit on memory mappings?
Thank you
```te... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5710/timeline | null | completed |
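One hedged way to sidestep the mapping limit implied by the report: process the saved datasets one at a time instead of keeping 3 TB of mappings open. The directory layout here is hypothetical:
```python
from pathlib import Path
from datasets import load_from_disk

for shard_dir in sorted(Path("shards").iterdir()):  # hypothetical: one saved dataset per subdir
    ds = load_from_disk(str(shard_dir))
    # ... process ds ...
    del ds  # release the memory map before opening the next dataset
```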
https://api.github.com/repos/huggingface/datasets/issues/5709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5709/comments | https://api.github.com/repos/huggingface/datasets/issues/5709/events | https://github.com/huggingface/datasets/issues/5709 | 1,655,423,503 | I_kwDODunzps5iq8IP | 5,709 | Manually dataset info made not taken into account | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | closed | false | null | [] | null | [
"hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually... | 2023-04-05T11:15:17 | 2023-04-06T08:52:20 | 2023-04-06T08:52:19 | CONTRIBUTOR | null | null | null | ### Describe the bug
Hello,
I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` method. Once the dataset is created I push it to the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo at the same time. Hen...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5709/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5708/comments | https://api.github.com/repos/huggingface/datasets/issues/5708/events | https://github.com/huggingface/datasets/issues/5708 | 1,655,023,642 | I_kwDODunzps5ipaga | 5,708 | Dataset sizes are in MiB instead of MB in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp... | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming tha... | 2023-04-05T06:36:03 | 2023-12-21T10:20:28 | 2023-12-21T10:20:27 | MEMBER | null | null | null | As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5708/timeline | null | completed |
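For concreteness, the same byte count rendered in both units (the size value is arbitrary):
```python
size_bytes = 1_158_993_099
print(f"{size_bytes / 10**6:.2f} MB")   # 1158.99 MB  (decimal, 10**6 bytes)
print(f"{size_bytes / 2**20:.2f} MiB")  # 1105.30 MiB (binary, 2**20 bytes)
```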
https://api.github.com/repos/huggingface/datasets/issues/5706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5706/comments | https://api.github.com/repos/huggingface/datasets/issues/5706/events | https://github.com/huggingface/datasets/issues/5706 | 1,653,545,835 | I_kwDODunzps5ijxtr | 5,706 | Support categorical data types for Parquet | {
"login": "kklemon",
"id": 1430243,
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kklemon",
"html_url": "https://github.com/kklemon",
"followers_url": "https://api.github.com/users/kklemon/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mhattingpete",
"id": 22622299,
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhattingpete",
"html_url": "https://github.com/mhattingpete",
"followers_url": "https://api.github.c... | [
{
"login": "mhattingpete",
"id": 22622299,
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhattingpete",
"html_url": "https://github.com/mhattingpete",
"followers_url": "... | null | [
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:... | 2023-04-04T09:45:35 | 2023-09-22T16:53:37 | null | NONE | null | null | null | ### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parq... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5706/timeline | null | null |
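The `ClassLabel` type mentioned in the comment, in a runnable toy example (the label names and data are made up):
```python
from datasets import ClassLabel, Dataset, Features, Value

features = Features({"text": Value("string"),
                     "label": ClassLabel(names=["negative", "positive"])})
ds = Dataset.from_dict({"text": ["bad", "good"], "label": [0, 1]}, features=features)
print(ds.features["label"].int2str(1))           # "positive"
print(ds.features["label"].str2int("negative"))  # 0
```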
https://api.github.com/repos/huggingface/datasets/issues/5705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5705/comments | https://api.github.com/repos/huggingface/datasets/issues/5705/events | https://github.com/huggingface/datasets/issues/5705 | 1,653,500,383 | I_kwDODunzps5ijmnf | 5,705 | Getting next item from IterableDataset took forever. | {
"login": "HongtaoYang",
"id": 16588434,
"node_id": "MDQ6VXNlcjE2NTg4NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HongtaoYang",
"html_url": "https://github.com/HongtaoYang",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...",
"Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beh... | 2023-04-04T09:16:17 | 2023-04-05T23:35:41 | 2023-04-05T23:35:41 | NONE | null | null | null | ### Describe the bug
I have a large dataset, about 500 GB, stored in Parquet format.
I then load the dataset and try to get the first item:
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5705/timeline | null | completed |
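A hedged sketch of the behavior explained in the comment: `next()` on a filtered streaming dataset scans until the first match. The file pattern and predicate are hypothetical:
```python
from datasets import load_dataset

ds = load_dataset("parquet", data_files="data/*.parquet", split="train", streaming=True)
ds = ds.filter(lambda ex: ex["label"] == 1)  # hypothetical predicate

first = next(iter(ds))  # streams and decodes rows until one passes the filter
```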
https://api.github.com/repos/huggingface/datasets/issues/5702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5702/comments | https://api.github.com/repos/huggingface/datasets/issues/5702/events | https://github.com/huggingface/datasets/issues/5702 | 1,653,104,720 | I_kwDODunzps5iiGBQ | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | {
"login": "gitforziio",
"id": 10508116,
"node_id": "MDQ6VXNlcjEwNTA4MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/10508116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gitforziio",
"html_url": "https://github.com/gitforziio",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"... | 2023-04-04T03:20:43 | 2023-04-05T14:15:18 | 2023-04-05T14:15:17 | NONE | null | null | null | ### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5702/timeline | null | completed |
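A hedged completion of the pattern the comment proposes: one nullable sub-field per possible type. The field names here are invented for illustration, not the truncated ones from the comment:
```python
from datasets import Dataset, Features, Sequence, Value

features = Features({
    "items": Sequence({
        "as_text": Value("string"),  # None when the element is not a string
        "as_dict": {"role": Value("string"), "argument": Value("string")},  # None otherwise
    })
})
ds = Dataset.from_dict(
    {"items": [[{"as_text": "hello", "as_dict": None},
                {"as_text": None, "as_dict": {"role": "A0", "argument": "x"}}]]},
    features=features,
)
```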
https://api.github.com/repos/huggingface/datasets/issues/5699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5699/comments | https://api.github.com/repos/huggingface/datasets/issues/5699/events | https://github.com/huggingface/datasets/issues/5699 | 1,652,437,419 | I_kwDODunzps5ifjGr | 5,699 | Issue when wanting to split in memory a cached dataset | {
"login": "FrancoisNoyez",
"id": 47528215,
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancoisNoyez",
"html_url": "https://github.com/FrancoisNoyez",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)"
] | 2023-04-03T17:00:07 | 2023-04-04T16:52:42 | null | NONE | null | null | null | ### Describe the bug
**In the 'train_test_split' method of the Dataset class** (defined in datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not No...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5699/timeline | null | null |
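For context, the code path the report concerns: an in-memory split requested via `keep_in_memory` (toy data):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})
split = ds.train_test_split(test_size=0.1, keep_in_memory=True)
print(split)
```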
https://api.github.com/repos/huggingface/datasets/issues/5698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5698/comments | https://api.github.com/repos/huggingface/datasets/issues/5698/events | https://github.com/huggingface/datasets/issues/5698 | 1,652,183,611 | I_kwDODunzps5ielI7 | 5,698 | Add Qdrant as another search index | {
"login": "kacperlukawski",
"id": 2649301,
"node_id": "MDQ6VXNlcjI2NDkzMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kacperlukawski",
"html_url": "https://github.com/kacperlukawski",
"followers_url": "https://api.gith... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"@mariosasko I'd appreciate your feedback on this. "
] | 2023-04-03T14:25:19 | 2023-04-11T10:28:40 | null | CONTRIBUTOR | null | null | null | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another available search index, so users can directly build an index from a dataset. Currently, only FAISS and ElasticSearch are supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search syst... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5698/timeline | null | null |
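For reference, the existing FAISS flow the request would extend with a Qdrant backend (random vectors; assumes `faiss` is installed):
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).tolist()})
ds.add_faiss_index(column="embeddings")
scores, examples = ds.get_nearest_examples(
    "embeddings", np.random.rand(8).astype("float32"), k=5
)
```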
https://api.github.com/repos/huggingface/datasets/issues/5696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5696/comments | https://api.github.com/repos/huggingface/datasets/issues/5696/events | https://github.com/huggingface/datasets/issues/5696 | 1,651,707,008 | I_kwDODunzps5icwyA | 5,696 | Shuffle a sharded iterable dataset without seed can lead to duplicate data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [] | 2023-04-03T09:40:03 | 2023-04-04T14:58:18 | 2023-04-04T14:58:18 | MEMBER | null | null | null | As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead o...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5696/timeline | null | completed |
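The workaround implied by the description: pass an explicit seed so every node permutes the shard list identically (the dataset name is illustrative):
```python
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)  # a multi-shard dataset
ds = ds.shuffle(seed=42, buffer_size=1000)  # explicit seed: same shard order on all nodes
```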
https://api.github.com/repos/huggingface/datasets/issues/5695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5695/comments | https://api.github.com/repos/huggingface/datasets/issues/5695/events | https://github.com/huggingface/datasets/issues/5695 | 1,650,974,156 | I_kwDODunzps5iZ93M | 5,695 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError | {
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid... | 2023-04-02T14:42:44 | 2023-04-11T09:17:54 | 2023-04-10T08:04:04 | NONE | null | null | null | ### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the e... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5695/timeline | null | completed |
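A hedged sketch of the advice in the comment: write Parquet with smaller row groups so no group crosses the 2GB boundary (data and sizes are illustrative):
```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"image_bytes": [b"..." * 1000] * 1000})  # stand-in for large binary rows
pq.write_table(table, "shard.parquet", row_group_size=100)  # rows per group, tuned to stay small
```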
https://api.github.com/repos/huggingface/datasets/issues/5694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5694/comments | https://api.github.com/repos/huggingface/datasets/issues/5694/events | https://github.com/huggingface/datasets/issues/5694 | 1,650,467,793 | I_kwDODunzps5iYCPR | 5,694 | Dataset configuration | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to mod... | 2023-04-01T13:08:05 | 2023-04-04T14:54:37 | null | MEMBER | null | null | null | Following discussions from https://github.com/huggingface/datasets/pull/5331
We could have something like `config.json` to define the configuration of a dataset.
```json
{
"data_dir": "data"
"data_files": {
"train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
}
}
```
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5694/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5692/comments | https://api.github.com/repos/huggingface/datasets/issues/5692/events | https://github.com/huggingface/datasets/issues/5692 | 1,649,818,644 | I_kwDODunzps5iVjwU | 5,692 | pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types | {
"login": "cyanic-selkie",
"id": 32219669,
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyanic-selkie",
"html_url": "https://github.com/cyanic-selkie",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?",
"> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n",
"@cyanic-selkie cou... | 2023-03-31T18:19:40 | 2024-01-14T07:24:21 | null | NONE | null | null | null | ### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/trai... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5692/timeline | null | null |
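A generic, hedged illustration of this class of error and one possible fix: align the column's type before combining (the real dataset's schema differs):
```python
from datasets import Dataset, Value, concatenate_datasets

a = Dataset.from_dict({"field": [1, 2]})
b = Dataset.from_dict({"field": ["3", "4"]})
b = b.cast_column("field", Value("int64"))  # make the types compatible
merged = concatenate_datasets([a, b])
```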
https://api.github.com/repos/huggingface/datasets/issues/5690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5690/comments | https://api.github.com/repos/huggingface/datasets/issues/5690/events | https://github.com/huggingface/datasets/issues/5690 | 1,649,289,883 | I_kwDODunzps5iTiqb | 5,690 | raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api | {
"login": "wccccp",
"id": 55964850,
"node_id": "MDQ6VXNlcjU1OTY0ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wccccp",
"html_url": "https://github.com/wccccp",
"followers_url": "https://api.github.com/users/wccccp/fo... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of ... | 2023-03-31T08:22:22 | 2023-07-21T14:21:57 | 2023-07-21T14:21:57 | NONE | null | null | null | ### Describe the bug
rta.sh
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, Dat... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5690/timeline | null | completed |
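A quick sanity check related to the discussion above; on compatible versions the module imports cleanly:
```python
import huggingface_hub

print(huggingface_hub.__version__)
from huggingface_hub import hf_api  # should succeed on recent versions
```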
https://api.github.com/repos/huggingface/datasets/issues/5688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5688/comments | https://api.github.com/repos/huggingface/datasets/issues/5688/events | https://github.com/huggingface/datasets/issues/5688 | 1,648,463,504 | I_kwDODunzps5iQY6Q | 5,688 | Wikipedia download_and_prepare for GCS | {
"login": "adrianfagerland",
"id": 25522531,
"node_id": "MDQ6VXNlcjI1NTIyNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adrianfagerland",
"html_url": "https://github.com/adrianfagerland",
"followers_url": "https://api... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processin... | 2023-03-30T23:43:22 | 2024-03-15T15:59:18 | 2024-03-15T15:59:18 | NONE | null | null | null | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the provided script, memory usage first balloons, then it crashes.
I tried running this on a VM with 128 GB of RAM and all I got was two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039a...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5688/timeline | null | completed |
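The Beam-based usage the comment points to, per the docs (requires `apache_beam`; the dump/config name is one of the documented ones):
```python
from datasets import load_dataset

ds = load_dataset("wikipedia", "20220301.en", beam_runner="DirectRunner")
```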
https://api.github.com/repos/huggingface/datasets/issues/5687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5687/comments | https://api.github.com/repos/huggingface/datasets/issues/5687/events | https://github.com/huggingface/datasets/issues/5687 | 1,647,009,018 | I_kwDODunzps5iK1z6 | 5,687 | Document to compress data files before uploading | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`,... | 2023-03-30T06:41:07 | 2023-04-19T07:25:59 | 2023-04-19T07:25:59 | MEMBER | null | null | null | In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload directly their data files, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.giattributes` file. Therefore, if they are t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5687/timeline | null | completed |
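A minimal sketch of the compression step the doc change recommends (the file names are illustrative):
```python
import gzip
import shutil

# train.csv -> train.csv.gz; the compressed extension is tracked by Git LFS.
with open("train.csv", "rb") as src, gzip.open("train.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```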
https://api.github.com/repos/huggingface/datasets/issues/5685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5685/comments | https://api.github.com/repos/huggingface/datasets/issues/5685/events | https://github.com/huggingface/datasets/issues/5685 | 1,646,048,667 | I_kwDODunzps5iHLWb | 5,685 | Broken Image render on the hub website | {
"login": "FrancescoSaverioZuppichini",
"id": 15908060,
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini",
"html_url": "https://github.com/FrancescoSaverioZuppichini"... | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard... | 2023-03-29T15:25:30 | 2023-03-30T07:54:25 | 2023-03-30T07:54:25 | NONE | null | null | null | ### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to load a huge number of datasets onto the hub (:partying_face: ), and I am facing a little issue with the `image` type
",
"Closed in #5693 "
] | 2023-03-29T11:44:49 | 2023-04-03T18:31:11 | 2023-04-03T18:31:11 | CONTRIBUTOR | null | null | null | Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged load... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5681/timeline | null | completed |
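The RepoCard-style flow the comment recommends for editing card metadata programmatically, as a hedged sketch:
```python
from huggingface_hub import DatasetCard

card = DatasetCard.load("Francesco/cell-towers")  # repo from the report
print(card.data)                                  # YAML metadata, incl. dataset_info
# ...edit card.data / card.text, then:
# card.push_to_hub("Francesco/cell-towers")
```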
https://api.github.com/repos/huggingface/datasets/issues/5679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5679/comments | https://api.github.com/repos/huggingface/datasets/issues/5679/events | https://github.com/huggingface/datasets/issues/5679 | 1,645,184,622 | I_kwDODunzps5iD4Zu | 5,679 | Allow load_dataset to take a working dir for intermediate data | {
"login": "lu-wang-dl",
"id": 38018689,
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-dl",
"html_url": "https://github.com/lu-wang-dl",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud stora... | 2023-03-29T07:21:09 | 2023-04-12T22:30:25 | null | NONE | null | null | null | ### Feature request
As a user, I can set a working dir for intermediate data creation; the processed files will then be moved to the cache dir, like
```
load_dataset(…, working_dir=”/temp/dir”, cache_dir=”/cloud_dir”).
```
### Motivation
This will help the use case of using cloud storage as the datasets cache. It wi...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5679/timeline | null | null |
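The existing capability the comment mentions, preparing a dataset directly on cloud storage, as a hedged sketch (the bucket path is hypothetical; assumes `s3fs` is installed):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb")
builder.download_and_prepare("s3://my-bucket/imdb", file_format="parquet")
```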
https://api.github.com/repos/huggingface/datasets/issues/5678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5678/comments | https://api.github.com/repos/huggingface/datasets/issues/5678/events | https://github.com/huggingface/datasets/issues/5678 | 1,645,018,359 | I_kwDODunzps5iDPz3 | 5,678 | Add support to create a Dataset from spark dataframe | {
"login": "lu-wang-dl",
"id": 38018689,
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-dl",
"html_url": "https://github.com/lu-wang-dl",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a b... | 2023-03-29T04:36:28 | 2023-07-21T14:15:38 | 2023-07-21T14:15:38 | NONE | null | null | null | ### Feature request
Add a new API `Dataset.from_spark` to create a Dataset from a Spark DataFrame.
### Motivation
Spark is a distributed computing framework that can handle large datasets. By supporting the loading of Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process t...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5678/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5678/timeline | null | completed |
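The API this request led to, available in recent `datasets` releases, as a minimal local sketch:
```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("hello", 0), ("world", 1)], schema="text string, label long")
ds = Dataset.from_spark(df)
```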
https://api.github.com/repos/huggingface/datasets/issues/5677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5677/comments | https://api.github.com/repos/huggingface/datasets/issues/5677/events | https://github.com/huggingface/datasets/issues/5677 | 1,644,828,606 | I_kwDODunzps5iChe- | 5,677 | Dataset.map() crashes when any column contains more than 1000 empty dictionaries | {
"login": "mtoles",
"id": 7139344,
"node_id": "MDQ6VXNlcjcxMzkzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mtoles",
"html_url": "https://github.com/mtoles",
"followers_url": "https://api.github.com/users/mtoles/foll... | [] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https:... | null | [] | 2023-03-29T00:01:31 | 2023-07-07T14:01:14 | 2023-07-07T14:01:14 | NONE | null | null | null | ### Describe the bug
`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.
### Steps to reproduce the bug
Example:
```
import datasets... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5677/timeline | null | completed |
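For context, the parameter the report's threshold comes from: `map()` writes in chunks of `writer_batch_size` rows (toy data):
```python
from datasets import Dataset

ds = Dataset.from_dict({"b": list(range(1500))})
ds = ds.map(lambda ex: ex, writer_batch_size=500)  # default is 1000, the reported threshold
```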
https://api.github.com/repos/huggingface/datasets/issues/5675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5675/comments | https://api.github.com/repos/huggingface/datasets/issues/5675/events | https://github.com/huggingface/datasets/issues/5675 | 1,641,763,478 | I_kwDODunzps5h21KW | 5,675 | Filter datasets by language code | {
"login": "named-entity",
"id": 5658496,
"node_id": "MDQ6VXNlcjU2NTg0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5658496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/named-entity",
"html_url": "https://github.com/named-entity",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | [
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missi... | 2023-03-27T09:42:28 | 2023-03-30T08:08:15 | 2023-03-30T08:08:15 | NONE | null | null | null | Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has a `myv` language tag but it is not included in the Languages search fo...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5675/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5674/comments | https://api.github.com/repos/huggingface/datasets/issues/5674/events | https://github.com/huggingface/datasets/issues/5674 | 1,641,084,105 | I_kwDODunzps5h0PTJ | 5,674 | Stored XSS | {
"login": "Fadavvi",
"id": 21213484,
"node_id": "MDQ6VXNlcjIxMjEzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/21213484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fadavvi",
"html_url": "https://github.com/Fadavvi",
"followers_url": "https://api.github.com/users/Fadavv... | [] | closed | false | null | [] | null | [
"Hi! You can contact `security@huggingface.co` to report this vulnerability."
] | 2023-03-26T20:55:58 | 2023-03-27T21:01:55 | 2023-03-27T21:01:55 | NONE | null | null | null | ### Describe the bug
I found a stored XSS on a page that is publicly accessible to all visitors, but I didn't find a suitable place to report it.
Please guide me on this.
### Steps to reproduce the bug
Due to security restrictions, I don't want to publish it publicly.
### Expected behavior
User inputs must be ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5674/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5672/comments | https://api.github.com/repos/huggingface/datasets/issues/5672/events | https://github.com/huggingface/datasets/issues/5672 | 1,641,005,322 | I_kwDODunzps5hz8EK | 5,672 | Pushing dataset to hub crash | {
"login": "tzvc",
"id": 14275989,
"node_id": "MDQ6VXNlcjE0Mjc1OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/14275989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tzvc",
"html_url": "https://github.com/tzvc",
"followers_url": "https://api.github.com/users/tzvc/followers"... | [] | closed | false | null | [] | null | [
"Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\... | 2023-03-26T17:42:13 | 2023-03-30T08:11:05 | 2023-03-30T08:11:05 | NONE | null | null | null | ### Describe the bug
Uploading a dataset with `push_to_hub()` fails without an error description.
### Steps to reproduce the bug
Hey there,
I've built an image dataset of 100k image + text pairs as described here: https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the hub b... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5672/timeline | null | completed |
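A minimal sketch of the imagefolder-then-push workflow described in the report above; the folder path and repository name are placeholders, and `max_shard_size` is just one way to keep individual uploads small:

```python
from datasets import load_dataset

# Hypothetical image+text folder laid out per the imagefolder docs
# (images plus a metadata file carrying the text column).
dataset = load_dataset("imagefolder", data_dir="my_images", split="train")

# Smaller shards keep each uploaded file well under the Hub's size limits.
dataset.push_to_hub("my-username/my-image-dataset", max_shard_size="500MB")
```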
https://api.github.com/repos/huggingface/datasets/issues/5671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5671/comments | https://api.github.com/repos/huggingface/datasets/issues/5671/events | https://github.com/huggingface/datasets/issues/5671 | 1,640,840,012 | I_kwDODunzps5hzTtM | 5,671 | How to use `load_dataset('glue', 'cola')` | {
"login": "makinzm",
"id": 40193664,
"node_id": "MDQ6VXNlcjQwMTkzNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makinzm",
"html_url": "https://github.com/makinzm",
"followers_url": "https://api.github.com/users/makinz... | [] | closed | false | null | [] | null | [
"Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to im... | 2023-03-26T09:40:34 | 2023-03-28T07:43:44 | 2023-03-28T07:43:43 | NONE | null | null | null | ### Describe the bug
I'm new to HuggingFace `datasets`, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
------------------------------------------------------------------------... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5671/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5670/comments | https://api.github.com/repos/huggingface/datasets/issues/5670/events | https://github.com/huggingface/datasets/issues/5670 | 1,640,607,045 | I_kwDODunzps5hya1F | 5,670 | Unable to load multi class classification datasets | {
"login": "ysahil97",
"id": 19690506,
"node_id": "MDQ6VXNlcjE5NjkwNTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysahil97",
"html_url": "https://github.com/ysahil97",
"followers_url": "https://api.github.com/users/ysa... | [] | closed | false | null | [] | null | [
"Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)",
"Thanks @lhoestq!\r\n\r\nI'll close this issue now."
] | 2023-03-25T18:06:15 | 2023-03-27T22:54:56 | 2023-03-27T22:54:56 | NONE | null | null | null | ### Describe the bug
I've been playing around with the Hugging Face library, mostly with `datasets`, and wanted to download the multi-class classification datasets to fine-tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)).
While loading the dataset, I'm getting... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5670/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5669/comments | https://api.github.com/repos/huggingface/datasets/issues/5669/events | https://github.com/huggingface/datasets/issues/5669 | 1,638,070,046 | I_kwDODunzps5hovce | 5,669 | Almost identical datasets, huge performance difference | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.gith... | [] | open | false | null | [] | null | [
"Do I miss something here?",
"Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `... | 2023-03-23T18:20:20 | 2023-04-09T18:56:23 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset(... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5669/timeline | null | null |
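Following the explanation in the comment above (a bytes-backed `Image()` column versus a plain nested sequence), a rough sketch of converting a list-of-lists column back to the `Image` feature; the dataset name is hypothetical and the snippet assumes "image" is the only column:

```python
import numpy as np
import PIL.Image
from datasets import Features, Image, load_dataset

ds = load_dataset("some/slow-image-dataset", split="train")  # hypothetical

def to_pil(example):
    # Rebuild a PIL image from the nested uint8 lists so it is stored
    # as encoded bytes and decoded lazily, like the fast dataset.
    example["image"] = PIL.Image.fromarray(np.asarray(example["image"], dtype=np.uint8))
    return example

# With more columns, all of them would need to be described in Features.
ds = ds.map(to_pil, features=Features({"image": Image()}))
```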
https://api.github.com/repos/huggingface/datasets/issues/5666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5666/comments | https://api.github.com/repos/huggingface/datasets/issues/5666/events | https://github.com/huggingface/datasets/issues/5666 | 1,637,675,062 | I_kwDODunzps5hnPA2 | 5,666 | Support tensorflow 2.12.0 in CI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 2023-03-23T14:37:51 | 2023-03-23T16:14:54 | 2023-03-23T16:14:54 | MEMBER | null | null | null | Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5666/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5665/comments | https://api.github.com/repos/huggingface/datasets/issues/5665/events | https://github.com/huggingface/datasets/issues/5665 | 1,637,193,648 | I_kwDODunzps5hlZew | 5,665 | Feature request: IterableDataset.push_to_hub | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2023-03-23T09:53:04 | 2023-03-23T09:53:16 | null | CONTRIBUTOR | null | null | null | ### Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.
Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit on your disk, you'd like to leverage streaming (a minimal sketch follows this record):
`... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5665/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5665/timeline | null | null |
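The sketch referenced in the feature request above: filtering a streamed dataset works today, but the final lazy upload does not, which is exactly what the issue asks for. The column name in the predicate is a placeholder:

```python
from datasets import load_dataset

# Stream LAION instead of materializing it on disk.
ds = load_dataset("laion/laion400m", split="train", streaming=True)

# Placeholder condition; replace with the real filtering logic.
ds = ds.filter(lambda example: example["caption"] is not None)

# ds.push_to_hub(...) is what this issue requests; it does not exist on
# IterableDataset yet, so the stream must currently be written out manually.
```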
https://api.github.com/repos/huggingface/datasets/issues/5663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5663/comments | https://api.github.com/repos/huggingface/datasets/issues/5663/events | https://github.com/huggingface/datasets/issues/5663 | 1,637,173,248 | I_kwDODunzps5hlUgA | 5,663 | CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 2023-03-23T09:39:43 | 2023-03-23T10:09:55 | 2023-03-23T10:09:55 | MEMBER | null | null | null | CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installati... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5663/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5661/comments | https://api.github.com/repos/huggingface/datasets/issues/5661/events | https://github.com/huggingface/datasets/issues/5661 | 1,637,129,445 | I_kwDODunzps5hlJzl | 5,661 | CI is broken: Unnecessary `dict` comprehension | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 2023-03-23T09:13:01 | 2023-03-23T09:37:51 | 2023-03-23T09:37:51 | MEMBER | null | null | null | CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5661/timeline | null | completed |
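For context, a minimal illustration of what ruff's C416 rule flags and the rewrite it suggests (not the actual code at `arrow_dataset.py:3267`):

```python
items = [("a", 1), ("b", 2)]

# C416: a dict comprehension that merely re-emits its key/value pairs...
redundant = {k: v for k, v in items}

# ...is equivalent to calling the dict constructor directly.
fixed = dict(items)
assert redundant == fixed
```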
https://api.github.com/repos/huggingface/datasets/issues/5660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5660/comments | https://api.github.com/repos/huggingface/datasets/issues/5660/events | https://github.com/huggingface/datasets/issues/5660 | 1,635,543,646 | I_kwDODunzps5hfGpe | 5,660 | integration with imbalanced-learn | {
"login": "tansaku",
"id": 30216,
"node_id": "MDQ6VXNlcjMwMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tansaku",
"html_url": "https://github.com/tansaku",
"followers_url": "https://api.github.com/users/tansaku/follower... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892913,
"node_id": "MDU6... | closed | false | null | [] | null | [
"You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), y... | 2023-03-22T11:05:17 | 2023-07-06T18:10:15 | 2023-07-06T18:10:15 | NONE | null | null | null | ### Feature request
Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - some examples would be great (a minimal sketch follows this record). I'v... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5660/timeline | null | completed |
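The sketch referenced above, following the `.to_pandas()` route suggested in the comment; it assumes one of imbalanced-learn's random samplers, which resample whole rows and should therefore tolerate string columns:

```python
from collections import Counter

from datasets import Dataset
from imblearn.under_sampling import RandomUnderSampler

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 0, 0, 1]})

# Round-trip through pandas: rebalance there, then rebuild a Dataset.
df = ds.to_pandas()
X, y = RandomUnderSampler(random_state=0).fit_resample(df[["text"]], df["label"])
df_balanced = X.copy()
df_balanced["label"] = list(y)
balanced = Dataset.from_pandas(df_balanced, preserve_index=False)
print(Counter(balanced["label"]))  # now one example per class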
https://api.github.com/repos/huggingface/datasets/issues/5659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5659/comments | https://api.github.com/repos/huggingface/datasets/issues/5659/events | https://github.com/huggingface/datasets/issues/5659 | 1,635,447,540 | I_kwDODunzps5hevL0 | 5,659 | [Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | [
"cc @polinaeterna @lhoestq ",
"@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume... | 2023-03-22T10:07:33 | 2024-01-17T13:59:22 | 2023-04-07T08:51:28 | CONTRIBUTOR | null | null | null | ### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5659/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5654/comments | https://api.github.com/repos/huggingface/datasets/issues/5654/events | https://github.com/huggingface/datasets/issues/5654 | 1,633,523,705 | I_kwDODunzps5hXZf5 | 5,654 | Offset overflow when executing Dataset.map | {
"login": "jan-pair",
"id": 118280608,
"node_id": "U_kgDOBwzRoA",
"avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jan-pair",
"html_url": "https://github.com/jan-pair",
"followers_url": "https://api.github.com/users/jan-pair/... | [] | open | false | null | [] | null | [
"Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n",
"As a workaround, one can replace\r\n`return {\"hr\": to... | 2023-03-21T09:33:27 | 2023-03-21T10:32:07 | null | NONE | null | null | null | ### Describe the bug
Hi, I'm trying to use the `.map` method to cache multiple random crops from each image to speed up data processing during training, as the image size is too big.
The map function executes all iterations, and then returns the following error:
```bash
Traceback (most recent call last): ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5654/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5653/comments | https://api.github.com/repos/huggingface/datasets/issues/5653/events | https://github.com/huggingface/datasets/issues/5653 | 1,633,254,159 | I_kwDODunzps5hWXsP | 5,653 | Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented | {
"login": "RmZeta2718",
"id": 42400165,
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RmZeta2718",
"html_url": "https://github.com/RmZeta2718",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 19358928... | closed | false | null | [] | null | [
"I agree this should be documented"
] | 2023-03-21T05:25:35 | 2023-03-24T16:36:23 | 2023-03-24T16:36:23 | NONE | null | null | null | ### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
[document of `num_shards`](https://... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5653/timeline | null | completed |
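A short sketch of the undocumented behavior reported above; output paths are placeholders:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# With num_proc set and num_shards unspecified, the number of written
# shards follows num_proc (the undocumented coupling this issue reports).
ds.save_to_disk("rt_8_shards", num_proc=8)

# Passing num_shards explicitly decouples the two.
ds.save_to_disk("rt_16_shards", num_shards=16, num_proc=8)
```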
https://api.github.com/repos/huggingface/datasets/issues/5651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5651/comments | https://api.github.com/repos/huggingface/datasets/issues/5651/events | https://github.com/huggingface/datasets/issues/5651 | 1,631,967,509 | I_kwDODunzps5hRdkV | 5,651 | expanduser in save_to_disk | {
"login": "RmZeta2718",
"id": 42400165,
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RmZeta2718",
"html_url": "https://github.com/RmZeta2718",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"followers_url": "https://... | [
{
"login": "benjaminbrown038",
"id": 35114142,
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminbrown038",
"html_url": "https://github.com/benjaminbrown038",
"follo... | null | [
"`save_to_disk` should indeed expand `~`. Marking it as a \"good first issue\".",
"#self-assign\r\n\r\nFile path to code: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_dataset.py#L1364\r\n\r\n@RmZeta2718 I created a pull request for this issue. ",
"Hello, \r\nIt says `save_to_di... | 2023-03-20T12:02:18 | 2023-10-27T14:04:37 | 2023-10-27T14:04:37 | NONE | null | null | null | ### Describe the bug
save_to_disk() does not expand `~`
1. `dataset = load_dataset("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. a folder named "~" is created in the current folder
4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`)
related issue https://github.... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5651/timeline | null | completed |
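Until `save_to_disk` expands `~` itself, a tiny workaround sketch is to expand the path up front:

```python
import os

from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")  # any dataset

# Expand the tilde manually so no literal "~" directory is created.
ds.save_to_disk(os.path.expanduser("~/data"))
```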
https://api.github.com/repos/huggingface/datasets/issues/5650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5650/comments | https://api.github.com/repos/huggingface/datasets/issues/5650/events | https://github.com/huggingface/datasets/issues/5650 | 1,630,336,919 | I_kwDODunzps5hLPeX | 5,650 | load_dataset can't work correct with my image data | {
"login": "WiNE-iNEFF",
"id": 41611046,
"node_id": "MDQ6VXNlcjQxNjExMDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WiNE-iNEFF",
"html_url": "https://github.com/WiNE-iNEFF",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Can you post a reproducible code snippet of what you tried to do?\r\n\r\n",
"> Can you post a reproducible code snippet of what you tried to do?\n> \n> \n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\n```",
"hi @WiNE-iNEFF ! can you please also te... | 2023-03-18T13:59:13 | 2023-07-24T14:13:02 | 2023-07-24T14:13:01 | NONE | null | null | null | I have about 20000 images in my folder which divided into 4 folders with class names.
When i use load_dataset("my_folder_name", split="train") this function create dataset in which there are only 4 images, the remaining 19000 images were not added there. What is the problem and did not understand. Tried converting imag... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5650/timeline | null | completed |
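The possible fix referenced above, a guess since the thread does not confirm the cause: pass the folder via `data_dir` to the `imagefolder` builder instead of using the bare folder name as the builder name:

```python
from datasets import load_dataset

# Scans the whole class-per-subfolder tree instead of whatever the
# bare folder-name call happened to pick up.
dataset = load_dataset("imagefolder", data_dir="my_folder_name", split="train")
print(dataset)
```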
https://api.github.com/repos/huggingface/datasets/issues/5649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5649/comments | https://api.github.com/repos/huggingface/datasets/issues/5649/events | https://github.com/huggingface/datasets/issues/5649 | 1,630,173,460 | I_kwDODunzps5hKnkU | 5,649 | The index column created with .to_sql() is dependent on the batch_size when writing | {
"login": "lsb",
"id": 45281,
"node_id": "MDQ6VXNlcjQ1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsb",
"html_url": "https://github.com/lsb",
"followers_url": "https://api.github.com/users/lsb/followers",
"following... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ",
"I think this is low enough priority for me... | 2023-03-18T05:25:17 | 2023-06-17T07:01:57 | 2023-06-17T07:01:57 | NONE | null | null | null | ### Describe the bug
It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index.
This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export.
### Steps to reproduce the ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5649/timeline | null | not_planned |
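Since `Dataset.to_sql` forwards extra keyword arguments to `pandas.DataFrame.to_sql`, a sketch of simply dropping the per-batch index at write time; the table name and database file are placeholders:

```python
import sqlite3

from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(2500))})
con = sqlite3.connect("export.db")

# index=False is forwarded to pandas.DataFrame.to_sql, so no batch-local
# "index" column ends up in the exported table.
ds.to_sql("my_table", con, batch_size=1000, index=False)
```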
https://api.github.com/repos/huggingface/datasets/issues/5648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5648/comments | https://api.github.com/repos/huggingface/datasets/issues/5648/events | https://github.com/huggingface/datasets/issues/5648 | 1,629,253,719 | I_kwDODunzps5hHHBX | 5,648 | flatten_indices doesn't work with pandas format | {
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https:... | null | [
"Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indice... | 2023-03-17T12:44:25 | 2023-03-21T13:12:03 | null | NONE | null | null | null | ### Describe the bug
Hi,
I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that `flatten_indices` uses `map` internally, which doesn't accept dataframes as the transformation function output.
### Steps to reproduce the bug
tabular_data = pd.DataFrame(np.r... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5648/timeline | null | null |
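Based on the comment above, a workaround sketch that drops the pandas format before flattening and restores it afterwards:

```python
import numpy as np
import pandas as pd
from datasets import Dataset

ds = Dataset.from_pandas(pd.DataFrame(np.random.rand(4, 2), columns=["a", "b"]))
ds = ds.select([2, 0]).with_format("pandas")  # dataset now has an indices mapping

# Reset the format for the internal map call, then restore it.
flat = ds.with_format(None).flatten_indices().with_format("pandas")
```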
https://api.github.com/repos/huggingface/datasets/issues/5647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5647/comments | https://api.github.com/repos/huggingface/datasets/issues/5647/events | https://github.com/huggingface/datasets/issues/5647 | 1,628,225,544 | I_kwDODunzps5hDMAI | 5,647 | Make all print statements optional | {
"login": "gagan3012",
"id": 49101362,
"node_id": "MDQ6VXNlcjQ5MTAxMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gagan3012",
"html_url": "https://github.com/gagan3012",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"related to #5444 ",
"We now log these messages instead of printing them (addressed in #6019), so I'm closing this issue."
] | 2023-03-16T20:30:07 | 2023-07-21T14:20:25 | 2023-07-21T14:20:24 | NONE | null | null | null | ### Feature request
Make all print statements optional to speed up development
### Motivation
I'm loading multiple tiny datasets, and all the print statements make loading slower
### Your contribution
I can help contribute | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5647/timeline | null | completed |
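These messages are emitted through the library's logger (see the last comment), so a sketch of silencing them together with the progress bars:

```python
import datasets

# Only warnings/errors from the library's logger, no info-level prints.
datasets.logging.set_verbosity_error()

# Also hide the tqdm progress bars while loading many tiny datasets.
datasets.disable_progress_bar()

ds = datasets.load_dataset("rotten_tomatoes")
```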
https://api.github.com/repos/huggingface/datasets/issues/5645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5645/comments | https://api.github.com/repos/huggingface/datasets/issues/5645/events | https://github.com/huggingface/datasets/issues/5645 | 1,627,108,278 | I_kwDODunzps5g-7O2 | 5,645 | Datasets map and select(range()) is giving dill error | {
"login": "Tanya-11",
"id": 90728105,
"node_id": "MDQ6VXNlcjkwNzI4MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tanya-11",
"html_url": "https://github.com/Tanya-11",
"followers_url": "https://api.github.com/users/Tan... | [] | closed | false | null | [] | null | [
"It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-... | 2023-03-16T10:01:28 | 2023-03-17T04:24:51 | 2023-03-17T04:24:51 | NONE | null | null | null | ### Describe the bug
I'm using the Hugging Face Datasets library to load the dataset in Google Colab.
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5645/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5641/comments | https://api.github.com/repos/huggingface/datasets/issues/5641/events | https://github.com/huggingface/datasets/issues/5641 | 1,625,942,730 | I_kwDODunzps5g6erK | 5,641 | Features cannot be named "self" | {
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api... | [] | closed | false | null | [] | null | [] | 2023-03-15T17:16:40 | 2023-03-16T17:14:51 | 2023-03-16T17:14:51 | NONE | null | null | null | ### Describe the bug
Hi,
I noticed that we cannot create a HuggingFace dataset from a Pandas DataFrame with a column named `self`.
The error seems to be coming from argument validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
import pandas as pd
dummy_pandas = pd.DataFrame([0... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5641/timeline | null | completed |
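A workaround sketch for the `self` column clash above: rename the offending column before conversion (the replacement name is arbitrary):

```python
import pandas as pd
from datasets import Dataset

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])

# "self" collides with an argument name during feature validation,
# so rename it before building the dataset.
ds = Dataset.from_pandas(dummy_pandas.rename(columns={"self": "self_"}))
```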
https://api.github.com/repos/huggingface/datasets/issues/5639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5639/comments | https://api.github.com/repos/huggingface/datasets/issues/5639/events | https://github.com/huggingface/datasets/issues/5639 | 1,625,737,098 | I_kwDODunzps5g5seK | 5,639 | Parquet file wrongly recognized as zip prevents loading a dataset | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [] | 2023-03-15T15:20:45 | 2023-03-16T13:40:14 | 2023-03-16T13:40:14 | MEMBER | null | null | null | ### Describe the bug
When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5639/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5638/comments | https://api.github.com/repos/huggingface/datasets/issues/5638/events | https://github.com/huggingface/datasets/issues/5638 | 1,625,564,471 | I_kwDODunzps5g5CU3 | 5,638 | xPath to implement all operations for Path | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
" I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).",
"Right is there a difference betw... | 2023-03-15T13:47:11 | 2023-03-17T13:21:12 | 2023-03-17T13:21:12 | CONTRIBUTOR | null | null | null | ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. They should instead rely on `fsspec` methods, instead of defaulting to `Path` methods, which only work locally.
### Motivation
I'm using... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5638/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5637/comments | https://api.github.com/repos/huggingface/datasets/issues/5637/events | https://github.com/huggingface/datasets/issues/5637 | 1,625,295,691 | I_kwDODunzps5g4AtL | 5,637 | IterableDataset with_format does not support 'device' keyword for jax | {
"login": "Lime-Cakes",
"id": 91322985,
"node_id": "MDQ6VXNlcjkxMzIyOTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lime-Cakes",
"html_url": "https://github.com/Lime-Cakes",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is n... | 2023-03-15T11:04:12 | 2023-03-16T18:30:59 | null | NONE | null | null | null | ### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5637/timeline | null | null |
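For contrast with the report above, the documented map-style usage that does accept `device` (per the JAX guide linked in the issue); the same keyword is what `IterableDataset.with_format` rejects:

```python
import jax
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})

# Works on a map-style Dataset: examples come back as JAX arrays on the device.
ds = ds.with_format("jax", device=jax.devices()[0])
print(ds[0])
```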
https://api.github.com/repos/huggingface/datasets/issues/5634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5634/comments | https://api.github.com/repos/huggingface/datasets/issues/5634/events | https://github.com/huggingface/datasets/issues/5634 | 1,622,424,174 | I_kwDODunzps5gtDpu | 5,634 | Not all progress bars are showing up when they should for downloading dataset | {
"login": "garlandz-db",
"id": 110427462,
"node_id": "U_kgDOBpT9Rg",
"avatar_url": "https://avatars.githubusercontent.com/u/110427462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garlandz-db",
"html_url": "https://github.com/garlandz-db",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nBy default, tqdm has `leave=True` to \"keep all traces of the progress bar upon the termination of iteration\". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.\r\n\r\nI feel like our TQDM bars are noisy, so I think we should always set `l... | 2023-03-13T23:04:18 | 2023-10-11T16:30:16 | 2023-10-11T16:30:16 | NONE | null | null | null | ### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear if the fix solves this issue too.
ipywidgets
<img width=... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5634/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5633/comments | https://api.github.com/repos/huggingface/datasets/issues/5633/events | https://github.com/huggingface/datasets/issues/5633 | 1,621,469,970 | I_kwDODunzps5gpasS | 5,633 | Cannot import datasets | {
"login": "eerio",
"id": 11250555,
"node_id": "MDQ6VXNlcjExMjUwNTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/11250555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eerio",
"html_url": "https://github.com/eerio",
"followers_url": "https://api.github.com/users/eerio/follow... | [] | closed | false | null | [] | null | [
"Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem... | 2023-03-13T13:14:44 | 2023-03-13T17:54:19 | 2023-03-13T17:54:19 | NONE | null | null | null | ### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Pl... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5633/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5632/comments | https://api.github.com/repos/huggingface/datasets/issues/5632/events | https://github.com/huggingface/datasets/issues/5632 | 1,621,177,391 | I_kwDODunzps5goTQv | 5,632 | Dataset cannot convert too large dictionnary | {
"login": "MaraLac",
"id": 108518627,
"node_id": "U_kgDOBnfc4w",
"avatar_url": "https://avatars.githubusercontent.com/u/108518627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaraLac",
"html_url": "https://github.com/MaraLac",
"followers_url": "https://api.github.com/users/MaraLac/foll... | [] | open | false | null | [] | null | [
"Answered on the forum:\r\n\r\n> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww · Pull Request #4800 · huggingface/datasets · GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up ... | 2023-03-13T10:14:40 | 2023-03-16T15:28:57 | null | NONE | null | null | null | ### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB), and it seems that `datasets` cannot handle this.
Indeed, I can create the dataset until a certain size of m... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5632/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5631/comments | https://api.github.com/repos/huggingface/datasets/issues/5631/events | https://github.com/huggingface/datasets/issues/5631 | 1,620,442,854 | I_kwDODunzps5glf7m | 5,631 | Custom split names | {
"login": "ErfanMoosaviMonazzah",
"id": 79091831,
"node_id": "MDQ6VXNlcjc5MDkxODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErfanMoosaviMonazzah",
"html_url": "https://github.com/ErfanMoosaviMonazzah",
"followers_url... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. "
] | 2023-03-12T17:21:43 | 2023-03-24T14:13:00 | 2023-03-24T14:13:00 | NONE | null | null | null | ### Feature request
Hi,
I participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the hub. (curren...
"url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5631/timeline | null | completed |
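As the comment notes, loading scripts already allow arbitrary split names; a skeleton sketch, where the builder, features, and data are all made up for illustration:

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Split names are not limited to train/validation/test:
        return [
            datasets.SplitGenerator(name="train", gen_kwargs={"part": "train"}),
            datasets.SplitGenerator(name="validated", gen_kwargs={"part": "validated"}),
            datasets.SplitGenerator(name="invalidated", gen_kwargs={"part": "invalidated"}),
        ]

    def _generate_examples(self, part):
        for i in range(3):
            yield i, {"text": f"{part}-{i}"}
```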
https://api.github.com/repos/huggingface/datasets/issues/5629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5629/comments | https://api.github.com/repos/huggingface/datasets/issues/5629/events | https://github.com/huggingface/datasets/issues/5629 | 1,619,921,247 | I_kwDODunzps5gjglf | 5,629 | load_dataset gives "403" error when using Financial phrasebank | {
"login": "Jimchoo91",
"id": 67709789,
"node_id": "MDQ6VXNlcjY3NzA5Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/67709789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jimchoo91",
"html_url": "https://github.com/Jimchoo91",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"Hi! You seem to be using an outdated version of `datasets` that downloads the older script version. To avoid the error, you can either pass `revision=\"main\"` to `load_dataset` (this can fail if a script uses newer features of the lib) or update your installation with `pip install -U datasets` (better solution)."... | 2023-03-11T07:46:39 | 2023-03-13T18:27:26 | null | NONE | null | null | null | When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
Has this been seen before? Thanks. The website loads ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5629/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5627/comments | https://api.github.com/repos/huggingface/datasets/issues/5627/events | https://github.com/huggingface/datasets/issues/5627 | 1,619,336,609 | I_kwDODunzps5ghR2h | 5,627 | Unable to load AutoTrain-generated dataset from the hub | {
"login": "ijmiller2",
"id": 8560151,
"node_id": "MDQ6VXNlcjg1NjAxNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8560151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ijmiller2",
"html_url": "https://github.com/ijmiller2",
"followers_url": "https://api.github.com/users/ij... | [] | open | false | null | [] | null | [
"The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder",
"Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I can’t find any way of\npulling out file indices or file names from the autogenerated... | 2023-03-10T17:25:58 | 2023-03-11T15:44:42 | null | NONE | null | null | null | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5627/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5625/comments | https://api.github.com/repos/huggingface/datasets/issues/5625/events | https://github.com/huggingface/datasets/issues/5625 | 1,618,971,855 | I_kwDODunzps5gf4zP | 5,625 | Allow "jsonl" data type signifier | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"You can use \"json\" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. \"text\", \"imagefolder\", etc. I don't think the example in `transformers` is correct because of that",
"Yes, I understand the reasoning but this issue is to propose that the example in transformers (whil... | 2023-03-10T13:21:48 | 2023-03-11T10:35:39 | null | CONTRIBUTOR | null | null | null | ### Feature request
`load_dataset` currently does not accept `jsonl` as a data type, only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset scri... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5625/timeline | null | null |
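Until a `jsonl` alias exists, the existing `json` builder already reads JSON Lines files, so a sketch of the intended call; the file names are placeholders:

```python
from datasets import load_dataset

# The "json" builder handles .jsonl (JSON Lines) files directly.
dataset = load_dataset("json", data_files={"train": "train.jsonl", "validation": "dev.jsonl"})
```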
https://api.github.com/repos/huggingface/datasets/issues/5624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5624/comments | https://api.github.com/repos/huggingface/datasets/issues/5624/events | https://github.com/huggingface/datasets/issues/5624 | 1,617,400,192 | I_kwDODunzps5gZ5GA | 5,624 | glue datasets returning -1 for test split | {
"login": "lithafnium",
"id": 8939967,
"node_id": "MDQ6VXNlcjg5Mzk5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lithafnium",
"html_url": "https://github.com/lithafnium",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answ... | 2023-03-09T14:47:18 | 2023-03-09T16:49:29 | 2023-03-09T16:49:29 | NONE | null | null | null | ### Describe the bug
Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5624/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5618/comments | https://api.github.com/repos/huggingface/datasets/issues/5618/events | https://github.com/huggingface/datasets/issues/5618 | 1,612,977,934 | I_kwDODunzps5gJBcO | 5,618 | Unpin fsspec < 2023.3.0 once issue fixed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 2023-03-07T08:41:51 | 2023-03-07T13:39:03 | 2023-03-07T13:39:03 | MEMBER | null | null | null | Unpin `fsspec` upper version once root cause of our CI break is fixed.
See:
- #5614 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5618/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5616/comments | https://api.github.com/repos/huggingface/datasets/issues/5616/events | https://github.com/huggingface/datasets/issues/5616 | 1,612,932,508 | I_kwDODunzps5gI2Wc | 5,616 | CI is broken after fsspec-2023.3.0 release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2023-03-07T08:06:39 | 2023-03-07T08:37:29 | 2023-03-07T08:37:29 | MEMBER | null | null | null | As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5616/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5615/comments | https://api.github.com/repos/huggingface/datasets/issues/5615/events | https://github.com/huggingface/datasets/issues/5615 | 1,612,552,653 | I_kwDODunzps5gHZnN | 5,615 | IterableDataset.add_column is unable to accept another IterableDataset as a parameter. | {
"login": "zsaladin",
"id": 6466389,
"node_id": "MDQ6VXNlcjY0NjYzODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsaladin",
"html_url": "https://github.com/zsaladin",
"followers_url": "https://api.github.com/users/zsala... | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this."
] | 2023-03-07T01:52:00 | 2023-03-09T15:24:05 | 2023-03-09T15:23:54 | NONE | null | null | null | ### Describe the bug
`IterableDataset.add_column` occurs an exception when passing another `IterableDataset` as a parameter.
The method seems to accept only eager evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5615/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5613/comments | https://api.github.com/repos/huggingface/datasets/issues/5613/events | https://github.com/huggingface/datasets/issues/5613 | 1,611,875,473 | I_kwDODunzps5gE0SR | 5,613 | Version mismatch with multiprocess and dill on Python 3.10 | {
"login": "adampauls",
"id": 1243668,
"node_id": "MDQ6VXNlcjEyNDM2Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adampauls",
"html_url": "https://github.com/adampauls",
"followers_url": "https://api.github.com/users/ad... | [] | open | false | null | [] | null | [
"Sorry, I just found https://github.com/apache/beam/issues/24458. It seems this issue is being worked on. ",
"Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says \r\n> Datasets is tested on Python 3.7+.\r\n\r\nb... | 2023-03-06T17:14:41 | 2023-09-01T18:30:08 | null | NONE | null | null | null | ### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5613/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5613/timeline | null | reopened |
https://api.github.com/repos/huggingface/datasets/issues/5612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5612/comments | https://api.github.com/repos/huggingface/datasets/issues/5612/events | https://github.com/huggingface/datasets/issues/5612 | 1,611,262,510 | I_kwDODunzps5gCeou | 5,612 | Arrow map type in parquet files unsupported | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | [
"I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.ma... | 2023-03-06T12:03:24 | 2024-03-15T18:56:12 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5612/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/datasets/issues/5612/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5610/comments | https://api.github.com/repos/huggingface/datasets/issues/5610/events | https://github.com/huggingface/datasets/issues/5610 | 1,610,698,006 | I_kwDODunzps5gAU0W | 5,610 | use datasets streaming mode in trainer ddp mode cause memory leak | {
"login": "gromzhu",
"id": 15223544,
"node_id": "MDQ6VXNlcjE1MjIzNTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15223544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gromzhu",
"html_url": "https://github.com/gromzhu",
"followers_url": "https://api.github.com/users/gromzh... | [] | open | false | null | [] | null | [
"Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n",
"found an article described a problem, may be helpful for someb... | 2023-03-06T05:26:49 | 2024-03-07T01:11:32 | null | NONE | null | null | null | ### Describe the bug
use datasets streaming mode in trainer ddp mode cause memory leak
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, Sequenti... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5610/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5609/comments | https://api.github.com/repos/huggingface/datasets/issues/5609/events | https://github.com/huggingface/datasets/issues/5609 | 1,610,062,862 | I_kwDODunzps5f95wO | 5,609 | `load_from_disk` vs `load_dataset` performance. | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.g... | [] | open | false | null | [] | null | [
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when... | 2023-03-05T05:27:15 | 2023-07-13T18:48:05 | null | NONE | null | null | null | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_di... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5609/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5608/comments | https://api.github.com/repos/huggingface/datasets/issues/5608/events | https://github.com/huggingface/datasets/issues/5608 | 1,609,996,563 | I_kwDODunzps5f9pkT | 5,608 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files. | {
"login": "jcho19",
"id": 107211437,
"node_id": "U_kgDOBmPqrQ",
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcho19",
"html_url": "https://github.com/jcho19",
"followers_url": "https://api.github.com/users/jcho19/follower... | [] | closed | false | null | [] | null | [
"Hi!\r\n\r\n> naming convention of mp3 files\r\n\r\nYes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.\r\n\r\nIf the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree ... | 2023-03-05T00:14:45 | 2023-03-12T00:02:57 | 2023-03-12T00:02:57 | NONE | null | null | null | ### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)
### Steps to reproduce the b... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5608/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5606/comments | https://api.github.com/repos/huggingface/datasets/issues/5606/events | https://github.com/huggingface/datasets/issues/5606 | 1,608,911,632 | I_kwDODunzps5f5gsQ | 5,606 | Add `Dataset.to_list` to the API | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6... | closed | false | {
"login": "kyoto7250",
"id": 50972773,
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyoto7250",
"html_url": "https://github.com/kyoto7250",
"followers_url": "https://api.github.com/users/... | [
{
"login": "kyoto7250",
"id": 50972773,
"node_id": "MDQ6VXNlcjUwOTcyNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/50972773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyoto7250",
"html_url": "https://github.com/kyoto7250",
"followers_url": "https://a... | null | [
"Hello, I have an interest in this issue.\r\nIs the `Dataset.to_dict` you are describing correct in the code here?\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667",
"Yes, this is where `Dataset.to_dict` is defined.",
"#self-a... | 2023-03-03T16:17:10 | 2023-03-27T13:26:40 | 2023-03-27T13:26:40 | CONTRIBUTOR | null | null | null | Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5606/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5604/comments | https://api.github.com/repos/huggingface/datasets/issues/5604/events | https://github.com/huggingface/datasets/issues/5604 | 1,608,304,775 | I_kwDODunzps5f3MiH | 5,604 | Problems with downloading The Pile | {
"login": "sentialx",
"id": 11065386,
"node_id": "MDQ6VXNlcjExMDY1Mzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/11065386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sentialx",
"html_url": "https://github.com/sentialx",
"followers_url": "https://api.github.com/users/sen... | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\da... | 2023-03-03T09:52:08 | 2023-10-14T02:15:52 | 2023-03-24T12:44:25 | NONE | null | null | null | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:
`hg_user` but my repo cont... | 2023-03-02T12:08:39 | 2023-03-14T16:55:35 | 2023-03-14T16:55:34 | NONE | null | null | null | ### Describe the bug
Get `Authorization error` when try to push data into hugginface datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingfa... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5601/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5600/comments | https://api.github.com/repos/huggingface/datasets/issues/5600/events | https://github.com/huggingface/datasets/issues/5600 | 1,606,585,596 | I_kwDODunzps5fwoz8 | 5,600 | Dataloader getitem not working for DreamboothDatasets | {
"login": "salahiguiliz",
"id": 76955987,
"node_id": "MDQ6VXNlcjc2OTU1OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/76955987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/salahiguiliz",
"html_url": "https://github.com/salahiguiliz",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data."
] | 2023-03-02T11:00:27 | 2023-03-13T17:59:35 | 2023-03-13T17:59:35 | NONE | null | null | null | ### Describe the bug
Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529))
moving Datasets to 2.8.0 solved the issue.
### Steps to reproduce the bug
1- using DreamBoothDataset ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5600/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5597/comments | https://api.github.com/repos/huggingface/datasets/issues/5597/events | https://github.com/huggingface/datasets/issues/5597 | 1,604,928,721 | I_kwDODunzps5fqUTR | 5,597 | in-place dataset update | {
"login": "speedcell4",
"id": 3585459,
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedcell4",
"html_url": "https://github.com/speedcell4",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not load... | 2023-03-01T12:58:18 | 2023-03-02T13:30:41 | 2023-03-02T03:47:00 | NONE | null | null | null | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5597/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5596/comments | https://api.github.com/repos/huggingface/datasets/issues/5596/events | https://github.com/huggingface/datasets/issues/5596 | 1,604,919,993 | I_kwDODunzps5fqSK5 | 5,596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | {
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data",
"We've updated t... | 2023-03-01T12:53:08 | 2023-12-05T03:22:00 | 2023-03-02T11:12:11 | NONE | null | null | null | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5596/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5594/comments | https://api.github.com/repos/huggingface/datasets/issues/5594/events | https://github.com/huggingface/datasets/issues/5594 | 1,603,980,995 | I_kwDODunzps5fms7D | 5,594 | Error while downloading the xtreme udpos dataset | {
"login": "simran-khanuja",
"id": 24687672,
"node_id": "MDQ6VXNlcjI0Njg3Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simran-khanuja",
"html_url": "https://github.com/simran-khanuja",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | [
"Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir... | 2023-02-28T23:40:53 | 2023-11-04T20:45:56 | 2023-07-24T14:22:18 | NONE | null | null | null | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5594/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5586/comments | https://api.github.com/repos/huggingface/datasets/issues/5586/events | https://github.com/huggingface/datasets/issues/5586 | 1,602,961,544 | I_kwDODunzps5fi0CI | 5,586 | .sort() is broken when used after .filter(), only in 2.10.0 | {
"login": "MattYoon",
"id": 57797966,
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MattYoon",
"html_url": "https://github.com/MattYoon",
"followers_url": "https://api.github.com/users/Mat... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix"
] | 2023-02-28T12:18:09 | 2023-02-28T18:17:26 | 2023-02-28T17:21:59 | NONE | null | null | null | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5586/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5585/comments | https://api.github.com/repos/huggingface/datasets/issues/5585/events | https://github.com/huggingface/datasets/issues/5585 | 1,602,190,030 | I_kwDODunzps5ff3rO | 5,585 | Cache is not transportable | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because ... | 2023-02-28T00:53:06 | 2023-02-28T21:26:52 | 2023-02-28T21:26:52 | NONE | null | null | null | ### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereby sharing the downloads.
I... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5585/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5584/comments | https://api.github.com/repos/huggingface/datasets/issues/5584/events | https://github.com/huggingface/datasets/issues/5584 | 1,601,821,808 | I_kwDODunzps5fedxw | 5,584 | Unable to load coyo700M dataset | {
"login": "manuaero",
"id": 3059998,
"node_id": "MDQ6VXNlcjMwNTk5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3059998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manuaero",
"html_url": "https://github.com/manuaero",
"followers_url": "https://api.github.com/users/manua... | [] | closed | false | null | [] | null | [
"Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README... | 2023-02-27T19:35:03 | 2023-02-28T07:27:59 | 2023-02-28T07:27:58 | NONE | null | null | null | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coy... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5584/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5581/comments | https://api.github.com/repos/huggingface/datasets/issues/5581/events | https://github.com/huggingface/datasets/issues/5581 | 1,600,675,489 | I_kwDODunzps5faF6h | 5,581 | [DOC] Mistaken docs on set_format | {
"login": "NightMachinery",
"id": 36224762,
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NightMachinery",
"html_url": "https://github.com/NightMachinery",
"followers_url": "https://api.gi... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Thanks for reporting!"
] | 2023-02-27T08:03:09 | 2023-02-28T19:19:17 | 2023-02-28T19:19:17 | CONTRIBUTOR | null | null | null | ### Describe the bug
https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format
<img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png">
While actually running it will result in:
<img w... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5581/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5577/comments | https://api.github.com/repos/huggingface/datasets/issues/5577/events | https://github.com/huggingface/datasets/issues/5577 | 1,598,587,665 | I_kwDODunzps5fSIMR | 5,577 | Cannot load `the_pile_openwebtext2` | {
"login": "wjfwzzc",
"id": 5126316,
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjfwzzc",
"html_url": "https://github.com/wjfwzzc",
"followers_url": "https://api.github.com/users/wjfwzzc/... | [] | closed | false | null | [] | null | [
"Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n"
] | 2023-02-24T13:01:48 | 2023-02-24T14:01:09 | 2023-02-24T14:01:09 | NONE | null | null | null | ### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5577/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5576/comments | https://api.github.com/repos/huggingface/datasets/issues/5576/events | https://github.com/huggingface/datasets/issues/5576 | 1,598,582,744 | I_kwDODunzps5fSG_Y | 5,576 | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. | {
"login": "wjfwzzc",
"id": 5126316,
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjfwzzc",
"html_url": "https://github.com/wjfwzzc",
"followers_url": "https://api.github.com/users/wjfwzzc/... | [] | closed | false | null | [] | null | [
"Duplicated issue."
] | 2023-02-24T12:57:49 | 2023-02-24T12:58:31 | 2023-02-24T12:58:18 | NONE | null | null | null | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked aro... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5576/timeline | null | not_planned |
https://api.github.com/repos/huggingface/datasets/issues/5575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5575/comments | https://api.github.com/repos/huggingface/datasets/issues/5575/events | https://github.com/huggingface/datasets/issues/5575 | 1,598,396,552 | I_kwDODunzps5fRZiI | 5,575 | Metadata for each column | {
"login": "parsa-ra",
"id": 11356471,
"node_id": "MDQ6VXNlcjExMzU2NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/11356471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parsa-ra",
"html_url": "https://github.com/parsa-ra",
"followers_url": "https://api.github.com/users/par... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10",
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"id": 9038583,
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"title": "3... | [
"Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = F... | 2023-02-24T10:53:44 | 2024-01-05T21:48:35 | null | NONE | null | null | null | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of preprocessing and see which on... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5575/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5575/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5574/comments | https://api.github.com/repos/huggingface/datasets/issues/5574/events | https://github.com/huggingface/datasets/issues/5574 | 1,598,104,691 | I_kwDODunzps5fQSRz | 5,574 | c4 dataset streaming fails with `FileNotFoundError` | {
"login": "krasserm",
"id": 202907,
"node_id": "MDQ6VXNlcjIwMjkwNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/202907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krasserm",
"html_url": "https://github.com/krasserm",
"followers_url": "https://api.github.com/users/krasser... | [] | closed | false | null | [] | null | [
"Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets ... | 2023-02-24T07:57:32 | 2023-12-18T07:32:32 | 2023-02-27T04:03:38 | NONE | null | null | null | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5574/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5572/comments | https://api.github.com/repos/huggingface/datasets/issues/5572/events | https://github.com/huggingface/datasets/issues/5572 | 1,597,257,624 | I_kwDODunzps5fNDeY | 5,572 | Datasets 2.10.0 does not reuse the dataset cache | {
"login": "lsb",
"id": 45281,
"node_id": "MDQ6VXNlcjQ1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lsb",
"html_url": "https://github.com/lsb",
"followers_url": "https://api.github.com/users/lsb/followers",
"following... | [] | closed | false | null | [] | null | [] | 2023-02-23T17:28:11 | 2023-02-23T18:03:55 | 2023-02-23T18:03:55 | NONE | null | null | null | ### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, upon losing an internet connection trying to load a dataset for a second time in ten seconds, a connection error results, showing a breakpoint of:
```
File ~/jupyterlab/.direnv/python-... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5572/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5571/comments | https://api.github.com/repos/huggingface/datasets/issues/5571/events | https://github.com/huggingface/datasets/issues/5571 | 1,597,198,953 | I_kwDODunzps5fM1Jp | 5,571 | load_dataset fails for JSON in windows | {
"login": "abinashsahu",
"id": 11876897,
"node_id": "MDQ6VXNlcjExODc2ODk3",
"avatar_url": "https://avatars.githubusercontent.com/u/11876897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abinashsahu",
"html_url": "https://github.com/abinashsahu",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n",
"Thanks it worked!"
] | 2023-02-23T16:50:11 | 2023-02-24T13:21:47 | 2023-02-24T13:21:47 | NONE | null | null | null | ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is di... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5571/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5570/comments | https://api.github.com/repos/huggingface/datasets/issues/5570/events | https://github.com/huggingface/datasets/issues/5570 | 1,597,190,926 | I_kwDODunzps5fMzMO | 5,570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | {
"login": "buoi",
"id": 38630200,
"node_id": "MDQ6VXNlcjM4NjMwMjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/38630200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buoi",
"html_url": "https://github.com/buoi",
"followers_url": "https://api.github.com/users/buoi/followers"... | [] | closed | false | null | [] | null | [
"Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it?",
"The error is now more informative:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the sa... | 2023-02-23T16:44:32 | 2023-07-24T15:18:50 | 2023-07-24T15:18:50 | NONE | null | null | null | ### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once accepting.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5570/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5568/comments | https://api.github.com/repos/huggingface/datasets/issues/5568/events | https://github.com/huggingface/datasets/issues/5568 | 1,596,900,532 | I_kwDODunzps5fLsS0 | 5,568 | dataset.to_iterable_dataset() loses useful info like dataset features | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6... | closed | false | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://... | [
{
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"follo... | null | [
"Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.\r\n\r\nSetting this as a good first issue if someone would like to contribute, otherwise we can take care of it :)",
"#self-assign",
"seems like the feature parameter is missing fr... | 2023-02-23T13:45:33 | 2023-02-24T13:22:36 | 2023-02-24T13:22:36 | CONTRIBUTOR | null | null | null | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5568/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5566/comments | https://api.github.com/repos/huggingface/datasets/issues/5566/events | https://github.com/huggingface/datasets/issues/5566 | 1,595,916,674 | I_kwDODunzps5fH8GC | 5,566 | Directly reading parquet files in a s3 bucket from the load_dataset method | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/sha... | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"... | open | false | null | [] | null | [
"Hi ! I think is in the scope of this other issue: to https://github.com/huggingface/datasets/issues/5281 "
] | 2023-02-22T22:13:40 | 2023-02-23T11:03:29 | null | NONE | null | null | null | ### Feature request
Right now, we have to read the get the parquet file to the local storage. So having ability to read given the bucket directly address would be benificial
### Motivation
In a production set up, this feature can help us a lot. So we do not need move training datafiles in between storage.
### Yo... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5566/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5555/comments | https://api.github.com/repos/huggingface/datasets/issues/5555/events | https://github.com/huggingface/datasets/issues/5555 | 1,592,469,938 | I_kwDODunzps5e6ymy | 5,555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | {
"login": "prabhakar267",
"id": 10768588,
"node_id": "MDQ6VXNlcjEwNzY4NTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabhakar267",
"html_url": "https://github.com/prabhakar267",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | [
"Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```",
"```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese a... | 2023-02-20T21:33:45 | 2023-02-27T09:23:34 | null | NONE | null | null | null | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5555/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5548/comments | https://api.github.com/repos/huggingface/datasets/issues/5548/events | https://github.com/huggingface/datasets/issues/5548 | 1,590,835,479 | I_kwDODunzps5e0jkX | 5,548 | Apply flake8-comprehensions to codebase | {
"login": "Skylion007",
"id": 2053727,
"node_id": "MDQ6VXNlcjIwNTM3Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2053727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Skylion007",
"html_url": "https://github.com/Skylion007",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 2023-02-19T20:05:38 | 2023-02-23T13:59:41 | 2023-02-23T13:59:41 | CONTRIBUTOR | null | null | null | ### Feature request
Apply ruff flake8 comprehension checks to codebase.
### Motivation
This should strictly improve the performance / readability of the codebase by removing unnecessary iteration, function calls, etc. This should generate better Python bytecode which should strictly improve performance.
I alread... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5548/timeline | null | completed |
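To make the request in issue 5548 concrete, here are a few hypothetical before/after rewrites of the kind the flake8-comprehensions (ruff `C4`) rules flag; none of these snippets come from the `datasets` codebase itself.

```python
data = [("a", 1), ("b", 2)]

# C400: unnecessary generator passed to list()
squares = list(x * x for x in range(10))      # flagged
squares = [x * x for x in range(10)]          # preferred

# C402: unnecessary generator passed to dict()
mapping = dict((k, v) for k, v in data)       # flagged
mapping = {k: v for k, v in data}             # preferred

# C417: unnecessary map() with a lambda
doubled = list(map(lambda x: x * 2, range(5)))  # flagged
doubled = [x * 2 for x in range(5)]             # preferred
```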
https://api.github.com/repos/huggingface/datasets/issues/5546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5546/comments | https://api.github.com/repos/huggingface/datasets/issues/5546/events | https://github.com/huggingface/datasets/issues/5546 | 1,590,346,349 | I_kwDODunzps5eysJt | 5,546 | Downloaded datasets do not cache at $HF_HOME | {
"login": "ErfanMoosaviMonazzah",
"id": 79091831,
"node_id": "MDQ6VXNlcjc5MDkxODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErfanMoosaviMonazzah",
"html_url": "https://github.com/ErfanMoosaviMonazzah",
"followers_url... | [] | closed | false | null | [] | null | [
"Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?\r\n\r\nThen you can print\r\n```python\r\nprint(datasets.config.HF_CACHE_HOME)\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n```"
] | 2023-02-18T13:30:35 | 2023-07-24T14:22:43 | 2023-07-24T14:22:43 | NONE | null | null | null | ### Describe the bug
In the Hugging Face course (https://huggingface.co/course/chapter3/2?fw=pt) it says that if we set HF_HOME, downloaded datasets will be cached at the specified address, but they are not. Models downloaded from checkpoint names are cached at HF_HOME, but this is not the case for datasets, t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5546/timeline | null | completed |
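A minimal sketch of the check suggested in the comment on issue 5546 above: set `HF_HOME` before `datasets` is imported, then print the resolved cache paths (the cache directory here is a made-up example).

```python
import os

# Must happen before the first `import datasets`, otherwise the default
# cache location is already baked into datasets.config.
os.environ["HF_HOME"] = "/data/hf_cache"  # hypothetical path

import datasets

print(datasets.config.HF_CACHE_HOME)      # expected: /data/hf_cache
print(datasets.config.HF_DATASETS_CACHE)  # expected: a directory under it
```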
https://api.github.com/repos/huggingface/datasets/issues/5543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5543/comments | https://api.github.com/repos/huggingface/datasets/issues/5543/events | https://github.com/huggingface/datasets/issues/5543 | 1,588,951,379 | I_kwDODunzps5etXlT | 5,543 | the pile datasets url seems to change back | {
"login": "wjfwzzc",
"id": 5126316,
"node_id": "MDQ6VXNlcjUxMjYzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5126316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjfwzzc",
"html_url": "https://github.com/wjfwzzc",
"followers_url": "https://api.github.com/users/wjfwzzc/... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Thanks for reporting, @wjfwzzc.\r\n\r\nI am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1",
"Thank you. All fixes are done:\r\n- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2\r\n- [x] https://huggingface.co/dataset... | 2023-02-17T08:40:11 | 2023-02-21T06:37:00 | 2023-02-20T08:41:33 | NONE | null | null | null | ### Describe the bug
In #3627, the host URL of the Pile dataset became `https://mystic.the-eye.eu`. Now the new URL is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5543/timeline | null | completed |
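A quick way to check which of the two mirrors named in issue 5543 is currently alive before retrying the load; this is a generic `requests` probe, not part of the `datasets` API.

```python
import requests

for host in ("https://the-eye.eu", "https://mystic.the-eye.eu"):
    try:
        status = requests.head(host, timeout=10, allow_redirects=True).status_code
    except requests.RequestException as err:
        status = err
    print(host, "->", status)
```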
https://api.github.com/repos/huggingface/datasets/issues/5541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5541/comments | https://api.github.com/repos/huggingface/datasets/issues/5541/events | https://github.com/huggingface/datasets/issues/5541 | 1,588,633,555 | I_kwDODunzps5esJ_T | 5,541 | Flattening indices in selected datasets is extremely inefficient | {
"login": "marioga",
"id": 6591505,
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marioga",
"html_url": "https://github.com/marioga",
"followers_url": "https://api.github.com/users/marioga/... | [] | closed | false | null | [] | null | [
"Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -... | 2023-02-17T01:52:24 | 2023-02-22T13:15:20 | 2023-02-17T11:12:33 | CONTRIBUTOR | null | null | null | ### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset, we end up with a dataset with an `indices_table`. Currently, flattening such a dataset consumes a lot of memory, and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. Thi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5541/timeline | null | completed |
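A small sketch to observe the chunking behaviour described in issue 5541: select rows (which creates an indices mapping), flatten, and count the chunks of the resulting column. The dataset and sizes here are made up.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10_000))})
picked = ds.select(range(0, 10_000, 2))  # adds an indices_table, no data copied yet
flat = picked.flatten_indices()          # materializes the selection

# Before the fix in #5542, this could be as high as one chunk per row.
print(flat.data.column("x").num_chunks)
```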
https://api.github.com/repos/huggingface/datasets/issues/5539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5539/comments | https://api.github.com/repos/huggingface/datasets/issues/5539/events | https://github.com/huggingface/datasets/issues/5539 | 1,587,970,083 | I_kwDODunzps5epoAj | 5,539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | {
"login": "aalbersk",
"id": 41912135,
"node_id": "MDQ6VXNlcjQxOTEyMTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aalbersk",
"html_url": "https://github.com/aalbersk",
"followers_url": "https://api.github.com/users/aal... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='t... | 2023-02-16T16:08:51 | 2023-02-22T10:30:30 | 2023-02-21T13:03:57 | NONE | null | null | null | ### Describe the bug
When a dataset contains a 0-dim tensor, formatting.py raises the following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5539/timeline | null | completed |
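Following the fix quoted in the comment on issue 5539 above, here is a self-contained version of a batch-level transform; the real example in the comment uses `lambdalabs/pokemon-blip-captions`, and this sketch substitutes a tiny in-memory dataset.

```python
import torch
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["a", "bb", "ccc"]})

def transform(batch):
    # set_transform hands over a whole batch (a dict of lists),
    # so the transform must return lists, not a bare tensor per field
    batch["length"] = [torch.tensor(len(t)) for t in batch["text"]]
    return batch

dataset.set_transform(transform)
print(dataset[0]["length"])  # tensor(1)
```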
https://api.github.com/repos/huggingface/datasets/issues/5538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5538/comments | https://api.github.com/repos/huggingface/datasets/issues/5538/events | https://github.com/huggingface/datasets/issues/5538 | 1,587,732,596 | I_kwDODunzps5eouB0 | 5,538 | load_dataset in seaborn is not working for me. getting this error. | {
"login": "reemaranibarik",
"id": 125575109,
"node_id": "U_kgDOB3wfxQ",
"avatar_url": "https://avatars.githubusercontent.com/u/125575109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reemaranibarik",
"html_url": "https://github.com/reemaranibarik",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead."
] | 2023-02-16T14:01:58 | 2023-02-16T14:44:36 | 2023-02-16T14:44:36 | NONE | null | null | null | TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chu... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5538/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/5537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5537/comments | https://api.github.com/repos/huggingface/datasets/issues/5537/events | https://github.com/huggingface/datasets/issues/5537 | 1,587,567,464 | I_kwDODunzps5eoFto | 5,537 | Increase speed of data files resolution | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_k... | closed | false | {
"login": "semajyllek",
"id": 35013374,
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semajyllek",
"html_url": "https://github.com/semajyllek",
"followers_url": "https://api.github.com/use... | [
{
"login": "semajyllek",
"id": 35013374,
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semajyllek",
"html_url": "https://github.com/semajyllek",
"followers_url": "https:... | null | [
"#self-assign",
"You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exac... | 2023-02-16T12:11:45 | 2023-12-15T13:12:31 | 2023-12-15T13:12:31 | MEMBER | null | null | null | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `res... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5537/timeline | null | completed |
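A rough sketch of the optimization discussed in issue 5537, assuming an fsspec-style filesystem: list the repository once with `find`, then match the glob patterns against the cached listing instead of re-globbing per pattern. The repository path is hypothetical.

```python
import fnmatch
import fsspec

fs = fsspec.filesystem("file")
all_files = fs.find("/path/to/repo")  # a single listing of every file

patterns = ["*.parquet", "*.json", "data/*"]
# Note: fnmatch's "*" also matches "/", so this is looser than fs.glob —
# acceptable for a sketch, not a drop-in replacement.
matches = {
    pattern: [f for f in all_files if fnmatch.fnmatch(f, pattern)]
    for pattern in patterns
}
```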
https://api.github.com/repos/huggingface/datasets/issues/5536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5536/comments | https://api.github.com/repos/huggingface/datasets/issues/5536/events | https://github.com/huggingface/datasets/issues/5536 | 1,586,930,643 | I_kwDODunzps5elqPT | 5,536 | Failure to hash function when using .map() | {
"login": "venzen",
"id": 6916056,
"node_id": "MDQ6VXNlcjY5MTYwNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6916056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venzen",
"html_url": "https://github.com/venzen",
"followers_url": "https://api.github.com/users/venzen/foll... | [] | closed | false | null | [] | null | [
"Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possib... | 2023-02-16T03:12:07 | 2023-09-08T21:06:01 | 2023-02-16T14:56:41 | NONE | null | null | null | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5536/timeline | null | completed |
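Since `tiktoken`'s encoder is not picklable (as the comment on issue 5536 shows), one common workaround is to construct it inside the mapped function so the closure itself stays hashable; a minimal sketch:

```python
import tiktoken
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world", "goodbye"]})

def process(batch):
    # Built inside the function: nothing unpicklable is captured in the closure,
    # so datasets can hash `process` and cache the map result.
    enc = tiktoken.get_encoding("gpt2")
    return {"ids": [enc.encode(t) for t in batch["text"]]}

ds = ds.map(process, batched=True)
```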
https://api.github.com/repos/huggingface/datasets/issues/5534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5534/comments | https://api.github.com/repos/huggingface/datasets/issues/5534/events | https://github.com/huggingface/datasets/issues/5534 | 1,586,177,862 | I_kwDODunzps5eiydG | 5,534 | map() breaks at certain dataset size when using Array3D | {
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users... | [] | open | false | null | [] | null | [
"Hi! This code works for me locally or in Colab. What's the output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` when you run it inside your environment?",
"Thanks for looking into this!\r\nThe output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` is:\r\n```\r\n11.0.0\r\n```\r\n\... | 2023-02-15T16:34:25 | 2023-03-03T16:31:33 | null | NONE | null | null | null | ### Describe the bug
`map()` magically breaks when using an `Array3D` feature. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent cal... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5534/timeline | null | null |
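A self-contained approximation of the dummy dataset described in issue 5534; the exact shape and size threshold in the report may differ, so these values are illustrative only.

```python
import numpy as np
from datasets import Array3D, Dataset, Features

features = Features({"image": Array3D(dtype="float32", shape=(3, 4, 4))})
data = {"image": [np.zeros((3, 4, 4), dtype=np.float32)] * 96}  # 95 reportedly worked
ds = Dataset.from_dict(data, features=features)
ds = ds.map(lambda example: example)  # identity map, enough to exercise the report
```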
https://api.github.com/repos/huggingface/datasets/issues/5532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5532/comments | https://api.github.com/repos/huggingface/datasets/issues/5532/events | https://github.com/huggingface/datasets/issues/5532 | 1,584,505,128 | I_kwDODunzps5ecaEo | 5,532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | {
"login": "Ulipenitz",
"id": 37191008,
"node_id": "MDQ6VXNlcjM3MTkxMDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ulipenitz",
"html_url": "https://github.com/Ulipenitz",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n... | 2023-02-14T16:52:29 | 2023-02-15T16:09:19 | 2023-02-15T16:09:19 | NONE | null | null | null | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class ends up in the test set and thus is never considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5532/timeline | null | completed |
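Condensed from the maintainer's answer on issue 5532 above: cast the label column to `ClassLabel` and pass `stratify_by_column` so each class is proportionally represented in both splits. The toy data here is made up.

```python
from datasets import ClassLabel, Dataset

data = [{"label": i % 3, "text": f"example{i}"} for i in range(12)]
ds = Dataset.from_list(data)
ds = ds.cast_column("label", ClassLabel(num_classes=3))

# With 12 examples and test_size=0.25, each class lands once in the test set.
splits = ds.train_test_split(test_size=0.25, stratify_by_column="label")
print(splits["test"]["label"])
```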
https://api.github.com/repos/huggingface/datasets/issues/5531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5531/comments | https://api.github.com/repos/huggingface/datasets/issues/5531/events | https://github.com/huggingface/datasets/issues/5531 | 1,584,387,276 | I_kwDODunzps5eb9TM | 5,531 | Invalid Arrow data from JSONL | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 2023-02-14T15:39:49 | 2023-02-14T15:46:09 | null | MEMBER | null | null | null | This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5531/timeline | null | null |
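When `validate()` fails as in issue 5531, it can help to narrow down which column and chunk carry the bad offsets; a hedged helper using plain PyArrow (this is not part of the `datasets` API):

```python
import pyarrow as pa

def find_invalid_chunks(table: pa.Table) -> None:
    """Print every (column, chunk) pair that fails full validation."""
    for name in table.column_names:
        for i, chunk in enumerate(table.column(name).chunks):
            try:
                chunk.validate(full=True)
            except pa.ArrowInvalid as err:
                print(f"column={name!r} chunk={i}: {err}")

# usage sketch: find_invalid_chunks(ds.data.table)  # ds.data wraps a pyarrow Table
```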
https://api.github.com/repos/huggingface/datasets/issues/5525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5525/comments | https://api.github.com/repos/huggingface/datasets/issues/5525/events | https://github.com/huggingface/datasets/issues/5525 | 1,580,342,729 | I_kwDODunzps5eMh3J | 5,525 | TypeError: Couldn't cast array of type string to null | {
"login": "TJ-Solergibert",
"id": 74564958,
"node_id": "MDQ6VXNlcjc0NTY0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/74564958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TJ-Solergibert",
"html_url": "https://github.com/TJ-Solergibert",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | [
"Thanks for reporting, @TJ-Solergibert.\r\n\r\nWe cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`\r\nCould you please make it publicly accessible?\r\n",
"I swear it's public, I've checked the settings and I've been able to open it in... | 2023-02-10T21:12:36 | 2023-02-14T17:41:08 | 2023-02-14T09:35:49 | NONE | null | null | null | ### Describe the bug
While processing a dataset I had already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl"), applying a map function raises the mentioned error.
I already tried resetting the shorter strings... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5525/timeline | null | completed |
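The error in issue 5525 typically appears when a column is all-`None` in the first written batch (so its type is inferred as `null`) and real strings show up in a later batch; a hedged sketch of the usual fix, declaring the schema explicitly when mapping:

```python
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"text": [None] * 2000 + ["hello"]})

# Without explicit features, the first writer batch may be typed as null,
# and later string values then fail to cast. Pinning the type avoids that.
features = Features({"text": Value("string")})
ds = ds.map(lambda example: example, features=features)
```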