url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 51 51 | id int64 1.92B 2.7B | node_id stringlengths 18 18 | number int64 6.27k 7.3k | title stringlengths 2 150 | user dict | labels listlengths 0 2 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 1 | milestone null | comments listlengths 0 23 | created_at timestamp[ns] | updated_at int64 1.7k 1.73k | closed_at timestamp[ns] | author_association stringclasses 4 values | active_lock_reason null | body stringlengths 3 47.9k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 70 70 | performed_via_github_app null | state_reason stringclasses 3 values | draft null | pull_request null | is_pull_request bool 1 class | time_to_close float64 0 0 ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7100/comments | https://api.github.com/repos/huggingface/datasets/issues/7100/events | https://github.com/huggingface/datasets/issues/7100 | 2,465,529,414 | I_kwDODunzps6S9P5G | 7,100 | IterableDataset: cannot resolve features from list of numpy arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4",
"events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}",
"followers_url": "https://api.github.com/users/VeryLazyBoy/followers",
"following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}",
"gists_u... | [] | open | false | null | [] | null | [
"Assign this issue to me under Hacktoberfest with hacktoberfest label inserted on the issue"
] | 1970-01-01T00:00:00.000001 | 1,727 | null | NONE | null | ### Describe the bug
When resolving features of an `IterableDataset`, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised.
```
Traceback (most recent call last):
File "test.py", line 6
iter_ds = iter_ds._resolve_features()
File "lib/python3.10/site-packages/datasets/iterable_dat... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7100/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7097/comments | https://api.github.com/repos/huggingface/datasets/issues/7097/events | https://github.com/huggingface/datasets/issues/7097 | 2,458,455,489 | I_kwDODunzps6SiQ3B | 7,097 | Some of DownloadConfig's properties are always being overridden in load.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4",
"events_url": "https://api.github.com/users/ductai199x/events{/privacy}",
"followers_url": "https://api.github.com/users/ductai199x/followers",
"following_url": "https://api.github.com/users/ductai199x/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,723 | null | NONE | null | ### Describe the bug
The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because the previously extracted data is simply ignored the next time the dataset is loaded.
See this im... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7097/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7093/comments | https://api.github.com/repos/huggingface/datasets/issues/7093/events | https://github.com/huggingface/datasets/issues/7093 | 2,454,413,074 | I_kwDODunzps6SS18S | 7,093 | Add Arabic Docs to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4",
"events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}",
"followers_url": "https://api.github.com/users/AhmedAlmaghz/followers",
"following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}",
"gist... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,723 | null | NONE | null | ### Feature request
Add Arabic Docs to datasets
[Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
### Motivation
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
### Your contribution
@AhmedAlmaghz
https://github.com/AhmedAlma... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7093/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7092/comments | https://api.github.com/repos/huggingface/datasets/issues/7092/events | https://github.com/huggingface/datasets/issues/7092 | 2,451,393,658 | I_kwDODunzps6SHUx6 | 7,092 | load_dataset with multiple jsonlines files interprets datastructure too early | {
"avatar_url": "https://avatars.githubusercontent.com/u/23384483?v=4",
"events_url": "https://api.github.com/users/Vipitis/events{/privacy}",
"followers_url": "https://api.github.com/users/Vipitis/followers",
"following_url": "https://api.github.com/users/Vipitis/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"I’ll take a look",
"Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined... | 1970-01-01T00:00:00.000001 | 1,723 | null | NONE | null | ### Describe the bug
likely related to #6460
using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data.
### Steps to reproduce the bug
real world example:
data is available in this [PR-bra... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7092/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7090/comments | https://api.github.com/repos/huggingface/datasets/issues/7090/events | https://github.com/huggingface/datasets/issues/7090 | 2,449,699,490 | I_kwDODunzps6SA3Ki | 7,090 | The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name | {
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Describe the bug
Tests should use the same python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11
Failure:
```
if err_filename is not None:
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFo... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7090/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7089/comments | https://api.github.com/repos/huggingface/datasets/issues/7089/events | https://github.com/huggingface/datasets/issues/7089 | 2,449,479,500 | I_kwDODunzps6SABdM | 7,089 | Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped | {
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Describe the bug
see the subject
### Steps to reproduce the bug
regular tests
### Expected behavior
n/a
### Environment info
version 2.20.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7089/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7088/comments | https://api.github.com/repos/huggingface/datasets/issues/7088/events | https://github.com/huggingface/datasets/issues/7088 | 2,447,383,940 | I_kwDODunzps6R4B2E | 7,088 | Disable warning when using with_format format on tensors | {
"avatar_url": "https://avatars.githubusercontent.com/u/42048782?v=4",
"events_url": "https://api.github.com/users/Haislich/events{/privacy}",
"followers_url": "https://api.github.com/users/Haislich/followers",
"following_url": "https://api.github.com/users/Haislich/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Feature request
If we write this code:
```python
"""Get data and define datasets."""
from enum import StrEnum
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms
class Split(StrEnum):
"""Describes what type of split to use in the dataloa... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7088/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7087/comments | https://api.github.com/repos/huggingface/datasets/issues/7087/events | https://github.com/huggingface/datasets/issues/7087 | 2,447,158,643 | I_kwDODunzps6R3K1z | 7,087 | Unable to create dataset card for Lushootseed language | {
"avatar_url": "https://avatars.githubusercontent.com/u/134876525?v=4",
"events_url": "https://api.github.com/users/vaishnavsudarshan/events{/privacy}",
"followers_url": "https://api.github.com/users/vaishnavsudarshan/followers",
"following_url": "https://api.github.com/users/vaishnavsudarshan/following{/other... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/hug... | 1970-01-01T00:00:00.000001 | 1,722 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering la... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7087/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7086/comments | https://api.github.com/repos/huggingface/datasets/issues/7086/events | https://github.com/huggingface/datasets/issues/7086 | 2,445,516,829 | I_kwDODunzps6Rw6Ad | 7,086 | load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Describe the bug
I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.
### Steps to reproduce the bug
1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7086/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7085/comments | https://api.github.com/repos/huggingface/datasets/issues/7085/events | https://github.com/huggingface/datasets/issues/7085 | 2,440,008,618 | I_kwDODunzps6Rb5Oq | 7,085 | [Regression] IterableDataset is broken on 2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our t... | 1970-01-01T00:00:00.000001 | 1,724 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
In the latest version of datasets there is a major regression: after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable Itera... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7085/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7084/comments | https://api.github.com/repos/huggingface/datasets/issues/7084/events | https://github.com/huggingface/datasets/issues/7084 | 2,439,519,534 | I_kwDODunzps6RaB0u | 7,084 | More easily support streaming local files | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,722 | null | CONTRIBUTOR | null | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and currently trying to stream the d... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7084/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7080/comments | https://api.github.com/repos/huggingface/datasets/issues/7080/events | https://github.com/huggingface/datasets/issues/7080 | 2,434,275,664 | I_kwDODunzps6RGBlQ | 7,080 | Generating train split takes a long time | {
"avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4",
"events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}",
"followers_url": "https://api.github.com/users/alexanderswerdlow/followers",
"following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_... | [] | open | false | null | [] | null | [
"@alexanderswerdlow \r\nWhen no specific split is mentioned, the load_dataset library will load all available splits of the dataset. For example, if a dataset has \"train\" and \"test\" splits, the load_dataset function will load both into the DatasetDict object.\r\n\r\n
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebD... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7080/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7079/comments | https://api.github.com/repos/huggingface/datasets/issues/7079/events | https://github.com/huggingface/datasets/issues/7079 | 2,433,363,298 | I_kwDODunzps6RCi1i | 7,079 | HfHubHTTPError: 500 Server Error: Internal Server Error for url: | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | null | [
"same issue here. @albertvillanova @lhoestq ",
"Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https... | 1970-01-01T00:00:00.000001 | 1,726 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Newly uploaded datasets, since yesterday, yield an error.
Old datasets work fine.
It seems like the datasets API server returns a 500.
I'm getting the same error when I invoke `load_dataset` with my dataset.
Long discussion about it here, but I'm not sure anyone from huggingface have s... | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7079/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7077/comments | https://api.github.com/repos/huggingface/datasets/issues/7077/events | https://github.com/huggingface/datasets/issues/7077 | 2,432,345,489 | I_kwDODunzps6Q-qWR | 7,077 | column_names ignored by load_dataset() when loading CSV file | {
"avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4",
"events_url": "https://api.github.com/users/luismsgomes/events{/privacy}",
"followers_url": "https://api.github.com/users/luismsgomes/followers",
"following_url": "https://api.github.com/users/luismsgomes/following{/other_user}",
"gists_ur... | [] | open | false | null | [] | null | [
"I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug i... | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg.
### Expected behavior
The resulting da... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7077/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7073/comments | https://api.github.com/repos/huggingface/datasets/issues/7073/events | https://github.com/huggingface/datasets/issues/7073 | 2,431,706,568 | I_kwDODunzps6Q8OXI | 7,073 | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\nhttps://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs... | 1970-01-01T00:00:00.000001 | 1,722 | 1970-01-01T00:00:00.000001 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision N... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7073/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7072/comments | https://api.github.com/repos/huggingface/datasets/issues/7072/events | https://github.com/huggingface/datasets/issues/7072 | 2,430,577,916 | I_kwDODunzps6Q36z8 | 7,072 | nm | {
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26392883?v=4",
"events_url": "https://api.github.com/users/brettdavies/events{/privacy}",
"followers_url": "https://api.github.com/users/brettdavies/followers",
"following_url": "https://api.github.com/users/brettdavies/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7072/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7071/comments | https://api.github.com/repos/huggingface/datasets/issues/7071/events | https://github.com/huggingface/datasets/issues/7071 | 2,430,313,011 | I_kwDODunzps6Q26Iz | 7,071 | Filter hangs | {
"avatar_url": "https://avatars.githubusercontent.com/u/61711045?v=4",
"events_url": "https://api.github.com/users/lucienwalewski/events{/privacy}",
"followers_url": "https://api.github.com/users/lucienwalewski/followers",
"following_url": "https://api.github.com/users/lucienwalewski/following{/other_user}",
... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I hav... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7071/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7070/comments | https://api.github.com/repos/huggingface/datasets/issues/7070/events | https://github.com/huggingface/datasets/issues/7070 | 2,430,285,235 | I_kwDODunzps6Q2zWz | 7,070 | how set_transform affects batch size? | {
"avatar_url": "https://avatars.githubusercontent.com/u/103993288?v=4",
"events_url": "https://api.github.com/users/VafaKnm/events{/privacy}",
"followers_url": "https://api.github.com/users/VafaKnm/followers",
"following_url": "https://api.github.com/users/VafaKnm/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this:
```
def prepare_dataset(batch):
input_features = processor(batch["audio"], sampling_rate=16000).input_feat... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7070/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7070/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7067/comments | https://api.github.com/repos/huggingface/datasets/issues/7067/events | https://github.com/huggingface/datasets/issues/7067 | 2,425,460,168 | I_kwDODunzps6QkZXI | 7,067 | Convert_to_parquet fails for datasets with multiple configs | {
"avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4",
"events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangZhen02/followers",
"following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}",
"gists_u... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n",
"Thanks for reporting.\r\n\r\nI will make the code more robust.",
"I have opened an issue in the huggingface-hub repo:\r\n-... | 1970-01-01T00:00:00.000001 | 1,722 | 1970-01-01T00:00:00.000001 | NONE | null | If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7067/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7066/comments | https://api.github.com/repos/huggingface/datasets/issues/7066/events | https://github.com/huggingface/datasets/issues/7066 | 2,425,125,160 | I_kwDODunzps6QjHko | 7,066 | One subset per file in repo ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | null | MEMBER | null | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jso... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7066/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7065/comments | https://api.github.com/repos/huggingface/datasets/issues/7065/events | https://github.com/huggingface/datasets/issues/7065 | 2,424,734,953 | I_kwDODunzps6QhoTp | 7,065 | Cannot get item after loading from disk and then converting to iterable. | {
"avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4",
"events_url": "https://api.github.com/users/happyTonakai/events{/privacy}",
"followers_url": "https://api.github.com/users/happyTonakai/followers",
"following_url": "https://api.github.com/users/happyTonakai/following{/other_user}",
"gist... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
The dataset generated from local files works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Au... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7065/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7063/comments | https://api.github.com/repos/huggingface/datasets/issues/7063/events | https://github.com/huggingface/datasets/issues/7063 | 2,424,488,648 | I_kwDODunzps6QgsLI | 7,063 | Add `batch` method to `Dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4",
"events_url": "https://api.github.com/users/lappemic/events{/privacy}",
"followers_url": "https://api.github.com/users/lappemic/followers",
"following_url": "https://api.github.com/users/lappemic/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7063/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7061/comments | https://api.github.com/repos/huggingface/datasets/issues/7061/events | https://github.com/huggingface/datasets/issues/7061 | 2,423,786,881 | I_kwDODunzps6QeA2B | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4",
"events_url": "https://api.github.com/users/hahmad2008/events{/privacy}",
"followers_url": "https://api.github.com/users/hahmad2008/followers",
"following_url": "https://api.github.com/users/hahmad2008/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,725 | null | NONE | null | ### Describe the bug
I followed this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the execution.
`... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7061/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7059/comments | https://api.github.com/repos/huggingface/datasets/issues/7059/events | https://github.com/huggingface/datasets/issues/7059 | 2,422,827,892 | I_kwDODunzps6QaWt0 | 7,059 | None values are skipped when reading jsonl in subobjects | {
"avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4",
"events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}",
"followers_url": "https://api.github.com/users/PonteIneptique/followers",
"following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}",
... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., let's take this example
... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7059/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7058/comments | https://api.github.com/repos/huggingface/datasets/issues/7058/events | https://github.com/huggingface/datasets/issues/7058 | 2,422,560,355 | I_kwDODunzps6QZVZj | 7,058 | New feature type: Document | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | null | COLLABORATOR | null | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7058/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7055/comments | https://api.github.com/repos/huggingface/datasets/issues/7055/events | https://github.com/huggingface/datasets/issues/7055 | 2,421,708,891 | I_kwDODunzps6QWFhb | 7,055 | WebDataset with different prefixes are unsupported | {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.gi... | [] | closed | false | null | [] | null | [
"Since `datasets` uses is built on Arrow to store the data, it requires each sample to have the same columns.\r\n\r\nThis can be fixed by specifyign in advance the name of all the possible columns in the `dataset_info` in YAML, and missing values will be `None`",
"Thanks. This currently doesn't work for WebDatase... | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)
Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules... | {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.gi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7055/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7053/comments | https://api.github.com/repos/huggingface/datasets/issues/7053/events | https://github.com/huggingface/datasets/issues/7053 | 2,416,423,791 | I_kwDODunzps6QB7Nv | 7,053 | Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple` | {
"avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4",
"events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}",
"followers_url": "https://api.github.com/users/MatthewYZhang/followers",
"following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}",
"g... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi,\r\n\r\nThis issue was fixed in `datasets` 2.15.0:\r\n- #6105\r\n\r\nYou will need to update your `datasets`:\r\n```\r\npip install -U datasets\r\n```",
"Duplicate of:\r\n- #6100"
] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
In `data_files.py`, line 332,
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`.
So, `isinstance(fs.protocol, str) == False` and
`protocol_prefix = fs.protocol + "://" if fs.protocol != ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7053/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7051/comments | https://api.github.com/repos/huggingface/datasets/issues/7051/events | https://github.com/huggingface/datasets/issues/7051 | 2,409,353,929 | I_kwDODunzps6Pm9LJ | 7,051 | How to set_epoch with interleave_datasets? | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_... | [] | closed | false | null | [] | null | [
"This is not possible right now afaik :/\r\n\r\nMaybe we could have something like this ? wdyt ?\r\n\r\n```python\r\nds = interleave_datasets(\r\n [shuffled_dataset_a, dataset_b],\r\n probabilities=probabilities,\r\n stopping_strategy='all_exhausted',\r\n reshuffle_each_iteration=True,\r\n)",
"That wo... | 1970-01-01T00:00:00.000001 | 1,722 | 1970-01-01T00:00:00.000001 | NONE | null | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch)
Of course I... | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7051/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7049/comments | https://api.github.com/repos/huggingface/datasets/issues/7049/events | https://github.com/huggingface/datasets/issues/7049 | 2,408,514,366 | I_kwDODunzps6PjwM- | 7,049 | Save nparray as list | {
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"In addition, when I use `set_format ` and index the ds, the following error occurs:\r\nthe code\r\n```python\r\nds.set_format(type=\"np\", colums=\"pixel_values\")\r\n```\r\nerror\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/user-attachments/assets/b28bbff2-20ea-4d28-ab62-b4ed2d944996\">\r\n",
">... | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I use the `map` function to convert images into features, datasets saves the nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7049/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7048/comments | https://api.github.com/repos/huggingface/datasets/issues/7048/events | https://github.com/huggingface/datasets/issues/7048 | 2,408,487,547 | I_kwDODunzps6Pjpp7 | 7,048 | ImportError: numpy.core.multiarray when using `filter` | {
"avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4",
"events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}",
"followers_url": "https://api.github.com/users/kamilakesbi/followers",
"following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"Could you please check your `numpy` version?",
"I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ",
"We recently added support for numpy 2.0, but it is not released yet.",
"Ok I see, thanks! I think we can close this issue for now as switching back to v... | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet triggers the bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4",
"events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}",
"followers_url": "https://api.github.com/users/kamilakesbi/followers",
"following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7048/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7047/comments | https://api.github.com/repos/huggingface/datasets/issues/7047/events | https://github.com/huggingface/datasets/issues/7047 | 2,406,495,084 | I_kwDODunzps6PcDNs | 7,047 | Save Dataset as Sharded Parquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4",
"events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}",
"followers_url": "https://api.github.com/users/tom-p-reichel/followers",
"following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}",
"g... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"To anyone else who finds themselves in this predicament, it's possible to read the parquet file in the same way that datasets writes it, and then manually break it into pieces. Although, you need a couple of magic options (`thrift_*`) to deal with the huge metadata, otherwise pyarrow immediately crashes.\r\n```pyt... | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7047/timeline | null | null | null | null | false | null |
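Until sharding is built in, a possible workaround sketched with `Dataset.shard`; the shard count and the file-name pattern are arbitrary choices for illustration:

```python
from pathlib import Path
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1_000))})  # stands in for the large dataset

out_dir = Path("out")
out_dir.mkdir(exist_ok=True)

num_shards = 4  # pick this based on the target file size
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    shard.to_parquet(out_dir / f"data-{index:05d}-of-{num_shards:05d}.parquet")
```

For comparison, `push_to_hub` already shards automatically (controlled by `max_shard_size`), so the gap is specific to `to_parquet`.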
https://api.github.com/repos/huggingface/datasets/issues/7041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7041/comments | https://api.github.com/repos/huggingface/datasets/issues/7041/events | https://github.com/huggingface/datasets/issues/7041 | 2,404,576,038 | I_kwDODunzps6PUusm | 7,041 | `sort` after `filter` unreasonably slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/56711045?v=4",
"events_url": "https://api.github.com/users/Tobin-rgb/events{/privacy}",
"followers_url": "https://api.github.com/users/Tobin-rgb/followers",
"following_url": "https://api.github.com/users/Tobin-rgb/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | null | [
"`filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping."
... | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
as the title says ...
### Steps to reproduce the bug
`sort` on its own seems to run at normal speed.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("f... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7041/timeline | null | null | null | null | false | null |
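The workaround from the comment above, spelled out against the same toy data as the repro:

```python
import random
from datasets import Dataset

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)

ds = ds.filter(lambda ex: ex["k"] % 2 == 0)  # leaves an indices mapping behind

# Materialize the kept rows into a fresh Arrow table first, so that sort()
# no longer has to gather rows through the indices mapping.
ds = ds.flatten_indices()
ds = ds.sort("k")
```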
https://api.github.com/repos/huggingface/datasets/issues/7040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7040/comments | https://api.github.com/repos/huggingface/datasets/issues/7040/events | https://github.com/huggingface/datasets/issues/7040 | 2,402,918,335 | I_kwDODunzps6POZ-_ | 7,040 | load `streaming=True` dataset with downloaded cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_u... | [] | open | false | null | [] | null | [
"When you pass `streaming=True`, the cache is ignored. The remote data URL is used instead and the data is streamed from the remote server.",
"Thanks for your reply! So is there any solution to get my expected behavior besides clone the whole repo ? Or could I adjust my script to load the downloaded arrow files a... | 1970-01-01T00:00:00.000001 | 1,720 | null | NONE | null | ### Describe the bug
We built a dataset which contains several hdf5 files and wrote a script using `h5py` to generate the dataset. The hdf5 files are large, and the processed dataset cache takes even more disk space, so we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 f... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7040/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7040/timeline | null | null | null | null | false | null |
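One possible workaround, sketched under the assumption that the goal is to reuse already-downloaded files: fetch the raw repository once with `snapshot_download`, then load from the local copy (`user/dataset` is a placeholder repo id):

```python
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download (or reuse) the raw repo files, hdf5 included, into a local dir.
local_dir = snapshot_download(repo_id="user/dataset", repo_type="dataset")

# Streaming from a local path keeps lazy iteration without re-downloading,
# and h5py can open local files where it cannot open remote URLs.
ds = load_dataset(local_dir, streaming=True)
```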
https://api.github.com/repos/huggingface/datasets/issues/7037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7037/comments | https://api.github.com/repos/huggingface/datasets/issues/7037/events | https://github.com/huggingface/datasets/issues/7037 | 2,400,192,419 | I_kwDODunzps6PEAej | 7,037 | A bug of Dataset.to_json() function | {
"avatar_url": "https://avatars.githubusercontent.com/u/26499566?v=4",
"events_url": "https://api.github.com/users/LinglingGreat/events{/privacy}",
"followers_url": "https://api.github.com/users/LinglingGreat/followers",
"following_url": "https://api.github.com/users/LinglingGreat/following{/other_user}",
"g... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @LinglingGreat.\r\n\r\nI confirm this is a bug.",
"@albertvillanova I would like to take a shot at this if you aren't working on it currently. Let me know!"
] | 1970-01-01T00:00:00.000001 | 1,727 | null | NONE | null | ### Describe the bug
When using the Dataset.to_json() function with lines=False, an unexpected error occurs. The stored data should be a single JSON list, but it actually turns into multiple concatenated lists, which causes an error when reading the data again.
The reason is that to_json() writes to the f... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7037/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7037/timeline | null | null | null | null | false | null |
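A short sketch of the failure mode described above; the small `batch_size` just forces several write batches:

```python
import json
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
ds.to_json("out.json", lines=False, batch_size=4)

# Each batch was written as its own JSON list, so the file holds several
# back-to-back lists instead of a single one, and reading it back fails:
json.load(open("out.json"))  # raises json.JSONDecodeError: Extra data
```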
https://api.github.com/repos/huggingface/datasets/issues/7035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7035/comments | https://api.github.com/repos/huggingface/datasets/issues/7035/events | https://github.com/huggingface/datasets/issues/7035 | 2,400,021,225 | I_kwDODunzps6PDWrp | 7,035 | Docs are not generated when a parameter defaults to a NamedSplit value | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | MEMBER | null | While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like:
```python
def call_function(split=Split.TRAIN):
...
```
The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'>
See: https://github.com/huggingface/datasets/action... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7035/timeline | null | completed | null | null | false | 0 |
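A minimal sketch of the underlying problem, assuming the doc builder compares signature defaults against `inspect.Parameter.empty`:

```python
import inspect
from datasets import Split

def call_function(split=Split.TRAIN):
    ...

default = inspect.signature(call_function).parameters["split"].default

# NamedSplit.__eq__ raises for foreign types instead of returning False,
# which is what trips the doc generation:
default == inspect.Parameter.empty  # ValueError: Equality not supported ...
```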
https://api.github.com/repos/huggingface/datasets/issues/7033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7033/comments | https://api.github.com/repos/huggingface/datasets/issues/7033/events | https://github.com/huggingface/datasets/issues/7033 | 2,397,419,768 | I_kwDODunzps6O5bj4 | 7,033 | `from_generator` does not allow to specify the split name | {
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": ... | [] | closed | false | null | [] | null | [
"Thanks for reporting, @pminervini.\r\n\r\nI agree we should give the option to define the split name.\r\n\r\nIndeed, there is a PR that addresses precisely this issue:\r\n- #7015\r\n\r\nI am reviewing it.",
"Booom! thank you guys :)"
] | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:`
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/g... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7033/timeline | null | completed | null | null | false | 0 |
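With the fix referenced above (#7015), usage would presumably look like this sketch, where the `split=` keyword is the addition under review:

```python
from datasets import Dataset, Split

def gen():
    for i in range(3):
        yield {"x": i}

# Without the fix, the split name is hard-coded and the logger always
# prints "Generating train split:" regardless of the intended split.
ds = Dataset.from_generator(gen, split=Split.VALIDATION)
print(ds.split)  # validation
```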
https://api.github.com/repos/huggingface/datasets/issues/7031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7031/comments | https://api.github.com/repos/huggingface/datasets/issues/7031/events | https://github.com/huggingface/datasets/issues/7031 | 2,395,401,692 | I_kwDODunzps6Oxu3c | 7,031 | CI quality is broken: use ruff check instead | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,720 | 1970-01-01T00:00:00.000001 | MEMBER | null | CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027
```
error: `ruff <path>` has been removed. Use `ruff check <path>` instead.
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7031/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7030/comments | https://api.github.com/repos/huggingface/datasets/issues/7030/events | https://github.com/huggingface/datasets/issues/7030 | 2,393,411,631 | I_kwDODunzps6OqJAv | 7,030 | Add option to disable progress bar when reading a dataset ("Loading dataset from disk") | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"g... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"You can disable progress bars for all of `datasets` with `disable_progress_bars`. [Link](https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars)\r\n\r\nSo you could do something like:\r\n\r\n```python\r\nfrom datasets import load_from_disk, enable_progress_bars, disable_p... | 1970-01-01T00:00:00.000001 | 1,720 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16.
### Motivation
I am reading a lot of datasets, which creates lots of logs.
<img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-... | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"g... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7030/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7030/timeline | null | completed | null | null | false | 0 |
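The suggestion from the comment above, completed into a runnable sketch; the dataset path is a placeholder:

```python
from datasets import load_from_disk, disable_progress_bars, enable_progress_bars

disable_progress_bars()  # silences the "Loading dataset from disk" bars
ds = load_from_disk("path/to/saved/dataset")  # placeholder path
enable_progress_bars()   # restore progress bars for the rest of the program
```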
https://api.github.com/repos/huggingface/datasets/issues/7029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7029/comments | https://api.github.com/repos/huggingface/datasets/issues/7029/events | https://github.com/huggingface/datasets/issues/7029 | 2,391,366,696 | I_kwDODunzps6OiVwo | 7,029 | load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error | {
"avatar_url": "https://avatars.githubusercontent.com/u/171606538?v=4",
"events_url": "https://api.github.com/users/sugam-nexusflow/events{/privacy}",
"followers_url": "https://api.github.com/users/sugam-nexusflow/followers",
"following_url": "https://api.github.com/users/sugam-nexusflow/following{/other_user}... | [] | open | false | null | [] | null | [
"hi ! can you share the full stack trace ? this should help locate what files is not written in the cache_dir"
] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
I'm using AWS Lambda to run a Python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF envs to point to the /tmp dir but the issue still persists. I can confirm that I can write to /... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7029/timeline | null | null | null | null | false | null |
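One detail worth checking, sketched as an assumption rather than a confirmed fix: the HF environment variables have to be set before `datasets` is imported, since cache locations are resolved at import time.

```python
import os

# Must run before importing datasets/huggingface_hub.
os.environ["HF_HOME"] = "/tmp/hf_home"
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf_datasets"

from datasets import load_dataset

ds = load_dataset("user/dataset", cache_dir="/tmp/hf_datasets")  # placeholder repo id
```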
https://api.github.com/repos/huggingface/datasets/issues/7024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7024/comments | https://api.github.com/repos/huggingface/datasets/issues/7024/events | https://github.com/huggingface/datasets/issues/7024 | 2,390,141,626 | I_kwDODunzps6Odqq6 | 7,024 | Streaming dataset not returning data | {
"avatar_url": "https://avatars.githubusercontent.com/u/91670254?v=4",
"events_url": "https://api.github.com/users/johnwee1/events{/privacy}",
"followers_url": "https://api.github.com/users/johnwee1/followers",
"following_url": "https://api.github.com/users/johnwee1/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,720 | null | NONE | null | ### Describe the bug
I decided to post here because I'm still not sure what the issue is, or whether I am using IterableDatasets wrongly.
I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7024/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7022/comments | https://api.github.com/repos/huggingface/datasets/issues/7022/events | https://github.com/huggingface/datasets/issues/7022 | 2,388,064,650 | I_kwDODunzps6OVvmK | 7,022 | There is dead code after we require pyarrow >= 15.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | There are code lines specific for pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those code lines are now dead code and should be removed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7022/timeline | null | completed | null | null | false | 0 |
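An illustrative example (not an actual excerpt) of the kind of branch that becomes dead once `pyarrow>=15.0.0` is a hard requirement:

```python
import pyarrow as pa
from packaging import version

if version.parse(pa.__version__) >= version.parse("15.0.0"):
    arr = pa.array([1, 2, 3])  # the only branch that can still execute
else:
    arr = None  # pre-15 fallback: unreachable under the new requirement
```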
https://api.github.com/repos/huggingface/datasets/issues/7020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7020/comments | https://api.github.com/repos/huggingface/datasets/issues/7020/events | https://github.com/huggingface/datasets/issues/7020 | 2,387,940,990 | I_kwDODunzps6OVRZ- | 7,020 | Casting list array to fixed size list raises error | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | When trying to cast a list array to fixed size list, an AttributeError is raised:
> AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
Steps to reproduce the bug:
```python
import pyarrow as pa
from datasets.table import array_cast
arr = pa.array([[0, 1]])
array_cast(arr, pa.lis... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7020/timeline | null | completed | null | null | false | 0 |
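The repro above is truncated; assuming it casts to a fixed-size list type, a complete sketch would be:

```python
import pyarrow as pa
from datasets.table import array_cast

arr = pa.array([[0, 1]])

# pa.list_(value_type, list_size) builds a FixedSizeListType, which exposes
# `.list_size` rather than `.length`, hence the AttributeError above.
array_cast(arr, pa.list_(pa.int64(), 2))
```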
https://api.github.com/repos/huggingface/datasets/issues/7018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7018/comments | https://api.github.com/repos/huggingface/datasets/issues/7018/events | https://github.com/huggingface/datasets/issues/7018 | 2,383,700,286 | I_kwDODunzps6OFGE- | 7,018 | `load_dataset` fails to load dataset saved by `save_to_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/2307997?v=4",
"events_url": "https://api.github.com/users/sliedes/events{/privacy}",
"followers_url": "https://api.github.com/users/sliedes/followers",
"following_url": "https://api.github.com/users/sliedes/following{/other_user}",
"gists_url": "https:/... | [] | open | false | null | [] | null | [
"In my case the error was:\r\n```\r\nValueError: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.\r\n```\r\nDid you try `load_from_disk`?",
"More generally, any reason there is no API consistency between save_to_disk and push_to_hub ? \r\n\r\nWould be nice... | 1970-01-01T00:00:00.000001 | 1,722 | null | NONE | null | ### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_functi... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7018/timeline | null | null | null | null | false | null |
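As the first comment notes, `save_to_disk` pairs with `load_from_disk`; a sketch with a placeholder path:

```python
from datasets import load_from_disk

# tokenized_datasets["train"].save_to_disk("data/train")  # what the repro did
small_train_dataset = load_from_disk("data/train")  # placeholder path
```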
https://api.github.com/repos/huggingface/datasets/issues/7016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7016/comments | https://api.github.com/repos/huggingface/datasets/issues/7016/events | https://github.com/huggingface/datasets/issues/7016 | 2,383,262,608 | I_kwDODunzps6ODbOQ | 7,016 | `drop_duplicates` method | {
"avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4",
"events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}",
"followers_url": "https://api.github.com/users/MohamedAliRashad/followers",
"following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_use... | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
... | open | false | null | [] | null | [
"There is an open issue #2514 about this which also proposes solutions."
] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Feature request
`drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one)
### Motivation
Ease of use
### Your contribution
I don't think I am good enough to help | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7016/timeline | null | null | null | null | false | null |
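Until such a method exists, a workaround along the lines discussed in #2514, deduplicating on a single key with a plain `filter` (single-process, so the `seen` set is shared across rows):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c"]})

seen = set()

def first_occurrence(example):
    key = example["text"]
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = ds.filter(first_occurrence)  # keep num_proc at its default of 1
print(deduped["text"])  # ['a', 'b', 'c']
```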
https://api.github.com/repos/huggingface/datasets/issues/7013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7013/comments | https://api.github.com/repos/huggingface/datasets/issues/7013/events | https://github.com/huggingface/datasets/issues/7013 | 2,382,976,738 | I_kwDODunzps6OCVbi | 7,013 | CI is broken for faiss tests on Windows: node down: Not properly terminated | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time o... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7013/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7010/comments | https://api.github.com/repos/huggingface/datasets/issues/7010/events | https://github.com/huggingface/datasets/issues/7010 | 2,379,777,480 | I_kwDODunzps6N2IXI | 7,010 | Re-enable raising error from huggingface-hub FutureWarning in CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR:
- #6876
Note that this can only be done once transformers releases the fix:
- https://github.com/huggingface/transformers/pull/31007 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7010/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7008/comments | https://api.github.com/repos/huggingface/datasets/issues/7008/events | https://github.com/huggingface/datasets/issues/7008 | 2,379,591,141 | I_kwDODunzps6N1a3l | 7,008 | Support ruff 0.5.0 in CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7008/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7006/comments | https://api.github.com/repos/huggingface/datasets/issues/7006/events | https://github.com/huggingface/datasets/issues/7006 | 2,379,581,543 | I_kwDODunzps6N1Yhn | 7,006 | CI is broken after ruff-0.5.0: E721 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstanc... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7006/timeline | null | completed | null | null | false | 0 |
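For reference, the shape of the rewrite E721 asks for (illustrative, not an excerpt from this codebase):

```python
obj = "hello"

# Flagged by ruff 0.5.0 (E721):
flagged = type(obj) == str

# Preferred forms, per the rule:
exact = type(obj) is str        # exact type match
loose = isinstance(obj, str)    # also accepts subclasses
```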
https://api.github.com/repos/huggingface/datasets/issues/7005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7005/comments | https://api.github.com/repos/huggingface/datasets/issues/7005/events | https://github.com/huggingface/datasets/issues/7005 | 2,378,424,349 | I_kwDODunzps6Nw-Ad | 7,005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | {
"avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4",
"events_url": "https://api.github.com/users/Aki1991/events{/privacy}",
"followers_url": "https://api.github.com/users/Aki1991/followers",
"following_url": "https://api.github.com/users/Aki1991/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | [
"Hi ! `data_dir=` is for directories, can you try using `data_files=` instead ?",
"If you are trying to load your image dataset from a local folder, you should replace \"data_dir=path/to/jsonl/metadata.jsonl\" with the real folder path in your computer.\r\n\r\nhttps://huggingface.co/docs/datasets/en/image_load#im... | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
While trying to load a custom dataset from a JSONL file, I get the error: "metadata.jsonl doesn't contain any data files".
### Steps to reproduce the bug
This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/117731544?v=4",
"events_url": "https://api.github.com/users/Aki1991/events{/privacy}",
"followers_url": "https://api.github.com/users/Aki1991/followers",
"following_url": "https://api.github.com/users/Aki1991/following{/other_user}",
"gists_url": "https... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7005/timeline | null | completed | null | null | false | 0 |
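Following the comments above, a sketch of the intended layout and call; paths are placeholders, and the key point is that `data_dir` should name the folder holding `metadata.jsonl`, not the jsonl file itself:

```python
from datasets import load_dataset

# Expected layout (placeholder):
#   images/
#     metadata.jsonl   <- one {"file_name": ..., ...} record per image
#     0001.png
#     0002.png
ds = load_dataset("imagefolder", data_dir="./images")
```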
https://api.github.com/repos/huggingface/datasets/issues/7001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7001/comments | https://api.github.com/repos/huggingface/datasets/issues/7001/events | https://github.com/huggingface/datasets/issues/7001 | 2,372,930,879 | I_kwDODunzps6NcA0_ | 7,001 | Datasetbuilder Local Download FileNotFoundError | {
"avatar_url": "https://avatars.githubusercontent.com/u/12601271?v=4",
"events_url": "https://api.github.com/users/purefall/events{/privacy}",
"followers_url": "https://api.github.com/users/purefall/followers",
"following_url": "https://api.github.com/users/purefall/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | [
"Ok it seems the solution is to use the directory string without the trailing \"/\" which in my case as: \r\n\r\n`parquet_dir = \"~/data/Parquet\" `\r\n\r\nStill i think this is a weird behavior... "
] | 1970-01-01T00:00:00.000001 | 1,719 | null | NONE | null | ### Describe the bug
So I was trying to download a dataset and save it as Parquet, following the Hugging Face [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage). However, during execution I face a FileNotFoundError.
I debug the code and it seems... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7001/timeline | null | null | null | null | false | null |
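The self-answer above, spelled out as a sketch; the dataset name is a placeholder:

```python
import os
from datasets import load_dataset_builder

parquet_dir = "~/data/Parquet"  # note: no trailing "/", the reported workaround

builder = load_dataset_builder("rotten_tomatoes")  # placeholder dataset
builder.download_and_prepare(
    os.path.expanduser(parquet_dir), file_format="parquet"
)
```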
https://api.github.com/repos/huggingface/datasets/issues/7000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7000/comments | https://api.github.com/repos/huggingface/datasets/issues/7000/events | https://github.com/huggingface/datasets/issues/7000 | 2,372,887,585 | I_kwDODunzps6Nb2Qh | 7,000 | IterableDataset: Unsupported ScalarType BFloat16 | {
"avatar_url": "https://avatars.githubusercontent.com/u/170015089?v=4",
"events_url": "https://api.github.com/users/stoical07/events{/privacy}",
"followers_url": "https://api.github.com/users/stoical07/followers",
"following_url": "https://api.github.com/users/stoical07/following{/other_user}",
"gists_url": ... | [] | closed | false | null | [] | null | [
"@lhoestq Thank you for merging #6607, but unfortunately the issue persists for `IterableDataset` :pensive: ",
"Hi ! I opened https://github.com/huggingface/datasets/pull/7002 to fix this bug",
"Amazing, thank you so much @lhoestq! :pray:"
] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
`IterableDataset.from_generator` crashes when using BFloat16:
```
File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor
args = (obj.detach().cpu().numpy(),)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7000/timeline | null | completed | null | null | false | 0 |
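Until the fix in #7002 is released, a workaround sketch: cast bfloat16 tensors to float32 before they leave the generator (the zero tensor is a stand-in for real data):

```python
import torch
from datasets import IterableDataset

def gen():
    t = torch.zeros(4, dtype=torch.bfloat16)  # stand-in for real model output
    yield {"emb": t.float()}  # float32 has a NumPy counterpart; bfloat16 does not

ds = IterableDataset.from_generator(gen)
print(next(iter(ds))["emb"])
```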
https://api.github.com/repos/huggingface/datasets/issues/6997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6997/comments | https://api.github.com/repos/huggingface/datasets/issues/6997/events | https://github.com/huggingface/datasets/issues/6997 | 2,371,966,127 | I_kwDODunzps6NYVSv | 6,997 | CI is broken for tests using hf-internal-testing/librispeech_asr_dummy | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996
```
FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other']
Right contains one more item: 'othe... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6997/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6995/comments | https://api.github.com/repos/huggingface/datasets/issues/6995/events | https://github.com/huggingface/datasets/issues/6995 | 2,370,713,475 | I_kwDODunzps6NTjeD | 6,995 | ImportError when importing datasets.load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/124846947?v=4",
"events_url": "https://api.github.com/users/Leo-Lsc/events{/privacy}",
"followers_url": "https://api.github.com/users/Leo-Lsc/followers",
"following_url": "https://api.github.com/users/Leo-Lsc/following{/other_user}",
"gists_url": "https... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"What is the version of your installed `huggingface-hub`:\r\n```python\r\nimport huggingface_hub\r\nprint(huggingface_hub.__version__)\r\n```\r\n\r\nIt seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not still implemented. You need to update it:\r\n```\r\npip install -U huggingface-hu... | 1970-01-01T00:00:00.000001 | 1,731 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. f... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6995/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6992/comments | https://api.github.com/repos/huggingface/datasets/issues/6992/events | https://github.com/huggingface/datasets/issues/6992 | 2,367,890,622 | I_kwDODunzps6NIyS- | 6,992 | Dataset with streaming doesn't work with proxy | {
"avatar_url": "https://avatars.githubusercontent.com/u/57779173?v=4",
"events_url": "https://api.github.com/users/YHL04/events{/privacy}",
"followers_url": "https://api.github.com/users/YHL04/followers",
"following_url": "https://api.github.com/users/YHL04/following{/other_user}",
"gists_url": "https://api.... | [] | open | false | null | [] | null | [
"Hi ! can you try updating `datasets` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U datasets huggingface_hub\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,719 | null | NONE | null | ### Describe the bug
I'm currently trying to stream data using `datasets` since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, which is a supercomputer that uses a proxy to connect to the internet. I assume it has to do with the network configurations. I've already set up both... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6992/timeline | null | null | null | null | false | null |
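One thing to verify, sketched under the assumption that the cluster's proxy lives at the placeholder address below: the standard proxy variables must be visible to the process that does the streaming.

```python
import os

os.environ["HTTP_PROXY"] = "http://proxy.example:8080"   # placeholder proxy
os.environ["HTTPS_PROXY"] = "http://proxy.example:8080"

from datasets import load_dataset

ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
print(next(iter(ds)))
```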
https://api.github.com/repos/huggingface/datasets/issues/6990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6990/comments | https://api.github.com/repos/huggingface/datasets/issues/6990/events | https://github.com/huggingface/datasets/issues/6990 | 2,366,660,785 | I_kwDODunzps6NEGCx | 6,990 | Problematic rank after calling `split_dataset_by_node` twice | {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"ah yes good catch ! feel free to open a PR with your suggested fix"
] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
I'm trying to split an `IterableDataset` with `split_dataset_by_node`.
But when splitting an already split dataset, the resulting `rank` can be greater than `world_size`.
### Steps to reproduce the bug
Here is the minimal code for reproduction:
```py
>>> from datasets import load_dataset
>>... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6990/timeline | null | completed | null | null | false | 0 |
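A minimal sketch of the double-split pattern behind #6990 above, assuming a streamable repo id as a stand-in; `_distributed` is internal state, inspected here only to show the rank/world-size bookkeeping the fix adjusts:

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
ds = split_dataset_by_node(ds, rank=1, world_size=2)  # split across 2 nodes
ds = split_dataset_by_node(ds, rank=1, world_size=2)  # split each node across 2 workers
# After two splits the effective world size is 4, so every effective
# rank must stay below 4; the reported bug produced rank >= world_size.
print(ds._distributed)
```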
https://api.github.com/repos/huggingface/datasets/issues/6989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6989/comments | https://api.github.com/repos/huggingface/datasets/issues/6989/events | https://github.com/huggingface/datasets/issues/6989 | 2,365,556,449 | I_kwDODunzps6M_4bh | 6,989 | cache in nfs error | {
"avatar_url": "https://avatars.githubusercontent.com/u/66729924?v=4",
"events_url": "https://api.github.com/users/simplew2011/events{/privacy}",
"followers_url": "https://api.github.com/users/simplew2011/followers",
"following_url": "https://api.github.com/users/simplew2011/following{/other_user}",
"gists_u... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,718 | null | NONE | null | ### Describe the bug
- When reading a dataset, a cache is generated in the ~/.cache/huggingface/datasets directory
- When using .map and .filter operations, a runtime cache is generated in the /tmp/hf_datasets-* directory
- The default is to use the path of tempfile.tempdir
- If I modify this path to the N... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6989/timeline | null | null | null | null | false | null |
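For the NFS report in #6989 above, a minimal sketch of pointing both cache locations at local disk; both paths are hypothetical local mounts:

```python
import os
import tempfile

# Persistent dataset cache, read from the HF_DATASETS_CACHE env var.
os.environ["HF_DATASETS_CACHE"] = "/local_ssd/hf_cache"
# Runtime .map/.filter scratch files default to tempfile.tempdir.
tempfile.tempdir = "/local_ssd/tmp"

from datasets import load_dataset  # import after the variables are set
```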
https://api.github.com/repos/huggingface/datasets/issues/6985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6985/comments | https://api.github.com/repos/huggingface/datasets/issues/6985/events | https://github.com/huggingface/datasets/issues/6985 | 2,362,378,276 | I_kwDODunzps6Mzwgk | 6,985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/26666267?v=4",
"events_url": "https://api.github.com/users/firmai/events{/privacy}",
"followers_url": "https://api.github.com/users/firmai/followers",
"following_url": "https://api.github.com/users/firmai/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [
"Please note that the error is raised just at import:\r\n```python\r\nimport pyarrow.parquet as pq\r\n```\r\n\r\nTherefore it must be caused by some problem with your pyarrow installation. I would recommend you uninstall and install pyarrow again.\r\n\r\nI also see that it seems you use conda to install pyarrow. Pl... | 1970-01-01T00:00:00.000001 | 1,730 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I have been struggling with this for two days; any help would be appreciated. Python 3.10.
```
from setfit import SetFitModel
from huggingface_hub import login
access_token_read = "cccxxxccc"
# Authenticate with the Hugging Face Hub
login(token=access_token_read)
# Load the models fr... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6985/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6984/comments | https://api.github.com/repos/huggingface/datasets/issues/6984/events | https://github.com/huggingface/datasets/issues/6984 | 2,362,143,554 | I_kwDODunzps6My3NC | 6,984 | Convert polars DataFrame back to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_u... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi ! Thanks for reporting :)\r\n\r\nWe don't support `large_list` yet, though it should be added to `Sequence` IMO (maybe with a parameter `large=True` ?)"
] | 1970-01-01T00:00:00.000001 | 1,723 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
This returns an error.
```python
from datasets import Dataset
dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```
ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.
### Motivation
When datasets... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6984/timeline | null | completed | null | null | false | 0 |
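For the `large_list` report in #6984 above, a minimal workaround sketch until `Sequence` gains large-type support: routing through pandas avoids the Arrow `large_list` type that `Dataset.from_polars` rejects (assumes the frame fits in memory):

```python
import polars as pl
from datasets import Dataset

df = pl.DataFrame({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})

# polars -> pandas yields object-typed list columns, which datasets
# infers as Sequence(int64) rather than the unsupported large_list.
ds = Dataset.from_pandas(df.to_pandas())
print(ds.features)
```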
https://api.github.com/repos/huggingface/datasets/issues/6982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6982/comments | https://api.github.com/repos/huggingface/datasets/issues/6982/events | https://github.com/huggingface/datasets/issues/6982 | 2,361,661,469 | I_kwDODunzps6MxBgd | 6,982 | cannot split dataset when using load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17721894?v=4",
"events_url": "https://api.github.com/users/cybest0608/events{/privacy}",
"followers_url": "https://api.github.com/users/cybest0608/followers",
"following_url": "https://api.github.com/users/cybest0608/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"it seems the bug will happened in all windows system, I tried it in windows8.1, 10, 11 and all of them failed. But it won't happened in the Linux(Ubuntu and Centos7) and Mac (both my virtual and physical machine). I still don't know what the problem is. May be related to the path? I cannot run the split file in m... | 1970-01-01T00:00:00.000001 | 1,720 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it successfully downloads and extracts the dataset, but it cannot generate the Arrow document.
This bug happened on my server and my laptop, as in #6906, but it won't happen in Google Colab. I've worked on it for da... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6982/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6980/comments | https://api.github.com/repos/huggingface/datasets/issues/6980/events | https://github.com/huggingface/datasets/issues/6980 | 2,360,909,930 | I_kwDODunzps6MuKBq | 6,980 | Support NumPy 2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4",
"events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}",
"followers_url": "https://api.github.com/users/NeilGirdhar/followers",
"following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}",
"gists_url... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,720 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Feature request
Support NumPy 2.0.
### Motivation
NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 was ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6980/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6979/comments | https://api.github.com/repos/huggingface/datasets/issues/6979/events | https://github.com/huggingface/datasets/issues/6979 | 2,360,175,363 | I_kwDODunzps6MrWsD | 6,979 | How can I load partial parquet files only? | {
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"Hello,\r\n\r\nHave you tried loading the dataset in streaming mode? [Documentation](https://huggingface.co/docs/datasets/v2.20.0/stream)\r\n\r\nThis way you wouldn't have to load it all. Also, let's be nice to Parquet, it's a really nice technology and we don't need to be mean :)",
"I have downloaded part of it,... | 1970-01-01T00:00:00.000001 | 1,718 | 1970-01-01T00:00:00.000001 | NONE | null | I have a HUGE dataset about 14TB, I unable to download all parquet all. I just take about 100 from it.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I load just shards 000 to 100 out of the 00314 total, instead of all of them?
I searched the whole net and didn't find a solution, **this is stupid if the... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6979/timeline | null | completed | null | null | false | 0 |
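For the partial-download question in #6979 above, a minimal sketch: pass an explicit list of file patterns instead of a single glob. The repo id is the report's placeholder and the shard naming is assumed from the glob shown there:

```python
from datasets import load_dataset

# First 100 of the 314 shards, assuming train-XXXXX-of-00314 naming.
files = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]
ds = load_dataset("xx/", data_files=files, split="train")
```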
https://api.github.com/repos/huggingface/datasets/issues/6977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6977/comments | https://api.github.com/repos/huggingface/datasets/issues/6977/events | https://github.com/huggingface/datasets/issues/6977 | 2,359,295,045 | I_kwDODunzps6Mn_xF | 6,977 | load json file error with v2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4",
"events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}",
"followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers",
"following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}",
... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @xiaoyaolangzhi.\r\n\r\nIndeed, we are currently requiring `pandas` >= 2.0.0.\r\n\r\nYou will need to update pandas in your local environment:\r\n```\r\npip install -U pandas\r\n``` ",
"Thank you very much."
] | 1970-01-01T00:00:00.000001 | 1,718 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
pa_table = p... | {
"avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4",
"events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}",
"followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers",
"following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6977/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6977/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6973/comments | https://api.github.com/repos/huggingface/datasets/issues/6973/events | https://github.com/huggingface/datasets/issues/6973 | 2,355,517,362 | I_kwDODunzps6MZley | 6,973 | IndexError during training with Squad dataset and T5-small model | {
"avatar_url": "https://avatars.githubusercontent.com/u/151521233?v=4",
"events_url": "https://api.github.com/users/ramtunguturi36/events{/privacy}",
"followers_url": "https://api.github.com/users/ramtunguturi36/followers",
"following_url": "https://api.github.com/users/ramtunguturi36/following{/other_user}",
... | [] | closed | false | null | [] | null | [
"add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704",
"Closing this issue because it was a reported and fixed in transformers."
] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1.Install the required libr... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6973/timeline | null | completed | null | null | false | 0 |
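The fix reported in the #6973 comments above, as a minimal sketch; the output directory is a placeholder:

```python
from transformers import TrainingArguments

# remove_unused_columns defaults to True, which strips the raw dataset
# columns before they reach the data collator and triggers the IndexError.
training_args = TrainingArguments(
    output_dir="out",             # hypothetical output directory
    remove_unused_columns=False,  # the reported fix
)
```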
https://api.github.com/repos/huggingface/datasets/issues/6967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6967/comments | https://api.github.com/repos/huggingface/datasets/issues/6967/events | https://github.com/huggingface/datasets/issues/6967 | 2,349,146,398 | I_kwDODunzps6MBSEe | 6,967 | Method to load Laion400m | {
"avatar_url": "https://avatars.githubusercontent.com/u/6862868?v=4",
"events_url": "https://api.github.com/users/humanely/events{/privacy}",
"followers_url": "https://api.github.com/users/humanely/followers",
"following_url": "https://api.github.com/users/humanely/following{/other_user}",
"gists_url": "http... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,718 | null | NONE | null | ### Feature request
Large datasets like Laion400m are provided as embeddings. The methods provided by load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy, XX = 0 to 99
### Motivation
Trial and experimentation are the key pivot of HF. It would be great if HF could load embeddings... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6967/timeline | null | null | null | null | false | null |
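In the meantime, a minimal sketch for the #6967 request above, loading one embedding shard by hand. The file name follows the pattern in the request, and it assumes `Dataset.from_dict` can ingest a 2-D NumPy array (each row becomes one example's vector):

```python
import numpy as np
from datasets import Dataset

# One LAION embedding shard, memory-mapped instead of read eagerly.
emb = np.load("img_emb_00.npy", mmap_mode="r")

ds = Dataset.from_dict({"img_emb": emb})
print(ds.features)
```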
https://api.github.com/repos/huggingface/datasets/issues/6961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6961/comments | https://api.github.com/repos/huggingface/datasets/issues/6961/events | https://github.com/huggingface/datasets/issues/6961 | 2,342,022,418 | I_kwDODunzps6LmG0S | 6,961 | Manual downloads should count as downloads | {
"avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4",
"events_url": "https://api.github.com/users/umarbutler/events{/privacy}",
"followers_url": "https://api.github.com/users/umarbutler/followers",
"following_url": "https://api.github.com/users/umarbutler/following{/other_user}",
"gists_url":... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"We're unlikely to add more features/support for datasets with python loading scripts, which include datasets with manual download. Sorry for the inconvenience"
] | 1970-01-01T00:00:00.000001 | 1,718 | null | NONE | null | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6961/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6958/comments | https://api.github.com/repos/huggingface/datasets/issues/6958/events | https://github.com/huggingface/datasets/issues/6958 | 2,337,476,383 | I_kwDODunzps6LUw8f | 6,958 | My Private Dataset doesn't exist on the Hub or cannot be accessed | {
"avatar_url": "https://avatars.githubusercontent.com/u/39621324?v=4",
"events_url": "https://api.github.com/users/wangguan1995/events{/privacy}",
"followers_url": "https://api.github.com/users/wangguan1995/followers",
"following_url": "https://api.github.com/users/wangguan1995/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"I can load public dataset, but for my private dataset it fails",
"https://huggingface.co/docs/datasets/upload_dataset",
"I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.\r\n\r\n
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on t... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6958/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6953/comments | https://api.github.com/repos/huggingface/datasets/issues/6953/events | https://github.com/huggingface/datasets/issues/6953 | 2,333,366,120 | I_kwDODunzps6LFFdo | 6,953 | Remove canonical datasets from docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"Canonical datasets are no longer mentioned in the docs."
] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | Remove canonical datasets from docs, now that we no longer have canonical datasets. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6953/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6951/comments | https://api.github.com/repos/huggingface/datasets/issues/6951/events | https://github.com/huggingface/datasets/issues/6951 | 2,333,231,042 | I_kwDODunzps6LEkfC | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | {
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "h... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"@xianbaoqian ",
"Feel free to open a PR in `m-a-p/COIG-CQIA` to define a default subset. Currently there is no default.\r\n\r\nYou can find some documentation at https://huggingface.co/docs/hub/datasets-manual-configuration#multiple-configurations",
"@lhoestq \r\n\r\nWhilst having a default subset readily avai... | 1970-01-01T00:00:00.000001 | 1,732 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Currently load_dataset() forces users to specify a subset. Example:
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```---------------------------------------------------------------------------
ValueError Traceback (most recen... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6951/timeline | null | not_planned | null | null | false | 0 |
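Until a default subset exists, a minimal sketch for #6951 above: enumerate the configs first, which mirrors what the request asks `load_dataset` to do implicitly:

```python
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("m-a-p/COIG-CQIA")
# One DatasetDict per subset, keyed by config name.
subsets = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in configs}
```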
https://api.github.com/repos/huggingface/datasets/issues/6950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6950/comments | https://api.github.com/repos/huggingface/datasets/issues/6950/events | https://github.com/huggingface/datasets/issues/6950 | 2,333,005,974 | I_kwDODunzps6LDtiW | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4",
"events_url": "https://api.github.com/users/iansheng/events{/privacy}",
"followers_url": "https://api.github.com/users/iansheng/followers",
"following_url": "https://api.github.com/users/iansheng/following{/other_user}",
"gists_url": "htt... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956",
"Fixed."
] | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4",
"events_url": "https://api.github.com/users/iansheng/events{/privacy}",
"followers_url": "https://api.github.com/users/iansheng/followers",
"following_url": "https://api.github.com/users/iansheng/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6950/timeline | null | completed | null | null | false | 0 |
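The behavior at issue in #6950 above, as a minimal sketch mirroring the doc example (requires torch installed):

```python
from datasets import Dataset

data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
ds = Dataset.from_dict({"data": data}).with_format("torch")

# Fixed-shape nested lists come back as a single 2-D tensor, not the
# list of tensors the outdated docs described; the docs were fixed.
print(type(ds[0]["data"]), ds[0]["data"].shape)
```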
https://api.github.com/repos/huggingface/datasets/issues/6949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6949/comments | https://api.github.com/repos/huggingface/datasets/issues/6949/events | https://github.com/huggingface/datasets/issues/6949 | 2,332,336,573 | I_kwDODunzps6LBKG9 | 6,949 | load_dataset error | {
"avatar_url": "https://avatars.githubusercontent.com/u/27952522?v=4",
"events_url": "https://api.github.com/users/frederichen01/events{/privacy}",
"followers_url": "https://api.github.com/users/frederichen01/followers",
"following_url": "https://api.github.com/users/frederichen01/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ",
"> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests ... | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, and why is it still stuck after several hours of loading? In fact, my JSON file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6949/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6948/comments | https://api.github.com/repos/huggingface/datasets/issues/6948/events | https://github.com/huggingface/datasets/issues/6948 | 2,331,758,300 | I_kwDODunzps6K-87c | 6,948 | to_tf_dataset: Visible devices cannot be modified after being initialized | {
"avatar_url": "https://avatars.githubusercontent.com/u/7151661?v=4",
"events_url": "https://api.github.com/users/logasja/events{/privacy}",
"followers_url": "https://api.github.com/users/logasja/followers",
"following_url": "https://api.github.com/users/logasja/following{/other_user}",
"gists_url": "https:/... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,717 | null | NONE | null | ### Describe the bug
When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, I am met with the following error, repeated as many times as the number of workers set in ``num_workers``.
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _b... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6948/timeline | null | null | null | null | false | null |
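A minimal workaround sketch for #6948 above while the multiprocessing path is broken: keep loading in the main process. The toy dataset and columns are placeholders:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "y": [0, 1]})

# num_workers=0 avoids spawning child processes that re-initialize
# TensorFlow and trip the visible-devices check.
tf_ds = ds.to_tf_dataset(columns=["x"], label_cols=["y"], batch_size=2, num_workers=0)
```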
https://api.github.com/repos/huggingface/datasets/issues/6947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6947/comments | https://api.github.com/repos/huggingface/datasets/issues/6947/events | https://github.com/huggingface/datasets/issues/6947 | 2,331,114,055 | I_kwDODunzps6K8fpH | 6,947 | FileNotFoundError:error when loading C4 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/62374585?v=4",
"events_url": "https://api.github.com/users/W-215/events{/privacy}",
"followers_url": "https://api.github.com/users/W-215/followers",
"following_url": "https://api.github.com/users/W-215/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"same problem here",
"Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai... | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Can't load the C4 dataset.
When I switch the datasets package to 2.12.2, I get datasets.utils.info_utils.ExpectedMoreSplits: {'train'}.
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validat... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6947/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6942/comments | https://api.github.com/repos/huggingface/datasets/issues/6942/events | https://github.com/huggingface/datasets/issues/6942 | 2,329,562,382 | I_kwDODunzps6K2k0O | 6,942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,717 | 1970-01-01T00:00:00.000001 | MEMBER | null | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
We should re-enable import sorting on those files. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6942/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6941/comments | https://api.github.com/repos/huggingface/datasets/issues/6941/events | https://github.com/huggingface/datasets/issues/6941 | 2,328,930,165 | I_kwDODunzps6K0Kd1 | 6,941 | Supporting FFCV: Fast Forward Computer Vision | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gist... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,717 | null | NONE | null | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be fastest image loading method.
### Your contribution
no | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6941/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6940/comments | https://api.github.com/repos/huggingface/datasets/issues/6940/events | https://github.com/huggingface/datasets/issues/6940 | 2,328,637,831 | I_kwDODunzps6KzDGH | 6,940 | Enable Sharding to Equal Sized Shards | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"g... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,717 | null | NONE | null | ### Feature request
Add an option when sharding a dataset to make all shards the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6940/timeline | null | null | null | null | false | null |
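What the #6940 request above amounts to, as a user-level sketch of the truncation variant; `equal_shards` is a hypothetical helper, not a `datasets` API:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

def equal_shards(dataset, num_shards):
    # Truncate so every shard holds exactly len(dataset) // num_shards rows.
    per_shard = len(dataset) // num_shards
    return [
        dataset.select(range(i * per_shard, (i + 1) * per_shard))
        for i in range(num_shards)
    ]

shards = equal_shards(ds, 3)  # three shards of 3 rows; the 10th row is dropped
```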
https://api.github.com/repos/huggingface/datasets/issues/6939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6939/comments | https://api.github.com/repos/huggingface/datasets/issues/6939/events | https://github.com/huggingface/datasets/issues/6939 | 2,328,059,386 | I_kwDODunzps6Kw136 | 6,939 | ExpectedMoreSplits error when using data_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,717 | 1970-01-01T00:00:00.000001 | MEMBER | null | As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`:
```python
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
```
Traceback (most recent call last):
F... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6939/timeline | null | completed | null | null | false | 0 |
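A possible stopgap for #6939 above until the fix lands; it skips the split-size verification that raises the error, so it assumes you trust the files:

```python
from datasets import load_dataset

dataset = load_dataset(
    "lvwerra/stack-exchange-paired",
    split="train",
    data_dir="data/rl",
    verification_mode="no_checks",  # bypasses ExpectedMoreSplits / size checks
)
```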
https://api.github.com/repos/huggingface/datasets/issues/6937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6937/comments | https://api.github.com/repos/huggingface/datasets/issues/6937/events | https://github.com/huggingface/datasets/issues/6937 | 2,327,212,611 | I_kwDODunzps6KtnJD | 6,937 | JSON loader implicitly coerces floats to integers | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,717 | null | MEMBER | null | The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===========================... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6937/timeline | null | null | null | null | false | null |
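A minimal sketch of sidestepping the coercion in #6937 above by pinning the schema up front; the file name and column name are hypothetical:

```python
from datasets import Features, Value, load_dataset

# Declaring the column as float64 stops the JSON loader from silently
# turning [0.0, 1.0, 2.0] into integers during type inference.
features = Features({"col_1": Value("float64")})
ds = load_dataset("json", data_files="data.json", features=features, split="train")
```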
https://api.github.com/repos/huggingface/datasets/issues/6936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6936/comments | https://api.github.com/repos/huggingface/datasets/issues/6936/events | https://github.com/huggingface/datasets/issues/6936 | 2,326,119,853 | I_kwDODunzps6KpcWt | 6,936 | save_to_disk() freezes when saving on s3 bucket with multiprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/54974949?v=4",
"events_url": "https://api.github.com/users/ycattan/events{/privacy}",
"followers_url": "https://api.github.com/users/ycattan/followers",
"following_url": "https://api.github.com/users/ycattan/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | null | [
"I got the same issue. Any updates so far for this issue?"
] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
I'm trying to save a `Dataset` using the `save_to_disk()` function with:
- `num_proc > 1`
- `dataset_path` being a s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/"
The hf progress bar shows up but the saving does not seem to start.
When using one processor only (`num_proc=1`), e... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6936/timeline | null | null | null | null | false | null |
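The single-process call that works in #6936 above, as a minimal sketch; the bucket path and credentials are placeholders and s3fs must be installed:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})

storage_options = {"key": "<aws-key>", "secret": "<aws-secret>"}  # hypothetical
# num_proc=1 completes; num_proc > 1 is where the reporter sees the freeze.
ds.save_to_disk(
    "s3://bucket-name/dataset-folder/",
    storage_options=storage_options,
    num_proc=1,
)
```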
https://api.github.com/repos/huggingface/datasets/issues/6935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6935/comments | https://api.github.com/repos/huggingface/datasets/issues/6935/events | https://github.com/huggingface/datasets/issues/6935 | 2,325,612,022 | I_kwDODunzps6KngX2 | 6,935 | Support for pathlib.Path in datasets 2.19.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12202811?v=4",
"events_url": "https://api.github.com/users/lamyiowce/events{/privacy}",
"followers_url": "https://api.github.com/users/lamyiowce/followers",
"following_url": "https://api.github.com/users/lamyiowce/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | null | [
"+1 I just noticed this when I tried to update `datasets` today."
] | 1970-01-01T00:00:00.000001 | 1,724 | null | NONE | null | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets impor... | null | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6935/timeline | null | null | null | null | false | null |
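A trivial workaround sketch for #6935 above until `Path` support is restored; the output directory is a placeholder:

```python
from pathlib import Path
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
out_dir = Path("/tmp/my_dataset")  # hypothetical location

ds.save_to_disk(str(out_dir))  # cast explicitly; save_to_disk expects a str
```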
https://api.github.com/repos/huggingface/datasets/issues/6930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6930/comments | https://api.github.com/repos/huggingface/datasets/issues/6930/events | https://github.com/huggingface/datasets/issues/6930 | 2,323,225,922 | I_kwDODunzps6KeZ1C | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | {
"avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4",
"events_url": "https://api.github.com/users/CLL112/events{/privacy}",
"followers_url": "https://api.github.com/users/CLL112/followers",
"following_url": "https://api.github.com/users/CLL112/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | null | [
"How do you solve it ?\r\n",
"> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n"
] | 1970-01-01T00:00:00.000001 | 1,721 | null | NONE | null | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6930/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6929/comments | https://api.github.com/repos/huggingface/datasets/issues/6929/events | https://github.com/huggingface/datasets/issues/6929 | 2,322,980,077 | I_kwDODunzps6Kddzt | 6,929 | Avoid downloading the whole dataset when only README.me has been touched on hub. | {
"avatar_url": "https://avatars.githubusercontent.com/u/73740254?v=4",
"events_url": "https://api.github.com/users/zinc75/events{/privacy}",
"followers_url": "https://api.github.com/users/zinc75/followers",
"following_url": "https://api.github.com/users/zinc75/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757",
"@severo : great !"
] | 1970-01-01T00:00:00.000001 | 1,717 | null | NONE | null | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on the Hugging Face Hub, even if the data files / parquet files are exactly the same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6929/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6924/comments | https://api.github.com/repos/huggingface/datasets/issues/6924/events | https://github.com/huggingface/datasets/issues/6924 | 2,320,531,015 | I_kwDODunzps6KUH5H | 6,924 | Caching map result of DatasetDict. | {
"avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4",
"events_url": "https://api.github.com/users/MostHumble/events{/privacy}",
"followers_url": "https://api.github.com/users/MostHumble/followers",
"following_url": "https://api.github.com/users/MostHumble/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,716 | null | NONE | null | Hi!
I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins.
Changing num_proc induces a recomputation of the map; I'm not sure why, and whether this is expected behavior.
Here it says that cached files are loaded sequentially:
https://github.com/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6924/timeline | null | null | null | null | false | null |
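The behavior described in #6924 above, as a minimal sketch with a toy stand-in for the tokenizer; the point is only that the cache currently keys on `num_proc`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"] * 1000})

def add_len(batch):
    batch["n_chars"] = [len(t) for t in batch["text"]]
    return batch

ds1 = ds.map(add_len, batched=True, num_proc=4)  # computes and writes the cache
ds2 = ds.map(add_len, batched=True, num_proc=4)  # same num_proc: reloads from cache
ds3 = ds.map(add_len, batched=True, num_proc=2)  # different num_proc: recomputes
```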
https://api.github.com/repos/huggingface/datasets/issues/6923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6923/comments | https://api.github.com/repos/huggingface/datasets/issues/6923/events | https://github.com/huggingface/datasets/issues/6923 | 2,319,292,872 | I_kwDODunzps6KPZnI | 6,923 | Export Parquet Tablet Audio-Set is null bytes in Arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/140120605?v=4",
"events_url": "https://api.github.com/users/anioji/events{/privacy}",
"followers_url": "https://api.github.com/users/anioji/followers",
"following_url": "https://api.github.com/users/anioji/following{/other_user}",
"gists_url": "https://... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,716 | null | NONE | null | ### Describe the bug
Exporting the processed audio inside the table with the `dataset.to_parquet` function yields the pyarrow object `{bytes: null, path: "Some/Path"}`.
At the same time, the same dataset uploaded to the hub has bit arrays
` at the end. The push to hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python[/tuple](... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6919/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6918/comments | https://api.github.com/repos/huggingface/datasets/issues/6918/events | https://github.com/huggingface/datasets/issues/6918 | 2,315,322,738 | I_kwDODunzps6KAQVy | 6,918 | NonMatchingSplitsSizesError when using data_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/86664538?v=4",
"events_url": "https://api.github.com/users/srehaag/events{/privacy}",
"followers_url": "https://api.github.com/users/srehaag/followers",
"following_url": "https://api.github.com/users/srehaag/following{/other_user}",
"gists_url": "https:... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.",
"I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714"
] | 1970-01-01T00:00:00.000001 | 1,717 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset (a workaround is sketched after this row).
This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on t... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6918/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6918/timeline | null | completed | null | null | false | 0 |
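Until the bug introduced in #6714 is fixed, a hedged workaround for the report above is to skip split-size verification; the repo id and directory below are placeholders:

```python
from datasets import load_dataset

# The recorded split sizes cover the whole dataset, so loading only one
# data_dir trips the verification. Disabling the checks sidesteps the
# error at the cost of losing that safety net.
ds = load_dataset(
    "my-org/my-dataset",  # hypothetical repo id
    data_dir="subdir",    # hypothetical subdirectory
    verification_mode="no_checks",
)
```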
https://api.github.com/repos/huggingface/datasets/issues/6917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6917/comments | https://api.github.com/repos/huggingface/datasets/issues/6917/events | https://github.com/huggingface/datasets/issues/6917 | 2,314,683,663 | I_kwDODunzps6J90UP | 6,917 | WinError 32 The process cannot access the file during load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/56682168?v=4",
"events_url": "https://api.github.com/users/elwe-2808/events{/privacy}",
"followers_url": "https://api.github.com/users/elwe-2808/followers",
"following_url": "https://api.github.com/users/elwe-2808/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,716 | null | NONE | null | ### Describe the bug
When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "tran... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6917/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6917/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6916/comments | https://api.github.com/repos/huggingface/datasets/issues/6916/events | https://github.com/huggingface/datasets/issues/6916 | 2,311,675,564 | I_kwDODunzps6JyV6s | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4",
"events_url": "https://api.github.com/users/jetlime/events{/privacy}",
"followers_url": "https://api.github.com/users/jetlime/followers",
"following_url": "https://api.github.com/users/jetlime/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,716 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and a training set. How can I prevent the split from happening? (A workaround is sketched after this row.)
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ featur... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29337128?v=4",
"events_url": "https://api.github.com/users/jetlime/events{/privacy}",
"followers_url": "https://api.github.com/users/jetlime/followers",
"following_url": "https://api.github.com/users/jetlime/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6916/timeline | null | completed | null | null | false | 0 |
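One way to make the intended split explicit, as the question above asks, is to push a `DatasetDict` with a single named split; a minimal sketch with a placeholder repo id:

```python
from datasets import Dataset, DatasetDict

ds = Dataset.from_dict({"text": ["a", "b", "c"]})  # stand-in for the unsplit dataset

# Wrapping the dataset in a DatasetDict pins exactly one split name,
# so no train/test division is derived on the Hub side.
DatasetDict({"train": ds}).push_to_hub("my-user/my-dataset")  # hypothetical repo id
```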
https://api.github.com/repos/huggingface/datasets/issues/6913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6913/comments | https://api.github.com/repos/huggingface/datasets/issues/6913/events | https://github.com/huggingface/datasets/issues/6913 | 2,309,605,889 | I_kwDODunzps6JqcoB | 6,913 | Column order is nondeterministic when loading from JSON | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,716 | 1970-01-01T00:00:00.000001 | MEMBER | null | As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects.
For example, when loading a JSON files with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],
the resulting dataset may have column... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6913/timeline | null | completed | null | null | false | 0 |
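A minimal sketch of a workaround for the nondeterministic column order reported above: declaring `features` explicitly pins both the dtypes and the column order, instead of relying on what pyarrow infers from the JSON objects. The schema below matches the example keys in the issue:

```python
from datasets import Features, Value, load_dataset

features = Features({
    "ID": Value("string"),
    "Language": Value("string"),
    "Topic": Value("string"),
})

# With explicit features, the columns come out as ID, Language, Topic
# on every load, regardless of key order in the JSON file.
ds = load_dataset("json", data_files="data.json", features=features)
```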
https://api.github.com/repos/huggingface/datasets/issues/6912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6912/comments | https://api.github.com/repos/huggingface/datasets/issues/6912/events | https://github.com/huggingface/datasets/issues/6912 | 2,309,365,961 | I_kwDODunzps6JpiDJ | 6,912 | Add MedImg for streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4",
"events_url": "https://api.github.com/users/lhallee/events{/privacy}",
"followers_url": "https://api.github.com/users/lhallee/followers",
"following_url": "https://api.github.com/users/lhallee/following{/other_user}",
"gists_url": "https:... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [
"@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?",
"Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamab... | 1970-01-01T00:00:00.000001 | 1,725 | null | NONE | null | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your con... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6912/timeline | null | null | null | null | false | null |
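A sketch of the maintainer's suggestion from the comments above: build the dataset locally, push it to the Hub (which stores it in Parquet form), then stream it. The paths and repo id are placeholders, and `imagefolder` is one plausible loader for an image dataset like MedImg:

```python
from datasets import load_dataset

# Build a Dataset locally from a directory of images...
ds = load_dataset("imagefolder", data_dir="path/to/medimg")

# ...push it to the Hub in a streamable (Parquet) layout...
ds.push_to_hub("my-user/medimg")  # hypothetical repo id

# ...and stream it back without downloading everything up front.
streamed = load_dataset("my-user/medimg", streaming=True)
```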
https://api.github.com/repos/huggingface/datasets/issues/6908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6908/comments | https://api.github.com/repos/huggingface/datasets/issues/6908/events | https://github.com/huggingface/datasets/issues/6908 | 2,304,958,116 | I_kwDODunzps6JYt6k | 6,908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | {
"avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4",
"events_url": "https://api.github.com/users/guch8017/events{/privacy}",
"followers_url": "https://api.github.com/users/guch8017/followers",
"following_url": "https://api.github.com/users/guch8017/following{/other_user}",
"gists_url": "htt... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at makin... | 1970-01-01T00:00:00.000001 | 1,716 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When updating the datasets library to version 2.16+ (I tested it on 2.16, 2.19.0 and 2.19.1), using the following code to load the stas/c4-en-10k dataset
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
it then raises a UnicodeDecodeError like
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/38173059?v=4",
"events_url": "https://api.github.com/users/guch8017/events{/privacy}",
"followers_url": "https://api.github.com/users/guch8017/followers",
"following_url": "https://api.github.com/users/guch8017/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6908/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6907/comments | https://api.github.com/repos/huggingface/datasets/issues/6907/events | https://github.com/huggingface/datasets/issues/6907 | 2,303,855,833 | I_kwDODunzps6JUgzZ | 6,907 | Support the deserialization of json lines files comprised of lists | {
"avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4",
"events_url": "https://api.github.com/users/umarbutler/events{/privacy}",
"followers_url": "https://api.github.com/users/umarbutler/followers",
"following_url": "https://api.github.com/users/umarbutler/following{/other_user}",
"gists_url":... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new re... | 1970-01-01T00:00:00.000001 | 1,716 | null | NONE | null | ### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a v... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6907/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6907/timeline | null | null | null | null | false | null |
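Until lines-of-arrays are supported natively, a minimal sketch of reading such a file by zipping each array with the column names before building the dataset — the column names and file name below are invented for illustration:

```python
import json
from datasets import Dataset

columns = ["version_id", "type", "jurisdiction", "text"]  # hypothetical schema

with open("corpus.jsonl", encoding="utf-8") as f:
    # Each line is a JSON array; zip it with the column names to get
    # the dict-per-line shape that Dataset.from_list expects.
    records = [dict(zip(columns, json.loads(line))) for line in f]

ds = Dataset.from_list(records)
```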
https://api.github.com/repos/huggingface/datasets/issues/6906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6906/comments | https://api.github.com/repos/huggingface/datasets/issues/6906/events | https://github.com/huggingface/datasets/issues/6906 | 2,303,679,119 | I_kwDODunzps6JT1qP | 6,906 | irc_disentangle - Issue with splitting data | {
"avatar_url": "https://avatars.githubusercontent.com/u/114260604?v=4",
"events_url": "https://api.github.com/users/eor51355/events{/privacy}",
"followers_url": "https://api.github.com/users/eor51355/followers",
"following_url": "https://api.github.com/users/eor51355/following{/other_user}",
"gists_url": "ht... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thank you I will try this out!\r\n\r\nOn Tue, Jun 11, 2024 at 3:55 AM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I add a \"streaming=True\" after the name of the dataset, and it\r\n> works.....hope it can help you\r\n>\r\n> And if you install the version datasets==2.15.0, this bug will not happen.\r\n> I don't kn... | 1970-01-01T00:00:00.000001 | 1,721 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I am trying to access your database through Python using `datasets.load_dataset("irc_disentangle")` and I am getting this error message:
ValueError: Instruction "train" corresponds to no data!
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds
#... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6906/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6905/comments | https://api.github.com/repos/huggingface/datasets/issues/6905/events | https://github.com/huggingface/datasets/issues/6905 | 2,303,098,587 | I_kwDODunzps6JRn7b | 6,905 | Extraction protocol for arrow files is not defined | {
"avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4",
"events_url": "https://api.github.com/users/radulescupetru/events{/privacy}",
"followers_url": "https://api.github.com/users/radulescupetru/followers",
"following_url": "https://api.github.com/users/radulescupetru/following{/other_user}",
... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,715 | null | NONE | null | ### Describe the bug
Passing files with the `.arrow` extension into the `data_files` argument is very slow, at least when `streaming=True` (a sketch of this setup follows this row).
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_ut... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6905/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6905/timeline | null | null | null | null | false | null |
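A minimal sketch of the setup described above; naming the `arrow` builder explicitly avoids any guessing from the file extension. The glob is a placeholder:

```python
from datasets import load_dataset

# Stream local .arrow shards through the dedicated arrow builder.
ds = load_dataset("arrow", data_files="shards/*.arrow", streaming=True)

# Peek at a few streamed examples.
for example in ds["train"].take(3):
    print(example)
```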
https://api.github.com/repos/huggingface/datasets/issues/6903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6903/comments | https://api.github.com/repos/huggingface/datasets/issues/6903/events | https://github.com/huggingface/datasets/issues/6903 | 2,300,436,053 | I_kwDODunzps6JHd5V | 6,903 | Add the option of saving in parquet instead of arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4",
"events_url": "https://api.github.com/users/arita37/events{/privacy}",
"followers_url": "https://api.github.com/users/arita37/followers",
"following_url": "https://api.github.com/users/arita37/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for.\r\n\r\nLet me know if I'm wrong ",
"No, it does not save the metadata json.\r\n\r\nWe have to recode all meta json load/save\r\nwith another cus... | 1970-01-01T00:00:00.000001 | 1,718 | null | NONE | null | ### Feature request
In `dataset.save_to_disk('/path/to/save/dataset')`,
add the option to save in Parquet format,
`dataset.save_to_disk('/path/to/save/dataset', format="parquet")`,
because Arrow is not used for production big data (only Parquet is).
### Motivation
because arrow is not used for Production Big... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6903/timeline | null | null | null | null | false | null |
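For reference, a minimal sketch of what already exists: `to_parquet`/`from_parquet` round-trip the data, though — as noted in the comments — they do not write the metadata JSON that `save_to_disk` produces, which is the part this request is about:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# Writes a single Parquet file (data only, no dataset_info/state JSON).
ds.to_parquet("dataset.parquet")

# Reloading goes through the Parquet reader rather than load_from_disk.
reloaded = Dataset.from_parquet("dataset.parquet")
```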
https://api.github.com/repos/huggingface/datasets/issues/6901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6901/comments | https://api.github.com/repos/huggingface/datasets/issues/6901/events | https://github.com/huggingface/datasets/issues/6901 | 2,300,167,465 | I_kwDODunzps6JGcUp | 6,901 | HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,715 | 1970-01-01T00:00:00.000001 | MEMBER | null | CLI convert_to_parquet cannot create "script" branch on 3rd party repos.
It can only create it on repos where the user executing the script has write access.
Otherwise, a 403 Forbidden HTTPError is raised:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/ut... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6901/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6901/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6900/comments | https://api.github.com/repos/huggingface/datasets/issues/6900/events | https://github.com/huggingface/datasets/issues/6900 | 2,298,489,733 | I_kwDODunzps6JACuF | 6,900 | [WebDataset] KeyError with user-defined `Features` when a field is missing in an example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"@lhoestq How difficult of fix is this?",
"It shouldn't be difficult, I think it's just a matter of adding the missing fields from `self.config.features` in `example` here: before it iterates on image_field_names and audio_field_names. A missing field should have a value set to None\r\n\r\nhttps://github.com/hugg... | 1970-01-01T00:00:00.000001 | 1,719 | 1970-01-01T00:00:00.000001 | MEMBER | null | reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1
```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
example[field_name] = {"path": example["_... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6900/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6900/timeline | null | completed | null | null | false | 0 |
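A pure-Python illustration of the fix sketched in the comments above — fill every declared feature into the example, with `None` for the fields a given sample is missing, before the image/audio fields are decoded. This mirrors the proposed patch; it is not the actual `webdataset.py` code:

```python
def fill_missing_fields(example: dict, feature_names: list) -> dict:
    # A missing field becomes an explicit None instead of a KeyError later.
    for name in feature_names:
        example.setdefault(name, None)
    return example

example = {"__key__": "0001", "jpg": b"..."}
print(fill_missing_fields(example, ["jpg", "txt"]))
# {'__key__': '0001', 'jpg': b'...', 'txt': None}
```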
https://api.github.com/repos/huggingface/datasets/issues/6899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6899/comments | https://api.github.com/repos/huggingface/datasets/issues/6899/events | https://github.com/huggingface/datasets/issues/6899 | 2,298,059,597 | I_kwDODunzps6I-ZtN | 6,899 | List of dictionary features get standardized | {
"avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4",
"events_url": "https://api.github.com/users/sohamparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/sohamparikh/followers",
"following_url": "https://api.github.com/users/sohamparikh/following{/other_user}",
"gists_u... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,715 | null | NONE | null | ### Describe the bug
Hi, I'm trying to create an HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6899/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6899/timeline | null | null | null | null | false | null |
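A minimal reproduction of the behaviour described above: Arrow needs a single struct type per column, so the keys of the inner dictionaries get unioned and the missing ones are filled with `None`:

```python
from datasets import Dataset

data = [{"events": [{"a": 1}, {"b": 2}]}]
ds = Dataset.from_list(data)

# The inner dicts are "standardized" to a common schema:
print(ds[0]["events"])  # [{'a': 1, 'b': None}, {'a': None, 'b': 2}]
```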
https://api.github.com/repos/huggingface/datasets/issues/6897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6897/comments | https://api.github.com/repos/huggingface/datasets/issues/6897/events | https://github.com/huggingface/datasets/issues/6897 | 2,293,428,243 | I_kwDODunzps6IsvAT | 6,897 | datasets template guide :: issue in documentation YAML | {
"avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4",
"events_url": "https://api.github.com/users/bghira/events{/privacy}",
"followers_url": "https://api.github.com/users/bghira/followers",
"following_url": "https://api.github.com/users/bghira/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML erro... | 1970-01-01T00:00:00.000001 | 1,715 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6897/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/6896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6896/comments | https://api.github.com/repos/huggingface/datasets/issues/6896/events | https://github.com/huggingface/datasets/issues/6896 | 2,293,176,061 | I_kwDODunzps6Irxb9 | 6,896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"... | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,715 | null | NONE | null | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipyth... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6896/timeline | null | null | null | null | false | null |
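If the dataset files were overwritten on the Hub, the locally cached split sizes no longer match; a hedged workaround for the report above (not a fix for the regression itself) is to force a fresh download so the metadata is rebuilt:

```python
from datasets import load_dataset

ds = load_dataset(
    "pysentimiento/spanish-tweets-small",
    download_mode="force_redownload",  # discard the stale cache and re-fetch
)
```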
https://api.github.com/repos/huggingface/datasets/issues/6894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6894/comments | https://api.github.com/repos/huggingface/datasets/issues/6894/events | https://github.com/huggingface/datasets/issues/6894 | 2,292,840,226 | I_kwDODunzps6Iqfci | 6,894 | Better document defaults of to_json | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 1970-01-01T00:00:00.000001 | 1,715 | 1970-01-01T00:00:00.000001 | MEMBER | null | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6894/timeline | null | completed | null | null | false | 0 |
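A short sketch of the defaults being documented above — `to_json` writes JSON Lines unless told otherwise, and the extra keyword arguments are forwarded to pandas' `to_json`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# Default: JSON Lines, one object per line.
ds.to_json("data.jsonl")

# A single JSON document instead; orient="records" is pandas' layout
# for a list of row objects.
ds.to_json("data.json", lines=False, orient="records")
```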