id int64 1.14B 2.23B | labels_url stringlengths 75 75 | body stringlengths 2 33.9k ⌀ | updated_at stringlengths 20 20 | number int64 3.76k 6.79k | milestone dict | repository_url stringclasses 1 value | draft bool 2 classes | labels listlengths 0 4 | created_at stringlengths 20 20 | comments_url stringlengths 70 70 | assignee dict | timeline_url stringlengths 70 70 | title stringlengths 1 290 | events_url stringlengths 68 68 | active_lock_reason null | user dict | assignees listlengths 0 3 | performed_via_github_app null | state_reason stringclasses 3 values | author_association stringclasses 3 values | closed_at stringlengths 20 20 ⌀ | pull_request dict | node_id stringlengths 18 19 | comments listlengths 0 30 | reactions dict | state stringclasses 2 values | locked bool 1 class | url stringlengths 61 61 | html_url stringlengths 49 51 | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,956,053,294 | https://api.github.com/repos/huggingface/datasets/issues/6330/labels{/name} | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps ... | 2023-11-07T10:02:14Z | 6,330 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-22T20:57:10Z | https://api.github.com/repos/huggingface/datasets/issues/6330/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/6330/timeline | Latest fsspec==2023.10.0 issue with streaming datasets | https://api.github.com/repos/huggingface/datasets/issues/6330/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gi... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | CONTRIBUTOR | 2023-10-23T09:17:56Z | null | I_kwDODunzps50lwEu | [
"I also encountered a similar error below.\r\nAppreciate the team could shed some light on this issue.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n[/home/ubuntu/work/EveryDream2trainer/pre... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6330/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6330 | https://github.com/huggingface/datasets/issues/6330 | false |
1,955,858,020 | https://api.github.com/repos/huggingface/datasets/issues/6329/labels{/name} | Text-to-speech networks first convert the given text into an intermediate representation
 | 2023-10-23T09:22:58Z | 6,329 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-22T11:07:46Z | https://api.github.com/repos/huggingface/datasets/issues/6329/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6329/timeline | Text-to-speech networks first convert the given text into an intermediate representation | https://api.github.com/repos/huggingface/datasets/issues/6329/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url... | [] | null | completed | NONE | 2023-10-23T09:22:58Z | null | I_kwDODunzps50lAZk | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6329/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6329 | https://github.com/huggingface/datasets/issues/6329 | false |
1,955,857,904 | https://api.github.com/repos/huggingface/datasets/issues/6328/labels{/name} | null | 2023-10-23T09:22:38Z | 6,328 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-22T11:07:21Z | https://api.github.com/repos/huggingface/datasets/issues/6328/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6328/timeline | Text-to-speech networks first convert the given text into an intermediate representation | https://api.github.com/repos/huggingface/datasets/issues/6328/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url... | [] | null | completed | NONE | 2023-10-23T09:22:38Z | null | I_kwDODunzps50lAXw | [
  "Text-to-speech networks first convert the given text into an intermediate representation"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6328/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6328 | https://github.com/huggingface/datasets/issues/6328 | false |
1,955,470,755 | https://api.github.com/repos/huggingface/datasets/issues/6327/labels{/name} | ### Describe the bug
Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but `FileNotFoundError` occurs.
### Steps to reproduce the bug
I've downloaded the dataset and saved it to the cache dir in advance. My hope is loadi... | 2023-10-23T18:50:07Z | 6,327 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-21T12:27:03Z | https://api.github.com/repos/huggingface/datasets/issues/6327/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6327/timeline | FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)` | https://api.github.com/repos/huggingface/datasets/issues/6327/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "htt... | [] | null | completed | NONE | 2023-10-23T18:50:07Z | null | I_kwDODunzps50jh2j | [
"You can clone the `togethercomputer/RedPajama-Data-1T-Sample` repo and load the dataset with `load_dataset(\"path/to/cloned_repo\")` to use it offline.",
"@mariosasko Thank you for your kind reply! I'll try it as a workaround.\r\nDoes that mean that currently it's not supported to simply load with a short name?"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6327/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6327 | https://github.com/huggingface/datasets/issues/6327 | false |
1,955,420,536 | https://api.github.com/repos/huggingface/datasets/issues/6326/labels{/name} | null | 2023-10-23T14:56:20Z | 6,326 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-21T10:07:48Z | https://api.github.com/repos/huggingface/datasets/issues/6326/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6326/timeline | Create battery_analysis.py | https://api.github.com/repos/huggingface/datasets/issues/6326/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https... | [] | null | null | NONE | 2023-10-23T14:56:20Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6326.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6326",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6326.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6326"
} | PR_kwDODunzps5dcSRa | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6326/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6326 | https://github.com/huggingface/datasets/pull/6326 | true |
1,955,420,178 | https://api.github.com/repos/huggingface/datasets/issues/6325/labels{/name} | null | 2023-10-23T14:55:58Z | 6,325 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-21T10:06:37Z | https://api.github.com/repos/huggingface/datasets/issues/6325/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6325/timeline | Create battery_analysis.py | https://api.github.com/repos/huggingface/datasets/issues/6325/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https... | [] | null | null | NONE | 2023-10-23T14:55:58Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6325",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6325"
} | PR_kwDODunzps5dcSM3 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6325/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6325 | https://github.com/huggingface/datasets/pull/6325 | true |
1,955,126,687 | https://api.github.com/repos/huggingface/datasets/issues/6324/labels{/name} | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowI... | 2023-10-23T20:52:57Z | 6,324 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-20T23:20:58Z | https://api.github.com/repos/huggingface/datasets/issues/6324/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6324/timeline | Conversion to Arrow fails due to wrong type heuristic | https://api.github.com/repos/huggingface/datasets/issues/6324/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2862336?v=4",
"events_url": "https://api.github.com/users/jphme/events{/privacy}",
"followers_url": "https://api.github.com/users/jphme/followers",
"following_url": "https://api.github.com/users/jphme/following{/other_user}",
"gists_url": "https://api.g... | [] | null | completed | NONE | 2023-10-23T20:52:57Z | null | I_kwDODunzps50iN2f | [
"Unlike Pandas, Arrow is strict with types, so converting the problematic strings to ints (or ints to strings) to ensure all the values have the same type is the only fix. \r\n\r\nJSON support has been requested in Arrow [here](https://github.com/apache/arrow/issues/32538), but I don't expect this to be implemented... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6324/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6324 | https://github.com/huggingface/datasets/issues/6324 | false |
1,954,245,980 | https://api.github.com/repos/huggingface/datasets/issues/6323/labels{/name} | ### Describe the bug
Since updating to >2.14 we have very slow access to our parquet files on GCS when loading a dataset (>30 min vs 3s). Our GCS bucket has many objects and resolving globs is very slow. I could track down the problem to this change:
https://github.com/huggingface/datasets/blame/bade7af74437347a76083... | 2023-10-20T12:59:55Z | 6,323 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-20T12:59:55Z | https://api.github.com/repos/huggingface/datasets/issues/6323/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6323/timeline | Loading dataset from large GCS bucket very slow since 2.14 | https://api.github.com/repos/huggingface/datasets/issues/6323/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/6209990?v=4",
"events_url": "https://api.github.com/users/jbcdnr/events{/privacy}",
"followers_url": "https://api.github.com/users/jbcdnr/followers",
"following_url": "https://api.github.com/users/jbcdnr/following{/other_user}",
"gists_url": "https://ap... | [] | null | null | NONE | null | null | I_kwDODunzps50e21c | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6323/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6323 | https://github.com/huggingface/datasets/issues/6323 | false |
1,952,947,461 | https://api.github.com/repos/huggingface/datasets/issues/6322/labels{/name} | With this pr https://github.com/huggingface/datasets/pull/6309, it is formatting the entire base path into regex, which results in the undesired formatting error `doesn't match the pattern` because of the line in `glob_pattern_to_regex`: `.replace("//", "/")`:
- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`... | 2023-10-23T14:40:45Z | 6,322 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-19T19:45:10Z | https://api.github.com/repos/huggingface/datasets/issues/6322/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6322/timeline | Fix regex `get_data_files` formatting for base paths | https://api.github.com/repos/huggingface/datasets/issues/6322/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gi... | [] | null | null | CONTRIBUTOR | 2023-10-23T14:31:21Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6322",
"merged_at": "2023-10-23T14:31:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5dT5vG | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> The reason why I used the the glob_pattern_to_regex in the entire pattern is because otherwise I got an error for Windows local paths: a base_path like 'C:\\\\Users\\\\runneradmin... made the function string_to_dict raise re.error:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6322/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6322 | https://github.com/huggingface/datasets/pull/6322 | true |
1,952,643,483 | https://api.github.com/repos/huggingface/datasets/issues/6321/labels{/name} | null | 2023-10-19T17:18:00Z | 6,321 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-19T16:24:35Z | https://api.github.com/repos/huggingface/datasets/issues/6321/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6321/timeline | Fix typos | https://api.github.com/repos/huggingface/datasets/issues/6321/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3097956?v=4",
"events_url": "https://api.github.com/users/python273/events{/privacy}",
"followers_url": "https://api.github.com/users/python273/followers",
"following_url": "https://api.github.com/users/python273/following{/other_user}",
"gists_url": "h... | [] | null | null | CONTRIBUTOR | 2023-10-19T17:07:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6321",
"merged_at": "2023-10-19T17:07:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5dS3Mc | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6321/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6321 | https://github.com/huggingface/datasets/pull/6321 | true |
1,952,618,316 | https://api.github.com/repos/huggingface/datasets/issues/6320/labels{/name} | ### Describe the bug
According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) is should be possible to run the following command:
`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`
to load the train and test sets from the dataset.
However ex... | 2023-11-30T16:21:15Z | 6,320 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-19T16:09:22Z | https://api.github.com/repos/huggingface/datasets/issues/6320/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6320/timeline | Dataset slice splits can't load training and validation at the same time | https://api.github.com/repos/huggingface/datasets/issues/6320/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/32488097?v=4",
"events_url": "https://api.github.com/users/timlac/events{/privacy}",
"followers_url": "https://api.github.com/users/timlac/followers",
"following_url": "https://api.github.com/users/timlac/following{/other_user}",
"gists_url": "https://a... | [] | null | completed | NONE | 2023-11-30T16:21:15Z | null | I_kwDODunzps50YpdM | [
"The expression \"train+test\" concatenates the splits.\r\n\r\nThe individual splits as separate datasets can be obtained as follows:\r\n```python\r\ntrain_ds, test_ds = load_dataset(\"<dataset_name>\", split=[\"train\", \"test\"])\r\ntrain_10pct_ds, test_10pct_ds = load_dataset(\"<dataset_name>\", split=[\"train[:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6320/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6320 | https://github.com/huggingface/datasets/issues/6320 | false |
1,952,101,717 | https://api.github.com/repos/huggingface/datasets/issues/6319/labels{/name} | ### Describe the bug
Regardless of how many cores I used, I have 16 or 32 threads, map slows down to a crawl at around 80% done, lingers maybe until 97% extremely slowly and NEVER finishes the job. It just hangs.
After watching this for 27 hours I control-C out of it. Until the end one process appears to be doing s... | 2023-11-30T03:27:26Z | 6,319 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-19T12:19:33Z | https://api.github.com/repos/huggingface/datasets/issues/6319/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6319/timeline | Datasets.map is severely broken | https://api.github.com/repos/huggingface/datasets/issues/6319/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https:/... | [] | null | null | NONE | null | null | I_kwDODunzps50WrVV | [
"Hi! Instead of processing a single example at a time, you should use the batched `map` for the best performance (with `num_proc=1`) - the fast tokenizers can process a batch's samples in parallel in that scenario.\r\n\r\nE.g., the following code in Colab takes an hour to complete:\r\n```python\r\n# !pip install da... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6319/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6319 | https://github.com/huggingface/datasets/issues/6319 | false |
1,952,100,706 | https://api.github.com/repos/huggingface/datasets/issues/6318/labels{/name} | Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.
This is useful to get deterministic hashes of tokenizers that use a trie based on python sets.
reported in https://github.com/huggingface/datasets/issues/3847 | 2023-10-19T16:27:20Z | 6,318 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-19T12:19:13Z | https://api.github.com/repos/huggingface/datasets/issues/6318/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6318/timeline | Deterministic set hash | https://api.github.com/repos/huggingface/datasets/issues/6318/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-10-19T16:16:31Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6318",
"merged_at": "2023-10-19T16:16:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5dRC9V | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6318/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6318 | https://github.com/huggingface/datasets/pull/6318 | true |
1,951,965,668 | https://api.github.com/repos/huggingface/datasets/issues/6317/labels{/name} | ### Describe the bug
loading the dataset using load_dataset("sentiment140") returns the following error
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
### Steps to reproduce the bug
Run the following code (version should not matter).
```
from data... | 2023-10-19T13:04:56Z | 6,317 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-19T11:25:21Z | https://api.github.com/repos/huggingface/datasets/issues/6317/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/6317/timeline | sentiment140 dataset unavailable | https://api.github.com/repos/huggingface/datasets/issues/6317/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/52670382?v=4",
"events_url": "https://api.github.com/users/AndreasKarasenko/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreasKarasenko/followers",
"following_url": "https://api.github.com/users/AndreasKarasenko/following{/other_use... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2023-10-19T13:04:56Z | null | I_kwDODunzps50WKHk | [
"Thanks for reporting. We are investigating the issue.",
"We have opened an issue in the corresponding Hub dataset: https://huggingface.co/datasets/sentiment140/discussions/3\r\n\r\nLet's continue the discussion there."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6317/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6317 | https://github.com/huggingface/datasets/issues/6317 | false |
1,951,819,869 | https://api.github.com/repos/huggingface/datasets/issues/6316/labels{/name} | Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- correspon... | 2023-10-20T06:23:21Z | 6,316 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-19T10:21:34Z | https://api.github.com/repos/huggingface/datasets/issues/6316/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6316/timeline | Fix loading Hub datasets with CSV metadata file | https://api.github.com/repos/huggingface/datasets/issues/6316/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-10-20T06:14:09Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6316",
"merged_at": "2023-10-20T06:14:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5dQGpg | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6316/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6316 | https://github.com/huggingface/datasets/pull/6316 | true |
1,951,800,819 | https://api.github.com/repos/huggingface/datasets/issues/6315/labels{/name} | When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error:
```
E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
pyarrow/error.pxi:100: ArrowInvalid
```
See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1 | 2023-10-20T06:14:10Z | 6,315 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2023-10-19T10:11:29Z | https://api.github.com/repos/huggingface/datasets/issues/6315/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/6315/timeline | Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0 | https://api.github.com/repos/huggingface/datasets/issues/6315/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2023-10-20T06:14:10Z | null | I_kwDODunzps50Vh3z | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6315/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6315 | https://github.com/huggingface/datasets/issues/6315 | false |
1,951,684,763 | https://api.github.com/repos/huggingface/datasets/issues/6314/labels{/name} | This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created. | 2023-10-19T09:20:06Z | 6,314 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-19T09:12:39Z | https://api.github.com/repos/huggingface/datasets/issues/6314/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6314/timeline | Support creating new branch in push_to_hub | https://api.github.com/repos/huggingface/datasets/issues/6314/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.gith... | [] | null | null | NONE | 2023-10-19T09:19:48Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6314",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6314"
} | PR_kwDODunzps5dPo25 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6314/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6314 | https://github.com/huggingface/datasets/pull/6314 | true |
1,951,527,712 | https://api.github.com/repos/huggingface/datasets/issues/6313/labels{/name} | Currently, the commit message keeps on adding:
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00000-of-00002) (part 00001-of-00002)`
Introduced in https://github.com/huggingface/datasets/pull/6269
This PR fixes this issue to have
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset... | 2023-10-20T14:06:13Z | 6,313 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-19T07:53:56Z | https://api.github.com/repos/huggingface/datasets/issues/6313/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6313/timeline | Fix commit message formatting in multi-commit uploads | https://api.github.com/repos/huggingface/datasets/issues/6313/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_u... | [] | null | null | MEMBER | 2023-10-20T13:57:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6313",
"merged_at": "2023-10-20T13:57:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5dPGmL | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6313/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6313 | https://github.com/huggingface/datasets/pull/6313 | true |
1,950,128,416 | https://api.github.com/repos/huggingface/datasets/issues/6312/labels{/name} | In the docs of about_arrow.md, in the example code below

The variable name 'time' was being used in a way that could potentially lead to a namespace conflict with Python's built-in 'time' module. It is not a good conven... | 2023-10-19T16:31:59Z | 6,312 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-18T16:10:59Z | https://api.github.com/repos/huggingface/datasets/issues/6312/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6312/timeline | docs: resolving namespace conflict, refactored variable | https://api.github.com/repos/huggingface/datasets/issues/6312/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "htt... | [] | null | null | CONTRIBUTOR | 2023-10-19T16:23:07Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6312",
"merged_at": "2023-10-19T16:23:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5dKWDF | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6312/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6312 | https://github.com/huggingface/datasets/pull/6312 | true |
1,949,304,993 | https://api.github.com/repos/huggingface/datasets/issues/6311/labels{/name} | ### Describe the bug
I load a dataset from a local csv file which has 187383612 examples, then use `map` to generate new columns for testing.
here is my code :
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value
def add_new_path(example):
example["ais_bbox"] =... | 2024-02-06T19:24:20Z | 6,311 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-18T09:38:05Z | https://api.github.com/repos/huggingface/datasets/issues/6311/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6311/timeline | cast_column to Sequence with length=4 occur exception raise in datasets/table.py:2146 | https://api.github.com/repos/huggingface/datasets/issues/6311/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "htt... | [] | null | completed | NONE | 2024-02-06T19:24:20Z | null | I_kwDODunzps50MAih | [
"Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in https://github.com/huggingface/datasets/pull/6283 (should be part of the next release).",
"> Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in #6283 (should be p... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6311/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6311 | https://github.com/huggingface/datasets/issues/6311 | false |
1,947,457,988 | https://api.github.com/repos/huggingface/datasets/issues/6310/labels{/name} | Proposition to fix #5806.
Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output.
There is a difference between arrow-based and folder-based datasets to return the file name:
- fo... | 2023-11-27T21:11:14Z | 6,310 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-17T13:36:57Z | https://api.github.com/repos/huggingface/datasets/issues/6310/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6310/timeline | Add return_file_name in load_dataset | https://api.github.com/repos/huggingface/datasets/issues/6310/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/40604584?v=4",
"events_url": "https://api.github.com/users/juliendenize/events{/privacy}",
"followers_url": "https://api.github.com/users/juliendenize/followers",
"following_url": "https://api.github.com/users/juliendenize/following{/other_user}",
"gist... | [] | null | null | NONE | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6310",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6310"
} | PR_kwDODunzps5dBPnY | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6310). All of your documentation changes will be reflected on that endpoint.",
"> Thanks for the change !\r\n> \r\n> Since `return` in python often refers to what is actually returned by the function (here `load_dataset`), I th... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6310/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6310 | https://github.com/huggingface/datasets/pull/6310 | true |
1,946,916,969 | https://api.github.com/repos/huggingface/datasets/issues/6309/labels{/name} | Before the fix, `get_data_patterns` wrongly inferred the split name for paths with the word "data" twice:
- For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`)
- The in... | 2023-10-18T14:01:52Z | 6,309 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-17T09:00:39Z | https://api.github.com/repos/huggingface/datasets/issues/6309/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6309/timeline | Fix get_data_patterns for directories with the word data twice | https://api.github.com/repos/huggingface/datasets/issues/6309/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-10-18T13:50:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6309",
"merged_at": "2023-10-18T13:50:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5c_YcX | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6309/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6309 | https://github.com/huggingface/datasets/pull/6309 | true |
1,946,810,625 | https://api.github.com/repos/huggingface/datasets/issues/6308/labels{/name} | ### Describe the bug
just run import:
`from datasets import load_dataset`
and then:
```
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow... | 2023-10-25T17:09:22Z | 6,308 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-17T08:08:54Z | https://api.github.com/repos/huggingface/datasets/issues/6308/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6308/timeline | module 'resource' has no attribute 'error' | https://api.github.com/repos/huggingface/datasets/issues/6308/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/48009681?v=4",
"events_url": "https://api.github.com/users/NeoWang9999/events{/privacy}",
"followers_url": "https://api.github.com/users/NeoWang9999/followers",
"following_url": "https://api.github.com/users/NeoWang9999/following{/other_user}",
"gists_u... | [] | null | completed | NONE | 2023-10-25T17:09:22Z | null | I_kwDODunzps50CfkB | [
"This (Windows) issue was fixed in `fsspec` in https://github.com/fsspec/filesystem_spec/pull/1275. So, to avoid the error, update the `fsspec` installation with `pip install -U fsspec`.",
"> This (Windows) issue was fixed in `fsspec` in [fsspec/filesystem_spec#1275](https://github.com/fsspec/filesystem_spec/pul... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6308/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6308 | https://github.com/huggingface/datasets/issues/6308 | false |
1,946,414,808 | https://api.github.com/repos/huggingface/datasets/issues/6307/labels{/name} | null | 2023-10-17T12:59:26Z | 6,307 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-17T02:28:50Z | https://api.github.com/repos/huggingface/datasets/issues/6307/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6307/timeline | Fix typo in code example in docs | https://api.github.com/repos/huggingface/datasets/issues/6307/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url":... | [] | null | null | CONTRIBUTOR | 2023-10-17T06:36:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6307",
"merged_at": "2023-10-17T06:36:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5c9s0j | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6307/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6307 | https://github.com/huggingface/datasets/pull/6307 | true |
1,946,363,452 | https://api.github.com/repos/huggingface/datasets/issues/6306/labels{/name} | ### Describe the bug
I ran a package with pyinstaller and got the following error:
### Steps to reproduce the bug
```
...
File "datasets\__init__.py", line 52, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_an... | 2023-11-02T07:24:51Z | 6,306 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-17T01:41:51Z | https://api.github.com/repos/huggingface/datasets/issues/6306/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6306/timeline | pyinstaller : OSError: could not get source code | https://api.github.com/repos/huggingface/datasets/issues/6306/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/57702070?v=4",
"events_url": "https://api.github.com/users/dusk877647949/events{/privacy}",
"followers_url": "https://api.github.com/users/dusk877647949/followers",
"following_url": "https://api.github.com/users/dusk877647949/following{/other_user}",
"g... | [] | null | completed | NONE | 2023-10-18T14:03:42Z | null | I_kwDODunzps50AyY8 | [
"more information:\r\n``` \r\nFile \"text2vec\\__init__.py\", line 8, in <module>\r\nFile \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\nFile \"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6306/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6306 | https://github.com/huggingface/datasets/issues/6306 | false |
1,946,010,912 | https://api.github.com/repos/huggingface/datasets/issues/6305/labels{/name} | ### Describe the bug
I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`.
### Steps to reproduce the bug
[Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg)
```python
D... | 2023-10-18T13:50:36Z | 6,305 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-16T20:11:27Z | https://api.github.com/repos/huggingface/datasets/issues/6305/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/6305/timeline | Cannot load dataset with `2.14.5`: `FileNotFound` error | https://api.github.com/repos/huggingface/datasets/issues/6305/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2023-10-18T13:50:36Z | null | I_kwDODunzps5z_cUg | [
"Thanks for reporting, @finiteautomata.\r\n\r\nWe are investigating it. ",
"There is a bug in `datasets`. You can see our proposed fix:\r\n- #6309 "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6305/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6305 | https://github.com/huggingface/datasets/issues/6305 | false |
1,945,913,521 | https://api.github.com/repos/huggingface/datasets/issues/6304/labels{/name} | Fixed typos in ReadMe and added punctuation marks
Tensorflow --> TensorFlow
| 2023-10-17T15:13:37Z | 6,304 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-16T19:10:39Z | https://api.github.com/repos/huggingface/datasets/issues/6304/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6304/timeline | Update README.md | https://api.github.com/repos/huggingface/datasets/issues/6304/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "htt... | [] | null | null | CONTRIBUTOR | 2023-10-17T15:04:52Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6304",
"merged_at": "2023-10-17T15:04:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5c7-4q | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6304/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6304 | https://github.com/huggingface/datasets/pull/6304 | true |
1,943,466,532 | https://api.github.com/repos/huggingface/datasets/issues/6303/labels{/name} | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion: what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71... | 2023-10-16T16:33:21Z | 6,303 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-14T18:31:03Z | https://api.github.com/repos/huggingface/datasets/issues/6303/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6303/timeline | Parquet uploads off-by-one naming scheme | https://api.github.com/repos/huggingface/datasets/issues/6303/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gi... | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps5z1vIk | [
"You can find the reasoning behind this naming scheme [here](https://github.com/huggingface/transformers/pull/16343#discussion_r931182168).\r\n\r\nThis point has been raised several times, so I'd be okay with starting with `00001-` (also to be consistent with the `transformers` sharding), but I'm not sure @lhoestq ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6303/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6303 | https://github.com/huggingface/datasets/issues/6303 | false |
1,942,096,078 | https://api.github.com/repos/huggingface/datasets/issues/6302/labels{/name} | ### Describe the bug
An example from [1] does not work when limiting shards with `max_shard_size`.
Try the following example with low `max_shard_size`, such as:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
The reason f... | 2023-10-17T06:52:12Z | 6,302 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-13T14:43:36Z | https://api.github.com/repos/huggingface/datasets/issues/6302/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6302/timeline | ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size` | https://api.github.com/repos/huggingface/datasets/issues/6302/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2855550?v=4",
"events_url": "https://api.github.com/users/Rassibassi/events{/privacy}",
"followers_url": "https://api.github.com/users/Rassibassi/followers",
"following_url": "https://api.github.com/users/Rassibassi/following{/other_user}",
"gists_url":... | [] | null | completed | NONE | 2023-10-17T06:52:11Z | null | I_kwDODunzps5zwgjO | [
"`writer._num_bytes` is updated every `writer_batch_size`-th call to the `write` method (default `writer_batch_size` is 1000 (examples)). You should be able to see the update by passing a smaller `writer_batch_size` to the `load_dataset_builder`.\r\n\r\nWe could improve this by supporting the string `writer_batch_s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6302/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6302 | https://github.com/huggingface/datasets/issues/6302 | false |
1,940,183,999 | https://api.github.com/repos/huggingface/datasets/issues/6301/labels{/name} | Removes the temporary pin introduced in #6264 | 2023-10-12T15:58:20Z | 6,301 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-12T14:58:07Z | https://api.github.com/repos/huggingface/datasets/issues/6301/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6301/timeline | Unpin `tensorflow` maximum version | https://api.github.com/repos/huggingface/datasets/issues/6301/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-10-12T15:49:54Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6301",
"merged_at": "2023-10-12T15:49:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5cpPVh | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6301/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6301 | https://github.com/huggingface/datasets/pull/6301 | true |
1,940,153,432 | https://api.github.com/repos/huggingface/datasets/issues/6300/labels{/name} | fix #6299
fix #6202 | 2023-10-12T16:37:55Z | 6,300 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-12T14:42:40Z | https://api.github.com/repos/huggingface/datasets/issues/6300/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6300/timeline | Unpin `jax` maximum version | https://api.github.com/repos/huggingface/datasets/issues/6300/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-10-12T16:28:57Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6300",
"merged_at": "2023-10-12T16:28:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5cpIoG | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6300/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6300 | https://github.com/huggingface/datasets/pull/6300 | true |
1,939,649,238 | https://api.github.com/repos/huggingface/datasets/issues/6299/labels{/name} | ### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome!
What is the rationale for such a lim... | 2023-10-12T16:28:59Z | 6,299 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-10-12T10:03:46Z | https://api.github.com/repos/huggingface/datasets/issues/6299/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6299/timeline | Support for newer versions of JAX | https://api.github.com/repos/huggingface/datasets/issues/6299/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25456859?v=4",
"events_url": "https://api.github.com/users/ddrous/events{/privacy}",
"followers_url": "https://api.github.com/users/ddrous/followers",
"following_url": "https://api.github.com/users/ddrous/following{/other_user}",
"gists_url": "https://a... | [] | null | completed | NONE | 2023-10-12T16:28:59Z | null | I_kwDODunzps5znLLW | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6299/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6299 | https://github.com/huggingface/datasets/issues/6299 | false |
1,938,797,389 | https://api.github.com/repos/huggingface/datasets/issues/6298/labels{/name} | Changes in the doc READMe:
* adds two new sections (to be aligned with `transformers` and `hfh`): "Previewing the documentation" and "Writing documentation examples"
* replaces the mentions of `transformers` with `datasets`
* fixes some dead links | 2023-10-12T12:47:15Z | 6,298 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-11T21:51:12Z | https://api.github.com/repos/huggingface/datasets/issues/6298/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6298/timeline | Doc readme improvements | https://api.github.com/repos/huggingface/datasets/issues/6298/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-10-12T12:38:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6298.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6298",
"merged_at": "2023-10-12T12:38:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6298.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5ckg6j | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6298/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6298 | https://github.com/huggingface/datasets/pull/6298 | true |
1,938,752,707 | https://api.github.com/repos/huggingface/datasets/issues/6297/labels{/name} | Fix #6291 | 2023-10-13T13:54:00Z | 6,297 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-11T21:14:59Z | https://api.github.com/repos/huggingface/datasets/issues/6297/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6297/timeline | Fix ArrayXD cast | https://api.github.com/repos/huggingface/datasets/issues/6297/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-10-13T13:45:30Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6297.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6297",
"merged_at": "2023-10-13T13:45:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6297.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5ckXBa | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6297/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6297 | https://github.com/huggingface/datasets/pull/6297 | true |
1,938,453,845 | https://api.github.com/repos/huggingface/datasets/issues/6296/labels{/name} | I didn't notice the path while reviewing the PR yesterday :( | 2023-10-17T13:25:33Z | 6,296 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-11T18:28:00Z | https://api.github.com/repos/huggingface/datasets/issues/6296/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6296/timeline | Move `exceptions.py` to `utils/exceptions.py` | https://api.github.com/repos/huggingface/datasets/issues/6296/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6296.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6296",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6296.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6296"
} | PR_kwDODunzps5cjUs1 | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6296/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6296 | https://github.com/huggingface/datasets/pull/6296 | true |
1,937,362,102 | https://api.github.com/repos/huggingface/datasets/issues/6295/labels{/name} | It was failing when there's a DatasetInfo with non-None info.features from the YAML (therefore containing columns that should be ignored)
Fix https://github.com/huggingface/datasets/issues/6293 | 2023-10-11T16:30:24Z | 6,295 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-11T10:01:01Z | https://api.github.com/repos/huggingface/datasets/issues/6295/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6295/timeline | Fix parquet columns argument in streaming mode | https://api.github.com/repos/huggingface/datasets/issues/6295/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-10-11T16:21:36Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6295.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6295",
"merged_at": "2023-10-11T16:21:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6295.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5cfiW8 | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6295/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6295 | https://github.com/huggingface/datasets/pull/6295 | true |
1,937,359,605 | https://api.github.com/repos/huggingface/datasets/issues/6294/labels{/name} | ### Describe the bug
I am encountering an `IndexError` when trying to access data from a DataLoader which wraps around a dataset I've loaded using the `datasets` library. The error suggests that the dataset size is `0`, but when I check the length and print the dataset, it's clear that it has `1166` entries.
### Step... | 2023-10-17T11:24:06Z | 6,294 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-11T09:59:38Z | https://api.github.com/repos/huggingface/datasets/issues/6294/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6294/timeline | IndexError: Invalid key is out of bounds for size 0 despite having a populated dataset | https://api.github.com/repos/huggingface/datasets/issues/6294/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/61892155?v=4",
"events_url": "https://api.github.com/users/ZYM66/events{/privacy}",
"followers_url": "https://api.github.com/users/ZYM66/followers",
"following_url": "https://api.github.com/users/ZYM66/following{/other_user}",
"gists_url": "https://api.... | [] | null | completed | NONE | 2023-10-17T11:24:06Z | null | I_kwDODunzps5zecL1 | [
"It looks to be the same issue as the one reported in https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0.\r\n\r\nCan you check the length of `train_dataset` before the `train_sampler = self._get_train_sampler()` (and after `_remove_unused_columns`) line?"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6294/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6294 | https://github.com/huggingface/datasets/issues/6294 | false |
1,937,238,047 | https://api.github.com/repos/huggingface/datasets/issues/6293/labels{/name} | Currently passing columns= to load_dataset in streaming mode fails
```
Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=... | 2023-10-11T16:21:38Z | 6,293 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2023-10-11T08:59:36Z | https://api.github.com/repos/huggingface/datasets/issues/6293/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6293/timeline | Choose columns to stream parquet data in streaming mode | https://api.github.com/repos/huggingface/datasets/issues/6293/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | completed | MEMBER | 2023-10-11T16:21:38Z | null | I_kwDODunzps5zd-gf | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6293/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6293 | https://github.com/huggingface/datasets/issues/6293 | false |
1,937,050,470 | https://api.github.com/repos/huggingface/datasets/issues/6292/labels{/name} | _FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
The datasets builder seems only support the unit8 data. How to load the float dtype data? | 2023-10-11T13:19:11Z | 6,292 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-11T07:27:16Z | https://api.github.com/repos/huggingface/datasets/issues/6292/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6292/timeline | how to load the image of dtype float32 or float64 | https://api.github.com/repos/huggingface/datasets/issues/6292/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/26437644?v=4",
"events_url": "https://api.github.com/users/wanglaofei/events{/privacy}",
"followers_url": "https://api.github.com/users/wanglaofei/followers",
"following_url": "https://api.github.com/users/wanglaofei/following{/other_user}",
"gists_url"... | [] | null | null | NONE | null | null | I_kwDODunzps5zdQtm | [
"Hi! Can you provide a code that reproduces the issue?\r\n\r\nAlso, which version of `datasets` are you using? You can check this by running `python -c \"import datasets; print(datasets.__version__)\"` inside the env. We added support for \"float images\" in `datasets 2.9`."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6292/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6292 | https://github.com/huggingface/datasets/issues/6292 | false |
1,936,129,871 | https://api.github.com/repos/huggingface/datasets/issues/6291/labels{/name} | ### Describe the bug
I am on a school project and the initial type for feature annotations are `Array2D(shape=(None, 4))`. I am trying to cast this type to a `float64` and pyarrow gives me this error :
```
Traceback (most recent call last):
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearnin... | 2023-10-13T13:45:31Z | 6,291 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-10T20:10:10Z | https://api.github.com/repos/huggingface/datasets/issues/6291/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6291/timeline | Casting type from Array2D int to Array2D float crashes | https://api.github.com/repos/huggingface/datasets/issues/6291/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22567306?v=4",
"events_url": "https://api.github.com/users/AlanBlanchet/events{/privacy}",
"followers_url": "https://api.github.com/users/AlanBlanchet/followers",
"following_url": "https://api.github.com/users/AlanBlanchet/following{/other_user}",
"gist... | [] | null | completed | NONE | 2023-10-13T13:45:31Z | null | I_kwDODunzps5zZv9P | [
"Thanks for reporting! I've opened a PR with a fix"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6291/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6291 | https://github.com/huggingface/datasets/issues/6291 | false |
1,935,629,679 | https://api.github.com/repos/huggingface/datasets/issues/6290/labels{/name} | ### Feature request
Have the possibility to do `ds.push_to_hub(..., append=True)`.
### Motivation
Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and
this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675... | 2023-10-13T16:05:26Z | 6,290 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-10-10T15:18:03Z | https://api.github.com/repos/huggingface/datasets/issues/6290/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6290/timeline | Incremental dataset (e.g. `.push_to_hub(..., append=True)`) | https://api.github.com/repos/huggingface/datasets/issues/6290/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https:... | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps5zX11v | [
"Yea I think waiting for #6269 would be best, or branching from it. For reference, this [PR](https://github.com/LAION-AI/Discord-Scrapers/pull/2) is progressing pretty well which will do similar using the hf hub for our LAION dataset bot https://github.com/LAION-AI/Discord-Scrapers/pull/2. "
] | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6290/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6290 | https://github.com/huggingface/datasets/issues/6290 | false |
1,935,628,506 | https://api.github.com/repos/huggingface/datasets/issues/6289/labels{/name} | testing https://github.com/huggingface/doc-builder/pull/426 | 2023-10-13T08:57:14Z | 6,289 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-10T15:17:29Z | https://api.github.com/repos/huggingface/datasets/issues/6289/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6289/timeline | testing doc-builder | https://api.github.com/repos/huggingface/datasets/issues/6289/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "htt... | [] | null | null | CONTRIBUTOR | 2023-10-13T08:56:48Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6289",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6289"
} | PR_kwDODunzps5cZiay | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6289/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6289 | https://github.com/huggingface/datasets/pull/6289 | true |
1,935,005,457 | https://api.github.com/repos/huggingface/datasets/issues/6288/labels{/name} | Currently type inference doesn't know what to do with a Pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way | 2023-10-20T18:23:05Z | 6,288 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-10-10T10:29:16Z | https://api.github.com/repos/huggingface/datasets/issues/6288/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6288/timeline | Dataset.from_pandas with a DataFrame of PIL.Images | https://api.github.com/repos/huggingface/datasets/issues/6288/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | null | I_kwDODunzps5zVdcR | [
"A duplicate of https://github.com/huggingface/datasets/issues/4796.\r\n\r\nWe could get this for free by implementing the `Image` feature as an extension type, as shown in [this](https://colab.research.google.com/drive/1Uzm_tXVpGTwbzleDConWcNjacwO1yxE4?usp=sharing) Colab (example with UUIDs).\r\n",
"+1 to this\r... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6288/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6288 | https://github.com/huggingface/datasets/issues/6288 | false |
1,932,758,192 | https://api.github.com/repos/huggingface/datasets/issues/6287/labels{/name} | ### Describe the bug
The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:
`
ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`
I have been trying to reproduce it in my code as:
`tokenizedData... | 2023-10-11T20:28:45Z | 6,287 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-09T10:27:30Z | https://api.github.com/repos/huggingface/datasets/issues/6287/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6287/timeline | map() not recognizing "text" | https://api.github.com/repos/huggingface/datasets/issues/6287/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5688359?v=4",
"events_url": "https://api.github.com/users/EngineerKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/EngineerKhan/followers",
"following_url": "https://api.github.com/users/EngineerKhan/following{/other_user}",
"gists... | [] | null | completed | NONE | 2023-10-11T20:28:45Z | null | I_kwDODunzps5zM4yw | [
"There is no \"text\" column in the `amazon_reviews_multi`, hence the `KeyError`. You can get the column names by running `dataset.column_names`."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6287/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6287 | https://github.com/huggingface/datasets/issues/6287 | false |
1,932,640,128 | https://api.github.com/repos/huggingface/datasets/issues/6286/labels{/name} | Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible.
See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b | 2023-10-10T07:13:22Z | 6,286 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-09T09:23:23Z | https://api.github.com/repos/huggingface/datasets/issues/6286/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6286/timeline | Create DefunctDatasetError | https://api.github.com/repos/huggingface/datasets/issues/6286/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-10-10T07:03:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6286",
"merged_at": "2023-10-10T07:03:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5cPKNK | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6286/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6286 | https://github.com/huggingface/datasets/pull/6286 | true |
1,932,306,325 | https://api.github.com/repos/huggingface/datasets/issues/6285/labels{/name} | ### Describe the bug
my dataset is in form : train- image /n -labels
and tried the code:
```
from datasets import load_dataset
data_files = {
"train": "/content/datasets/PotholeDetectionYOLOv8-1/train/",
"validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/",
"test": "/content/dat... | 2023-10-10T13:17:33Z | 6,285 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-09T04:56:26Z | https://api.github.com/repos/huggingface/datasets/issues/6285/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6285/timeline | TypeError: expected str, bytes or os.PathLike object, not dict | https://api.github.com/repos/huggingface/datasets/issues/6285/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url"... | [] | null | null | NONE | null | null | I_kwDODunzps5zLKeV | [
"You should be able to load the images by modifying the `load_dataset` call like this:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_dir=\"/content/datasets/PotholeDetectionYOLOv8-1\")\r\n```\r\n\r\nThe `imagefolder` builder expects the image files to be in `path/label/image_file` (e.g. .`.../train/d... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6285/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6285 | https://github.com/huggingface/datasets/issues/6285 | false |
1,929,551,712 | https://api.github.com/repos/huggingface/datasets/issues/6284/labels{/name} | ### Feature request
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short pass... | 2023-10-06T13:26:51Z | 6,284 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-10-06T06:58:03Z | https://api.github.com/repos/huggingface/datasets/issues/6284/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6284/timeline | Add Belebele multiple-choice machine reading comprehension (MRC) dataset | https://api.github.com/repos/huggingface/datasets/issues/6284/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "... | [] | null | completed | NONE | 2023-10-06T13:26:51Z | null | I_kwDODunzps5zAp9g | [
"This dataset is already available on the Hub: https://huggingface.co/datasets/facebook/belebele.\r\n"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6284/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6284 | https://github.com/huggingface/datasets/issues/6284 | false |
1,928,552,257 | https://api.github.com/repos/huggingface/datasets/issues/6283/labels{/name} | Fixes issues with casting/embedding PyArrow list arrays with null values. It also bumps the required PyArrow version to 12.0.0 (over 9 months old) to simplify the implementation.
Fix #6280, fix #6311, fix #6360
(Also fixes https://github.com/huggingface/datasets/issues/5430 to make Beam compatible with PyArrow>=... | 2024-02-06T19:30:25Z | 6,283 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-05T15:24:05Z | https://api.github.com/repos/huggingface/datasets/issues/6283/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6283/timeline | Fix array cast/embed with null values | https://api.github.com/repos/huggingface/datasets/issues/6283/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2024-02-06T19:24:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6283",
"merged_at": "2024-02-06T19:24:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5cBlKq | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6283/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6283 | https://github.com/huggingface/datasets/pull/6283 | true |
1,928,473,630 | https://api.github.com/repos/huggingface/datasets/issues/6282/labels{/name} | I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
| 2024-03-01T16:33:20Z | 6,282 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-05T14:43:08Z | https://api.github.com/repos/huggingface/datasets/issues/6282/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6282/timeline | Drop data_files duplicates | https://api.github.com/repos/huggingface/datasets/issues/6282/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6282",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6282"
} | PR_kwDODunzps5cBT5p | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6282/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6282 | https://github.com/huggingface/datasets/pull/6282 | true |
1,928,456,959 | https://api.github.com/repos/huggingface/datasets/issues/6281/labels{/name} | Improve documentation to clarify sharding behavior (#6270) | 2023-10-05T19:09:07Z | 6,281 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-10-05T14:34:49Z | https://api.github.com/repos/huggingface/datasets/issues/6281/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6281/timeline | Improve documentation of dataset.from_generator | https://api.github.com/repos/huggingface/datasets/issues/6281/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https:... | [] | null | null | CONTRIBUTOR | 2023-10-05T18:57:41Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6281.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6281",
"merged_at": "2023-10-05T18:57:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6281.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5cBQPd | [
"I have looked at the doc failures, and I do not think that my change caused the doc build failure, but I'm not 100% sure about that.\r\nI have high confidence that the integration test failures are not something I introduced:-)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<sum... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6281/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6281 | https://github.com/huggingface/datasets/pull/6281 | true |
1,928,215,278 | https://api.github.com/repos/huggingface/datasets/issues/6280/labels{/name} | ### Describe the bug
I have a dataset with an embedding column, when I try to map that dataset I get the following exception:
```
Traceback (most recent call last):
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map
for rank, done, content... | 2024-02-06T19:24:20Z | 6,280 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-05T12:48:31Z | https://api.github.com/repos/huggingface/datasets/issues/6280/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6280/timeline | Couldn't cast array of type fixed_size_list to Sequence(Value(float64)) | https://api.github.com/repos/huggingface/datasets/issues/6280/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.gith... | [] | null | completed | NONE | 2024-02-06T19:24:20Z | null | I_kwDODunzps5y7jru | [
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks for the quick response @mariosasko! I just installed your branch via `poetry add 'git+https://github.com/huggingface/datasets#fix-array_values'` and I can confirm it works on the example provided.\r\n\r\nFollow up question for you, should `None`s be s... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6280/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6280 | https://github.com/huggingface/datasets/issues/6280 | false |
1,928,028,226 | https://api.github.com/repos/huggingface/datasets/issues/6279/labels{/name} | ### Feature request
Hi,
could you add an implementation of a batched `IterableDataset`? It already supports batch iteration via `.iter(batch_size=...)`, but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator.
### Motivation
The current implementation load... | 2023-10-05T11:50:28Z | 6,279 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-10-05T11:12:49Z | https://api.github.com/repos/huggingface/datasets/issues/6279/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6279/timeline | Batched IterableDataset | https://api.github.com/repos/huggingface/datasets/issues/6279/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7010688?v=4",
"events_url": "https://api.github.com/users/lneukom/events{/privacy}",
"followers_url": "https://api.github.com/users/lneukom/followers",
"following_url": "https://api.github.com/users/lneukom/following{/other_user}",
"gists_url": "https:/... | [] | null | null | NONE | null | null | I_kwDODunzps5y62BC | [
"This is exactly what I was looking for. It would also be very useful for me :-)"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6279/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6279 | https://github.com/huggingface/datasets/issues/6279 | false |
1,927,957,877 | https://api.github.com/repos/huggingface/datasets/issues/6278/labels{/name} | I added a new DataFilesSet class to disallow duplicate data files.
I also deprecated DataFilesList.
EDIT: actually I might just add drop_duplicates=True to `.from_patterns`
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
TODO:
- [ ] tests
... | 2024-01-11T06:32:49Z | 6,278 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2023-10-05T10:31:58Z | https://api.github.com/repos/huggingface/datasets/issues/6278/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6278/timeline | No data files duplicates | https://api.github.com/repos/huggingface/datasets/issues/6278/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-10-05T14:43:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6278.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6278",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6278.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6278"
} | PR_kwDODunzps5b_iKb | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6278/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6278 | https://github.com/huggingface/datasets/pull/6278 | true |
1,927,044,546 | https://api.github.com/repos/huggingface/datasets/issues/6277/labels{/name} | ### Describe the bug
I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows:
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub eit... | 2023-10-08T17:05:46Z | 6,277 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-04T22:01:25Z | https://api.github.com/repos/huggingface/datasets/issues/6277/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6277/timeline | FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either. | https://api.github.com/repos/huggingface/datasets/issues/6277/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/66733346?v=4",
"events_url": "https://api.github.com/users/diegogonzalezc/events{/privacy}",
"followers_url": "https://api.github.com/users/diegogonzalezc/followers",
"following_url": "https://api.github.com/users/diegogonzalezc/following{/other_user}",
... | [] | null | completed | NONE | 2023-10-08T17:05:46Z | null | I_kwDODunzps5y3F3C | [
"`evaluate.load(\"paws-x\", \"es\")` throws the error because there is no such metric in the `evaluate` lib.\r\n\r\nSo, this is unrelated to our lib."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6277/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6277 | https://github.com/huggingface/datasets/issues/6277 | false |
1,925,961,878 | https://api.github.com/repos/huggingface/datasets/issues/6276/labels{/name} | ### Describe the bug
I'm trying to fine-tune the openai/whisper model from Hugging Face using a Jupyter notebook and I keep getting this error. I'm following the steps in this blog post:
https://huggingface.co/blog/fine-tune-whisper
I tried google collab and it works but because I'm on the free version the training ... | 2023-11-27T10:39:16Z | 6,276 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-04T11:03:41Z | https://api.github.com/repos/huggingface/datasets/issues/6276/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6276/timeline | I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error | https://api.github.com/repos/huggingface/datasets/issues/6276/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/50768065?v=4",
"events_url": "https://api.github.com/users/valaofficial/events{/privacy}",
"followers_url": "https://api.github.com/users/valaofficial/followers",
"following_url": "https://api.github.com/users/valaofficial/following{/other_user}",
"gist... | [] | null | null | NONE | null | null | I_kwDODunzps5yy9iW | [
"Since you are using Windows, maybe moving the `map` call inside `if __name__ == \"__main__\"` can fix the issue:\r\n```python\r\nif __name__ == \"__main__\":\r\n common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=4)\r\n```\r\n\r\nOtherwise, the only s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6276/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6276 | https://github.com/huggingface/datasets/issues/6276 | false |
1,921,354,680 | https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name} | I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no dataset available online, I made this dataset myself and would now like to contribute it to the community | 2023-10-10T16:27:54Z | 6,275 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-02T07:00:21Z | https://api.github.com/repos/huggingface/datasets/issues/6275/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6275/timeline | Would like to Contribute a dataset | https://api.github.com/repos/huggingface/datasets/issues/6275/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4",
"events_url": "https://api.github.com/users/vikas70607/events{/privacy}",
"followers_url": "https://api.github.com/users/vikas70607/followers",
"following_url": "https://api.github.com/users/vikas70607/following{/other_user}",
"gists_url"... | [] | null | completed | NONE | 2023-10-10T16:27:54Z | null | I_kwDODunzps5yhYu4 | [
"Hi! The process of contributing a dataset is explained here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingface.co/docs/datasets/image_dataset for a more detailed explanation of how to share an image dataset."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6275 | https://github.com/huggingface/datasets/issues/6275 | false |
1,921,036,328 | https://api.github.com/repos/huggingface/datasets/issues/6274/labels{/name} | ### Describe the bug
When there is only one config and only the dataset name is entered when using datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and enter the config name when using datasets.load_dataset(), the following error will happen.
FileNotFoundError: [Errno 2... | 2023-10-02T20:09:38Z | 6,274 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-01T23:45:56Z | https://api.github.com/repos/huggingface/datasets/issues/6274/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6274/timeline | FileNotFoundError for dataset with multiple builder config | https://api.github.com/repos/huggingface/datasets/issues/6274/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4",
"events_url": "https://api.github.com/users/LouisChen15/events{/privacy}",
"followers_url": "https://api.github.com/users/LouisChen15/followers",
"following_url": "https://api.github.com/users/LouisChen15/following{/other_user}",
"gists_u... | [] | null | completed | NONE | 2023-10-02T20:09:38Z | null | I_kwDODunzps5ygLAo | [
"Please tell me if the above info is not enough for solving the problem. I will then make my dataset public temporarily so that you can really reproduce the bug. "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6274/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6274 | https://github.com/huggingface/datasets/issues/6274 | false |
1,920,922,260 | https://api.github.com/repos/huggingface/datasets/issues/6273/labels{/name} | ### Describe the bug
The link provided for the dataset is broken,
data_files =
[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)
The
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to [https://huggingface.co/learn/nlp-course/chapt... | 2024-01-09T05:48:01Z | 6,273 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-10-01T19:08:48Z | https://api.github.com/repos/huggingface/datasets/issues/6273/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6273/timeline | Broken Link to PubMed Abstracts dataset . | https://api.github.com/repos/huggingface/datasets/issues/6273/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/100606327?v=4",
"events_url": "https://api.github.com/users/sameemqureshi/events{/privacy}",
"followers_url": "https://api.github.com/users/sameemqureshi/followers",
"following_url": "https://api.github.com/users/sameemqureshi/following{/other_user}",
"... | [] | null | null | NONE | null | null | I_kwDODunzps5yfvKU | [
"This has already been reported in the HF Course repo (https://github.com/huggingface/course/issues/623).",
"@lhoestq @albertvillanova @lewtun I don't think we are allowed to host these data files on the Hub (due to DMCA), which means the only option is to use a different dataset in the course (and to re-record t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6273/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6273 | https://github.com/huggingface/datasets/issues/6273 | false |
1,920,831,487 | https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name} | e.g. with `u23429/stock_1_minute_ticker`
```ipython
In [1]: from datasets import *
In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker")
Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s]
In [3]: b.config.data_files
Out[3]:
{NamedSplit('train'): ['hf://datasets/... | 2024-03-15T15:22:05Z | 6,272 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2023-10-01T15:43:56Z | https://api.github.com/repos/huggingface/datasets/issues/6272/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6272/timeline | Duplicate `data_files` when named `<split>/<split>.parquet` | https://api.github.com/repos/huggingface/datasets/issues/6272/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | completed | MEMBER | 2024-03-15T15:22:05Z | null | I_kwDODunzps5yfY__ | [
"Also reported in https://github.com/huggingface/datasets/issues/6259",
"I think it's best to drop duplicates with a `set` (as a temporary fix) and improve the patterns when/if https://github.com/fsspec/filesystem_spec/pull/1382 gets merged. @lhoestq Do you have some other ideas?",
"Alternatively we could just... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6272 | https://github.com/huggingface/datasets/issues/6272 | false |
1,920,420,295 | https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name} | ### Describe the bug
I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do so is to manually go into the dataset and delete the split. If I try to overwrite programmatically I end up in an error state and (somewhat) corrupt the dataset. Read below.
**Current Behavior**
Whe... | 2023-10-16T13:30:50Z | 6,271 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-30T22:37:31Z | https://api.github.com/repos/huggingface/datasets/issues/6271/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6271/timeline | Overwriting Split overwrites data but not metadata, corrupting dataset | https://api.github.com/repos/huggingface/datasets/issues/6271/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4",
"events_url": "https://api.github.com/users/govindrai/events{/privacy}",
"followers_url": "https://api.github.com/users/govindrai/followers",
"following_url": "https://api.github.com/users/govindrai/following{/other_user}",
"gists_url": "... | [] | null | completed | NONE | 2023-10-16T13:30:50Z | null | I_kwDODunzps5yd0nH | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6271 | https://github.com/huggingface/datasets/issues/6271 | false |
1,920,329,373 | https://api.github.com/repos/huggingface/datasets/issues/6270/labels{/name} | ### Describe the bug
According to the docs of Datasets.from_generator:
```
gen_kwargs(`dict`, *optional*):
Keyword arguments to be passed to the `generator` callable.
You can define a sharded dataset by passing the list of shards in `gen_kwargs`.
```
So I'd expect that if gen_kwar... | 2023-10-11T20:29:12Z | 6,270 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-30T16:50:06Z | https://api.github.com/repos/huggingface/datasets/issues/6270/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6270/timeline | Dataset.from_generator raises with sharded gen_args | https://api.github.com/repos/huggingface/datasets/issues/6270/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https:... | [] | null | completed | CONTRIBUTOR | 2023-10-11T20:29:11Z | null | I_kwDODunzps5ydead | [
"`gen_kwargs` should be a `dict`, as stated in the docstring, but you are passing a `list`.\r\n\r\nSo, to fix the error, replace the list of dicts with a dict of lists (and slightly modify the generator function):\r\n```python\r\nfrom pathlib import Path\r\nimport datasets\r\n\r\ndef process_yaml(files):\r\n for... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6270/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6270 | https://github.com/huggingface/datasets/issues/6270 | false |
1,919,572,790 | https://api.github.com/repos/huggingface/datasets/issues/6269/labels{/name} | Reduces the number of commits in `push_to_hub` by using the `preupload` API from https://github.com/huggingface/huggingface_hub/pull/1699. Each commit contains a maximum of 50 uploaded files.
A shard's fingerprint no longer needs to be added as a suffix to support resuming an upload, meaning the shards' naming schem... | 2023-10-16T16:03:18Z | 6,269 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-29T16:22:31Z | https://api.github.com/repos/huggingface/datasets/issues/6269/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6269/timeline | Reduce the number of commits in `push_to_hub` | https://api.github.com/repos/huggingface/datasets/issues/6269/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-10-16T13:30:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6269.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6269",
"merged_at": "2023-10-16T13:30:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6269.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5bjbDc | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 3,
"laugh": 0,
"rocket": 1,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6269/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6269 | https://github.com/huggingface/datasets/pull/6269 | true |
1,919,010,645 | https://api.github.com/repos/huggingface/datasets/issues/6268/labels{/name} | ```python
from datasets import load_dataset
ds = load_dataset("lhoestq/demo1", split="train")
ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"])
print(ds.repo_id)
# lhoestq/demo1
```
- repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict
- ... | 2023-10-01T15:29:45Z | 6,268 | null | https://api.github.com/repos/huggingface/datasets | true | [] | 2023-09-29T10:24:55Z | https://api.github.com/repos/huggingface/datasets/issues/6268/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6268/timeline | Add repo_id to DatasetInfo | https://api.github.com/repos/huggingface/datasets/issues/6268/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6268",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6268"
} | PR_kwDODunzps5bhgs7 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6268). All of your documentation changes will be reflected on that endpoint.",
"In https://github.com/huggingface/datasets/issues/4129 we want to track the origin of a dataset, e.g. if it comes from multiple datasets.\r\n\r\nI ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6268/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6268 | https://github.com/huggingface/datasets/pull/6268 | true |
1,916,443,262 | https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name} | ### Feature request
I have a multi-label dataset and I'd like to be able to class-encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multiple labels.
Here's an example of what I'd like to encode:
```
data = {
... | 2023-10-26T18:46:08Z | 6,267 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-09-27T22:48:08Z | https://api.github.com/repos/huggingface/datasets/issues/6267/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6267/timeline | Multi label class encoding | https://api.github.com/repos/huggingface/datasets/issues/6267/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.gith... | [] | null | null | NONE | null | null | I_kwDODunzps5yOpp- | [
"You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6267 | https://github.com/huggingface/datasets/issues/6267 | false |
1,916,334,394 | https://api.github.com/repos/huggingface/datasets/issues/6266/labels{/name} | PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the methods `load` and `dump`. To use it, a user would need to first install a PyYAML version that uses LibYAML (not available in PyPI; needs to be manually installed). Then, to actually use them, PyYAML suggests importing the LibY... | 2023-09-28T14:29:24Z | 6,266 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-27T21:13:36Z | https://api.github.com/repos/huggingface/datasets/issues/6266/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6266/timeline | Use LibYAML with PyYAML if available | https://api.github.com/repos/huggingface/datasets/issues/6266/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url":... | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6266.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6266",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6266.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6266"
} | PR_kwDODunzps5bYYb8 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6266). All of your documentation changes will be reflected on that endpoint.",
"On Ubuntu, if `libyaml-dev` is installed, you can install PyYAML 6.0.1 with LibYAML with the following command (as it's automatically detected):\r\... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6266/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6266 | https://github.com/huggingface/datasets/pull/6266 | true |
1,915,651,566 | https://api.github.com/repos/huggingface/datasets/issues/6265/labels{/name} | ... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS)
Fix https://github.com/huggingface/datasets/issues/6260 | 2023-09-28T18:34:02Z | 6,265 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-27T13:56:34Z | https://api.github.com/repos/huggingface/datasets/issues/6265/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6265/timeline | Remove `apache_beam` import in `BeamBasedBuilder._save_info` | https://api.github.com/repos/huggingface/datasets/issues/6265/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-28T18:23:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6265.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6265",
"merged_at": "2023-09-28T18:23:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6265.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5bWDfc | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6265/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6265 | https://github.com/huggingface/datasets/pull/6265 | true |
1,914,958,781 | https://api.github.com/repos/huggingface/datasets/issues/6264/labels{/name} | Temporarily pin tensorflow < 2.14.0 until permanent solution is found.
Hot fix #6263. | 2023-09-27T08:45:24Z | 6,264 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-27T08:16:06Z | https://api.github.com/repos/huggingface/datasets/issues/6264/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6264/timeline | Temporarily pin tensorflow < 2.14.0 | https://api.github.com/repos/huggingface/datasets/issues/6264/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-09-27T08:36:39Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6264.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6264",
"merged_at": "2023-09-27T08:36:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6264.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5bTvzh | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6264/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6264 | https://github.com/huggingface/datasets/pull/6264 | true |
1,914,951,043 | https://api.github.com/repos/huggingface/datasets/issues/6263/labels{/name} | Python 3.10 CI is broken for `test_py310`.
See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/li... | 2023-09-27T08:36:40Z | 6,263 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2023-09-27T08:12:05Z | https://api.github.com/repos/huggingface/datasets/issues/6263/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/6263/timeline | CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python' | https://api.github.com/repos/huggingface/datasets/issues/6263/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2023-09-27T08:36:40Z | null | I_kwDODunzps5yI9WD | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6263/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6263 | https://github.com/huggingface/datasets/issues/6263 | false |
1,914,895,459 | https://api.github.com/repos/huggingface/datasets/issues/6262/labels{/name} | Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884
```
FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.u... | 2023-09-28T15:39:16Z | 6,262 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-27T07:40:18Z | https://api.github.com/repos/huggingface/datasets/issues/6262/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6262/timeline | Fix CI 404 errors | https://api.github.com/repos/huggingface/datasets/issues/6262/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-09-28T15:30:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6262",
"merged_at": "2023-09-28T15:30:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5bTh6H | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6262/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6262 | https://github.com/huggingface/datasets/pull/6262 | true |
1,913,813,178 | https://api.github.com/repos/huggingface/datasets/issues/6261/labels{/name} | ### Describe the bug
Can't seem to load the JourneyDB dataset.
It throws the following error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[15], line 2
1 # If the dataset is gated/priv... | 2023-10-05T10:23:23Z | 6,261 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-26T15:46:25Z | https://api.github.com/repos/huggingface/datasets/issues/6261/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6261/timeline | Can't load a dataset | https://api.github.com/repos/huggingface/datasets/issues/6261/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/37955817?v=4",
"events_url": "https://api.github.com/users/joaopedrosdmm/events{/privacy}",
"followers_url": "https://api.github.com/users/joaopedrosdmm/followers",
"following_url": "https://api.github.com/users/joaopedrosdmm/following{/other_user}",
"g... | [] | null | completed | NONE | 2023-10-05T10:23:22Z | null | I_kwDODunzps5yEni6 | [
"I believe is due to the fact that doesn't work with .tgz files.",
"`JourneyDB/JourneyDB` is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.\r\n\r\n> I believe is due to the fact that d... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6261/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6261 | https://github.com/huggingface/datasets/issues/6261 | false |
1,912,593,466 | https://api.github.com/repos/huggingface/datasets/issues/6260/labels{/name} | ### Describe the bug
I use the following code to download natural_question dataset. Even though I have completely download it, the next time I run this code, the new download procedure will start and cover the original /data/lxy/NQ
config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/da... | 2023-09-28T18:23:36Z | 6,260 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-26T03:02:16Z | https://api.github.com/repos/huggingface/datasets/issues/6260/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6260/timeline | REUSE_DATASET_IF_EXISTS don't work | https://api.github.com/repos/huggingface/datasets/issues/6260/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "htt... | [] | null | completed | NONE | 2023-09-28T18:23:36Z | null | I_kwDODunzps5x_9w6 | [
"Hi! Unfortunately, the current behavior is to delete the downloaded data when this error happens. So, I've opened a PR that removes the problematic import to avoid losing data due to `apache_beam` not being installed (we host the preprocessed version of `natual_questions` on the HF GCS, so requiring `apache_beam` ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6260/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6260 | https://github.com/huggingface/datasets/issues/6260 | false |
1,911,965,758 | https://api.github.com/repos/huggingface/datasets/issues/6259/labels{/name} | ### Describe the bug
When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets.
### Steps to reproduce the bug... | 2024-03-15T15:22:04Z | 6,259 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-25T17:20:54Z | https://api.github.com/repos/huggingface/datasets/issues/6259/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | https://api.github.com/repos/huggingface/datasets/issues/6259/timeline | Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories | https://api.github.com/repos/huggingface/datasets/issues/6259/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/141304309?v=4",
"events_url": "https://api.github.com/users/MF-FOOM/events{/privacy}",
"followers_url": "https://api.github.com/users/MF-FOOM/followers",
"following_url": "https://api.github.com/users/MF-FOOM/following{/other_user}",
"gists_url": "https... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | completed | NONE | 2024-03-15T15:22:04Z | null | I_kwDODunzps5x9kg- | [
"Thanks for reporting this issue! We should be able to avoid this by making our `glob` patterns more precise. In the meantime, you can load the dataset by directly assigning splits to the data files: \r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"parquet\", data_files={\"train\": \"testin... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6259/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6259 | https://github.com/huggingface/datasets/issues/6259 | false |
1,911,445,373 | https://api.github.com/repos/huggingface/datasets/issues/6258/labels{/name} | Not ElasticSearch :) | 2023-09-26T14:55:35Z | 6,258 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-25T12:50:59Z | https://api.github.com/repos/huggingface/datasets/issues/6258/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6258/timeline | [DOCS] Fix typo: Elasticsearch | https://api.github.com/repos/huggingface/datasets/issues/6258/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/32779855?v=4",
"events_url": "https://api.github.com/users/leemthompo/events{/privacy}",
"followers_url": "https://api.github.com/users/leemthompo/followers",
"following_url": "https://api.github.com/users/leemthompo/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-26T13:36:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6258.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6258",
"merged_at": "2023-09-26T13:36:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6258.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5bHxHl | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6258/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6258 | https://github.com/huggingface/datasets/pull/6258 | true |
1,910,741,044 | https://api.github.com/repos/huggingface/datasets/issues/6257/labels{/name} | ### Describe the bug
I try to upload a very large dataset of images, and get the following error:
```
File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo... | 2023-10-16T13:30:49Z | 6,257 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-25T06:11:43Z | https://api.github.com/repos/huggingface/datasets/issues/6257/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6257/timeline | HfHubHTTPError - exceeded our hourly quotas for action: commit | https://api.github.com/repos/huggingface/datasets/issues/6257/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"g... | [] | null | completed | NONE | 2023-10-16T13:30:48Z | null | I_kwDODunzps5x45g0 | [
"how is your dataset structured? (file types, how many commits and files are you trying to push, etc)",
"I succeeded in uploading it after several attempts with an hour gap between each attempt (inconvenient but worked). The final dataset is [here](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2), code ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6257/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6257 | https://github.com/huggingface/datasets/issues/6257 | false |
1,910,275,199 | https://api.github.com/repos/huggingface/datasets/issues/6256/labels{/name} | ### Describe the bug
datasets version: 2.14.5
when trying to run the following command
trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir')
I keep getting error saying the command does not have permission to the default cache directory on my macbook pro machine.
It seems the cache_... | 2023-09-27T13:40:45Z | 6,256 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-24T15:34:06Z | https://api.github.com/repos/huggingface/datasets/issues/6256/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6256/timeline | load_dataset() function's cache_dir does not seems to work | https://api.github.com/repos/huggingface/datasets/issues/6256/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/171831?v=4",
"events_url": "https://api.github.com/users/andyzhu/events{/privacy}",
"followers_url": "https://api.github.com/users/andyzhu/followers",
"following_url": "https://api.github.com/users/andyzhu/following{/other_user}",
"gists_url": "https://... | [] | null | null | NONE | null | null | I_kwDODunzps5x3Hx_ | [
"Can you share the error message?\r\n\r\nAlso, it would help if you could check whether `huggingface_hub`'s download behaves the same:\r\n```python\r\nfrom huggingface_hub import snapshot_download\r\nsnapshot_download(\"trec\", repo_type=\"dataset\", cache_dir='/path/to/my/dir)\r\n```\r\n\r\nIn the next major relea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6256/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6256 | https://github.com/huggingface/datasets/issues/6256 | false |
1,909,842,977 | https://api.github.com/repos/huggingface/datasets/issues/6255/labels{/name} | For datasets with lots of configs defined in YAML
E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` from >1min to 15sec | 2024-01-11T06:32:34Z | 6,255 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-23T11:56:20Z | https://api.github.com/repos/huggingface/datasets/issues/6255/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6255/timeline | Parallelize builder configs creation | https://api.github.com/repos/huggingface/datasets/issues/6255/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-09-26T15:44:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6255.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6255",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6255.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6255"
} | PR_kwDODunzps5bCioS | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6255/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6255 | https://github.com/huggingface/datasets/pull/6255 | true |
1,909,672,104 | https://api.github.com/repos/huggingface/datasets/issues/6254/labels{/name} | ### Describe the bug
Hey there,
I’m using Dataset.from_generator() to convert a torch_dataset to the Huggingface Dataset.
However, when I debug my code on vscode, I find that it runs really slow on Dataset.from_generator() which may even 20 times longer then run the script on terminal.
### Steps to reproduce the bu... | 2023-10-03T14:42:53Z | 6,254 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-23T02:07:26Z | https://api.github.com/repos/huggingface/datasets/issues/6254/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6254/timeline | Dataset.from_generator() cost much more time in vscode debugging mode then running mode | https://api.github.com/repos/huggingface/datasets/issues/6254/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/56437469?v=4",
"events_url": "https://api.github.com/users/dontnet-wuenze/events{/privacy}",
"followers_url": "https://api.github.com/users/dontnet-wuenze/followers",
"following_url": "https://api.github.com/users/dontnet-wuenze/following{/other_user}",
... | [] | null | completed | NONE | 2023-10-03T14:42:53Z | null | I_kwDODunzps5x00io | [
"Answered on the forum: https://discuss.huggingface.co/t/dataset-from-generator-cost-much-more-time-in-vscode-debugging-mode-then-running-mode/56005/2"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6254/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6254 | https://github.com/huggingface/datasets/issues/6254 | false |
1,906,618,910 | https://api.github.com/repos/huggingface/datasets/issues/6253/labels{/name} | Fix https://github.com/huggingface/datasets-server/issues/1812
this was causing this issue:
```ipython
In [1]: from datasets import *
In [2]: inspect.get_dataset_config_names("aakanksha/udpos")
Out[2]: ['default']
In [3]: load_dataset_builder("aakanksha/udpos").config.name
Out[3]: 'en'
``` | 2023-09-21T14:16:44Z | 6,253 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-21T10:15:32Z | https://api.github.com/repos/huggingface/datasets/issues/6253/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6253/timeline | Check builder cls default config name in inspect | https://api.github.com/repos/huggingface/datasets/issues/6253/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-09-21T14:08:00Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6253",
"merged_at": "2023-09-21T14:08:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5a3s__ | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6253/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6253 | https://github.com/huggingface/datasets/pull/6253 | true |
1,906,375,378 | https://api.github.com/repos/huggingface/datasets/issues/6252/labels{/name} | ### Feature request
I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading.
Since the dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted) thus for tasks as object detection and layoutLM this ca... | 2024-03-19T15:29:43Z | 6,252 | {
"closed_at": null,
"closed_issues": 1,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/follow... | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-09-21T08:11:46Z | https://api.github.com/repos/huggingface/datasets/issues/6252/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6252/timeline | exif_transpose not done to Image (PIL problem) | https://api.github.com/repos/huggingface/datasets/issues/6252/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/108274349?v=4",
"events_url": "https://api.github.com/users/rhajou/events{/privacy}",
"followers_url": "https://api.github.com/users/rhajou/followers",
"following_url": "https://api.github.com/users/rhajou/following{/other_user}",
"gists_url": "https://... | [] | null | completed | NONE | 2024-03-19T15:29:43Z | null | I_kwDODunzps5xoPrS | [
"Indeed, it makes sense to do this by default. \r\n\r\nIn the meantime, you can use `.with_transform` to transpose the images when accessing them:\r\n\r\n```python\r\nimport PIL.ImageOps\r\n\r\ndef exif_transpose_transform(batch):\r\n batch[\"image\"] = [PIL.ImageOps.exif_transpose(image) for image in batch[\"imag... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6252/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6252 | https://github.com/huggingface/datasets/issues/6252 | false |
1,904,418,426 | https://api.github.com/repos/huggingface/datasets/issues/6251/labels{/name} | Support streaming datasets with `pyarrow.parquet.read_table`.
See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2
CC: @AndreaFrancis | 2023-09-27T06:37:03Z | 6,251 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-20T08:07:02Z | https://api.github.com/repos/huggingface/datasets/issues/6251/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6251/timeline | Support streaming datasets with pyarrow.parquet.read_table | https://api.github.com/repos/huggingface/datasets/issues/6251/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-09-27T06:26:24Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6251.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6251",
"merged_at": "2023-09-27T06:26:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6251.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5awQsy | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This function reads an entire Arrow table in one go, which is not ideal memory-wise, so I don't think we should encourage using this function, considering we want to keep RAM usage as low as possible in the streaming mode. \r\n\r\n(N... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6251/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6251 | https://github.com/huggingface/datasets/pull/6251 | true |
1,901,390,945 | https://api.github.com/repos/huggingface/datasets/issues/6247/labels{/name} | modified , as AudioFolder and ImageFolder not in Dataset Library.
``` from datasets import AudioFolder ``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset```
```
cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site... | 2023-09-19T18:51:49Z | 6,247 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-18T17:06:29Z | https://api.github.com/repos/huggingface/datasets/issues/6247/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6247/timeline | Update create_dataset.mdx | https://api.github.com/repos/huggingface/datasets/issues/6247/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/76403422?v=4",
"events_url": "https://api.github.com/users/EswarDivi/events{/privacy}",
"followers_url": "https://api.github.com/users/EswarDivi/followers",
"following_url": "https://api.github.com/users/EswarDivi/following{/other_user}",
"gists_url": "... | [] | null | null | CONTRIBUTOR | 2023-09-19T18:40:10Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6247",
"merged_at": "2023-09-19T18:40:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5amAQ1 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6247/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6247 | https://github.com/huggingface/datasets/pull/6247 | true |
1,899,848,414 | https://api.github.com/repos/huggingface/datasets/issues/6246/labels{/name} | ### Describe the bug
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dataset['train']['/workspace/data']
3 frames
[/... | 2023-09-18T16:20:09Z | 6,246 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-17T16:59:48Z | https://api.github.com/repos/huggingface/datasets/issues/6246/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6246/timeline | Add new column to dataset | https://api.github.com/repos/huggingface/datasets/issues/6246/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url"... | [] | null | completed | NONE | 2023-09-18T16:20:09Z | null | I_kwDODunzps5xPWLe | [
"I think it's an issue with the code.\r\n\r\nSpecifically:\r\n```python\r\ndataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nNow `dataset` is the train set with a new column. \r\nTo fix this, you can do:\r\n\r\n```python\r\ndataset['train'] = dataset['train'].add_column(\"/workspa... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6246/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6246 | https://github.com/huggingface/datasets/issues/6246 | false |
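The fix quoted in the comment above (assign the new split back under its key rather than rebinding the whole variable) can be sketched in plain Python. This is an illustrative stand-in using dicts, not the real `datasets.DatasetDict`/`Dataset` objects; the column names and values are hypothetical.

```python
# Plain-dict stand-in for a DatasetDict with one "train" split (hypothetical data).
splits = {"train": {"text": ["a", "b"]}}

def add_column(split, name, values):
    # Mimic Dataset.add_column: return a new split with the extra column.
    assert len(values) == len(next(iter(split.values())))
    return {**split, name: list(values)}

# Bug from the issue: `dataset = dataset['train'].add_column(...)` rebinds the
# whole variable to the train split, so `dataset['train']` no longer exists.
# Fix: assign the new split back under its key instead.
splits["train"] = add_column(splits["train"], "path", ["/a.png", "/b.png"])
print(splits["train"]["path"])  # ['/a.png', '/b.png']
```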
1,898,861,422 | https://api.github.com/repos/huggingface/datasets/issues/6244/labels{/name} | Fix #6214 | 2023-09-26T15:41:38Z | 6,244 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-15T17:58:25Z | https://api.github.com/repos/huggingface/datasets/issues/6244/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6244/timeline | Add support for `fsspec>=2023.9.0` | https://api.github.com/repos/huggingface/datasets/issues/6244/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-26T15:32:51Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6244",
"merged_at": "2023-09-26T15:32:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5adtD3 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6244/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6244 | https://github.com/huggingface/datasets/pull/6244 | true |
1,898,532,784 | https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name} | Fix #6242 | 2023-09-19T18:02:21Z | 6,243 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-15T14:23:33Z | https://api.github.com/repos/huggingface/datasets/issues/6243/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6243/timeline | Fix cast from fixed size list to variable size list | https://api.github.com/repos/huggingface/datasets/issues/6243/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-19T17:53:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6243.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6243",
"merged_at": "2023-09-19T17:53:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6243.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5aclIy | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6243 | https://github.com/huggingface/datasets/pull/6243 | true |
1,896,899,123 | https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name} | ### Describe the bug
When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent.
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Value, Sequence, load_dataset
# Repository ID
repo_id... | 2023-09-19T17:53:18Z | 6,242 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-14T16:12:45Z | https://api.github.com/repos/huggingface/datasets/issues/6242/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6242/timeline | Data alteration when loading dataset with unspecified inner sequence length | https://api.github.com/repos/huggingface/datasets/issues/6242/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_u... | [] | null | completed | MEMBER | 2023-09-19T17:53:18Z | null | I_kwDODunzps5xEGIz | [
"While this issue may seem specific, it led to a silent problem in my workflow that took days to diagnose. If this feature is not intended to be supported, an error should be raised when encountering this configuration to prevent such issues.",
"Thanks for reporting! This is a MRE:\r\n\r\n```python\r\nimport pyar... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6242 | https://github.com/huggingface/datasets/issues/6242 | false |
1,896,429,694 | https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name} | null | 2023-09-15T15:57:10Z | 6,241 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-14T12:06:32Z | https://api.github.com/repos/huggingface/datasets/issues/6241/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6241/timeline | Remove unused global variables in `audio.py` | https://api.github.com/repos/huggingface/datasets/issues/6241/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-15T15:46:07Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6241",
"merged_at": "2023-09-15T15:46:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5aVfl- | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6241 | https://github.com/huggingface/datasets/pull/6241 | true |
1,895,723,888 | https://api.github.com/repos/huggingface/datasets/issues/6240/labels{/name} | ### Describe the bug
I am trying to fine-tune CLIP with my code.
When I tried to run it on multiple GPUs using accelerate, I encountered the following phenomenon.
- Validation dataloader stuck in 2nd epoch only on multi-GPU
Specifically, when the "for inputs in valid_loader:" process is finished, it does... | 2023-09-14T23:54:42Z | 6,240 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-14T05:30:30Z | https://api.github.com/repos/huggingface/datasets/issues/6240/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6240/timeline | Dataloader stuck on multiple GPUs | https://api.github.com/repos/huggingface/datasets/issues/6240/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4",
"events_url": "https://api.github.com/users/kuri54/events{/privacy}",
"followers_url": "https://api.github.com/users/kuri54/followers",
"following_url": "https://api.github.com/users/kuri54/following{/other_user}",
"gists_url": "https://a... | [] | null | completed | NONE | 2023-09-14T23:54:42Z | null | I_kwDODunzps5w_nNw | [
"What type of dataset are you using in this script? `torch.utils.data.Dataset` or `datasets.Dataset`? Please share the `datasets` package version if it's the latter. Otherwise, it's better to move this issue to the `accelerate` repo.",
"Very sorry, I thought I had a repo in `accelerate!`\r\nI will close this issu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6240/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6240 | https://github.com/huggingface/datasets/issues/6240 | false |
1,895,349,382 | https://api.github.com/repos/huggingface/datasets/issues/6239/labels{/name} | ### Describe the bug
I get a RuntimeError from the following code:
```python
audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio())
audio_dataset[0]
```
### Traceback
<details>
```python
RuntimeError ... | 2023-09-15T14:32:10Z | 6,239 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-13T22:30:01Z | https://api.github.com/repos/huggingface/datasets/issues/6239/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6239/timeline | Load local audio data doesn't work | https://api.github.com/repos/huggingface/datasets/issues/6239/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/554032?v=4",
"events_url": "https://api.github.com/users/abodacs/events{/privacy}",
"followers_url": "https://api.github.com/users/abodacs/followers",
"following_url": "https://api.github.com/users/abodacs/following{/other_user}",
"gists_url": "https://... | [] | null | completed | NONE | 2023-09-15T14:32:10Z | null | I_kwDODunzps5w-LyG | [
"I think this is the same issue as https://github.com/huggingface/datasets/issues/4776. Maybe installing `ffmpeg` can fix it:\r\n```python\r\nadd-apt-repository -y ppa:savoury1/ffmpeg4\r\napt-get -qq install -y ffmpeg\r\n```\r\n\r\nHowever, the best solution is to use a newer version of `datasets`. In the recent re... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6239/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6239 | https://github.com/huggingface/datasets/issues/6239 | false |
1,895,207,828 | https://api.github.com/repos/huggingface/datasets/issues/6238/labels{/name} | ### Describe the bug
If you call batched=True when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition.
### Steps to reproduce the bug
Here's a minimal example:
```python
def filter_batch_always_true(batch, indices):
print("First index being passed into this filte... | 2023-09-17T07:05:07Z | 6,238 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-13T20:20:37Z | https://api.github.com/repos/huggingface/datasets/issues/6238/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6238/timeline | `dataset.filter` ALWAYS removes the first item from the dataset when using batched=True | https://api.github.com/repos/huggingface/datasets/issues/6238/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4",
"events_url": "https://api.github.com/users/Taytay/events{/privacy}",
"followers_url": "https://api.github.com/users/Taytay/followers",
"following_url": "https://api.github.com/users/Taytay/following{/other_user}",
"gists_url": "https://ap... | [] | null | completed | NONE | 2023-09-17T07:05:07Z | null | I_kwDODunzps5w9pOU | [
"`filter` treats the function's output as a (selection) mask - `True` keeps the sample, and `False` drops it. In your case, `bool(0)` evaluates to `False`, so dropping the first sample is the correct behavior.",
"Oh gosh! 🤦 I totally misunderstood the API! My apologies!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6238/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6238 | https://github.com/huggingface/datasets/issues/6238 | false |
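The mask semantics explained in the reply above — `filter` keeps a sample when the function's output is truthy — can be sketched in plain Python. This is an illustrative stand-in, not the actual `datasets` implementation; it shows why returning the indices themselves always drops index 0 (`bool(0)` is `False`).

```python
def filter_batched(examples, fn, batch_size=2):
    # Stand-in for Dataset.filter(batched=True, with_indices=True):
    # fn returns one truthy/falsy value per example, used as a keep-mask.
    kept = []
    for start in range(0, len(examples), batch_size):
        batch = examples[start:start + batch_size]
        indices = list(range(start, start + len(batch)))
        mask = fn(batch, indices)
        kept.extend(ex for ex, keep in zip(batch, mask) if keep)
    return kept

data = ["x", "y", "z"]
# Returning the indices as the "mask": bool(0) is False, so the first item is dropped.
out = filter_batched(data, lambda batch, idx: idx)
print(out)  # ['y', 'z']
```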
1,893,822,321 | https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name} | I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever.
Code snippet:
```
raw_datasets.map(
encode_function,
batched=False,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.ove... | 2023-09-19T21:54:58Z | 6,237 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-13T06:18:34Z | https://api.github.com/repos/huggingface/datasets/issues/6237/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6237/timeline | Tokenization with multiple workers is too slow | https://api.github.com/repos/huggingface/datasets/issues/6237/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "... | [] | null | completed | NONE | 2023-09-19T21:54:58Z | null | I_kwDODunzps5w4W9x | [
"[This](https://huggingface.co/docs/datasets/nlp_process#map) is the most performant way to tokenize a dataset (`batched=True, num_proc=None, return_tensors=\"np\"`) \r\n\r\nIf`tokenizer.is_fast` returns `True`, `num_proc` must be `None/1` to benefit from the fast tokenizers' parallelism (the fast tokenizers are im... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6237 | https://github.com/huggingface/datasets/issues/6237 | false |
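The `batched=True` advice in the reply above works by amortizing per-call overhead: the mapped function is invoked once per batch rather than once per example. A minimal stdlib sketch of the idea (not the real `Dataset.map`; the "tokenizer" here is just a whitespace split):

```python
def map_batched(examples, fn, batch_size=1000):
    # Call fn once per batch so any fixed per-call cost (e.g. crossing into
    # a fast Rust tokenizer) is paid len(examples)/batch_size times, not
    # once per example.
    out = []
    for i in range(0, len(examples), batch_size):
        out.extend(fn(examples[i:i + batch_size]))
    return out

docs = [f"doc {i}" for i in range(10)]
# Stand-in "tokenizer": whitespace split applied to a whole batch at once.
tokens = map_batched(docs, lambda batch: [d.split() for d in batch], batch_size=4)
print(tokens[0])  # ['doc', '0']
```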
1,893,648,480 | https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name} | ### Feature request
I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and Keras fit to train a model.
Currently, to_tf_dataset only supports full-size shuffle, which can be very slow on large datasets.
tf.data.Dataset supports buffer shuffle by default.
shuffle(
buffer_size, seed=None, r... | 2023-09-18T01:11:21Z | 6,236 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-09-13T03:19:44Z | https://api.github.com/repos/huggingface/datasets/issues/6236/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6236/timeline | Support buffer shuffle for to_tf_dataset | https://api.github.com/repos/huggingface/datasets/issues/6236/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4",
"events_url": "https://api.github.com/users/EthanRock/events{/privacy}",
"followers_url": "https://api.github.com/users/EthanRock/followers",
"following_url": "https://api.github.com/users/EthanRock/following{/other_user}",
"gists_url": "h... | [] | null | null | NONE | null | null | I_kwDODunzps5w3shg | [
"cc @Rocketknight1 ",
"Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end agai... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6236 | https://github.com/huggingface/datasets/issues/6236 | false |
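The buffer shuffle requested above (and reachable via `tf.data.Dataset.shuffle` per the reply) trades shuffle quality for memory: only `buffer_size` items are held at once, and each output is drawn randomly from that buffer. A plain-Python sketch of the mechanism, assuming nothing about the real `to_tf_dataset` internals:

```python
import random

def buffer_shuffle(iterable, buffer_size, seed=None):
    # Approximate shuffle: fill a fixed-size buffer, then emit a random
    # element from it for every new item read, as
    # tf.data.Dataset.shuffle(buffer_size) does.
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:  # drain what is left at the end of the stream
        yield buffer.pop(rng.randrange(len(buffer)))

shuffled = list(buffer_shuffle(range(10), buffer_size=4, seed=0))
```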
1,893,337,083 | https://api.github.com/repos/huggingface/datasets/issues/6235/labels{/name} | ### Feature request
Current multiprocessing for download/extract is not done nestedly. For example, when processing SlimPajama, there are only 3 processes (for train/test/val), while there are many files inside these 3 folders
```
Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s]
Downloading data f... | 2023-09-12T21:51:08Z | 6,235 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-09-12T21:51:08Z | https://api.github.com/repos/huggingface/datasets/issues/6235/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6235/timeline | Support multiprocessing for download/extract nestedly | https://api.github.com/repos/huggingface/datasets/issues/6235/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22725729?v=4",
"events_url": "https://api.github.com/users/hgt312/events{/privacy}",
"followers_url": "https://api.github.com/users/hgt312/followers",
"following_url": "https://api.github.com/users/hgt312/following{/other_user}",
"gists_url": "https://a... | [] | null | null | NONE | null | null | I_kwDODunzps5w2gf7 | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6235/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6235 | https://github.com/huggingface/datasets/issues/6235 | false |
1,891,804,286 | https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name} | fixed a typo | 2023-09-13T18:20:50Z | 6,233 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-12T06:53:06Z | https://api.github.com/repos/huggingface/datasets/issues/6233/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6233/timeline | Update README.md | https://api.github.com/repos/huggingface/datasets/issues/6233/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gist... | [] | null | null | CONTRIBUTOR | 2023-09-13T18:10:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6233",
"merged_at": "2023-09-13T18:10:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5aF3kd | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6233 | https://github.com/huggingface/datasets/pull/6233 | true |
1,891,109,762 | https://api.github.com/repos/huggingface/datasets/issues/6232/labels{/name} | The error message in the fingerprint module was missing the f-string 'f' symbol, so the error message returned by fingerprint.py, line 469 was literally "function {func} is missing parameters {fingerprint_names} in signature."
This has been fixed. | 2023-09-15T18:07:56Z | 6,232 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-11T19:11:58Z | https://api.github.com/repos/huggingface/datasets/issues/6232/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6232/timeline | Improve error message for missing function parameters | https://api.github.com/repos/huggingface/datasets/issues/6232/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/4016832?v=4",
"events_url": "https://api.github.com/users/suavemint/events{/privacy}",
"followers_url": "https://api.github.com/users/suavemint/followers",
"following_url": "https://api.github.com/users/suavemint/following{/other_user}",
"gists_url": "h... | [] | null | null | CONTRIBUTOR | 2023-09-15T17:59:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6232",
"merged_at": "2023-09-15T17:59:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5aDhhK | [
"_The documentation is not available anymore as the PR was closed or merged._",
"CI errors are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_nu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6232/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6232 | https://github.com/huggingface/datasets/pull/6232 | true |
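The bug fixed by this PR is easy to reproduce: without the `f` prefix, Python emits the braces verbatim instead of interpolating them. The variable values below are hypothetical stand-ins for what `fingerprint.py` would supply.

```python
func, fingerprint_names = "shuffle", ["seed"]

# Before the fix: a plain string, so the placeholders appear literally.
buggy = "function {func} is missing parameters {fingerprint_names} in signature."
# After the fix: an f-string interpolates the actual values.
fixed = f"function {func} is missing parameters {fingerprint_names} in signature."

print(buggy)  # function {func} is missing parameters {fingerprint_names} in signature.
print(fixed)  # function shuffle is missing parameters ['seed'] in signature.
```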
1,890,863,249 | https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name} | Currently if we push data as default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, new key `"default"` is added to `dataset_infos.json` along with the legacy one. I think the legacy one should be dropped in thi... | 2023-09-26T11:19:36Z | 6,231 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-11T16:27:09Z | https://api.github.com/repos/huggingface/datasets/issues/6231/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6231/timeline | Overwrite legacy default config name in `dataset_infos.json` in packaged datasets | https://api.github.com/repos/huggingface/datasets/issues/6231/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/6231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6231",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6231"
} | PR_kwDODunzps5aCr8_ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6231). All of your documentation changes will be reflected on that endpoint.",
"realized that this pr is still not merged, @lhoestq maybe you can take a look at it? ",
"I think https://github.com/huggingface/datasets/pull/621... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/6231 | https://github.com/huggingface/datasets/pull/6231 | true |
1,890,521,006 | https://api.github.com/repos/huggingface/datasets/issues/6230/labels{/name} | Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected | 2023-09-13T18:21:28Z | 6,230 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-11T13:29:19Z | https://api.github.com/repos/huggingface/datasets/issues/6230/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6230/timeline | Don't skip hidden files in `dl_manager.iter_files` when they are given as input | https://api.github.com/repos/huggingface/datasets/issues/6230/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-13T18:12:09Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6230",
"merged_at": "2023-09-13T18:12:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5aBh6L | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6230/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6230 | https://github.com/huggingface/datasets/pull/6230 | true |
1,889,050,954 | https://api.github.com/repos/huggingface/datasets/issues/6229/labels{/name} | ### Describe the bug
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[14], line 11
9 for idx, example in enumerate(dataset['train']):
10 image_path = example['image']
---> 11 mask... | 2023-09-20T16:11:53Z | 6,229 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-10T08:36:12Z | https://api.github.com/repos/huggingface/datasets/issues/6229/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6229/timeline | Apply inference on all images in the dataset | https://api.github.com/repos/huggingface/datasets/issues/6229/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url"... | [] | null | completed | NONE | 2023-09-20T16:11:52Z | null | I_kwDODunzps5wmKFK | [
"From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object). ",
"> From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6229/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6229 | https://github.com/huggingface/datasets/issues/6229 | false |
1,887,959,311 | https://api.github.com/repos/huggingface/datasets/issues/6228/labels{/name} | Fix #6225 | 2023-09-08T18:02:49Z | 6,228 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-08T16:09:13Z | https://api.github.com/repos/huggingface/datasets/issues/6228/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6228/timeline | Remove RGB -> BGR image conversion in Object Detection tutorial | https://api.github.com/repos/huggingface/datasets/issues/6228/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-09-08T17:52:16Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6228",
"merged_at": "2023-09-08T17:52:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Z5HZi | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6228/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6228 | https://github.com/huggingface/datasets/pull/6228 | true |
1,887,462,591 | https://api.github.com/repos/huggingface/datasets/issues/6226/labels{/name} | null | 2023-09-08T12:29:21Z | 6,226 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-09-08T11:08:55Z | https://api.github.com/repos/huggingface/datasets/issues/6226/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6226/timeline | Add push_to_hub with multiple configs docs | https://api.github.com/repos/huggingface/datasets/issues/6226/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-09-08T12:20:51Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/6226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6226",
"merged_at": "2023-09-08T12:20:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Z3arq | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6226/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6226 | https://github.com/huggingface/datasets/pull/6226 | true |
1,887,054,320 | https://api.github.com/repos/huggingface/datasets/issues/6225/labels{/name} | The [tutorial](https://huggingface.co/docs/datasets/main/en/object_detection) mentions the necessity of conversion the input image from BGR to RGB
> albumentations expects the image to be in BGR format, not RGB, so you’ll have to convert the image before applying the transform.
[Link to tutorial](https://github.c... | 2023-09-08T17:52:18Z | 6,225 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-09-08T06:49:19Z | https://api.github.com/repos/huggingface/datasets/issues/6225/comments | null | https://api.github.com/repos/huggingface/datasets/issues/6225/timeline | Conversion from RGB to BGR in Object Detection tutorial | https://api.github.com/repos/huggingface/datasets/issues/6225/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/33297401?v=4",
"events_url": "https://api.github.com/users/samokhinv/events{/privacy}",
"followers_url": "https://api.github.com/users/samokhinv/followers",
"following_url": "https://api.github.com/users/samokhinv/following{/other_user}",
"gists_url": "... | [] | null | completed | NONE | 2023-09-08T17:52:17Z | null | I_kwDODunzps5weinw | [
"Good catch!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6225/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/6225 | https://github.com/huggingface/datasets/issues/6225 | false |