id (int64) | labels_url (string) | body (string) | updated_at (string) | number (int64) | milestone (dict) | repository_url (string) | draft (bool) | labels (list) | created_at (string) | comments_url (string) | assignee (dict) | timeline_url (string) | title (string) | events_url (string) | active_lock_reason (null) | user (dict) | assignees (list) | performed_via_github_app (null) | state_reason (string) | author_association (string) | closed_at (string) | pull_request (dict) | node_id (string) | comments (list) | reactions (dict) | state (string) | locked (bool) | url (string) | html_url (string) | is_pull_request (bool)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,567,601,264 | https://api.github.com/repos/huggingface/datasets/issues/5497/labels{/name} | Using `use_auth_token=True` is not needed anymore. If a user is logged in, the token will be automatically retrieved. Also include a mention for gated repos
See https://github.com/huggingface/huggingface_hub/pull/1064 | 2023-02-02T11:26:08Z | 5,497 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-02-02T08:56:15Z | https://api.github.com/repos/huggingface/datasets/issues/5497/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5497/timeline | Improved error message for gated/private repos | https://api.github.com/repos/huggingface/datasets/issues/5497/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_ur... | [] | null | null | MEMBER | 2023-02-02T11:17:15Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5497.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5497",
"merged_at": "2023-02-02T11:17:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5497.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5JFhvc | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5497/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5497 | https://github.com/huggingface/datasets/pull/5497 | true |
1,567,301,765 | https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name} | ### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average... | 2023-07-21T14:24:32Z | 5,496 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-02-02T04:30:22Z | https://api.github.com/repos/huggingface/datasets/issues/5496/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5496/timeline | Add a `reduce` method | https://api.github.com/repos/huggingface/datasets/issues/5496/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4",
"events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangir-azerbayev/followers",
"following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_... | [] | null | completed | NONE | 2023-07-21T14:24:32Z | null | I_kwDODunzps5dayCF | [
"Hi! Sure, feel free to open a PR, so we can see the API you have in mind.",
"I would like to give it a go! #self-assign",
"Closing as `Dataset.map` can be used instead (see https://github.com/huggingface/datasets/pull/5533#issuecomment-1440571658 and https://github.com/huggingface/datasets/pull/5533#issuecomme... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5496 | https://github.com/huggingface/datasets/issues/5496 | false |
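The requested `reduce` idiom can be illustrated with plain Python: folding an accumulator over a dataset's rows is exactly what `functools.reduce` does for a list of examples. The rows and column name below are invented for illustration; this is not the `datasets` API.

```python
from functools import reduce

# Toy stand-in for a Dataset: a list of example dicts (hypothetical data).
rows = [{"text": "hello"}, {"text": "hi"}, {"text": "hey there"}]

# What a hypothetical Dataset.reduce(fn, initializer) could compute:
# fold an accumulator over every example, e.g. total/average text length.
total_len = reduce(lambda acc, row: acc + len(row["text"]), rows, 0)
avg_len = total_len / len(rows)
```

As the closing comment notes, the same aggregation can be expressed with `Dataset.map`, which is why the issue was closed as completed.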
1,566,803,452 | https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name} | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset, even if they aren't included in the `columns` argument. This is problematic with datetime UTC columns because they do not support zero-copy conversion. If I don't have UTC information in my datetime column... | 2023-02-08T14:33:19Z | 5,495 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | 2023-02-01T20:47:33Z | https://api.github.com/repos/huggingface/datasets/issues/5495/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5495/timeline | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | https://api.github.com/repos/huggingface/datasets/issues/5495/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https:/... | [] | null | completed | CONTRIBUTOR | 2023-02-08T14:33:19Z | null | I_kwDODunzps5dY4X8 | [
"Hi! This is indeed a bug in our zero-copy logic.\r\n\r\nTo fix it, instead of the line:\r\nhttps://github.com/huggingface/datasets/blob/7cfac43b980ab9e4a69c2328f085770996323005/src/datasets/features/features.py#L702\r\n\r\nwe should have:\r\n```python\r\nreturn pa.types.is_primitive(pa_type) and not (pa.types.is_b... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5495 | https://github.com/huggingface/datasets/issues/5495 | false |
1,566,655,348 | https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name} | Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too, but it requires a specific version of ffmpeg that is not easily installed on all Linux versions, but there is a cust... | 2023-03-02T16:08:17Z | 5,494 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | 2023-02-01T19:07:50Z | https://api.github.com/repos/huggingface/datasets/issues/5494/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5494/timeline | Update audio installation doc page | https://api.github.com/repos/huggingface/datasets/issues/5494/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | [] | null | completed | CONTRIBUTOR | 2023-03-02T16:08:17Z | null | I_kwDODunzps5dYUN0 | [
"Totally agree, the docs should be in sync with our code.\r\n\r\nIndeed to avoid confusing users, I think we should have updated the docs at the same time as this PR:\r\n- #5167",
"@albertvillanova yeah sure I should have, but I forgot back then, sorry for that 😶",
"No, @polinaeterna, nothing to be sorry about... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5494 | https://github.com/huggingface/datasets/issues/5494 | false |
1,566,637,806 | https://api.github.com/repos/huggingface/datasets/issues/5493/labels{/name} | null | 2023-02-08T15:10:46Z | 5,493 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-02-01T18:57:48Z | https://api.github.com/repos/huggingface/datasets/issues/5493/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5493/timeline | Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring | https://api.github.com/repos/huggingface/datasets/issues/5493/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | [] | null | null | CONTRIBUTOR | 2023-02-08T15:03:50Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5493",
"merged_at": "2023-02-08T15:03:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5JCSAZ | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5493). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5493/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5493 | https://github.com/huggingface/datasets/pull/5493 | true |
1,566,604,216 | https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name} | Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name
cc @nateraw
It should be possible to tweak the use of `huggingface_hub` in `pus... | 2023-10-16T13:30:48Z | 5,492 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | 2023-02-01T18:32:14Z | https://api.github.com/repos/huggingface/datasets/issues/5492/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https:... | https://api.github.com/repos/huggingface/datasets/issues/5492/timeline | Push_to_hub in a pull request | https://api.github.com/repos/huggingface/datasets/issues/5492/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists... | null | completed | MEMBER | 2023-10-16T13:30:48Z | null | I_kwDODunzps5dYHu4 | [
"Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ",
"I would like to be assigned to this issue, @nateraw . #self-assign"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5492 | https://github.com/huggingface/datasets/issues/5492 | false |
1,566,235,012 | https://api.github.com/repos/huggingface/datasets/issues/5491/labels{/name} | null | 2023-02-02T07:42:28Z | 5,491 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-02-01T14:39:39Z | https://api.github.com/repos/huggingface/datasets/issues/5491/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5491/timeline | [MINOR] Typo | https://api.github.com/repos/huggingface/datasets/issues/5491/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://ap... | [] | null | null | CONTRIBUTOR | 2023-02-02T07:35:14Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5491",
"merged_at": "2023-02-02T07:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5JA9OD | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5491/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5491 | https://github.com/huggingface/datasets/pull/5491 | true |
1,565,842,327 | https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name} | As pointed out by @merveenoyan, the default behavior of `Dataset.to_csv` adds the index as an additional, unnamed column.
This PR changes the default behavior, so that now the index column is not written.
To add the index column, now you need to pass `index=True` and also `index_label=<name of the index column>` t... | 2023-02-09T09:29:08Z | 5,490 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-02-01T10:20:55Z | https://api.github.com/repos/huggingface/datasets/issues/5490/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5490/timeline | Do not add index column by default when exporting to CSV | https://api.github.com/repos/huggingface/datasets/issues/5490/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-02-09T09:22:23Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5490.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5490",
"merged_at": "2023-02-09T09:22:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5490.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5I_nz- | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5490 | https://github.com/huggingface/datasets/pull/5490 | true |
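Since `Dataset.to_csv` forwards keyword arguments to pandas, the behavior change described in this PR can be previewed with pandas directly (toy data, not the `datasets` API itself):

```python
import io
import pandas as pd

df = pd.DataFrame({"text": ["hello", "world"]})

# New default: no unnamed index column is written.
buf = io.StringIO()
df.to_csv(buf, index=False)
header_without_index = buf.getvalue().splitlines()[0]

# Opting back in requires index=True plus an explicit index_label.
buf = io.StringIO()
df.to_csv(buf, index=True, index_label="idx")
header_with_index = buf.getvalue().splitlines()[0]
```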
1,565,761,705 | https://api.github.com/repos/huggingface/datasets/issues/5489/labels{/name} | Pin `dill` lower version compatible with `datasets`.
Related to:
- #5487
- #288
Note that the required `dill._dill` module was introduced in dill-0.2.8; however, we have heuristically tested that datasets can only be installed with dill>=0.3.0 (otherwise pip hangs indefinitely while preparing metadata for multip... | 2023-02-02T07:48:09Z | 5,489 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-02-01T09:33:42Z | https://api.github.com/repos/huggingface/datasets/issues/5489/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5489/timeline | Pin dill lower version | https://api.github.com/repos/huggingface/datasets/issues/5489/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-02-02T07:40:43Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5489.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5489",
"merged_at": "2023-02-02T07:40:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5489.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5I_WPH | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5489/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5489 | https://github.com/huggingface/datasets/pull/5489 | true |
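The resulting pin can be sketched as a `setup.py` fragment; the exact specifier below is an assumption based on the PR description (only the lower bound is what this PR adds), so treat the PR diff as the source of truth:

```python
# setup.py (fragment) — hypothetical sketch; the lower bound is what this PR
# adds, since older dill releases make pip hang while resolving metadata.
REQUIRED_PKGS = [
    "dill>=0.3.0",
]
```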
1,565,025,262 | https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name} | ### Describe the bug
When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:
```python
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
~/.l... | 2023-03-02T16:25:14Z | 5,488 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-31T21:25:33Z | https://api.github.com/repos/huggingface/datasets/issues/5488/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5488/timeline | Error loading MP3 files from CommonVoice | https://api.github.com/repos/huggingface/datasets/issues/5488/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4",
"events_url": "https://api.github.com/users/kradonneoh/events{/privacy}",
"followers_url": "https://api.github.com/users/kradonneoh/followers",
"following_url": "https://api.github.com/users/kradonneoh/following{/other_user}",
"gists_url... | [] | null | completed | NONE | 2023-03-02T16:25:13Z | null | I_kwDODunzps5dSGPu | [
"Hi @kradonneoh, thanks for reporting.\r\n\r\nPlease note that to work with audio datasets (and specifically with MP3 files) we have detailed installation instructions in our docs: https://huggingface.co/docs/datasets/installation#audio\r\n- one of the requirements is torchaudio<0.12.0\r\n\r\nLet us know if the pro... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5488 | https://github.com/huggingface/datasets/issues/5488 | false |
1,564,480,121 | https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name} | ### Describe the bug
I installed the `datasets` package and when I try to `import` it, I get the following error:
```
Traceback (most recent call last):
File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
import datasets
File "/Users/avivbrokman/... | 2023-02-24T16:18:36Z | 5,487 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-31T15:01:08Z | https://api.github.com/repos/huggingface/datasets/issues/5487/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5487/timeline | Incorrect filepath for dill module | https://api.github.com/repos/huggingface/datasets/issues/5487/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4",
"events_url": "https://api.github.com/users/avivbrokman/events{/privacy}",
"followers_url": "https://api.github.com/users/avivbrokman/followers",
"following_url": "https://api.github.com/users/avivbrokman/following{/other_user}",
"gists_u... | [] | null | completed | NONE | 2023-02-24T16:18:36Z | null | I_kwDODunzps5dQBJ5 | [
"Hi! The correct path is still `dill._dill.XXXX` in the latest release. What do you get when you run `python -c \"import dill; print(dill.__version__)\"` in your environment?",
"`0.3.6` I feel like that's bad news, because it's probably not the issue.\r\n\r\nMy mistake, about the wrong path guess. I think I did... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5487 | https://github.com/huggingface/datasets/issues/5487 | false |
1,564,059,749 | https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name} | I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` ... | 2023-01-31T14:50:18Z | 5,486 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-31T10:39:53Z | https://api.github.com/repos/huggingface/datasets/issues/5486/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5486/timeline | Adding `sep` to TextConfig | https://api.github.com/repos/huggingface/datasets/issues/5486/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4",
"events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}",
"followers_url": "https://api.github.com/users/omar-araboghli/followers",
"following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}",
... | [] | null | null | NONE | null | null | I_kwDODunzps5dOahl | [
"Hi @omar-araboghli, thanks for your proposal.\r\n\r\nHave you tried to use \"csv\" loader instead of \"text\"? That already has a `sep` argument.",
"Hi @albertvillanova, thanks for the quick response!\r\n\r\nIndeed, I have been trying to use `csv` instead of `text`. However I am still not able to define range of... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5486 | https://github.com/huggingface/datasets/issues/5486 | false |
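The behavior requested in this issue — split the file into paragraphs, then split each line into columns — can be sketched with the standard library. The CoNLL-style snippet, separator, and column names below are invented for illustration; this is not what `TextConfig` currently does:

```python
def parse_conll_like(text: str, sep: str = " "):
    # Split on blank lines (what sample_by="paragraph" yields), then split
    # each remaining line into columns on `sep`.
    examples = []
    for paragraph in text.strip().split("\n\n"):
        rows = [line.split(sep) for line in paragraph.splitlines()]
        examples.append({
            "tokens": [row[0] for row in rows],
            "ner_tags": [row[1] for row in rows],
        })
    return examples

sample = "EU B-ORG\nrejects O\n\nGerman B-MISC\ncall O"
parsed = parse_conll_like(sample)
```

As the maintainer's comment suggests, the "csv" loader's existing `sep` argument covers part of this, but not the paragraph-level grouping shown here.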
1,563,002,829 | https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name} | Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new d... | 2023-02-01T18:15:38Z | 5,485 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-30T18:43:04Z | https://api.github.com/repos/huggingface/datasets/issues/5485/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5485/timeline | Add section in tutorial for IterableDataset | https://api.github.com/repos/huggingface/datasets/issues/5485/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "htt... | [] | null | null | MEMBER | 2023-02-01T18:08:46Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"merged_at": "2023-02-01T18:08:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5I2ER2 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5485 | https://github.com/huggingface/datasets/pull/5485 | true |
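The core idea the new tutorial section covers — an `IterableDataset` yields examples lazily instead of materializing them up front — can be illustrated with a plain generator (toy example, not the `datasets` API):

```python
from itertools import islice

def stream_examples(n):
    # Examples are produced one at a time, on demand, like an IterableDataset;
    # nothing is materialized up front even for a huge n.
    for i in range(n):
        yield {"id": i, "text": f"example {i}"}

# Only the examples actually consumed are ever created.
first_two = list(islice(stream_examples(10**12), 2))
```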
1,562,877,070 | https://api.github.com/repos/huggingface/datasets/issues/5484/labels{/name} | This PR will fix the issue mentioned in #5461. Here is a brief overview,
## Bug:
Discrepancy between the depth map of the `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and the actual depth map. Depth values somehow got **discretized/clipped**, resulting in depth maps that are diffe... | 2023-09-29T06:43:11Z | 5,484 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-30T17:37:08Z | https://api.github.com/repos/huggingface/datasets/issues/5484/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5484/timeline | Update docs for `nyu_depth_v2` dataset | https://api.github.com/repos/huggingface/datasets/issues/5484/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https:... | [] | null | null | CONTRIBUTOR | 2023-02-05T14:15:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5484.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5484",
"merged_at": "2023-02-05T14:15:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5484.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5I1oaq | [
"I think I need to create another PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets for hosting the images there?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the update @awsaf49 !",
"> Thanks a lot for the updates!\r\n> ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5484/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5484 | https://github.com/huggingface/datasets/pull/5484 | true |
1,560,894,690 | https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name} | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with python 3.10, pip installed datasets and:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.pus... | 2023-01-29T08:09:49Z | 5,483 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-28T15:18:26Z | https://api.github.com/repos/huggingface/datasets/issues/5483/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5483/timeline | Unable to upload dataset | https://api.github.com/repos/huggingface/datasets/issues/5483/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"g... | [] | null | completed | NONE | 2023-01-29T08:09:49Z | null | I_kwDODunzps5dCVzi | [
"Seems to work now, perhaps it was something internal with our university's network."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5483 | https://github.com/huggingface/datasets/issues/5483 | false |
1,560,853,137 | https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name} | The idea would be to allow this:
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type)
This can be implemented by storing and reading th... | 2023-02-12T15:57:02Z | 5,482 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": fals... | 2023-01-28T13:12:31Z | https://api.github.com/repos/huggingface/datasets/issues/5482/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "h... | https://api.github.com/repos/huggingface/datasets/issues/5482/timeline | Reload features from Parquet metadata | https://api.github.com/repos/huggingface/datasets/issues/5482/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"... | null | completed | MEMBER | 2023-02-12T15:57:02Z | null | I_kwDODunzps5dCLqR | [
"I'd be happy to have a look, if nobody else has started working on this yet @lhoestq. \r\n\r\nIt seems to me that for the `arrow` format features are currently attached as metadata [in `datasets.arrow_writer`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/arrow_... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5482 | https://github.com/huggingface/datasets/issues/5482 | false |
1,560,468,195 | https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name} | The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
To be used to train models. It would load an IterableDataset from the cached Arrow files.
Cc @stas00
Edit : from the discussions we may load from cache when streaming=True | 2023-06-26T10:48:53Z | 5,481 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": fals... | 2023-01-27T21:43:51Z | https://api.github.com/repos/huggingface/datasets/issues/5481/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5481/timeline | Load a cached dataset as iterable | https://api.github.com/repos/huggingface/datasets/issues/5481/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | null | I_kwDODunzps5dAtrj | [
"Can I work on this issue? I am pretty new to this.",
"Hi ! Sure :) you can comment `#self-assign` to assign yourself to this issue.\r\n\r\nI can give you some pointers to get started:\r\n\r\n`load_dataset` works roughly this way:\r\n1. it instantiate a dataset builder using `load_dataset_builder()`\r\n2. the bui... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5481 | https://github.com/huggingface/datasets/issues/5481 | false |
1,560,364,866 | https://api.github.com/repos/huggingface/datasets/issues/5480/labels{/name} | Close #5474 and #5468. | 2023-02-13T11:10:13Z | 5,480 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-27T20:06:16Z | https://api.github.com/repos/huggingface/datasets/issues/5480/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5480/timeline | Select columns of Dataset or DatasetDict | https://api.github.com/repos/huggingface/datasets/issues/5480/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://ap... | [] | null | null | CONTRIBUTOR | 2023-02-13T09:59:35Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5480",
"merged_at": "2023-02-13T09:59:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5ItY2y | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5480/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5480 | https://github.com/huggingface/datasets/pull/5480 | true |
1,560,357,590 | https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name} | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libndfile installed on both computers, what cou... | 2023-01-29T05:23:14Z | 5,479 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-27T20:01:22Z | https://api.github.com/repos/huggingface/datasets/issues/5479/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5479/timeline | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | https://api.github.com/repos/huggingface/datasets/issues/5479/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://... | [] | null | completed | NONE | 2023-01-29T05:23:14Z | null | I_kwDODunzps5dASrW | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5479 | https://github.com/huggingface/datasets/issues/5479 | false |
1,560,357,583 | https://api.github.com/repos/huggingface/datasets/issues/5478/labels{/name} | From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset. | 2023-01-30T19:22:21Z | 5,478 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-27T20:01:22Z | https://api.github.com/repos/huggingface/datasets/issues/5478/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5478/timeline | Tip for recomputing metadata | https://api.github.com/repos/huggingface/datasets/issues/5478/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "htt... | [] | null | null | MEMBER | 2023-01-30T19:15:26Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5478.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5478",
"merged_at": "2023-01-30T19:15:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5478.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5ItXQG | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5478/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5478 | https://github.com/huggingface/datasets/pull/5478 | true |
1,559,909,892 | https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name} | Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476 | 2024-01-26T14:50:45Z | 5,477 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-27T15:01:55Z | https://api.github.com/repos/huggingface/datasets/issues/5477/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5477/timeline | Unpin sqlalchemy once issue is fixed | https://api.github.com/repos/huggingface/datasets/issues/5477/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | completed | MEMBER | 2024-01-26T14:50:45Z | null | I_kwDODunzps5c-lYE | [
"@albertvillanova It looks like that issue has been fixed so I made a PR to unpin sqlalchemy! ",
"The source issue:\r\n- https://github.com/pandas-dev/pandas/issues/40686\r\n\r\nhas been fixed:\r\n- https://github.com/pandas-dev/pandas/pull/48576\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5477 | https://github.com/huggingface/datasets/issues/5477 | false |
1,559,594,684 | https://api.github.com/repos/huggingface/datasets/issues/5476/labels{/name} | since sqlalchemy update to 2.0.0 the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514
the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015 | 2023-01-27T12:06:51Z | 5,476 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-27T11:26:38Z | https://api.github.com/repos/huggingface/datasets/issues/5476/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5476/timeline | Pin sqlalchemy | https://api.github.com/repos/huggingface/datasets/issues/5476/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-01-27T11:57:48Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5476",
"merged_at": "2023-01-27T11:57:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5IqwC_ | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5476/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5476 | https://github.com/huggingface/datasets/pull/5476 | true |
1,559,030,149 | https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name} | ### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that exp... | 2023-01-30T16:17:11Z | 5,475 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-27T01:32:25Z | https://api.github.com/repos/huggingface/datasets/issues/5475/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5475/timeline | Dataset scan time is much slower than using native arrow | https://api.github.com/repos/huggingface/datasets/issues/5475/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_us... | [] | null | completed | CONTRIBUTOR | 2023-01-30T16:17:11Z | null | I_kwDODunzps5c7OmF | [
"Hi ! In your code you only iterate on the Arrow buffers - you don't actually load the data as python objects. For a fair comparison, you can modify your code using:\r\n```diff\r\n- for _ in range(0, len(table), bsz):\r\n- _ = {k:table[k][_ : _ + bsz] for k in cols}\r\n+ for _ in range(0, len(table)... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5475 | https://github.com/huggingface/datasets/issues/5475 | false |
1,558,827,155 | https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name} | ### Feature request
There is no operation to select a subset of columns of original dataset. Expected API follows.
```python
a = Dataset.from_dict({
'int': [0, 1, 2]
'char': ['a', 'b', 'c'],
'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # std... | 2023-02-13T09:59:37Z | 5,474 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
... | 2023-01-26T21:47:53Z | https://api.github.com/repos/huggingface/datasets/issues/5474/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5474/timeline | Column project operation on `datasets.Dataset` | https://api.github.com/repos/huggingface/datasets/issues/5474/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://ap... | [] | null | completed | CONTRIBUTOR | 2023-02-13T09:59:37Z | null | I_kwDODunzps5c6dCT | [
"Hi ! This would be a nice addition indeed :) This sounds like a duplicate of https://github.com/huggingface/datasets/issues/5468\r\n\r\n> Not sure. Some of my PRs are still open and some do not have any discussions.\r\n\r\nSorry to hear that, feel free to ping me on those PRs"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5474 | https://github.com/huggingface/datasets/issues/5474 | false |
1,558,668,197 | https://api.github.com/repos/huggingface/datasets/issues/5473/labels{/name} | null | 2023-01-26T19:47:34Z | 5,473 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T19:34:44Z | https://api.github.com/repos/huggingface/datasets/issues/5473/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5473/timeline | Set dev version | https://api.github.com/repos/huggingface/datasets/issues/5473/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-01-26T19:38:30Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5473",
"merged_at": "2023-01-26T19:38:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Inm9h | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5473/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5473 | https://github.com/huggingface/datasets/pull/5473 | true |
1,558,662,251 | https://api.github.com/repos/huggingface/datasets/issues/5472/labels{/name} | null | 2023-01-26T19:40:44Z | 5,472 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T19:29:42Z | https://api.github.com/repos/huggingface/datasets/issues/5472/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5472/timeline | Release: 2.9.0 | https://api.github.com/repos/huggingface/datasets/issues/5472/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-01-26T19:33:00Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5472",
"merged_at": "2023-01-26T19:33:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Inlp8 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5472/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5472 | https://github.com/huggingface/datasets/pull/5472 | true |
1,558,557,545 | https://api.github.com/repos/huggingface/datasets/issues/5471/labels{/name} | `to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are draw in order to estimate the shapes when creating the tensorflow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same ac... | 2023-01-27T18:16:45Z | 5,471 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T18:09:40Z | https://api.github.com/repos/huggingface/datasets/issues/5471/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5471/timeline | Add num_test_batches option | https://api.github.com/repos/huggingface/datasets/issues/5471/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_u... | [] | null | null | CONTRIBUTOR | 2023-01-27T18:08:36Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5471",
"merged_at": "2023-01-27T18:08:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5InPA7 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought this issue was resolved in my parallel `to_tf_dataset` PR! I changed the default `num_test_batches` in `_get_output_signature` to 20 and used a test batch size of 1 to maximize variance to detect shorter samples. I think it... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5471/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5471 | https://github.com/huggingface/datasets/pull/5471 | true |
1,558,542,611 | https://api.github.com/repos/huggingface/datasets/issues/5470/labels{/name} | Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one. | 2023-01-27T16:27:00Z | 5,470 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T17:57:51Z | https://api.github.com/repos/huggingface/datasets/issues/5470/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5470/timeline | Update dataset card creation | https://api.github.com/repos/huggingface/datasets/issues/5470/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "htt... | [] | null | null | MEMBER | 2023-01-27T16:20:10Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5470",
"merged_at": "2023-01-27T16:20:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5InLw9 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to your PR - feel free to merge :)",
"Haha thanks, you read my mind :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n##... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5470/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5470 | https://github.com/huggingface/datasets/pull/5470 | true |
1,558,346,906 | https://api.github.com/repos/huggingface/datasets/issues/5469/labels{/name} | The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it? | 2023-01-26T17:37:51Z | 5,469 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T15:40:56Z | https://api.github.com/repos/huggingface/datasets/issues/5469/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5469/timeline | Remove deprecated `shard_size` arg from `.push_to_hub()` | https://api.github.com/repos/huggingface/datasets/issues/5469/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | [] | null | null | CONTRIBUTOR | 2023-01-26T17:30:59Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5469.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5469",
"merged_at": "2023-01-26T17:30:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5469.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Imhk2 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5469/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5469 | https://github.com/huggingface/datasets/pull/5469 | true |
1,558,066,625 | https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name} | ### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(column... | 2023-02-13T09:59:38Z | 5,468 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | 2023-01-26T12:28:09Z | https://api.github.com/repos/huggingface/datasets/issues/5468/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5468/timeline | Allow opposite of remove_columns on Dataset and DatasetDict | https://api.github.com/repos/huggingface/datasets/issues/5468/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https... | [] | null | completed | NONE | 2023-02-13T09:59:38Z | null | I_kwDODunzps5c3jXB | [
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also wa... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5468 | https://github.com/huggingface/datasets/issues/5468 | false |
1,557,898,273 | https://api.github.com/repos/huggingface/datasets/issues/5467/labels{/name} | The [conda forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), we should recommend using the [Hugging Face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining
```
conda install -c huggingface datasets
``` | 2023-09-24T10:06:59Z | 5,467 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T10:03:01Z | https://api.github.com/repos/huggingface/datasets/issues/5467/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5467/timeline | Fix conda command in readme | https://api.github.com/repos/huggingface/datasets/issues/5467/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-01-26T18:29:37Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5467",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5467"
} | PR_kwDODunzps5IlAlk | [
"ah didn't read well - it's all good",
"or maybe it isn't ? `-c huggingface -c conda-forge` installs from HF or from conda-forge ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5467/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5467 | https://github.com/huggingface/datasets/pull/5467 | true |
1,557,584,845 | https://api.github.com/repos/huggingface/datasets/issues/5466/labels{/name} | Pathlib will convert "//" to "/" which causes retry errors when downloading from cloud storage | 2023-01-26T17:08:57Z | 5,466 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-26T03:25:45Z | https://api.github.com/repos/huggingface/datasets/issues/5466/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5466/timeline | remove pathlib.Path with URIs | https://api.github.com/repos/huggingface/datasets/issues/5466/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4",
"events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}",
"followers_url": "https://api.github.com/users/jonny-cyberhaven/followers",
"following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_us... | [] | null | null | CONTRIBUTOR | 2023-01-26T16:59:11Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5466.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5466",
"merged_at": "2023-01-26T16:59:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5466.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Ij-z1 | [
"Thanks !\r\n`os.path.join` will use a backslash `\\` on windows which will also fail. You can use this instead in `load_from_disk`:\r\n```python\r\nfrom .filesystems import is_remote_filesystem\r\n\r\nis_local = not is_remote_filesystem(fs)\r\npath_join = os.path.join if is_local else posixpath.join\r\n```",
"Th... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5466/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5466 | https://github.com/huggingface/datasets/pull/5466 | true |
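The `pathlib` normalization this PR works around, and the `posixpath.join` fix suggested in the review comment, can be illustrated in a few lines (a minimal sketch, not the library's actual code):

```python
from pathlib import PurePosixPath
import posixpath

url = "s3://bucket/data"

# pathlib collapses the repeated slash, mangling the URI scheme:
mangled = str(PurePosixPath(url))
print(mangled)  # s3:/bucket/data

# posixpath.join keeps the scheme intact for remote filesystems:
print(posixpath.join(url, "dataset_info.json"))  # s3://bucket/data/dataset_info.json
```

For local paths, `os.path.join` remains preferable so that Windows backslash separators keep working, as noted in the review comment.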
1,557,510,618 | https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name} | ### Describe the bug
The structure of my dataset folder called "my_dataset" is: a `data` folder and a `metadata.csv` file.
The `data` folder consists of all the mp3 files, and `metadata.csv` consists of file locations like 'data/...mp3' and transcriptions. There are 400+ mp3 files and corresponding transcriptions for my dataset.
When I run the follo... | 2023-01-26T08:48:45Z | 5,465 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-26T01:45:45Z | https://api.github.com/repos/huggingface/datasets/issues/5465/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5465/timeline | audiofolder creates empty dataset even though the dataset passed in follows the correct structure | https://api.github.com/repos/huggingface/datasets/issues/5465/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4",
"events_url": "https://api.github.com/users/jcho19/events{/privacy}",
"followers_url": "https://api.github.com/users/jcho19/followers",
"following_url": "https://api.github.com/users/jcho19/following{/other_user}",
"gists_url": "https://... | [] | null | completed | NONE | 2023-01-26T08:48:45Z | null | I_kwDODunzps5c1bna | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5465 | https://github.com/huggingface/datasets/issues/5465 | false |
1,557,462,104 | https://api.github.com/repos/huggingface/datasets/issues/5464/labels{/name} | ### Describe the bug
The checksum of the file has likely changed on the remote host.
### Steps to reproduce the bug
`dataset = nlp.load_dataset("hendrycks_test", "anatomy")`
### Expected behavior
no error thrown
### Environment info
- `datasets` version: 2.2.1
- Platform: macOS-13.1-arm64-arm-64bit
- Pyt... | 2023-01-27T05:44:31Z | 5,464 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-26T00:43:23Z | https://api.github.com/repos/huggingface/datasets/issues/5464/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5464/timeline | NonMatchingChecksumError for hendrycks_test | https://api.github.com/repos/huggingface/datasets/issues/5464/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "http... | [] | null | completed | NONE | 2023-01-26T07:41:58Z | null | I_kwDODunzps5c1PxY | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```",
"Oops, missed that I needed to upgrade. Thanks!"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5464/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5464 | https://github.com/huggingface/datasets/issues/5464 | false |
1,557,021,041 | https://api.github.com/repos/huggingface/datasets/issues/5463/labels{/name} | null | 2023-01-25T18:33:35Z | 5,463 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-25T17:24:01Z | https://api.github.com/repos/huggingface/datasets/issues/5463/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5463/timeline | Imagefolder docs: mention support of CSV and ZIP | https://api.github.com/repos/huggingface/datasets/issues/5463/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-01-25T18:26:15Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5463.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5463",
"merged_at": "2023-01-25T18:26:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5463.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5IiGWb | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5463/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5463 | https://github.com/huggingface/datasets/pull/5463 | true |
1,556,572,144 | https://api.github.com/repos/huggingface/datasets/issues/5462/labels{/name} | Allow to concatenate on axis 1 two tables made of misaligned blocks.
For example if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks or 2 rows each.
To do that, I slice the row blocks to re-align the blocks.
Fix https://github.com/huggingface/datasets/issues/5413 | 2023-01-26T09:37:00Z | 5,462 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-25T12:33:22Z | https://api.github.com/repos/huggingface/datasets/issues/5462/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5462/timeline | Concatenate on axis=1 with misaligned blocks | https://api.github.com/repos/huggingface/datasets/issues/5462/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-01-26T09:27:19Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"merged_at": "2023-01-26T09:27:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Iglqu | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5462/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5462 | https://github.com/huggingface/datasets/pull/5462 | true |
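The re-alignment described in the PR body can be sketched as computing the union of both tables' cumulative row offsets and slicing the row blocks at those cut points (`aligned_block_lengths` is a hypothetical helper, not the actual implementation):

```python
from itertools import accumulate

def aligned_block_lengths(lengths_a, lengths_b):
    """Finest common partition of two row-block layouts with equal row totals."""
    # Union of the cumulative row offsets of both tables...
    cuts = sorted(set(accumulate(lengths_a)) | set(accumulate(lengths_b)))
    # ...converted back into segment lengths, so blocks can be zipped 1:1.
    return [end - start for start, end in zip([0] + cuts, cuts)]

# 2 blocks of 3 rows vs. 3 blocks of 2 rows -> slice both into [2, 1, 1, 2]
print(aligned_block_lengths([3, 3], [2, 2, 2]))  # [2, 1, 1, 2]
```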
1,555,532,719 | https://api.github.com/repos/huggingface/datasets/issues/5461/labels{/name} | ### Describe the bug
I think there is a discrepancy between the depth maps of the `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and the actual depth maps. Depth values somehow got **discretized/clipped**, resulting in depth maps that differ from the actual ones. Here is a side-by-sid... | 2023-02-06T20:52:00Z | 5,461 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-24T19:15:46Z | https://api.github.com/repos/huggingface/datasets/issues/5461/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5461/timeline | Discrepancy in `nyu_depth_v2` dataset | https://api.github.com/repos/huggingface/datasets/issues/5461/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4",
"events_url": "https://api.github.com/users/awsaf49/events{/privacy}",
"followers_url": "https://api.github.com/users/awsaf49/followers",
"following_url": "https://api.github.com/users/awsaf49/following{/other_user}",
"gists_url": "https:... | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps5ct4uv | [
"Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/ny... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5461/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5461 | https://github.com/huggingface/datasets/issues/5461 | false |
1,555,387,532 | https://api.github.com/repos/huggingface/datasets/issues/5460/labels{/name} | null | 2023-01-25T16:11:10Z | 5,460 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-24T17:33:38Z | https://api.github.com/repos/huggingface/datasets/issues/5460/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5460/timeline | Document that removing all the columns returns an empty document and the num_row is lost | https://api.github.com/repos/huggingface/datasets/issues/5460/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "... | [] | null | null | CONTRIBUTOR | 2023-01-25T16:04:03Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5460.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5460",
"merged_at": "2023-01-25T16:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5460.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Icn9C | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5460/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5460 | https://github.com/huggingface/datasets/pull/5460 | true |
1,555,367,504 | https://api.github.com/repos/huggingface/datasets/issues/5459/labels{/name} | The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'`
This is a problem for our Hugging Face Hub, which requires the exact URL from the location header.
Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `re... | 2023-02-01T08:45:33Z | 5,459 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-24T17:18:59Z | https://api.github.com/repos/huggingface/datasets/issues/5459/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5459/timeline | Disable aiohttp requoting of redirection URL | https://api.github.com/repos/huggingface/datasets/issues/5459/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-31T08:37:54Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5459",
"merged_at": "2023-01-31T08:37:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Icjwe | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ",
"The lib `requests` does not perform that requote on redir... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5459/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5459 | https://github.com/huggingface/datasets/pull/5459 | true |
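The hazard behind this PR — unquote-then-requote is not byte-identical when `'` is treated as a safe character, which breaks signed URLs — can be shown with the standard library alone (aiohttp/yarl's actual requoting logic differs; this only illustrates the class of problem):

```python
from urllib.parse import quote, unquote

# A percent-encoded value as it might appear in a signed "location" header:
original = "abc%27def"

# Requoting that considers "'" safe does not round-trip:
requoted = quote(unquote(original), safe="'")
print(requoted)  # abc'def -- no longer byte-identical, so the signature check fails
```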
1,555,054,737 | https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name} | ### Describe the bug
When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported.
Did I miss this in the documentation?
### Steps to reproduce the bug
`load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")`
causes ValueError: Bad split:... | 2023-01-24T15:11:47Z | 5,458 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-24T14:08:17Z | https://api.github.com/repos/huggingface/datasets/issues/5458/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5458/timeline | slice split while streaming | https://api.github.com/repos/huggingface/datasets/issues/5458/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4",
"events_url": "https://api.github.com/users/SvenDS9/events{/privacy}",
"followers_url": "https://api.github.com/users/SvenDS9/followers",
"following_url": "https://api.github.com/users/SvenDS9/following{/other_user}",
"gists_url": "https... | [] | null | completed | NONE | 2023-01-24T15:11:47Z | null | I_kwDODunzps5csECR | [
"Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5458 | https://github.com/huggingface/datasets/issues/5458 | false |
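The `.skip`/`.take` rewrite suggested in the answer has the same semantics as plain iterator slicing; here is a stand-in sketch with stdlib iterators (not the `datasets` API itself):

```python
from itertools import islice

def skip_take(iterable, skip, take):
    # Same semantics as IterableDataset.skip(skip).take(take):
    # drop the first `skip` examples, then yield at most `take` more.
    return islice(iterable, skip, skip + take)

stream = iter(range(10))  # stands in for a streamed split
print(list(skip_take(stream, 2, 3)))  # [2, 3, 4]
```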
1,554,171,264 | https://api.github.com/repos/huggingface/datasets/issues/5457/labels{/name} | ### Describe the bug
I pre-built the dataset:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
and it can be used just fine.
now I wipe out `downloads/extracted` and it no longer works.
```
rm -r ~/.cache/huggingface... | 2023-01-24T18:14:10Z | 5,457 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-24T02:09:32Z | https://api.github.com/repos/huggingface/datasets/issues/5457/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5457/timeline | prebuilt dataset relies on `downloads/extracted` | https://api.github.com/repos/huggingface/datasets/issues/5457/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | null | null | CONTRIBUTOR | null | null | I_kwDODunzps5cosWA | [
"Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to e... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5457/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5457 | https://github.com/huggingface/datasets/issues/5457 | false |
1,553,905,148 | https://api.github.com/repos/huggingface/datasets/issues/5456/labels{/name} | As described in #5418
I also noticed that the `to_json` function supports multiple workers, whereas `to_parquet` does not. Is that not possible/not needed with Parquet, or just something that hasn't been implemented yet? | 2023-01-24T11:26:47Z | 5,456 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-23T22:05:38Z | https://api.github.com/repos/huggingface/datasets/issues/5456/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5456/timeline | feat: tqdm for `to_parquet` | https://api.github.com/repos/huggingface/datasets/issues/5456/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-01-24T11:17:12Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5456",
"merged_at": "2023-01-24T11:17:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5IXq92 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5456/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5456 | https://github.com/huggingface/datasets/pull/5456 | true |
1,553,040,080 | https://api.github.com/repos/huggingface/datasets/issues/5455/labels{/name} | Use the "shard generator approach with periodic progress updates" (used in `save_to_disk` and multi-proc `load_dataset`) in `Dataset.map` to enable having a single TQDM progress bar in the multi-proc mode.
Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issue... | 2023-02-13T20:23:34Z | 5,455 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-23T12:49:40Z | https://api.github.com/repos/huggingface/datasets/issues/5455/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5455/timeline | Single TQDM bar in multi-proc map | https://api.github.com/repos/huggingface/datasets/issues/5455/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | null | null | CONTRIBUTOR | 2023-02-13T20:16:38Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5455.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5455",
"merged_at": "2023-02-13T20:16:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5455.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5IUvAZ | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5455/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5455 | https://github.com/huggingface/datasets/pull/5455 | true |
1,552,890,419 | https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name} | It would be nice, when using `datasets` with a PyTorch DataLoader, to be able to resume training from a DataLoader state (e.g. to resume a training run that crashed)
What I have in mind (but lmk if you have other ideas or comments):
For map-style datasets, this requires to have a PyTorch Sampler state that can be sav... | 2024-03-19T02:50:44Z | 5,454 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | 2023-01-23T10:58:54Z | https://api.github.com/repos/huggingface/datasets/issues/5454/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5454/timeline | Save and resume the state of a DataLoader | https://api.github.com/repos/huggingface/datasets/issues/5454/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | null | I_kwDODunzps5cjzoz | [
"Something that'd be nice to have is \"manual update of state\". One of the learnings from training LLMs is that the ability to skip some batches whenever we notice a huge spike might be handy.",
"Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra fe... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5454 | https://github.com/huggingface/datasets/issues/5454 | false |
1,552,727,425 | https://api.github.com/repos/huggingface/datasets/issues/5453/labels{/name} | This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared:
- from: "."
- to: `output_path`
This PR also adds tests for extracting insecure TAR files.
Related to:
- #5441
- #5452
@stas00 please note this PR addresses just one of the issues you pointe... | 2023-01-24T01:34:20Z | 5,453 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-23T08:57:40Z | https://api.github.com/repos/huggingface/datasets/issues/5453/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5453/timeline | Fix base directory while extracting insecure TAR files | https://api.github.com/repos/huggingface/datasets/issues/5453/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-23T10:10:42Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5453.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5453",
"merged_at": "2023-01-23T10:10:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5453.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5ITraa | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5453/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5453 | https://github.com/huggingface/datasets/pull/5453 | true |
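The check this PR fixes — comparing each TAR member against `output_path` rather than "." — can be sketched as follows (a minimal illustration on POSIX paths, not the library's actual code):

```python
import os

def is_within_directory(directory, member_path):
    # A TAR member is safe to extract only if its resolved destination
    # stays under the extraction directory (the output path, not ".").
    directory = os.path.abspath(directory)
    target = os.path.abspath(os.path.join(directory, member_path))
    return os.path.commonpath([directory, target]) == directory

print(is_within_directory("/tmp/out", "data/train.txt"))    # True
print(is_within_directory("/tmp/out", "../../etc/passwd"))  # False
```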
1,552,655,939 | https://api.github.com/repos/huggingface/datasets/issues/5452/labels{/name} | The log messages do not match their if-condition. This PR swaps them.
Found while investigating:
- #5441
CC: @lhoestq | 2023-01-23T09:40:55Z | 5,452 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-23T07:53:38Z | https://api.github.com/repos/huggingface/datasets/issues/5452/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5452/timeline | Swap log messages for symbolic/hard links in tar extractor | https://api.github.com/repos/huggingface/datasets/issues/5452/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-23T08:31:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5452.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5452",
"merged_at": "2023-01-23T08:31:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5452.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5ITcA3 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5452/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5452 | https://github.com/huggingface/datasets/pull/5452 | true |
1,552,336,300 | https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name} | ### Describe the bug
I'm getting the following exception:
```
lib/python3.10/zipfile.py:1353 in _RealGetContents
1350   # self.start_dir: Position of start of central directory ... | 2023-05-23T10:35:48Z | 5,451 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-22T23:50:12Z | https://api.github.com/repos/huggingface/datasets/issues/5451/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5451/timeline | ImageFolder BadZipFile: Bad offset for central directory | https://api.github.com/repos/huggingface/datasets/issues/5451/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1524208?v=4",
"events_url": "https://api.github.com/users/hmartiro/events{/privacy}",
"followers_url": "https://api.github.com/users/hmartiro/followers",
"following_url": "https://api.github.com/users/hmartiro/following{/other_user}",
"gists_url": "http... | [] | null | completed | NONE | 2023-02-10T16:31:36Z | null | I_kwDODunzps5chsWs | [
"Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640",
"The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.",
"For others that find ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5451 | https://github.com/huggingface/datasets/issues/5451 | false |
1,551,109,365 | https://api.github.com/repos/huggingface/datasets/issues/5450/labels{/name} | ### Describe the bug
This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing)
Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data colla... | 2023-02-13T14:13:34Z | 5,450 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-20T16:08:37Z | https://api.github.com/repos/huggingface/datasets/issues/5450/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5450/timeline | to_tf_dataset with a TF collator causes bizarrely persistent slowdown | https://api.github.com/repos/huggingface/datasets/issues/5450/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"g... | [] | null | completed | MEMBER | 2023-02-13T14:13:34Z | null | I_kwDODunzps5cdAz1 | [
"wtf",
"Couldn't find what's causing this, this will need more investigation",
"A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: \r\n with the **HF** datasets section.
For example, if I have **50GB** on my **Onedrive*... | 2023-02-24T16:17:51Z | 5,442 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-19T23:12:08Z | https://api.github.com/repos/huggingface/datasets/issues/5442/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5442/timeline | OneDrive Integrations with HF Datasets | https://api.github.com/repos/huggingface/datasets/issues/5442/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_use... | [] | null | completed | NONE | 2023-02-24T16:17:51Z | null | I_kwDODunzps5cZGli | [
"Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://githu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5442 | https://github.com/huggingface/datasets/issues/5442 | false |
1,548,417,594 | https://api.github.com/repos/huggingface/datasets/issues/5441/labels{/name} | ok, every so often, I have been getting a strange failure on dataset install:
```
$ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Downloading and prep... | 2023-01-20T16:49:22Z | 5,441 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-19T02:17:21Z | https://api.github.com/repos/huggingface/datasets/issues/5441/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5441/timeline | resolving a weird tar extract issue | https://api.github.com/repos/huggingface/datasets/issues/5441/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | null | null | CONTRIBUTOR | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/5441.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5441",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5441.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5441"
} | PR_kwDODunzps5IFeCW | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5441/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5441 | https://github.com/huggingface/datasets/pull/5441 | true |
1,538,361,143 | https://api.github.com/repos/huggingface/datasets/issues/5440/labels{/name} | null | 2023-01-18T17:57:29Z | 5,440 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-18T17:04:27Z | https://api.github.com/repos/huggingface/datasets/issues/5440/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5440/timeline | Fix documentation about batch samplers | https://api.github.com/repos/huggingface/datasets/issues/5440/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "... | [] | null | null | CONTRIBUTOR | 2023-01-18T17:50:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5440.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5440",
"merged_at": "2023-01-18T17:50:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5440.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5HpRbF | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5440/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5440 | https://github.com/huggingface/datasets/pull/5440 | true |
1,537,973,564 | https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name} | ### Feature request
Please add the common voice 12_0 datasets. Apart from English, a significant amount of audio-data has been added to the other minor-language datasets.
### Motivation
The dataset link:
https://commonvoice.mozilla.org/en/datasets
| 2023-07-21T14:26:10Z | 5,439 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-18T13:07:05Z | https://api.github.com/repos/huggingface/datasets/issues/5439/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | https://api.github.com/repos/huggingface/datasets/issues/5439/timeline | [dataset request] Add Common Voice 12.0 | https://api.github.com/repos/huggingface/datasets/issues/5439/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4",
"events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}",
"followers_url": "https://api.github.com/users/MohammedRakib/followers",
"following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}",
"g... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_use... | null | completed | NONE | 2023-07-21T14:26:09Z | null | I_kwDODunzps5bq508 | [
"@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?",
"This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0"
] | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5439 | https://github.com/huggingface/datasets/issues/5439 | false |
1,537,489,730 | https://api.github.com/repos/huggingface/datasets/issues/5438/labels{/name} | This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/ | 2023-01-18T13:49:51Z | 5,438 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-18T06:53:15Z | https://api.github.com/repos/huggingface/datasets/issues/5438/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5438/timeline | Update actions/checkout in CD Conda release | https://api.github.com/repos/huggingface/datasets/issues/5438/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-18T13:42:49Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5438.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5438",
"merged_at": "2023-01-18T13:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5438.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5HmWA8 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5438/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5438 | https://github.com/huggingface/datasets/pull/5438 | true |
1,536,837,144 | https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name} | I am trying to create a dataset of about 9000 PNG images, 64x64 in size, all 4-channel (RGBA). When using load_dataset(), a dataset is created from only 2 images. I cannot understand what exactly interferes. | https://api.github.com/repos/huggingface/datasets/issues/5437/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4",
"events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}",
"followers_url": "https://api.github.com/users/WiNE-iNEFF/followers",
"following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}",
"gists_url"... | [] | null | completed | NONE | 2023-01-18T20:20:15Z | null | I_kwDODunzps5bmkYY | [
"Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n",
"> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5437 | https://github.com/huggingface/datasets/issues/5437 | false |
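For the RGBA question in the record above, a quick way to confirm what Pillow will see is to read the color type straight out of the PNG header (color type 6 means RGBA). A stdlib-only sketch; the helper name is ours:

```python
def png_color_type(data: bytes) -> int:
    """Return the PNG color type (0=gray, 2=RGB, 3=palette, 4=gray+alpha, 6=RGBA).

    Layout: 8-byte signature, then the IHDR chunk (4-byte length,
    4-byte b"IHDR" tag, 13 bytes of fields: width, height, bit depth,
    color type, ...). The color type is therefore the byte at
    offset 8 + 4 + 4 + 4 + 4 + 1 = 25 from the start of the file.
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    if data[12:16] != b"IHDR":
        raise ValueError("IHDR chunk not where expected")
    return data[25]
```

This reads the on-disk channel layout without decoding; it does not explain why only 2 images were picked up — that usually comes from the directory layout, which is why the maintainer asks for it above.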
1,536,633,173 | https://api.github.com/repos/huggingface/datasets/issues/5436/labels{/name} | Closes #5433, reverts #5432, and also:
* Uses [ghcr.io container images](https://cml.dev/doc/self-hosted-runners/#docker-images) for extra speed
* Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead... | 2023-01-18T09:05:49Z | 5,436 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-17T15:59:50Z | https://api.github.com/repos/huggingface/datasets/issues/5436/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5436/timeline | Revert container image pin in CI benchmarks | https://api.github.com/repos/huggingface/datasets/issues/5436/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/11387611?v=4",
"events_url": "https://api.github.com/users/0x2b3bfa0/events{/privacy}",
"followers_url": "https://api.github.com/users/0x2b3bfa0/followers",
"following_url": "https://api.github.com/users/0x2b3bfa0/following{/other_user}",
"gists_url": "... | [] | null | null | CONTRIBUTOR | 2023-01-18T06:29:06Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5436",
"merged_at": "2023-01-18T06:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Hjh4v | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5436/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5436 | https://github.com/huggingface/datasets/pull/5436 | true |
1,536,099,300 | https://api.github.com/repos/huggingface/datasets/issues/5435/labels{/name} | ### Describe the bug
In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states:
> Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples cou... | 2023-01-19T09:56:03Z | 5,435 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-17T10:04:16Z | https://api.github.com/repos/huggingface/datasets/issues/5435/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5435/timeline | Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage | https://api.github.com/repos/huggingface/datasets/issues/5435/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/80093591?v=4",
"events_url": "https://api.github.com/users/DanielYang59/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielYang59/followers",
"following_url": "https://api.github.com/users/DanielYang59/following{/other_user}",
"gist... | [] | null | completed | NONE | 2023-01-19T09:56:03Z | null | I_kwDODunzps5bjwPk | [
"Just for your information, Tensorflow confirmed this issue [here.](https://github.com/tensorflow/tensorflow/issues/59279)",
"Thanks for reporting, @HaoyuYang59.\r\n\r\nPlease note that these are different \"dataset\" objects: our docs refer to Hugging Face `datasets.Dataset` and not to TensorFlow `tf.data.Datase... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5435/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5435 | https://github.com/huggingface/datasets/issues/5435 | false |
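The leakage mechanism described in the record above is easy to reproduce with plain Python: `take`/`skip` only yield disjoint splits if the underlying example order stays fixed between the two calls. A toy list-based sketch, not the streaming API itself:

```python
examples = list(range(10))

# Fixed order: .take(3) / .skip(3) analogues give disjoint splits.
val = examples[:3]        # like dataset.take(3)
train = examples[3:]      # like dataset.skip(3)
assert set(val).isdisjoint(train)

# If the source order changes between building the two splits
# (here, deterministically reversed), the splits overlap: leakage.
reshuffled = list(reversed(examples))
train_after_shuffle = reshuffled[3:]
assert not set(val).isdisjoint(train_after_shuffle)
```

This is why the docs' advice to call `shuffle` before `take`/`skip`, and never re-shuffle the source afterwards, matters for train/validation hygiene.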
1,536,090,042 | https://api.github.com/repos/huggingface/datasets/issues/5434/labels{/name} | null | 2023-01-19T13:52:12Z | 5,434 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-17T09:57:54Z | https://api.github.com/repos/huggingface/datasets/issues/5434/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5434/timeline | sample_dataset module not found | https://api.github.com/repos/huggingface/datasets/issues/5434/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/15816213?v=4",
"events_url": "https://api.github.com/users/nickums/events{/privacy}",
"followers_url": "https://api.github.com/users/nickums/followers",
"following_url": "https://api.github.com/users/nickums/following{/other_user}",
"gists_url": "https:... | [] | null | completed | NONE | 2023-01-19T07:55:11Z | null | I_kwDODunzps5bjt-6 | [
"Hi! Can you describe what the actual error is?",
"working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5434/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5434 | https://github.com/huggingface/datasets/issues/5434 | false |
1,536,017,901 | https://api.github.com/repos/huggingface/datasets/issues/5433/labels{/name} | Once we find out the root cause of:
- #5431
we should revert the temporary pin on the Docker image version introduced by:
- #5432 | 2023-01-18T06:29:08Z | 5,433 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-17T09:06:08Z | https://api.github.com/repos/huggingface/datasets/issues/5433/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5433/timeline | Support latest Docker image in CI benchmarks | https://api.github.com/repos/huggingface/datasets/issues/5433/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | completed | MEMBER | 2023-01-18T06:29:08Z | null | I_kwDODunzps5bjcXt | [
"Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.",
"Opened htt... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5433/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5433 | https://github.com/huggingface/datasets/issues/5433 | false |
1,535,893,019 | https://api.github.com/repos/huggingface/datasets/issues/5432/labels{/name} | This PR fixes CI benchmarks, by temporarily pinning Docker image version, instead of "latest" tag.
It also updates the deprecated `cml-send-comment` command, using `cml comment create` instead.
Fix #5431. | 2023-01-17T08:58:22Z | 5,432 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-17T07:15:31Z | https://api.github.com/repos/huggingface/datasets/issues/5432/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5432/timeline | Fix CI benchmarks by temporarily pinning Docker image version | https://api.github.com/repos/huggingface/datasets/issues/5432/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-17T08:51:17Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5432",
"merged_at": "2023-01-17T08:51:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5HhEA8 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5432/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5432 | https://github.com/huggingface/datasets/pull/5432 | true |
1,535,862,621 | https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name} | Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes ... | 2023-01-18T06:33:24Z | 5,431 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | 2023-01-17T06:49:57Z | https://api.github.com/repos/huggingface/datasets/issues/5431/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5431/timeline | CI benchmarks are broken: Unknown arguments: runnerPath, path | https://api.github.com/repos/huggingface/datasets/issues/5431/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2023-01-17T08:51:18Z | null | I_kwDODunzps5bi2dd | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5431 | https://github.com/huggingface/datasets/issues/5431 | false |
1,535,856,503 | https://api.github.com/repos/huggingface/datasets/issues/5430/labels{/name} | Once we find out the root cause of:
- #5426
we should revert the temporary pin on apache-beam introduced by:
- #5429 | 2024-02-06T19:24:21Z | 5,430 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-17T06:42:12Z | https://api.github.com/repos/huggingface/datasets/issues/5430/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5430/timeline | Support Apache Beam >= 2.44.0 | https://api.github.com/repos/huggingface/datasets/issues/5430/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2024-02-06T19:24:21Z | null | I_kwDODunzps5bi093 | [
"Some of the shard files now have 0 number of rows.\r\n\r\nWe have opened an issue in the Apache Beam repo:\r\n- https://github.com/apache/beam/issues/25041"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5430/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5430 | https://github.com/huggingface/datasets/issues/5430 | false |
1,535,192,687 | https://api.github.com/repos/huggingface/datasets/issues/5429/labels{/name} | Temporarily pin apache-beam < 2.44.0
Fix #5426. | 2023-01-16T16:51:42Z | 5,429 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-16T16:20:09Z | https://api.github.com/repos/huggingface/datasets/issues/5429/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5429/timeline | Fix CI by temporarily pinning apache-beam < 2.44.0 | https://api.github.com/repos/huggingface/datasets/issues/5429/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-16T16:49:03Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5429",
"merged_at": "2023-01-16T16:49:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5HeuyT | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5429/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5429 | https://github.com/huggingface/datasets/pull/5429 | true |
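What the temporary pin in the PR above means, spelled out: any installed `apache-beam` must satisfy `< 2.44.0`. A tiny sketch of that comparison (naive — it assumes plain dotted numeric versions with no pre-release tags):

```python
def version_tuple(v: str) -> tuple:
    # "2.43.0" -> (2, 43, 0); numeric comparison, not lexicographic
    return tuple(int(part) for part in v.split("."))

def satisfies_pin(installed: str, upper: str = "2.44.0") -> bool:
    # Models the requirement specifier "apache-beam<2.44.0"
    return version_tuple(installed) < version_tuple(upper)
```

In practice you would rely on pip's resolver (`pip install "apache-beam<2.44.0"`) or `packaging.version`, which also handles pre-release and post-release tags correctly.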
1,535,166,139 | https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name} | ### Feature request
From what I understand, `faiss` already supports this [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In... | 2023-03-27T15:18:22Z | 5,428 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-16T16:08:12Z | https://api.github.com/repos/huggingface/datasets/issues/5428/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5428/timeline | Load/Save FAISS index using fsspec | https://api.github.com/repos/huggingface/datasets/issues/5428/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https:/... | [] | null | completed | CONTRIBUTOR | 2023-03-27T15:18:22Z | null | I_kwDODunzps5bgMa7 | [
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a gr... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5428 | https://github.com/huggingface/datasets/issues/5428 | false |
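The feature request above asks for stream-based index I/O so an index can live on any fsspec filesystem. The byte-level plumbing is straightforward; here is a minimal sketch using `io.BytesIO` as a stand-in for an fsspec file object. The faiss calls mentioned in the comments (`faiss.serialize_index` / `faiss.deserialize_index`) are the ones the request refers to but are not exercised here:

```python
import io

def save_bytes_to_stream(payload: bytes, stream) -> None:
    # With faiss, `payload` would come from bytes(faiss.serialize_index(index));
    # `stream` can be any writable file-like object, e.g. one returned by
    # fsspec's open(path, "wb").
    stream.write(payload)

def load_bytes_from_stream(stream) -> bytes:
    # The caller would feed the returned bytes to faiss.deserialize_index.
    return stream.read()
```

As the maintainer notes above, a consistent alternative is to accept fsspec paths directly in `Dataset.load_faiss_index`/`Dataset.save_faiss_index`, mirroring `load_from_disk`/`save_to_disk`.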
1,535,162,889 | https://api.github.com/repos/huggingface/datasets/issues/5427/labels{/name} | ### Describe the bug
I tried to download the dataset `id_clickbait`, but received this error message.
```
FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip
```
When I open the link in a browser, I get this XML data.
```xml
<?xml versi... | 2023-01-18T09:51:28Z | 5,427 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-16T16:05:36Z | https://api.github.com/repos/huggingface/datasets/issues/5427/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5427/timeline | Unable to download dataset id_clickbait | https://api.github.com/repos/huggingface/datasets/issues/5427/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/45941585?v=4",
"events_url": "https://api.github.com/users/ilos-vigil/events{/privacy}",
"followers_url": "https://api.github.com/users/ilos-vigil/followers",
"following_url": "https://api.github.com/users/ilos-vigil/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2023-01-18T09:25:19Z | null | I_kwDODunzps5bgLoJ | [
"Thanks for reporting, @ilos-vigil.\r\n\r\nWe have transferred this issue to the corresponding dataset on the Hugging Face Hub: https://huggingface.co/datasets/id_clickbait/discussions/1 "
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5427/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5427 | https://github.com/huggingface/datasets/issues/5427 | false |
1,535,158,555 | https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name} | CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `feat... | 2023-06-02T06:40:32Z | 5,426 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | 2023-01-16T16:02:07Z | https://api.github.com/repos/huggingface/datasets/issues/5426/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5426/timeline | CI tests are broken: SchemaInferenceError | https://api.github.com/repos/huggingface/datasets/issues/5426/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2023-01-16T16:49:04Z | null | I_kwDODunzps5bgKkb | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5426 | https://github.com/huggingface/datasets/issues/5426 | false |
1,534,581,850 | https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name} | ### Feature request
From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1
`sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function.
The suggested solution:
> ... having something similar to panda... | 2023-02-24T16:15:11Z | 5,425 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | 2023-01-16T09:22:26Z | https://api.github.com/repos/huggingface/datasets/issues/5425/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5425/timeline | Sort on multiple keys with datasets.Dataset.sort() | https://api.github.com/repos/huggingface/datasets/issues/5425/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/101344863?v=4",
"events_url": "https://api.github.com/users/rocco-fortuna/events{/privacy}",
"followers_url": "https://api.github.com/users/rocco-fortuna/followers",
"following_url": "https://api.github.com/users/rocco-fortuna/following{/other_user}",
"... | [] | null | completed | NONE | 2023-02-24T16:15:11Z | null | I_kwDODunzps5bd9xa | [
"Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multipl... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5425 | https://github.com/huggingface/datasets/issues/5425 | false |
1,534,394,756 | https://api.github.com/repos/huggingface/datasets/issues/5424/labels{/name} | ### Describe the bug
I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. Although the ReadInstruction is being applied correctly and I was expecting it to be `DatasetDict` but instead it is a list of `Dataset`.
### Steps to reproduce the bug
Steps to reproduc... | 2023-02-24T16:19:00Z | 5,424 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-16T06:54:28Z | https://api.github.com/repos/huggingface/datasets/issues/5424/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5424/timeline | When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset? | https://api.github.com/repos/huggingface/datasets/issues/5424/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "... | [] | null | completed | NONE | 2023-02-24T16:19:00Z | null | I_kwDODunzps5bdQGE | [
"Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n R... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5424/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5424 | https://github.com/huggingface/datasets/issues/5424 | false |
1,533,385,239 | https://api.github.com/repos/huggingface/datasets/issues/5422/labels{/name} | ### Describe the bug
Loading a previously downloaded & saved dataset as described in the HuggingFace course:
issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")
Gives this error:
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset... | 2023-09-14T11:39:57Z | 5,422 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-14T17:29:38Z | https://api.github.com/repos/huggingface/datasets/issues/5422/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5422/timeline | Datasets load error for saved github issues | https://api.github.com/repos/huggingface/datasets/issues/5422/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7360564?v=4",
"events_url": "https://api.github.com/users/folterj/events{/privacy}",
"followers_url": "https://api.github.com/users/folterj/followers",
"following_url": "https://api.github.com/users/folterj/following{/other_user}",
"gists_url": "https:/... | [] | null | null | NONE | null | null | I_kwDODunzps5bZZoX | [
"I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.p... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5422/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5422 | https://github.com/huggingface/datasets/issues/5422 | false |
1,532,278,307 | https://api.github.com/repos/huggingface/datasets/issues/5421/labels{/name} | ### Feature request
The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue.
Ideally, we could load the glue dataset using the following:
```
from d... | 2023-01-13T20:12:32Z | 5,421 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-13T13:07:07Z | https://api.github.com/repos/huggingface/datasets/issues/5421/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5421/timeline | Support case-insensitive Hub dataset name in load_dataset | https://api.github.com/repos/huggingface/datasets/issues/5421/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | null | completed | CONTRIBUTOR | 2023-01-13T20:12:32Z | null | I_kwDODunzps5bVLYj | [
"Closing as case-insensitivity should be only for URL redirection on the Hub. In the APIs, we will only support the canonical name (https://github.com/huggingface/moon-landing/pull/2399#issuecomment-1382085611)"
] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5421/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5421 | https://github.com/huggingface/datasets/issues/5421 | false |
1,532,265,742 | https://api.github.com/repos/huggingface/datasets/issues/5420/labels{/name} | add-dataset is not needed anymore since the "canonical" datasets are on the Hub. And dataset-viewer is managed within the datasets-server project.
See https://github.com/huggingface/datasets/issues/new/choose
<img width="1245" alt="Capture d’écran 2023-01-13 à 13 59 58" src="https://user-images.githubuserconten... | 2023-01-13T13:36:00Z | 5,420 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-13T12:58:43Z | https://api.github.com/repos/huggingface/datasets/issues/5420/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5420/timeline | ci: 🎡 remove two obsolete issue templates | https://api.github.com/repos/huggingface/datasets/issues/5420/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | null | null | CONTRIBUTOR | 2023-01-13T13:29:01Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5420.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5420",
"merged_at": "2023-01-13T13:29:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5420.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5HVAhL | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5420/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5420 | https://github.com/huggingface/datasets/pull/5420 | true |
1,531,999,850 | https://api.github.com/repos/huggingface/datasets/issues/5419/labels{/name} | ### Describe the bug
When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator` the default column name is `label` if binary or `label_ids` if multi-class problem.
It is required to rename the column... | 2023-07-21T14:27:08Z | 5,419 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-13T09:40:07Z | https://api.github.com/repos/huggingface/datasets/issues/5419/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5419/timeline | label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator | https://api.github.com/repos/huggingface/datasets/issues/5419/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/172385?v=4",
"events_url": "https://api.github.com/users/CreatixEA/events{/privacy}",
"followers_url": "https://api.github.com/users/CreatixEA/followers",
"following_url": "https://api.github.com/users/CreatixEA/following{/other_user}",
"gists_url": "ht... | [] | null | completed | NONE | 2023-07-21T14:27:08Z | null | I_kwDODunzps5bUHZq | [
"Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_inde... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5419/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5419 | https://github.com/huggingface/datasets/issues/5419 | false |
1,530,111,184 | https://api.github.com/repos/huggingface/datasets/issues/5418/labels{/name} | ### Feature request
Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works.
### Motivation
It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar
### Your contribution
Sure I can help if needed | 2023-01-24T18:18:24Z | 5,418 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-12T05:06:20Z | https://api.github.com/repos/huggingface/datasets/issues/5418/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url"... | https://api.github.com/repos/huggingface/datasets/issues/5418/timeline | Add ProgressBar for `to_parquet` | https://api.github.com/repos/huggingface/datasets/issues/5418/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
... | null | completed | CONTRIBUTOR | 2023-01-24T18:18:24Z | null | I_kwDODunzps5bM6TQ | [
"Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!",
"@albertvillanova I’m happy to make a quick PR for the feature! let me know ",
"That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5418/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5418 | https://github.com/huggingface/datasets/issues/5418 | false |
1,526,988,113 | https://api.github.com/repos/huggingface/datasets/issues/5416/labels{/name} | This PR fixes the RuntimeError: Sharding is ambiguous for this dataset.
The error for ambiguous sharding will be raised only if num_proc > 1.
Fix #5415, fix #5414.
Fix https://huggingface.co/datasets/ami/discussions/3. | 2023-01-18T17:12:17Z | 5,416 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-10T08:43:19Z | https://api.github.com/repos/huggingface/datasets/issues/5416/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5416/timeline | Fix RuntimeError: Sharding is ambiguous for this dataset | https://api.github.com/repos/huggingface/datasets/issues/5416/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-18T14:09:02Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5416",
"merged_at": "2023-01-18T14:09:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5HDLmR | [
"_The documentation is not available anymore as the PR was closed or merged._",
"By the way, do we know how many datasets are impacted by this issue?\r\n\r\nMaybe we should do a patch release with this fix.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated be... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5416/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5416 | https://github.com/huggingface/datasets/pull/5416 | true |
1,526,904,861 | https://api.github.com/repos/huggingface/datasets/issues/5415/labels{/name} | ### Describe the bug
When loading some datasets, a RuntimeError is raised.
For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3
```
.../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
... | 2023-01-18T14:09:04Z | 5,415 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-10T07:36:11Z | https://api.github.com/repos/huggingface/datasets/issues/5415/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5415/timeline | RuntimeError: Sharding is ambiguous for this dataset | https://api.github.com/repos/huggingface/datasets/issues/5415/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | MEMBER | 2023-01-18T14:09:03Z | null | I_kwDODunzps5bArgd | [] | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5415/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5415 | https://github.com/huggingface/datasets/issues/5415 | false |
1,525,733,818 | https://api.github.com/repos/huggingface/datasets/issues/5414/labels{/name} | ### Describe the bug
Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace:
```
Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/... | 2023-01-18T14:09:04Z | 5,414 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-09T14:45:31Z | https://api.github.com/repos/huggingface/datasets/issues/5414/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5414/timeline | Sharding error with Multilingual LibriSpeech | https://api.github.com/repos/huggingface/datasets/issues/5414/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4",
"events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}",
"followers_url": "https://api.github.com/users/Nithin-Holla/followers",
"following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}",
"gist... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2023-01-18T14:09:04Z | null | I_kwDODunzps5a8Nm6 | [
"Thanks for reporting, @Nithin-Holla.\r\n\r\nThis is a known issue for multiple datasets and we are investigating it:\r\n- See e.g.: https://huggingface.co/datasets/ami/discussions/3",
"Main issue:\r\n- #5415",
"@albertvillanova Thanks! As a workaround for now, can I use the dataset in streaming mode?",
"Yes,... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5414/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5414 | https://github.com/huggingface/datasets/issues/5414 | false |
1,524,591,837 | https://api.github.com/repos/huggingface/datasets/issues/5413/labels{/name} | ### Describe the bug
When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails:
```
File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets
return _concatenate_map_style_data... | 2023-01-26T09:27:21Z | 5,413 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-08T17:01:52Z | https://api.github.com/repos/huggingface/datasets/issues/5413/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | https://api.github.com/repos/huggingface/datasets/issues/5413/timeline | concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers | https://api.github.com/repos/huggingface/datasets/issues/5413/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/38279341?v=4",
"events_url": "https://api.github.com/users/ZeguanXiao/events{/privacy}",
"followers_url": "https://api.github.com/users/ZeguanXiao/followers",
"following_url": "https://api.github.com/users/ZeguanXiao/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | completed | NONE | 2023-01-26T09:27:21Z | null | I_kwDODunzps5a32zd | [
"Hi ! Thanks for reporting :)\r\n\r\nI managed to reproduce the hub using\r\n```python\r\n\r\nfrom datasets import concatenate_datasets, Dataset, load_from_disk\r\n\r\nDataset.from_dict({\"a\": range(9)}).save_to_disk(\"tmp/ds1\")\r\nds1 = load_from_disk(\"tmp/ds1\")\r\nds1 = concatenate_datasets([ds1, ds1])\r\n\r\... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5413/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5413 | https://github.com/huggingface/datasets/issues/5413 | false |
1,524,250,269 | https://api.github.com/repos/huggingface/datasets/issues/5412/labels{/name} | ### Describe the bug
I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error.
If there is a workaround to ignore the cache I think that would ... | 2023-01-19T20:28:43Z | 5,412 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-08T00:44:32Z | https://api.github.com/repos/huggingface/datasets/issues/5412/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5412/timeline | load_dataset() cannot find dataset_info.json with multiple training runs in parallel | https://api.github.com/repos/huggingface/datasets/issues/5412/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4",
"events_url": "https://api.github.com/users/mtoles/events{/privacy}",
"followers_url": "https://api.github.com/users/mtoles/followers",
"following_url": "https://api.github.com/users/mtoles/following{/other_user}",
"gists_url": "https://ap... | [] | null | completed | NONE | 2023-01-19T20:28:43Z | null | I_kwDODunzps5a2jad | [
"Hi ! It fails because the dataset is already being prepared by your first run. I'd encourage you to prepare your dataset before using it for multiple trainings.\r\n\r\nYou can also specify another cache directory by passing `cache_dir=` to `load_dataset()`.",
"Thank you! What do you mean by prepare it beforehand... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5412/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5412 | https://github.com/huggingface/datasets/issues/5412 | false |
1,523,297,786 | https://api.github.com/repos/huggingface/datasets/issues/5411/labels{/name} | [s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updating documentation to use `AioSession` while using s3fs for download manager as well as working with datasets | 2023-01-18T11:18:59Z | 5,411 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-06T23:19:17Z | https://api.github.com/repos/huggingface/datasets/issues/5411/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5411/timeline | Update docs of S3 filesystem with async aiobotocore | https://api.github.com/repos/huggingface/datasets/issues/5411/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/5677912?v=4",
"events_url": "https://api.github.com/users/maheshpec/events{/privacy}",
"followers_url": "https://api.github.com/users/maheshpec/followers",
"following_url": "https://api.github.com/users/maheshpec/following{/other_user}",
"gists_url": "h... | [] | null | null | CONTRIBUTOR | 2023-01-18T11:12:04Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"merged_at": "2023-01-18T11:12:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5G23-T | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5411/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5411 | https://github.com/huggingface/datasets/pull/5411 | true |
1,521,168,032 | https://api.github.com/repos/huggingface/datasets/issues/5410/labels{/name} | Added `ds.to_iterable()` to get an iterable dataset from a map-style arrow dataset.
It also has a `num_shards` argument to split the dataset before converting to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets.
TODO:
- [x] tests
- [x] docs
Fi... | 2023-02-01T18:11:45Z | 5,410 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-05T18:12:17Z | https://api.github.com/repos/huggingface/datasets/issues/5410/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5410/timeline | Map-style Dataset to IterableDataset | https://api.github.com/repos/huggingface/datasets/issues/5410/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | 2023-02-01T16:36:01Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5410",
"merged_at": "2023-02-01T16:36:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5GvnJH | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5410/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5410 | https://github.com/huggingface/datasets/pull/5410 | true |
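The #5410 record above introduces converting a map-style dataset into an iterable one, with a `num_shards` argument so shards can be shuffled and loaded in parallel. As a pure-Python sketch of why contiguous sharding preserves order while still allowing independent consumption (helper names here are hypothetical, not the `datasets` API):

```python
# Hypothetical sketch of contiguous sharding: split n rows into num_shards
# ranges so each shard can be shuffled or consumed by a separate worker.
def shard_indices(n_rows, num_shards):
    return [
        range(i * n_rows // num_shards, (i + 1) * n_rows // num_shards)
        for i in range(num_shards)
    ]

def iterate_shards(rows, num_shards):
    # Iterating the shards in order reproduces the original row order.
    for idx_range in shard_indices(len(rows), num_shards):
        for i in idx_range:
            yield rows[i]

rows = list(range(10))
assert list(iterate_shards(rows, 3)) == rows  # 3 shards, order preserved
```

Shuffling the shard order (rather than individual rows) is what makes efficient approximate shuffling of large iterable datasets possible.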
1,520,374,219 | https://api.github.com/repos/huggingface/datasets/issues/5409/labels{/name} | The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in:
- #5302
However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...).
This PR fixes it, so that no deprecation warning is raised.
Fix #5407. | 2023-01-06T11:06:16Z | 5,409 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-05T09:10:58Z | https://api.github.com/repos/huggingface/datasets/issues/5409/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5409/timeline | Fix deprecation warning when use_auth_token passed to download_and_prepare | https://api.github.com/repos/huggingface/datasets/issues/5409/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-06T10:59:13Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5409",
"merged_at": "2023-01-06T10:59:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Gs3nL | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5409/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5409 | https://github.com/huggingface/datasets/pull/5409 | true |
1,519,890,752 | https://api.github.com/repos/huggingface/datasets/issues/5408/labels{/name} | ### Describe the bug
I followed the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to fine-tune a Cantonese transcription model.
When using the `map` function to prepare the dataset, the following warning pops up:
`common_voice = common_voice.map(prepare_dataset,
remove_... | 2023-01-06T13:22:19Z | 5,408 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-05T01:59:59Z | https://api.github.com/repos/huggingface/datasets/issues/5408/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5408/timeline | dataset map function could not be hash properly | https://api.github.com/repos/huggingface/datasets/issues/5408/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/68179274?v=4",
"events_url": "https://api.github.com/users/Tungway1990/events{/privacy}",
"followers_url": "https://api.github.com/users/Tungway1990/followers",
"following_url": "https://api.github.com/users/Tungway1990/following{/other_user}",
"gists_u... | [] | null | completed | NONE | 2023-01-06T13:22:18Z | null | I_kwDODunzps5al7FA | [
"Hi ! On macos I tried with\r\n- py 3.9.11\r\n- datasets 2.8.0\r\n- transformers 4.25.1\r\n- dill 0.3.4\r\n\r\nand I was able to hash `prepare_dataset` correctly:\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nHasher.hash(prepare_dataset)\r\n```\r\n\r\nWhat version of transformers do you have ? Can you ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5408/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5408 | https://github.com/huggingface/datasets/issues/5408 | false |
1,519,797,345 | https://api.github.com/repos/huggingface/datasets/issues/5407/labels{/name} | ### Describe the bug
Calling `Datasets.from_sql()` generates a warning:
`.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.`
### Steps to reproduce the ... | 2023-01-06T10:59:14Z | 5,407 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-05T00:43:17Z | https://api.github.com/repos/huggingface/datasets/issues/5407/comments | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | https://api.github.com/repos/huggingface/datasets/issues/5407/timeline | Datasets.from_sql() generates deprecation warning | https://api.github.com/repos/huggingface/datasets/issues/5407/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/21002157?v=4",
"events_url": "https://api.github.com/users/msummerfield/events{/privacy}",
"followers_url": "https://api.github.com/users/msummerfield/followers",
"following_url": "https://api.github.com/users/msummerfield/following{/other_user}",
"gist... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | completed | NONE | 2023-01-06T10:59:14Z | null | I_kwDODunzps5alkRh | [
"Thanks for reporting @msummerfield. We are fixing it."
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5407/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5407 | https://github.com/huggingface/datasets/issues/5407 | false |
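Issue #5407 above is about a `FutureWarning` being raised even though the caller never passed the deprecated argument (internal code forwarded it anyway). A generic sketch of the sentinel-based deprecation pattern involved — function name and message are illustrative, not the actual `datasets` internals:

```python
import warnings

_UNSET = object()  # sentinel: distinguishes "not passed" from any real value

def download_and_prepare(use_auth_token=_UNSET):
    # Warn only when the caller explicitly passes the deprecated argument;
    # the bug class described above is internal code always forwarding it.
    if use_auth_token is not _UNSET:
        warnings.warn(
            "'use_auth_token' was deprecated in version 2.7.1 and will be "
            "removed in 3.0.0.",
            FutureWarning,
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    download_and_prepare()                     # no warning: argument unset
    download_and_prepare(use_auth_token=True)  # warns
assert len(caught) == 1 and issubclass(caught[0].category, FutureWarning)
```

The sentinel matters because `use_auth_token=None` can be a meaningful value, so a plain `if use_auth_token:` check cannot tell "not passed" apart from "passed as falsy".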
1,519,140,544 | https://api.github.com/repos/huggingface/datasets/issues/5406/labels{/name} | `datasets` 2.6.1 and 2.7.0 stopped supporting datasets like IMDB, CoNLL or MNIST.
When loading a dataset using 2.6.1 or 2.7.0, you may see this error when loading certain datasets:
```python
TypeError: can only concatenate str (not "int") to str
```
This is because we started to update the metadat... | 2023-06-21T18:45:38Z | 5,406 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-04T15:10:04Z | https://api.github.com/repos/huggingface/datasets/issues/5406/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5406/timeline | [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | https://api.github.com/repos/huggingface/datasets/issues/5406/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | null | null | MEMBER | null | null | I_kwDODunzps5ajD7A | [
"I still get this error on 2.9.0\r\n<img width=\"1925\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7208470/215597359-2f253c76-c472-4612-8099-d3a74d16eb29.png\">\r\n",
"Hi ! I just tested locally and or colab and it works fine for 2.9 on `sst2`.\r\n\r\nAlso the code that is shown in your stack t... | {
"+1": 11,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 11,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5406/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5406 | https://github.com/huggingface/datasets/issues/5406 | false |
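The pinned issue above traces the `TypeError` to metadata fields that became integers in the new YAML dataset metadata while older `datasets` code concatenated them with strings. A minimal reproduction of the underlying Python error, with a toy value rather than real metadata:

```python
size = 24065  # new metadata stores sizes as int; older code expected str
try:
    "download_size: " + size  # str + int is invalid in Python
except TypeError as err:
    message = str(err)
assert message == 'can only concatenate str (not "int") to str'

fixed = "download_size: " + str(size)  # explicit conversion avoids the error
assert fixed == "download_size: 24065"
```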
1,517,879,386 | https://api.github.com/repos/huggingface/datasets/issues/5405/labels{/name} | ### Describe the bug
Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example:
```
>>> from datasets import load_da... | 2023-01-04T09:22:59Z | 5,405 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-03T20:25:48Z | https://api.github.com/repos/huggingface/datasets/issues/5405/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5405/timeline | size_in_bytes the same for all splits | https://api.github.com/repos/huggingface/datasets/issues/5405/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "http... | [] | null | null | NONE | null | null | I_kwDODunzps5aeQBa | [
"Hi @Breakend,\r\n\r\nIndeed, the attribute `size_in_bytes` refers to the size of the entire dataset configuration, for all splits (size of downloaded files + Arrow files), not the specific split.\r\nThis is also the case for `download_size` (downloaded files) and `dataset_size` (Arrow files).\r\n\r\nThe size of th... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5405/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5405 | https://github.com/huggingface/datasets/issues/5405 | false |
1,517,566,331 | https://api.github.com/repos/huggingface/datasets/issues/5404/labels{/name} | ### Feature request
Ideally, it would be nice to have a maintained PyPI package for `bigbench`.
### Motivation
We'd like to allow anyone to access, explore and use any task.
### Your contribution
@lhoestq has opened an issue in their repo:
- https://github.com/google/BIG-bench/issues/906 | 2023-02-09T20:30:26Z | 5,404 | null | https://api.github.com/repos/huggingface/datasets | null | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | 2023-01-03T15:37:57Z | https://api.github.com/repos/huggingface/datasets/issues/5404/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5404/timeline | Better integration of BIG-bench | https://api.github.com/repos/huggingface/datasets/issues/5404/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | null | null | I_kwDODunzps5adDl7 | [
"Hi, I made my version : https://huggingface.co/datasets/tasksource/bigbench"
] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5404/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5404 | https://github.com/huggingface/datasets/issues/5404 | false |
1,517,466,492 | https://api.github.com/repos/huggingface/datasets/issues/5403/labels{/name} | This PR updates a code example for consistency across the docs based on [feedback from this comment](https://github.com/huggingface/transformers/pull/20925/files/9fda31634d203a47d3212e4e8d43d3267faf9808#r1058769500):
"In terms of style we usually stay away from one-letter imports like this (even if the community use... | 2023-01-03T15:06:18Z | 5,403 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-03T14:26:32Z | https://api.github.com/repos/huggingface/datasets/issues/5403/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5403/timeline | Replace one letter import in docs | https://api.github.com/repos/huggingface/datasets/issues/5403/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url":... | [] | null | null | CONTRIBUTOR | 2023-01-03T14:59:01Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5403.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5403",
"merged_at": "2023-01-03T14:59:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5403.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5Gi3d9 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the docs fix for consistency.\r\n> \r\n> Again for consistency, it would be nice to make the same fix across all the docs, e.g.\r\n> \r\n> https://github.com/huggingface/datasets/blob/310cdddd1c43f9658de172b85b6509d07d5e... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5403/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5403 | https://github.com/huggingface/datasets/pull/5403 | true |
1,517,409,429 | https://api.github.com/repos/huggingface/datasets/issues/5402/labels{/name} | ### Describe the bug
Using `load_dataset_builder` to create a builder and running `download_and_prepare` uploads it to S3. However, when trying to load it, the `state.json` files are missing. Complete example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_da... | 2023-01-04T17:23:57Z | 5,402 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-03T13:39:59Z | https://api.github.com/repos/huggingface/datasets/issues/5402/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5402/timeline | Missing state.json when creating a cloud dataset using a dataset_builder | https://api.github.com/repos/huggingface/datasets/issues/5402/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/22022514?v=4",
"events_url": "https://api.github.com/users/danielfleischer/events{/privacy}",
"followers_url": "https://api.github.com/users/danielfleischer/followers",
"following_url": "https://api.github.com/users/danielfleischer/following{/other_user}"... | [] | null | null | NONE | null | null | I_kwDODunzps5acdSV | [
"`load_from_disk` must be used on datasets saved using `save_to_disk`: they correspond to fully serialized datasets including their state.\r\n\r\nOn the other hand, `download_and_prepare` just downloads the raw data and convert them to arrow (or parquet if you want). We are working on allowing you to reload a datas... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5402/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5402 | https://github.com/huggingface/datasets/issues/5402 | false |
1,517,160,935 | https://api.github.com/repos/huggingface/datasets/issues/5401/labels{/name} | This PR implements Spark integration by supporting `Dataset` conversion from/to Spark `DataFrame`. | 2023-01-05T14:21:33Z | 5,401 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-03T09:57:40Z | https://api.github.com/repos/huggingface/datasets/issues/5401/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5401/timeline | Support Dataset conversion from/to Spark | https://api.github.com/repos/huggingface/datasets/issues/5401/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/5401.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5401",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5401.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5401"
} | PR_kwDODunzps5Gh1XQ | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5401). All of your documentation changes will be reflected on that endpoint.",
"Cool thanks !\r\n\r\nSpark DataFrame are usually quite big, and I believe here `from_spark` would load everything in the driver node's RAM, which i... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5401/reactions"
} | open | false | https://api.github.com/repos/huggingface/datasets/issues/5401 | https://github.com/huggingface/datasets/pull/5401 | true |
1,517,032,972 | https://api.github.com/repos/huggingface/datasets/issues/5400/labels{/name} | Support streaming datasets with `os.path.exists` and `pathlib.Path.exists`. | 2023-01-06T10:42:44Z | 5,400 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2023-01-03T07:42:37Z | https://api.github.com/repos/huggingface/datasets/issues/5400/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5400/timeline | Support streaming datasets with os.path.exists and Path.exists | https://api.github.com/repos/huggingface/datasets/issues/5400/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2023-01-06T10:35:44Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5400.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5400",
"merged_at": "2023-01-06T10:35:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5400.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5GhaGI | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5400/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5400 | https://github.com/huggingface/datasets/pull/5400 | true |
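PR #5400 above adds streaming support for `os.path.exists` and `pathlib.Path.exists`. As an illustrative sketch of the extension-point idea — the stub below is not the actual `datasets` implementation, which delegates URL-like paths to a remote filesystem layer such as `fsspec`:

```python
import os

def xexists(path):
    """Existence check that also understands URL-like paths.

    Local paths fall through to os.path.exists; URL-like paths would be
    handled by a remote filesystem layer (stubbed here as always-True).
    """
    if "://" in str(path):
        return True  # stub: a real implementation queries the remote FS
    return os.path.exists(path)

assert xexists(os.getcwd()) is True                 # local dir exists
assert xexists("s3://bucket/key") is True           # URL branch (stubbed)
assert xexists(os.path.join(os.getcwd(), "no-such-file-xyz")) is False
```

Patching path predicates like this is what lets a dataset script written against local files also run unchanged in streaming mode.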
1,515,548,427 | https://api.github.com/repos/huggingface/datasets/issues/5399/labels{/name} | ### Describe the bug
While trying to upload my image dataset, stored as a CSV file, to Hugging Face by running the code below, I kept getting disconnected. The dataset consists of a little over 100k image-caption pairs
### Steps to reproduce the bug
```
df = pd.read_csv('x.csv', encoding='utf-8-sig')
features = Features({
'link': Ima... | 2023-01-02T07:21:52Z | 5,399 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2023-01-01T13:00:11Z | https://api.github.com/repos/huggingface/datasets/issues/5399/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5399/timeline | Got disconnected from remote data host. Retrying in 5sec [2/20] | https://api.github.com/repos/huggingface/datasets/issues/5399/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/46427957?v=4",
"events_url": "https://api.github.com/users/alhuri/events{/privacy}",
"followers_url": "https://api.github.com/users/alhuri/followers",
"following_url": "https://api.github.com/users/alhuri/following{/other_user}",
"gists_url": "https://a... | [] | null | completed | NONE | 2023-01-02T07:21:52Z | null | I_kwDODunzps5aVW8L | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5399/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5399 | https://github.com/huggingface/datasets/issues/5399 | false |
1,514,425,231 | https://api.github.com/repos/huggingface/datasets/issues/5398/labels{/name} | Once `pydantic` fixes their issue in their 1.10.3 version, unpin it.
See issue:
- #5394
See temporary fix:
- #5395 | 2022-12-30T10:43:41Z | 5,398 | null | https://api.github.com/repos/huggingface/datasets | null | [] | 2022-12-30T10:37:31Z | https://api.github.com/repos/huggingface/datasets/issues/5398/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5398/timeline | Unpin pydantic | https://api.github.com/repos/huggingface/datasets/issues/5398/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | completed | MEMBER | 2022-12-30T10:43:41Z | null | I_kwDODunzps5aREuP | [] | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5398/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5398 | https://github.com/huggingface/datasets/issues/5398 | false |
1,514,412,246 | https://api.github.com/repos/huggingface/datasets/issues/5397/labels{/name} | Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/
See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807
```
v1.10.3 has been yanked.
```
in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issu... | 2022-12-30T10:53:11Z | 5,397 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-12-30T10:22:09Z | https://api.github.com/repos/huggingface/datasets/issues/5397/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5397/timeline | Unpin pydantic test dependency | https://api.github.com/repos/huggingface/datasets/issues/5397/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | null | null | MEMBER | 2022-12-30T10:43:40Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5397",
"merged_at": "2022-12-30T10:43:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | PR_kwDODunzps5GYirs | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5397/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5397 | https://github.com/huggingface/datasets/pull/5397 | true |
1,514,002,934 | https://api.github.com/repos/huggingface/datasets/issues/5396/labels{/name} | Expected checksum was verified against checksum dict (not checksum). | 2023-02-13T11:11:22Z | 5,396 | null | https://api.github.com/repos/huggingface/datasets | false | [] | 2022-12-29T19:45:17Z | https://api.github.com/repos/huggingface/datasets/issues/5396/comments | null | https://api.github.com/repos/huggingface/datasets/issues/5396/timeline | Fix checksum verification | https://api.github.com/repos/huggingface/datasets/issues/5396/events | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
"events_url": "https://api.github.com/users/daskol/events{/privacy}",
"followers_url": "https://api.github.com/users/daskol/followers",
"following_url": "https://api.github.com/users/daskol/following{/other_user}",
"gists_url": "https://ap... | [] | null | null | CONTRIBUTOR | 2023-02-13T11:11:22Z | {
"diff_url": "https://github.com/huggingface/datasets/pull/5396.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5396",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5396.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5396"
} | PR_kwDODunzps5GXMhp | [
"Hi ! If I'm not mistaken both `expected_checksums[url]` and `recorded_checksums[url]` are dictionaries with keys \"checksum\" and \"num_bytes\". So we need to check whether `expected_checksums[url] != recorded_checksums[url]` (or simply `expected_checksums[url][\"checksum\"] != recorded_checksums[url][\"checksum\"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5396/reactions"
} | closed | false | https://api.github.com/repos/huggingface/datasets/issues/5396 | https://github.com/huggingface/datasets/pull/5396 | true |
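The last PR above (#5396) notes that the expected checksum was being verified against the whole checksum dict rather than the checksum value, and the review comments spell out that both sides are dicts with `"checksum"` and `"num_bytes"` keys. A hedged sketch of that idea, with toy URLs and hashes rather than the actual `datasets` code:

```python
def verify_checksums(expected, recorded):
    # Each entry is a dict with "num_bytes" and "checksum"; compare the
    # "checksum" fields with each other, never a string against a dict.
    bad = [
        url
        for url, info in expected.items()
        if info["checksum"] is not None
        and info["checksum"] != recorded[url]["checksum"]
    ]
    if bad:
        raise ValueError(f"Checksums didn't match: {bad}")

url = "https://example.com/a.zip"
expected = {url: {"num_bytes": 3, "checksum": "abc"}}
verify_checksums(expected, {url: {"num_bytes": 3, "checksum": "abc"}})  # ok
try:
    verify_checksums(expected, {url: {"num_bytes": 3, "checksum": "xyz"}})
    raised = False
except ValueError:
    raised = True
assert raised  # mismatched checksum values are reported
```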