| url<br>string (58-61 chars) | repository_url<br>string (1 class) | labels_url<br>string (72-75 chars) | comments_url<br>string (67-70 chars) | events_url<br>string (65-68 chars) | html_url<br>string (46-51 chars) | id<br>int64 (600M-2.05B) | node_id<br>string (18-32 chars) | number<br>int64 (2-6.51k) | title<br>string (1-290 chars) | user<br>dict | labels<br>list (0-4 items) | state<br>string (2 classes) | locked<br>bool (1 class) | assignee<br>dict | assignees<br>list (0-4 items) | milestone<br>dict | comments<br>list (0-30 items) | created_at<br>timestamp[ns, tz=UTC] | updated_at<br>timestamp[ns, tz=UTC] | closed_at<br>timestamp[ns, tz=UTC] | author_association<br>string (3 classes) | active_lock_reason<br>float64 | draft<br>float64 (0, 1, or ⌀) | pull_request<br>dict | body<br>string (0-228k chars, or ⌀) | reactions<br>dict | timeline_url<br>string (67-70 chars) | performed_via_github_app<br>float64 | state_reason<br>string (3 classes) | is_pull_request<br>bool (2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2333
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2333/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2333/events
|
https://github.com/huggingface/datasets/pull/2333
| 879,214,067
|
MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy
| 2,333
|
Fix duplicate keys
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"- @jplu "
] | 2021-05-07T15:28:08Z
| 2021-05-08T21:47:31Z
| 2021-05-07T15:57:08Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2333",
"merged_at": "2021-05-07T15:57:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2333"
}
|
As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
Most of the time this was because the counters used for example ids were reset at each new data file.
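A minimal sketch of the fix pattern (a hypothetical `_generate_examples`; the field name and file handling are assumptions, not the PR's actual code):
```py
# keep one running key across ALL data files instead of a per-file
# counter that resets to 0 for each file (the bug described above)
def _generate_examples(self, filepaths):
    key = 0  # never reset between files
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield key, {"text": line.strip()}
                key += 1
```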
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2333/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3924/events
|
https://github.com/huggingface/datasets/pull/3924
| 1,169,805,813
|
PR_kwDODunzps40eED5
| 3,924
|
Document cases for github datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.",
"Yay!"
] | 2022-03-15T15:10:10Z
| 2022-04-05T18:33:15Z
| 2022-03-15T15:41:23Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3924.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3924",
"merged_at": "2022-03-15T15:41:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3924.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3924"
}
|
In general we recommend adding a new dataset under a username or organization on the Hugging Face Hub at [hf.co/datasets](https://hf.co/datasets), but users can still add a dataset on GitHub in some cases.
I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on GitHub:
- when you need the dataset to be reviewed
- when you need long-term maintenance from the HF team
- when there’s no clear org name / namespace that you can put the dataset under
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3924/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1567
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1567/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1567/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1567/events
|
https://github.com/huggingface/datasets/pull/1567
| 766,382,609
|
MDExOlB1bGxSZXF1ZXN0NTM5NDE3NzI5
| 1,567
|
[wording] Update Readme.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-14T12:34:52Z
| 2020-12-15T12:54:07Z
| 2020-12-15T12:54:06Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1567",
"merged_at": "2020-12-15T12:54:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1567"
}
|
Make the features of the library clearer.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1567/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1567/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5818
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5818/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5818/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5818/events
|
https://github.com/huggingface/datasets/issues/5818
| 1,695,052,555
|
I_kwDODunzps5lCHML
| 5,818
|
Ability to update a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidgilbertson",
"id": 4443482,
"login": "davidgilbertson",
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidgilbertson"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"This [reply](https://discuss.huggingface.co/t/how-do-i-add-things-rows-to-an-already-saved-dataset/27423) from @mariosasko on the forums may be useful :)",
"In this case, I think we can avoid the `PermissionError` by unpacking the underlying `ConcatenationTable` and saving only the newly added data blocks (in new files).",
"Thanks @stevhliu and @mariosasko , so saving to individual files then loading them later, concatenating again and saving again is the recommended way. Good to know.\r\n\r\nQuestion that I hope doesn't sound rude: is this sort of thing (processing a dataset that doesn't fit in memory) outside of `datasets`'s core area of focus? Are there other tools you would recommend to do this sort of thing that play nice with `datasets`? Or is it just that I've found myself in a niche situation that hasn't specifically been catered for?"
] | 2023-05-04T01:08:13Z
| 2023-05-04T20:43:39Z
| null |
NONE
| null | null | null |
### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.save_to_disk("data/test1")
```
With the error:
```
PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself.
```
### Motivation
My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again.
Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW I can't use `dataset.map()` to do the task because that doesn't work with `num_proc` when adding rows, so it is confined to a single process, which is too slow.
The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing.
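A minimal runnable sketch of the save-shards-then-concatenate approach suggested in the comments (the toy data and paths are assumptions):
```py
from datasets import Dataset, concatenate_datasets, load_from_disk

# sketch: save each processed chunk to its own directory, then
# combine once at the end (avoids overwriting a dataset in place)
paths = []
for i in range(3):  # three toy shards standing in for real chunks
    shard = Dataset.from_dict({"text": [f"item {i}-{j}" for j in range(4)]})
    path = f"data/shard_{i}"
    shard.save_to_disk(path)
    paths.append(path)

combined = concatenate_datasets([load_from_disk(p) for p in paths])
combined.save_to_disk("data/combined")  # a NEW path, so no PermissionError
```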
### Your contribution
na
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5818/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5818/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5622
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5622/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5622/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5622/events
|
https://github.com/huggingface/datasets/pull/5622
| 1,615,190,942
|
PR_kwDODunzps5LkSj8
| 5,622
|
Update README template to better template
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54767532?v=4",
"events_url": "https://api.github.com/users/emiltj/events{/privacy}",
"followers_url": "https://api.github.com/users/emiltj/followers",
"following_url": "https://api.github.com/users/emiltj/following{/other_user}",
"gists_url": "https://api.github.com/users/emiltj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emiltj",
"id": 54767532,
"login": "emiltj",
"node_id": "MDQ6VXNlcjU0NzY3NTMy",
"organizations_url": "https://api.github.com/users/emiltj/orgs",
"received_events_url": "https://api.github.com/users/emiltj/received_events",
"repos_url": "https://api.github.com/users/emiltj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emiltj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emiltj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emiltj"
}
|
[] |
closed
| false
| null |
[] | null |
[
"IMO this template should stay generic.\r\n\r\nAlso, we now use [the card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md) from `hugginface_hub` as the source of truth on the Hub (you now have the option to import it into the dataset card/README.md), so I think the next step would be deleting this template rather than updating it.",
"Agreed, the PR was a mistake and meant for my own repo. My bad",
"Feel free to close the PR then."
] | 2023-03-08T12:30:23Z
| 2023-03-11T05:07:38Z
| 2023-03-11T05:07:38Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5622.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5622",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5622.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5622"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5622/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5622/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5540
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5540/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5540/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5540/events
|
https://github.com/huggingface/datasets/pull/5540
| 1,588,438,344
|
PR_kwDODunzps5KK5qz
| 5,540
|
Tutorial for creating a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012018 / 0.011353 (0.000665) | 0.006204 / 0.011008 (-0.004804) | 0.134119 / 0.038508 (0.095611) | 0.038436 / 0.023109 (0.015327) | 0.381397 / 0.275898 (0.105499) | 0.456362 / 0.323480 (0.132882) | 0.009826 / 0.007986 (0.001840) | 0.004746 / 0.004328 (0.000417) | 0.103755 / 0.004250 (0.099505) | 0.043867 / 0.037052 (0.006815) | 0.395322 / 0.258489 (0.136833) | 0.475812 / 0.293841 (0.181971) | 0.057865 / 0.128546 (-0.070682) | 0.019919 / 0.075646 (-0.055727) | 0.465343 / 0.419271 (0.046072) | 0.061574 / 0.043533 (0.018041) | 0.371668 / 0.255139 (0.116529) | 0.400375 / 0.283200 (0.117176) | 0.106539 / 0.141683 (-0.035144) | 1.822931 / 1.452155 (0.370776) | 1.875535 / 1.492716 (0.382819) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.013583 / 0.018006 (-0.004423) | 0.535515 / 0.000490 (0.535025) | 0.007920 / 0.000200 (0.007720) | 0.000305 / 0.000054 (0.000250) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030204 / 0.037411 (-0.007207) | 0.131671 / 0.014526 (0.117145) | 0.143977 / 0.176557 (-0.032579) | 0.175498 / 0.737135 (-0.561637) | 0.166134 / 0.296338 (-0.130204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.630995 / 0.215209 (0.415786) | 6.152275 / 2.077655 (4.074620) | 2.519887 / 1.504120 (1.015767) | 2.110926 / 1.541195 (0.569732) | 2.207555 / 1.468490 
(0.739064) | 1.296197 / 4.584777 (-3.288580) | 5.510619 / 3.745712 (1.764906) | 3.167468 / 5.269862 (-2.102394) | 2.043924 / 4.565676 (-2.521753) | 0.144772 / 0.424275 (-0.279503) | 0.014456 / 0.007607 (0.006848) | 0.783629 / 0.226044 (0.557585) | 7.836962 / 2.268929 (5.568033) | 3.248593 / 55.444624 (-52.196032) | 2.577092 / 6.876477 (-4.299385) | 2.671918 / 2.142072 (0.529846) | 1.471586 / 4.805227 (-3.333641) | 0.251391 / 6.500664 (-6.249273) | 0.091947 / 0.075469 (0.016478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594839 / 1.841788 (-0.246949) | 18.250630 / 8.074308 (10.176322) | 23.948781 / 10.191392 (13.757389) | 0.275505 / 0.680424 (-0.404919) | 0.045202 / 0.534201 (-0.488999) | 0.545552 / 0.579283 (-0.033731) | 0.639352 / 0.434364 (0.204989) | 0.666345 / 0.540337 (0.126008) | 0.795614 / 1.386936 (-0.591322) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011234 / 0.011353 (-0.000119) | 0.005983 / 0.011008 (-0.005025) | 0.109144 / 0.038508 (0.070636) | 0.036070 / 0.023109 (0.012961) | 0.429313 / 0.275898 (0.153415) | 0.490615 / 0.323480 (0.167135) | 0.007448 / 0.007986 (-0.000538) | 0.004424 / 0.004328 (0.000095) | 0.097100 / 0.004250 (0.092850) | 0.049719 / 0.037052 (0.012667) | 0.412719 / 0.258489 (0.154230) | 0.485717 / 0.293841 (0.191876) | 0.061168 / 0.128546 (-0.067378) | 0.021510 / 0.075646 (-0.054136) | 0.116598 / 0.419271 (-0.302673) | 0.066116 / 0.043533 (0.022583) | 0.426212 / 0.255139 (0.171073) | 0.448368 / 0.283200 (0.165168) | 0.116003 / 0.141683 (-0.025680) | 1.799329 / 1.452155 (0.347175) | 1.967256 / 1.492716 (0.474540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214893 / 0.018006 (0.196887) | 0.497843 / 0.000490 (0.497354) | 0.000464 / 0.000200 (0.000264) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031758 / 0.037411 (-0.005653) | 0.131182 / 0.014526 (0.116656) | 0.141251 / 0.176557 (-0.035305) | 0.186526 / 0.737135 (-0.550609) | 0.142975 / 0.296338 (-0.153363) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.662094 / 0.215209 (0.446885) | 6.664841 / 2.077655 (4.587186) | 2.690613 / 1.504120 (1.186493) | 2.305399 / 1.541195 (0.764205) | 2.383697 / 1.468490 (0.915207) | 1.280692 / 4.584777 (-3.304085) | 5.629215 / 3.745712 (1.883503) | 5.007083 / 5.269862 (-0.262778) | 2.482163 / 4.565676 (-2.083513) | 0.147662 / 0.424275 (-0.276613) | 0.017770 / 0.007607 (0.010163) | 0.818380 / 0.226044 (0.592335) | 8.006521 / 2.268929 (5.737592) | 3.472262 / 55.444624 (-51.972363) | 2.709550 / 6.876477 (-4.166926) | 2.775138 / 2.142072 (0.633066) | 1.570545 / 4.805227 (-3.234683) | 0.266323 / 6.500664 (-6.234341) | 0.090591 / 0.075469 (0.015122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.657927 / 1.841788 (-0.183861) | 18.448981 / 8.074308 (10.374673) | 20.336909 / 10.191392 (10.145517) | 0.230322 / 0.680424 (-0.450102) | 0.025972 / 0.534201 (-0.508229) | 0.561361 / 0.579283 (-0.017922) | 0.623758 / 0.434364 (0.189394) | 0.664120 / 0.540337 (0.123783) | 0.763144 / 1.386936 (-0.623792) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-16T22:09:35Z
| 2023-02-17T18:50:46Z
| 2023-02-17T18:41:28Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5540",
"merged_at": "2023-02-17T18:41:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5540"
}
|
A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂
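A quick sketch of the low-code methods the tutorial covers (`Dataset.from_dict` and `Dataset.from_generator` are real `datasets` APIs; the toy data is an assumption):
```py
from datasets import Dataset

# build a dataset from an in-memory dict
ds = Dataset.from_dict({"text": ["hello", "world"]})

# or from a generator, useful when the data doesn't fit in memory
def gen():
    yield {"text": "hello"}
    yield {"text": "world"}

ds = Dataset.from_generator(gen)
```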
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5540/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5540/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1789
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1789/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1789/events
|
https://github.com/huggingface/datasets/pull/1789
| 796,229,721
|
MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2
| 1,789
|
[BUG FIX] typo in the import path for metrics
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-01-28T18:01:37Z
| 2021-01-28T18:13:56Z
| 2021-01-28T18:13:56Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1789"
}
|
This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevented loading new metrics.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1789/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2081
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2081/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2081/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2081/events
|
https://github.com/huggingface/datasets/pull/2081
| 835,112,968
|
MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4
| 2,081
|
Fix docstrings issues
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] | null |
[] | 2021-03-18T18:11:01Z
| 2021-04-07T14:37:43Z
| 2021-04-07T14:37:43Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2081",
"merged_at": "2021-04-07T14:37:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2081"
}
|
Fix docstring issues.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2081/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2081/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/273
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/273/comments
|
https://api.github.com/repos/huggingface/datasets/issues/273/events
|
https://github.com/huggingface/datasets/pull/273
| 638,968,054
|
MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4
| 273
|
update cos_e to add cos_e v1.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-06-15T16:03:22Z
| 2020-06-16T08:25:54Z
| 2020-06-16T08:25:52Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/273.diff",
"html_url": "https://github.com/huggingface/datasets/pull/273",
"merged_at": "2020-06-16T08:25:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/273.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/273"
}
|
This PR updates the cos_e dataset to add v1.0, as requested in #163.
@nazneenrajani
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/273/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/273/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/475
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/475/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/475/comments
|
https://api.github.com/repos/huggingface/datasets/issues/475/events
|
https://github.com/huggingface/datasets/pull/475
| 672,884,595
|
MDExOlB1bGxSZXF1ZXN0NDYyODQzMzQz
| 475
|
misc. bugs and quality of life
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Cool thanks, I made those changes. LMK if you think it's ready for merge.",
"Ok to merge for me"
] | 2020-08-04T15:32:29Z
| 2020-08-17T21:14:08Z
| 2020-08-17T21:14:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/475.diff",
"html_url": "https://github.com/huggingface/datasets/pull/475",
"merged_at": "2020-08-17T21:14:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/475.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/475"
}
|
A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them.
1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to the repr to make it slightly more readable.
```
>>> print(list_datasets()[0])
nlp.ObjectInfo(
id='aeslc',
description='A collection of email messages of employees in the Enron Corporation.There are two features: - email_body: email body text. - subject_line: email subject text.',
files=[nlp.S3Object('aeslc.py'), nlp.S3Object('dataset_infos.json'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/dev/allen-p_inbox_29.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/test/allen-p_inbox_24.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/train/allen-p_inbox_20.subject'), nlp.S3Object('dummy/1.0.0/dummy_data.zip'), nlp.S3Object('urls_checksums/checksums.txt')]
)
```
2. Add an id-only option to `list_datasets` and `list_metrics` so the user can easily print out just the names of the datasets & metrics. I often found myself annoyed that this took so many keystrokes to do.
```python
[dataset.id for dataset in list_datasets()] # before
list_datasets(id_only=True) # after
```
3. Fix null-seed randomization caching. When using `train_test_split` and `shuffle`, the computation was being cached even without a seed or generator being passed. The result was that calling `.shuffle` more than once on the same dataset didn't do anything without passing a distinct seed or generator. Likewise with `train_test_split`.
4. Indexing by iterables of bool. I added support for passing an iterable of type bool to `_getitem` as a numpy/pandas-like indexing method. Let me know if you think it's redundant with `filter` (I know it's not optimal memory-wise), but I think it's nice to have as a lightweight alternative to do simple things without having to create a copy of the entire dataset, e.g.
```python
dataset[dataset['label'] == 0] # numpy-like bool indexing to look at instances with labels of 0
```
5. Add an `input_column` argument to `map` and `filter`, which allows you to filter/map on a particular column rather than passing the whole dict to the function. Also adds `fn_kwargs` to be passed to the function. I think these together make mapping much cleaner in many cases such as mono-column tokenization:
```python
# before
dataset = dataset.map(lambda batch: tokenizer(batch["text"]))
# after
dataset = dataset.map(tokenizer, input_column="text")
dataset = dataset.map(tokenizer, input_column="text", fn_kwargs={"truncation": True, "padding": True})
```
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/475/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/475/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5005
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5005/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5005/events
|
https://github.com/huggingface/datasets/issues/5005
| 1,380,952,960
|
I_kwDODunzps5ST6uA
| 5,005
|
Release 2.5.0 breaks transformers CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | 2022-09-21T13:39:19Z
| 2022-09-21T14:11:57Z
| 2022-09-21T14:11:57Z
|
MEMBER
| null | null | null |
## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5005/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/533
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/533/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/533/comments
|
https://api.github.com/repos/huggingface/datasets/issues/533/events
|
https://github.com/huggingface/datasets/pull/533
| 685,585,914
|
MDExOlB1bGxSZXF1ZXN0NDczMjg4OTgx
| 533
|
Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-08-25T15:32:44Z
| 2020-08-26T08:02:24Z
| 2020-08-26T08:02:23Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/533.diff",
"html_url": "https://github.com/huggingface/datasets/pull/533",
"merged_at": "2020-08-26T08:02:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/533.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/533"
}
|
It should fix the CI problems in #513
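A rough illustration of the underlying idea, assuming pyarrow's list types (not the PR's actual code):
```py
import pyarrow as pa

# fixed-size list arrays (pa.list_(dtype, n)) were problematic on
# pyarrow 0.17.1; plain variable-length list arrays work everywhere
arr = pa.array([[1.0, 2.0], [3.0, 4.0]], type=pa.list_(pa.float64()))
print(arr.type)  # list<item: double>
```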
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/533/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/533/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3095
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3095/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3095/events
|
https://github.com/huggingface/datasets/issues/3095
| 1,027,453,146
|
I_kwDODunzps49PbDa
| 3,095
|
`cast_column` makes audio decoding fail
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"cc @anton-l @albertvillanova ",
"Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it."
] | 2021-10-15T13:36:58Z
| 2023-04-07T09:43:20Z
| 2021-10-15T15:38:30Z
|
MEMBER
| null | null | null |
## Describe the bug
After changing the sampling rate, automatic decoding fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"]) # <- this fails currently
```
yields:
```
TypeError: forward() takes 2 positional arguments but 4 were given
```
## Expected results
no failure
## Actual results
The `TypeError` traceback shown above.
## Environment info
- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3095/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/184
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/184/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/184/comments
|
https://api.github.com/repos/huggingface/datasets/issues/184/events
|
https://github.com/huggingface/datasets/pull/184
| 623,120,929
|
MDExOlB1bGxSZXF1ZXN0NDIxODQ5MTQ3
| 184
|
Use IndexError instead of ValueError when index out of range
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-22T10:43:42Z
| 2020-05-28T08:31:18Z
| 2020-05-28T08:31:18Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/184",
"merged_at": "2020-05-28T08:31:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/184"
}
|
**The default `__iter__` needs `IndexError`.**
When I wanted to create a wrapper around an arrow dataset to adapt it to fastai, I didn't know how to initialize it, so instead of inheritance I used object composition.
I wrote something like this:
```py
class HF_dataset:
    def __init__(self, arrow_dataset):
        self.dset = arrow_dataset

    def __getitem__(self, i):
        # my_get_item is the author's custom accessor (not shown);
        # note the index is passed through
        return self.my_get_item(self.dset, i)
```
But `for sample in my_dataset:` gave me `ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).")`. This is because the default `__iter__` stops only when it catches `IndexError`.
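A tiny self-contained illustration of that iteration protocol (independent of `datasets`):
```py
class Seq:
    def __getitem__(self, i):
        if i >= 3:
            raise IndexError  # the old-style iteration protocol stops here
        return i

print(list(Seq()))  # [0, 1, 2] -- raising ValueError instead would propagate
```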
You can also see my [work](https://github.com/richardyy1188/Pretrain-MLM-and-finetune-on-GLUE-with-fastai/blob/master/GLUE_with_fastai.ipynb) that uses fastai2 to show/load batches from huggingface/nlp GLUE datasets.
So I hope we can raise `IndexError` instead, so that other people who wrap it for any purpose won't be caught by this caveat.
BTW, I super appreciate your work, both transformers and nlp save my life. 💖💖💖💖💖💖💖
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/184/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/184/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3766
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3766/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3766/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3766/events
|
https://github.com/huggingface/datasets/pull/3766
| 1,145,829,289
|
PR_kwDODunzps4zOujH
| 3,766
|
Fix head_qa data URL
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-02-21T13:52:50Z
| 2022-02-21T14:39:20Z
| 2022-02-21T14:39:19Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3766.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3766",
"merged_at": "2022-02-21T14:39:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3766.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3766"
}
|
Fix #3758.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3766/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3766/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/342
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/342/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/342/comments
|
https://api.github.com/repos/huggingface/datasets/issues/342/events
|
https://github.com/huggingface/datasets/issues/342
| 651,333,194
|
MDU6SXNzdWU2NTEzMzMxOTQ=
| 342
|
Features should be updated when `map()` changes schema
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[
"`dataset.column_names` are being updated but `dataset.features` aren't indeed..."
] | 2020-07-06T08:03:23Z
| 2020-07-23T10:15:16Z
| 2020-07-23T10:15:16Z
|
MEMBER
| null | null | null |
`dataset.map()` can change the schema and column names.
We should update the features in this case (with whatever can be inferred).
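For illustration, a minimal sketch of what this means (the column names here are arbitrary):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb"]})
ds2 = ds.map(lambda ex: {"length": len(ex["text"])}, remove_columns=["text"])
print(ds2.column_names)  # ['length']
print(ds2.features)      # should now describe 'length', not the old 'text' column
```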
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/342/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/342/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5381
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5381/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5381/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5381/events
|
https://github.com/huggingface/datasets/issues/5381
| 1,504,498,387
|
I_kwDODunzps5ZrNLT
| 5,381
|
Wrong URL for the_pile dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45738728?v=4",
"events_url": "https://api.github.com/users/LeoGrin/events{/privacy}",
"followers_url": "https://api.github.com/users/LeoGrin/followers",
"following_url": "https://api.github.com/users/LeoGrin/following{/other_user}",
"gists_url": "https://api.github.com/users/LeoGrin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LeoGrin",
"id": 45738728,
"login": "LeoGrin",
"node_id": "MDQ6VXNlcjQ1NzM4NzI4",
"organizations_url": "https://api.github.com/users/LeoGrin/orgs",
"received_events_url": "https://api.github.com/users/LeoGrin/received_events",
"repos_url": "https://api.github.com/users/LeoGrin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LeoGrin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeoGrin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LeoGrin"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! This error can happen if there is a local file/folder with the same name as the requested dataset. And to avoid it, rename the local file/folder.\r\n\r\nSoon, it will be possible to explicitly request a Hub dataset as follows:https://github.com/huggingface/datasets/issues/5228#issuecomment-1313494020"
] | 2022-12-20T12:40:14Z
| 2023-02-15T16:24:57Z
| 2023-02-15T16:24:57Z
|
NONE
| null | null | null |
### Describe the bug
When trying to load the `the_pile` dataset from the library, I get a `FileNotFoundError`.
### Steps to reproduce the bug
Steps to reproduce:
Run:
```
from datasets import load_dataset
dataset = load_dataset("the_pile")
```
I get the output:
"name": "FileNotFoundError",
"message": "Unable to resolve any data file that matches '['**']' at /storage/store/work/lgrinszt/memorization/the_pile with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'GRIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG', 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF', 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ircam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'OGG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']"
### Expected behavior
The `the_pile` dataset should be downloaded.
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.27
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5381/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5381/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4684
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4684/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4684/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4684/events
|
https://github.com/huggingface/datasets/issues/4684
| 1,305,554,654
|
I_kwDODunzps5N0S7e
| 4,684
|
How to assign new values to Dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/beyondguo",
"id": 37113676,
"login": "beyondguo",
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/beyondguo"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! One option is use `map` with a function that overwrites the labels (`dset = dset.map(lamba _: {\"label\": 0}, features=dset.features`)). Or you can use the `remove_column` + `add_column` combination (`dset = dset.remove_columns(\"label\").add_column(\"label\", [0]*len(data)).cast(dset.features)`, but note that this approach creates an in-memory table for the added column instead of writing to disk, which could be problematic for large datasets.",
"Hi! I tried your proposed solution, but it does not solve my problem unfortunately. I am working with a set of protein sequences that have been tokenized with ESM, but some sequences are longer than `max_length`, they have been truncated in the tokenization. So now I want to truncate my labels as well, but that does not work with a mapping (e.g. `dset.map` as you suggested). Specifically, what I did was the following:\r\n\r\n```\r\ndef postprocess_tokenize(tokenized_data):\r\n \"\"\"\r\n adjust label lengths if they dont match.\r\n \"\"\"\r\n if len(tokenized_data['input_ids']) < len(tokenized_data['labels']):\r\n new_labels = tokenized_data['labels'][:len(tokenized_data['input_ids'])]\r\n tokenized_data[\"labels\"] = new_labels\r\n return tokenized_data\r\n\r\ntokenized_data = tokenized_data.map(postprocess_tokenize, batched=True) # this does not adjust the labels...\r\n```\r\n\r\nAny tips on how to do this properly?\r\n\r\nMore generally, I am wondering why the DataCollator supports padding but does not support truncation? Seems odd to me.\r\n\r\nThanks in advance!"
] | 2022-07-15T04:17:57Z
| 2023-03-20T15:50:41Z
| 2022-10-10T11:53:38Z
|
NONE
| null | null | null |

Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import load_dataset
data = load_dataset('glue','sst2')
data['train']['label'] = [0]*len(data)
```
I will get the error:
```
TypeError: 'Dataset' object does not support item assignment
```
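A minimal sketch of the `map`-based workaround suggested in the comments above (rows are immutable, so you build a new dataset instead of assigning in place):
```python
from datasets import load_dataset

data = load_dataset('glue', 'sst2')
# every example's label becomes 0 in the returned (new) dataset
train = data['train'].map(lambda _: {'label': 0})
print(set(train['label']))  # {0}
```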
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4684/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4684/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5963/events
|
https://github.com/huggingface/datasets/issues/5963
| 1,762,774,457
|
I_kwDODunzps5pEc25
| 5,963
|
Got an error _pickle.PicklingError use Dataset.from_spark.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112800614?v=4",
"events_url": "https://api.github.com/users/yanzia12138/events{/privacy}",
"followers_url": "https://api.github.com/users/yanzia12138/followers",
"following_url": "https://api.github.com/users/yanzia12138/following{/other_user}",
"gists_url": "https://api.github.com/users/yanzia12138/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanzia12138",
"id": 112800614,
"login": "yanzia12138",
"node_id": "U_kgDOBrkzZg",
"organizations_url": "https://api.github.com/users/yanzia12138/orgs",
"received_events_url": "https://api.github.com/users/yanzia12138/received_events",
"repos_url": "https://api.github.com/users/yanzia12138/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanzia12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanzia12138/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanzia12138"
}
|
[] |
closed
| false
| null |
[] | null |
[
"i got error using method from_spark when using multi-node Spark cluster. seems could only use \"from_spark\" in local?",
"@lhoestq ",
"cc @maddiedawson it looks like there an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset builder, and therefore contains the SparkContext itself.\r\n\r\nI think it can be fixed by defining `create_cache_and_write_probe` outside the Spark dataset builder, and pass a `partial(create_cache_and_write_probe, cache_dir=self._cache_dir)` to `mapPartitions`",
"Just saw this; thanks for flagging! Your proposed solution sounds good. I can prepare a PR",
"@maddiedawson can you show me the demo ,so i can test in local .before your PR"
] | 2023-06-19T05:30:35Z
| 2023-07-24T11:55:46Z
| 2023-07-24T11:55:46Z
|
NONE
| null | null | null |
Python 3.9.2
Got a `_pickle.PicklingError` when using `Dataset.from_spark`. The dataset was loaded from a Spark DataFrame on a multi-node Spark cluster:
```python
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(df, keep_in_memory=True,
                        cache_dir="/pnc-data/data/nuplan/t5_spark/cache_data")
ds.save_to_disk(args.output_data)
```
Error:
```
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
```
_Originally posted by @yanzia12138 in https://github.com/huggingface/datasets/issues/5701#issuecomment-1594674306_
```
Traceback (most recent call last):
File "/home/work/main.py", line 100, in <module>
run(args)
File "/home/work/main.py", line 80, in run
ds = Dataset.from_spark(df1, keep_in_memory=True,
File "/home/work/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1281, in from_spark
return SparkDatasetReader(
File "/home/work/.local/lib/python3.9/site-packages/datasets/io/spark.py", line 53, in read
self.builder.download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 254, in _prepare_split
self._validate_cache_dir()
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 122, in _validate_cache_dir
self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 950, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2951, in _jrdd
wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2830, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2816, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/serializers.py", line 447, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. S
parkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
```
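For context, a minimal standalone sketch of the failure mode and of the fix direction proposed in the comments above (all names here are stand-ins, not datasets' actual code): pickling a bound method drags the whole instance -- including anything SparkContext-like that it holds -- into the payload, whereas a module-level function wrapped in `functools.partial` pickles cleanly.
```python
import pickle
from functools import partial

class FakeContext:  # stands in for SparkContext, which refuses to be pickled
    def __reduce__(self):
        raise RuntimeError("SparkContext can only be used on the driver")

class Builder:
    def __init__(self):
        self._ctx = FakeContext()
        self._cache_dir = "/tmp/probe"
    def probe(self, it):  # bound method: pickling it pickles `self` (and _ctx)
        return it

def create_cache_and_write_probe(it, cache_dir):  # module-level: no hidden state
    return it

b = Builder()
try:
    pickle.dumps(b.probe)  # fails, just like mapPartitions(self.probe) on workers
except RuntimeError as e:
    print("bound method:", e)
pickle.dumps(partial(create_cache_and_write_probe, cache_dir=b._cache_dir))  # fine
```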
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5963/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5963/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4892
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4892/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4892/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4892/events
|
https://github.com/huggingface/datasets/pull/4892
| 1,350,636,499
|
PR_kwDODunzps49yCD3
| 4,892
|
Add citation to ro_sts and ro_sts_parallel datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4892). All of your documentation changes will be reflected on that endpoint."
] | 2022-08-25T09:51:06Z
| 2022-08-25T10:49:56Z
| 2022-08-25T10:49:56Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4892.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4892",
"merged_at": "2022-08-25T10:49:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4892.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4892"
}
|
This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, once their authors replied to our request for that information:
- https://github.com/dumitrescustefan/RO-STS/issues/4
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4892/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4892/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2710
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2710/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2710/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2710/events
|
https://github.com/huggingface/datasets/pull/2710
| 951,723,326
|
MDExOlB1bGxSZXF1ZXN0Njk2MDYyNjAy
| 2,710
|
Update WikiANN data URL
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"We have to update the URL in the XTREME benchmark as well:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0dfc639cec450ed8762a997789a2ed63e63cdcf2/datasets/xtreme/xtreme.py#L411-L411\r\n\r\n"
] | 2021-07-23T16:29:21Z
| 2021-07-26T09:34:23Z
| 2021-07-26T09:34:23Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2710",
"merged_at": "2021-07-26T09:34:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2710"
}
|
WikiANN data source URL is no longer accessible: 404 error from Dropbox.
We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card.
Close #2691.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2710/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2710/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3442
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3442/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3442/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3442/events
|
https://github.com/huggingface/datasets/pull/3442
| 1,081,862,747
|
PR_kwDODunzps4v7oBZ
| 3,442
|
Extend text to support yielding lines, paragraphs or documents
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)",
"> The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)\r\n\r\n@lhoestq @mariosasko I would avoid the term `split` in this context and keep it only for \"train\", \"validation\" and \"test\" splits.\r\n- https://huggingface.co/docs/datasets/process.html#split\r\n > datasets.Dataset.train_test_split() creates train and test splits, if your dataset doesn’t already have them.\r\n- https://huggingface.co/docs/datasets/process.html#process-multiple-splits\r\n > Many datasets have splits that you can process simultaneously with datasets.DatasetDict.map().\r\n\r\nPlease note that in the documentation, one of the terms more frequently used in this context is **\"row\"**:\r\n- https://huggingface.co/docs/datasets/access.html#features-and-columns\r\n > A dataset is a table of rows and typed columns.\r\n\r\n > Return the number of rows and columns with the following standard attributes:\r\n > dataset.num_columns\r\n > 4\r\n > dataset.num_rows\r\n > 3668\r\n\r\n- https://huggingface.co/docs/datasets/access.html#rows-slices-batches-and-columns\r\n > Get several rows of your dataset at a time with slice notation or a list of indices:\r\n- https://huggingface.co/docs/datasets/process.html#map\r\n > This function can even create new rows and columns.\r\n\r\nOther of the terms more frequently used in the docs (in the code as well) is **\"example\"**:\r\n- https://huggingface.co/docs/datasets/process.html#map\r\n > It allows you to apply a processing function to each example in a dataset, independently or in batches.\r\n- https://huggingface.co/docs/datasets/process.html#batch-processing\r\n > datasets.Dataset.map() also supports working with batches of examples.\r\n- https://huggingface.co/docs/datasets/process.html#split-long-examples\r\n > When your examples are too long, you may want to split them\r\n- https://huggingface.co/docs/datasets/process.html#data-augmentation\r\n > With batch processing, you can even augment your dataset with additional examples.\r\n\r\nLess frequently used: **\"item\"**:\r\n- https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_item\r\n > Add item to Dataset.\r\n\r\nOther term used in the docs (although less frequently) is **\"sample\"**. The advantage of this word is that it is also a verb, so we can use the parameter: \"sample_by\" (if you insist on using a verb instead of a noun).\r\n\r\nIn summary, these proposals:\r\n- config.row\r\n- config.example\r\n- config.item\r\n- config.sample\r\n- config.sample_by",
"I like `sample_by`. Another idea I had was `separate_by`.\r\n\r\nIt could also be `sampling`, `sampling_method`, `separation_method`.\r\n\r\nNot a big fan of the proposed nouns alone since they are very generic, that's why I tried to have something more specific.\r\n\r\nI also agree that we actually should avoid `split` to avoid any confusion",
"Thanks for the analysis of the used terms. I also like `sample_by` (`separate_by` is good too).",
"Thank you !! :D "
] | 2021-12-16T07:33:17Z
| 2021-12-20T16:59:10Z
| 2021-12-20T16:39:18Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3442.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3442",
"merged_at": "2021-12-20T16:39:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3442.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3442"
}
|
Add a `config.row` option to the `text` module to allow yielding lines (the default and current behavior), paragraphs, or documents.
Feel free to comment on the name of the config parameter `row` (a usage sketch follows the list below):
- Currently, the docs state datasets are made of rows and columns
- Other names I considered: `example`, `item`
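For reference, a usage sketch with the name the discussion above converged on (`sample_by`; `corpus.txt` is a placeholder file):
```python
from datasets import load_dataset

# one example per paragraph (blank-line separated) instead of one per line
ds = load_dataset("text", data_files={"train": "corpus.txt"}, sample_by="paragraph")
```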
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3442/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3442/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3803
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3803/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3803/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3803/events
|
https://github.com/huggingface/datasets/pull/3803
| 1,157,271,679
|
PR_kwDODunzps4z1T48
| 3,803
|
Remove deprecated methods/params (preparation for v2.0)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-03-02T14:29:12Z
| 2022-03-02T14:53:21Z
| 2022-03-02T14:53:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3803.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3803",
"merged_at": "2022-03-02T14:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3803.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3803"
}
|
This PR removes the following deprecated methods/params (a short migration sketch follows the list):
* `Dataset.cast_`/`DatasetDict.cast_`
* `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_`
* `Dataset.remove_columns_`/`DatasetDict.remove_columns_`
* `Dataset.rename_columns_`/`DatasetDict.rename_columns_`
* `prepare_module`
* param `script_version` in `load_dataset`/`load_metric`
* param `version` in `hf_github_url`
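A migration sketch for the out-of-place equivalents, as I understand the deprecations (`revision` replaces `script_version`):
```python
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"old": [1, 2], "extra": ["a", "b"]})
ds = ds.remove_columns(["extra"])                # instead of ds.remove_columns_(...)
ds = ds.rename_columns({"old": "new"})           # instead of ds.rename_columns_(...)
ds = ds.cast(Features({"new": Value("int32")}))  # instead of ds.cast_(...)
print(ds.features)

# load_dataset("squad", revision="main")         # instead of script_version="main"
```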
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3803/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3803/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6326
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6326/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6326/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6326/events
|
https://github.com/huggingface/datasets/pull/6326
| 1,955,420,536
|
PR_kwDODunzps5dcSRa
| 6,326
|
Create battery_analysis.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https://api.github.com/users/vinitkm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vinitkm",
"id": 130216732,
"login": "vinitkm",
"node_id": "U_kgDOB8LzHA",
"organizations_url": "https://api.github.com/users/vinitkm/orgs",
"received_events_url": "https://api.github.com/users/vinitkm/received_events",
"repos_url": "https://api.github.com/users/vinitkm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vinitkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinitkm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vinitkm"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-10-21T10:07:48Z
| 2023-10-23T14:56:20Z
| 2023-10-23T14:56:20Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6326.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6326",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6326.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6326"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6326/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6326/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3522
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3522/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3522/events
|
https://github.com/huggingface/datasets/issues/3522
| 1,093,807,586
|
I_kwDODunzps5BMi3i
| 3,522
|
wmt19 is broken (zh-en)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AjayP13",
"id": 5404177,
"login": "AjayP13",
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AjayP13"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] | null |
[
"This issue is not reproducible."
] | 2022-01-04T22:33:45Z
| 2022-05-06T16:27:37Z
| 2022-05-06T16:27:37Z
|
NONE
| null | null | null |
## Describe the bug
Loading the `zh-en` config of `wmt19` fails because one of its source URLs is unreachable.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wmt19", 'zh-en')
```
## Expected results
The dataset should download.
## Actual results
`ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip`
## Environment info
- `datasets` version: 1.15.1
- Platform: Linux
- Python version: 3.8
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3522/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/893/events
|
https://github.com/huggingface/datasets/pull/893
| 751,703,696
|
MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx
| 893
|
add metrec: arabic poetry dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zaidalyafeai",
"id": 15667714,
"login": "zaidalyafeai",
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zaidalyafeai"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq removed prints and added the dataset card. ",
"@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ",
"Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n- The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`",
"> Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n> \r\n> Couple of last comments:\r\n> \r\n> * this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n> * The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`\r\n\r\nI have no idea how some other files changed. I tried to rebase and push but this created some errors. I had to run the command \r\n`git push -u --force origin add-metrec-dataset` which might cause some problems. ",
"Feel free to create another branch/another PR without all the other changes",
"@yjernite can you explain which other files are changed because of the PR ? https://github.com/huggingface/datasets/pull/893/files only shows files related to the dataset. ",
"Right ! github is nice with us today :)",
"Looks like this one is ready to merge, thanks @zaidalyafeai !",
"@lhoestq thanks for the merge. I am not a GitHub geek. I already have another dataset to add. I'm not sure how to add another given my forked repo. Do I follow the same steps with a different checkout name ?",
"If you've followed the instructions in here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment\r\n\r\n(especially point 2. and the command `git remote add upstream ....`)\r\n\r\nThen you can try\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my-new-dataset-name>\r\n```"
] | 2020-11-26T16:10:16Z
| 2020-12-01T16:24:55Z
| 2020-12-01T15:15:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/893.diff",
"html_url": "https://github.com/huggingface/datasets/pull/893",
"merged_at": "2020-12-01T15:15:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/893.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/893"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/893/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/893/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/1880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1880/events
|
https://github.com/huggingface/datasets/pull/1880
| 808,563,439
|
MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0
| 1,880
|
Update multi_woz_v22 checksums
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-15T14:00:18Z
| 2021-02-15T14:18:19Z
| 2021-02-15T14:18:18Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1880.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1880",
"merged_at": "2021-02-15T14:18:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1880.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1880"
}
|
As noticed in #1876, the checksums of this dataset are outdated.
I updated them in this PR.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1880/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3951
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3951/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3951/events
|
https://github.com/huggingface/datasets/issues/3951
| 1,171,568,814
|
I_kwDODunzps5F1Liu
| 3,951
|
Forked streaming datasets try to `open` data urls rather than use network
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dlwh",
"id": 9633,
"login": "dlwh",
"node_id": "MDQ6VXNlcjk2MzM=",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"repos_url": "https://api.github.com/users/dlwh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dlwh"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to the forked process (so it fails to open remote files and start streaming data from them). In particular `open` is supposed to be mocked by our `xopen` function that is an extended open that supports remote files. Let me try to fix this"
] | 2022-03-16T21:21:02Z
| 2022-06-10T20:47:26Z
| 2022-06-10T20:47:26Z
|
NONE
| null | null | null |
## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset: the forked worker processes try to `open` the data URLs as local files rather than streaming them over the network.
## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
import torch.utils.data
# work around #3950
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
pass
def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset:
return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling)
if __name__ == '__main__':
freeze_support()
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
ds = _ensure_format(ds)
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
I'd expect the dataset to load the url correctly and produce examples.
## Actual results
```
warnings.warn(
***** Running training *****
Num examples = 8000
Num Epochs = 9223372036854775807
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 1000
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__
for key, example in self._iter():
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter
yield from ex_iterable
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples
with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15.
0%| | 0/1000 [00:02<?, ?it/s]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
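For context on the fix direction in the maintainer comment above, a minimal sketch of the extended-`open` idea (not the library's actual code; assumes `fsspec` with HTTP support is installed):
```python
import fsspec

def xopen(path, mode="rb", **kwargs):
    # extended open: handles local paths and http/https URLs alike
    return fsspec.open(path, mode, **kwargs).open()

# In a streaming dataset script, plain open(filepath, "rb") is meant to be
# swapped for xopen, so URLs stream over the network instead of raising
# FileNotFoundError -- that substitution is what goes missing in forked workers.
with xopen("https://raw.githubusercontent.com/huggingface/datasets/main/README.md", "rt") as f:
    print(f.readline())
```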
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3951/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/805
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/805/comments
|
https://api.github.com/repos/huggingface/datasets/issues/805/events
|
https://github.com/huggingface/datasets/issues/805
| 737,019,360
|
MDU6SXNzdWU3MzcwMTkzNjA=
| 805
|
On loading a metric from datasets, I get the following error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laibamehnaz",
"id": 36405283,
"login": "laibamehnaz",
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laibamehnaz"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] | 2020-11-05T15:14:38Z
| 2022-02-14T15:32:59Z
| 2022-02-14T15:32:59Z
|
NONE
| null | null | null |
```python
from datasets import load_metric
metric = load_metric('bleurt')
```
Traceback:
```
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212     ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
```
Any help will be appreciated. Thank you.
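Per the reply above (pyarrow must be > 0.17.1 so that `PyExtensionType` is available), a quick sanity-check sketch for an environment of that era:
```python
import pyarrow

print(pyarrow.__version__)
# old builds lack the attribute that datasets' extension types need
assert hasattr(pyarrow, "PyExtensionType"), "run: pip install --upgrade pyarrow"
```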
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/805/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6471
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6471/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6471/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6471/events
|
https://github.com/huggingface/datasets/pull/6471
| 2,026,100,761
|
PR_kwDODunzps5hLEni
| 6,471
|
Remove delete doc CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6471). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005573 / 0.011353 (-0.005780) | 0.003449 / 0.011008 (-0.007559) | 0.063323 / 0.038508 (0.024815) | 0.049369 / 0.023109 (0.026260) | 0.254280 / 0.275898 (-0.021618) | 0.267721 / 0.323480 (-0.055759) | 0.002894 / 0.007986 (-0.005092) | 0.002646 / 0.004328 (-0.001683) | 0.049284 / 0.004250 (0.045033) | 0.037947 / 0.037052 (0.000895) | 0.251654 / 0.258489 (-0.006836) | 0.279729 / 0.293841 (-0.014112) | 0.028022 / 0.128546 (-0.100525) | 0.010653 / 0.075646 (-0.064993) | 0.208567 / 0.419271 (-0.210704) | 0.035863 / 0.043533 (-0.007670) | 0.248522 / 0.255139 (-0.006617) | 0.270274 / 0.283200 (-0.012925) | 0.019683 / 0.141683 (-0.122000) | 1.136342 / 1.452155 (-0.315812) | 1.206757 / 1.492716 (-0.285960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094682 / 0.018006 (0.076676) | 0.304092 / 0.000490 (0.303602) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018606 / 0.037411 (-0.018805) | 0.060568 / 0.014526 (0.046042) | 0.074067 / 0.176557 (-0.102490) | 0.118979 / 0.737135 (-0.618156) | 0.075676 / 0.296338 (-0.220663) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290452 / 0.215209 (0.075243) | 2.848868 / 2.077655 (0.771213) | 1.534932 / 1.504120 (0.030812) | 1.386717 / 1.541195 (-0.154478) | 1.416645 / 
1.468490 (-0.051845) | 0.569020 / 4.584777 (-4.015757) | 2.421168 / 3.745712 (-1.324545) | 2.781358 / 5.269862 (-2.488503) | 1.758495 / 4.565676 (-2.807182) | 0.063851 / 0.424275 (-0.360424) | 0.004968 / 0.007607 (-0.002639) | 0.339198 / 0.226044 (0.113154) | 3.356392 / 2.268929 (1.087464) | 1.858145 / 55.444624 (-53.586479) | 1.589000 / 6.876477 (-5.287477) | 1.569175 / 2.142072 (-0.572897) | 0.650571 / 4.805227 (-4.154657) | 0.120288 / 6.500664 (-6.380376) | 0.042489 / 0.075469 (-0.032980) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939963 / 1.841788 (-0.901824) | 11.493612 / 8.074308 (3.419304) | 10.353780 / 10.191392 (0.162388) | 0.141945 / 0.680424 (-0.538479) | 0.014397 / 0.534201 (-0.519804) | 0.286971 / 0.579283 (-0.292312) | 0.266787 / 0.434364 (-0.167577) | 0.330385 / 0.540337 (-0.209952) | 0.438542 / 1.386936 (-0.948394) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005360 / 0.011353 (-0.005993) | 0.003720 / 0.011008 (-0.007288) | 0.048790 / 0.038508 (0.010282) | 0.050256 / 0.023109 (0.027147) | 0.275445 / 0.275898 (-0.000453) | 0.297725 / 0.323480 (-0.025755) | 0.004077 / 0.007986 (-0.003909) | 0.002759 / 0.004328 (-0.001569) | 0.047653 / 0.004250 (0.043403) | 0.040205 / 0.037052 (0.003153) | 0.281028 / 0.258489 (0.022539) | 0.304682 / 0.293841 (0.010841) | 0.030158 / 0.128546 (-0.098388) | 0.010957 / 0.075646 (-0.064689) | 0.058193 / 0.419271 (-0.361079) | 0.033277 / 0.043533 (-0.010256) | 0.279501 / 0.255139 (0.024362) | 0.295381 / 0.283200 (0.012181) | 0.017889 / 0.141683 (-0.123794) | 1.121354 / 1.452155 (-0.330801) | 1.225702 / 1.492716 (-0.267014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093385 / 0.018006 (0.075378) | 0.304642 / 0.000490 (0.304152) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021456 / 0.037411 (-0.015955) | 0.068536 / 0.014526 (0.054010) | 0.080867 / 0.176557 (-0.095689) | 0.119093 / 0.737135 (-0.618042) | 0.081875 / 0.296338 (-0.214464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304434 / 0.215209 (0.089225) | 2.990303 / 2.077655 (0.912649) | 1.616959 / 1.504120 (0.112839) | 1.493256 / 1.541195 (-0.047939) | 1.542857 / 1.468490 (0.074367) | 0.575517 / 4.584777 (-4.009260) | 2.455165 / 3.745712 (-1.290547) | 2.810089 / 5.269862 (-2.459773) | 1.756502 / 4.565676 (-2.809175) | 0.064801 / 0.424275 (-0.359475) | 0.004969 / 0.007607 (-0.002638) | 0.360227 / 0.226044 (0.134183) | 3.575029 / 2.268929 (1.306100) | 1.989955 / 55.444624 (-53.454669) | 1.705306 / 6.876477 (-5.171171) | 1.688523 / 2.142072 (-0.453550) | 0.663266 / 4.805227 (-4.141962) | 0.121852 / 6.500664 (-6.378812) | 0.041853 / 0.075469 (-0.033616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983535 / 1.841788 (-0.858252) | 11.827656 / 8.074308 (3.753348) | 10.663265 / 10.191392 (0.471873) | 0.145942 / 0.680424 (-0.534482) | 0.016004 / 0.534201 (-0.518197) | 0.288907 / 0.579283 (-0.290376) | 0.279100 / 0.434364 (-0.155264) | 0.328061 / 0.540337 (-0.212276) | 0.570253 / 1.386936 (-0.816683) |\n\n</details>\n</details>\n\n\n"
] | 2023-12-05T12:37:50Z
| 2023-12-05T12:44:59Z
| 2023-12-05T12:38:50Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6471",
"merged_at": "2023-12-05T12:38:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6471"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6471/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6471/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2210
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2210/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2210/events
|
https://github.com/huggingface/datasets/issues/2210
| 855,709,400
|
MDU6SXNzdWU4NTU3MDk0MDA=
| 2,210
|
dataloading slow when using HUGE dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] | 2021-04-12T08:33:02Z
| 2021-04-13T02:03:05Z
| 2021-04-13T02:03:05Z
|
NONE
| null | null | null |
Hi,
When I use datasets with 600GB of data, dataloading slows down significantly.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle DDP training.
When looking at the pytorch-lightning profiler output of the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
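For reference, a minimal sketch of the kind of setup described above (paths, columns, and batch size are illustrative, not my exact code):
```python
import datasets
import torch

ds = datasets.load_from_disk("path/to/my_600GB_dataset")  # memory-mapped Arrow dataset
ds.set_format("torch")  # __getitem__ now returns torch tensors

loader = torch.utils.data.DataLoader(ds, batch_size=32, num_workers=4)
for batch in loader:  # this fetch is what get_train_batch times
    break
```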
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2210/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4311
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4311/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4311/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4311/events
|
https://github.com/huggingface/datasets/pull/4311
| 1,231,369,438
|
PR_kwDODunzps43ln8-
| 4,311
|
[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it"
] | 2022-05-10T15:52:15Z
| 2022-05-10T17:19:42Z
| 2022-05-10T17:11:47Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4311",
"merged_at": "2022-05-10T17:11:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4311"
}
|
I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when metadata are present - labels can just be included in the metadata if necessary
- raise informative error messages when metadata and images aren't linked correctly:
- when an image is missing a metadata file
- when a metadata file is missing an image
I added some tests for these changes as well
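For context, a minimal sketch of the kind of layout these changes affect (file and column names are illustrative; `metadata.jsonl` links each image to its metadata via the `file_name` column):
```python
from datasets import load_dataset

# my_dataset/ contains img0.jpg, img1.jpg and a metadata.jsonl whose lines look like:
#   {"file_name": "img0.jpg", "text": "a caption for img0"}
#   {"file_name": "img1.jpg", "text": "a caption for img1"}
# With a metadata file present, labels are no longer inferred from file names.
ds = load_dataset("imagefolder", data_dir="my_dataset", split="train")
print(ds[0]["text"])
```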
cc @mariosasko
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4311/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4311/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3800
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3800/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3800/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3800/events
|
https://github.com/huggingface/datasets/pull/3800
| 1,155,620,761
|
PR_kwDODunzps4zvkjA
| 3,800
|
Added computer vision tasks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-03-01T17:37:46Z
| 2022-03-04T07:15:55Z
| 2022-03-04T07:15:55Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3800",
"merged_at": "2022-03-04T07:15:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3800"
}
|
My previous PR was in my fork, so I thought it'd be easier to do it from a branch. Added computer vision task datasets according to the HF tasks.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3800/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3800/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2406
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2406/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2406/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2406/events
|
https://github.com/huggingface/datasets/issues/2406
| 902,643,844
|
MDU6SXNzdWU5MDI2NDM4NDQ=
| 2,406
|
Add guide on using task templates to documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[] | 2021-05-26T16:28:26Z
| 2022-10-05T17:07:00Z
| 2022-10-05T17:07:00Z
|
MEMBER
| null | null | null |
Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
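For example, a hypothetical snippet such a guide could start from (assuming the dataset card registers a text classification template):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# casts/renames columns according to the registered text classification template
ds = ds.prepare_for_task("text-classification")
print(ds.features)
```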
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2406/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2406/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1678
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1678/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1678/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1678/events
|
https://github.com/huggingface/datasets/pull/1678
| 777,567,920
|
MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy
| 1,678
|
Switchboard Dialog Act Corpus added under `datasets/swda`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gmihaila",
"id": 22454783,
"login": "gmihaila",
"node_id": "MDQ6VXNlcjIyNDU0Nzgz",
"organizations_url": "https://api.github.com/users/gmihaila/orgs",
"received_events_url": "https://api.github.com/users/gmihaila/received_events",
"repos_url": "https://api.github.com/users/gmihaila/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gmihaila"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.",
"It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ",
"Hi @lhoestq,\r\nI'm working on this to add the full dataset",
"> It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik\r\n\r\n@lhoestq Any info on how to add them?",
"@gmihaila, instead of using the current repo you should look into [this](https://github.com/cgpotts/swda). You can use the `csv` files uploaded in this repo (`swda.zip`) to access other fields and include them in this dataset. It has one dependency too, `swda.py`, you can download that separately and include it in your dataset's folder to be imported while reading the `csv` files.\r\n\r\nAlmost all the attributes of `Transcript` and `Utterance` objects are of the type str, int, or list. As far as `trees` attribute is concerned in utterance object you can simply parse it as string and user can maybe later convert it to nltk.tree object",
"@bhavitvyamalik Thank you for the clarification! \r\n\r\nI didn't use [that](https://github.com/cgpotts/swda) because it doesn't have the splits. I think in combination with [what I used](https://github.com/NathanDuran/Switchboard-Corpus) would help.\r\n\r\nLet me know if I can help! I can make those changes if you don't have the time.",
"I'm a bit busy for the next 2 weeks. I'll be able to complete it by end of January only. Maybe you can start with it and I'll help you?\r\nAlso, I looked into the official train/val/test splits and not all the files are there in the repo I used so I think either we'll have to skip them or put all of that into just train",
"Yes, I can start working on it and ask you to do a code review.\r\n\r\nYes, not all files are there. I'll try to find papers that have the correct and full splits, if not, I'll do like you suggested.\r\n\r\nThank you again for your help @bhavitvyamalik !"
] | 2021-01-03T03:53:41Z
| 2021-01-08T18:09:21Z
| 2021-01-05T10:06:35Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1678.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1678",
"merged_at": "2021-01-05T10:06:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1678.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1678"
}
|
Switchboard Dialog Act Corpus
Intro:
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,
with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information
about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
Details:
[homepage](http://compprag.christopherpotts.net/swda.html)
[repo](https://github.com/NathanDuran/Switchboard-Corpus/raw/master/swda_data/)
I believe this is an important dataset to have, since no dialogue-act dataset has been added yet.
I didn't find any formatting guidelines for pull requests; I hope all this information is enough.
For any support please contact me.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1678/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1678/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3114
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3114/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3114/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3114/events
|
https://github.com/huggingface/datasets/issues/3114
| 1,030,693,130
|
I_kwDODunzps49byEK
| 3,114
|
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francisco-perez-sorrosal",
"id": 918006,
"login": "francisco-perez-sorrosal",
"node_id": "MDQ6VXNlcjkxODAwNg==",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francisco-perez-sorrosal"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.",
"Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` once I update arrow to 6.0.0.\r\n\r\nThanks!"
] | 2021-10-19T20:01:45Z
| 2022-02-14T14:00:28Z
| 2022-02-14T14:00:28Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
Passing a `PyArrowHDFS` implementation of `fsspec.spec.AbstractFileSystem` as the `fs` param of the `load_from_disk` methods in `DatasetDict` (in dataset_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when the `download` method is called on `fs`.
## Steps to reproduce the bug
The documentation for the `fs` parameter states:
```
fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``):
Instance of the remote filesystem used to download the files from.
```
`PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error.
```python
from fsspec.implementations.hdfs import PyArrowHDFS
...
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
```
## Expected results
Previous to load from disk, I have managed to successfully store in HDFS the data and meta-information of a DatasetDict by doing:
```python
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
my_datasets.save_to_disk(transformed_corpus_path, fs=fs)
```
As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS:
```sh
$ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/"
Found 4 items
-rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation
```
When invoking `DatasetDict.load_from_disk(...)` as described above, I would expect `dss` to contain the Arrow-backed datasets I previously saved to HDFS by calling `save_to_disk` on the `DatasetDict` object.
## Actual results
However, when trying to recover the saved datasets, I get this error:
```
...
File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download
TypeError: download() got an unexpected keyword argument 'recursive'
```
Examining the [signature of the download method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438), we can see that there's no `recursive` parameter:
```python
def download(self, path, stream, buffer_size=None):
    with self.open(path, 'rb') as f:
        f.download(stream, buffer_size=buffer_size)
```
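For comparison, a sketch of a possible workaround, assuming fsspec's Arrow-backed HDFS implementation (whose `download` is inherited from `AbstractFileSystem` and does accept `recursive`):
```python
from fsspec.implementations.arrow import HadoopFileSystem
from datasets import DatasetDict

fs = HadoopFileSystem(host=host, port=port, user=user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk("/user/my_user/clickbait/transformed_ds/", fs=fs)
```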
## Environment info
- `datasets` version: 1.13.3
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3114/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3114/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1631
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1631/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1631/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1631/events
|
https://github.com/huggingface/datasets/pull/1631
| 774,349,222
|
MDExOlB1bGxSZXF1ZXN0NTQ1Mjc5MTE2
| 1,631
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/savasy",
"id": 6584825,
"login": "savasy",
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"repos_url": "https://api.github.com/users/savasy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/savasy"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-24T11:45:52Z
| 2020-12-28T17:35:41Z
| 2020-12-28T17:16:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1631",
"merged_at": "2020-12-28T17:16:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1631"
}
|
I made a small change to the citation.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1631/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1631/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/182
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/182/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/182/comments
|
https://api.github.com/repos/huggingface/datasets/issues/182/events
|
https://github.com/huggingface/datasets/pull/182
| 622,646,770
|
MDExOlB1bGxSZXF1ZXN0NDIxNDcxMjg4
| 182
|
Update newsroom.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3289873?v=4",
"events_url": "https://api.github.com/users/yoavartzi/events{/privacy}",
"followers_url": "https://api.github.com/users/yoavartzi/followers",
"following_url": "https://api.github.com/users/yoavartzi/following{/other_user}",
"gists_url": "https://api.github.com/users/yoavartzi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yoavartzi",
"id": 3289873,
"login": "yoavartzi",
"node_id": "MDQ6VXNlcjMyODk4NzM=",
"organizations_url": "https://api.github.com/users/yoavartzi/orgs",
"received_events_url": "https://api.github.com/users/yoavartzi/received_events",
"repos_url": "https://api.github.com/users/yoavartzi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yoavartzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoavartzi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yoavartzi"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null |
[] | 2020-05-21T17:07:43Z
| 2020-05-22T16:38:23Z
| 2020-05-22T16:38:23Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/182.diff",
"html_url": "https://github.com/huggingface/datasets/pull/182",
"merged_at": "2020-05-22T16:38:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/182.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/182"
}
|
Updated the URL for Newsroom download so it's more robust to future changes.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/182/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/182/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3417
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3417/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3417/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3417/events
|
https://github.com/huggingface/datasets/pull/3417
| 1,076,943,343
|
PR_kwDODunzps4vrwd7
| 3,417
|
Fix type of bridge field in QED
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-10T15:07:21Z
| 2021-12-14T14:39:06Z
| 2021-12-14T14:39:05Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3417.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3417",
"merged_at": "2021-12-14T14:39:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3417.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3417"
}
|
Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, it is set to `None`.
The following paragraph in the QED repo explains the purpose of this field:
>Each annotation in referential_equalities is a pair of spans, the question_reference and the sentence_reference, corresponding to an entity mention in the question and the selected_sentence respectively. As described in the paper, sentence_references can be "bridged in", in which case they do not correspond with any actual span in the selected_sentence. Hence, sentence_reference spans contain an additional field, bridge, which is a prepositional phrase when a reference is bridged, and is False otherwise. Prepositional phrases serve to link bridged references to an anchoring phrase in the selected_sentence. In the case a sentence_reference is bridged, the start and end, as well as the span string, map to such an anchoring phrase in the selected_sentence.
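A minimal sketch of what the type change amounts to (illustrative, not the full feature dict):
```python
from datasets import Value

# before: "bridge": Value("bool")
# after:  "bridge": Value("string")  # the prepositional phrase, or None when not bridged
bridge_feature = Value("string")

def normalize_bridge(raw_bridge):
    # non-bridged references were stored as False; map them to None
    return None if raw_bridge is False else raw_bridge
```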
Fix #3346
cc @VictorSanh
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3417/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3417/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2751
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2751/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2751/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2751/events
|
https://github.com/huggingface/datasets/pull/2751
| 959,021,262
|
MDExOlB1bGxSZXF1ZXN0NzAyMTk5MjA5
| 2,751
|
Update metadata for wikihow dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-03T11:31:57Z
| 2021-08-03T15:52:09Z
| 2021-08-03T15:52:09Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2751",
"merged_at": "2021-08-03T15:52:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2751"
}
|
Update metadata for wikihow dataset:
- Remove leading new line character in description and citation
- Update metadata JSON
- Remove no longer necessary `urls_checksums/checksums.txt` file
Related to #2748.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2751/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2751/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3399
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3399/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3399/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3399/events
|
https://github.com/huggingface/datasets/issues/3399
| 1,073,593,861
|
I_kwDODunzps4__b4F
| 3,399
|
Add Wikisource dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"See notebook by @geohci: https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikisource%20Processing.ipynb"
] | 2021-12-07T17:21:31Z
| 2021-12-10T17:26:26Z
| null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** *wikisource*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** Additional high quality textual data, besides Wikipedia.
Add loading script as "canonical" dataset (as is the case for "wikipedia").
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
CC: @geohci, @yjernite
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3399/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3399/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/727/events
|
https://github.com/huggingface/datasets/issues/727
| 719,386,366
|
MDU6SXNzdWU3MTkzODYzNjY=
| 727
|
Parallel downloads progress bar flickers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2020-10-12T13:36:05Z
| 2020-10-12T13:36:05Z
| null |
MEMBER
| null | null | null |
When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that, we could simply specify `position=i`, for i = 0 to n (the number of files to download), when instantiating each tqdm progress bar.
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows its current download.
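A minimal sketch of the first idea (downloads are simulated, and threads stand in for processes, for brevity):
```python
import time
from concurrent.futures import ThreadPoolExecutor
from tqdm.auto import tqdm

def fake_download(i: int) -> None:
    # position=i pins each bar to its own line, so concurrent bars stop overwriting each other
    for _ in tqdm(range(100), position=i, desc=f"file {i}", leave=False):
        time.sleep(0.01)

with ThreadPoolExecutor(max_workers=4) as pool:
    for i in range(4):
        pool.submit(fake_download, i)
```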
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/727/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6237
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6237/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6237/events
|
https://github.com/huggingface/datasets/issues/6237
| 1,893,822,321
|
I_kwDODunzps5w4W9x
| 6,237
|
Tokenization with multiple workers is too slow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"events_url": "https://api.github.com/users/macabdul9/events{/privacy}",
"followers_url": "https://api.github.com/users/macabdul9/followers",
"following_url": "https://api.github.com/users/macabdul9/following{/other_user}",
"gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/macabdul9",
"id": 25720695,
"login": "macabdul9",
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"organizations_url": "https://api.github.com/users/macabdul9/orgs",
"received_events_url": "https://api.github.com/users/macabdul9/received_events",
"repos_url": "https://api.github.com/users/macabdul9/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions",
"type": "User",
"url": "https://api.github.com/users/macabdul9"
}
|
[] |
closed
| false
| null |
[] | null |
[
"[This](https://huggingface.co/docs/datasets/nlp_process#map) is the most performant way to tokenize a dataset (`batched=True, num_proc=None, return_tensors=\"np\"`) \r\n\r\nIf`tokenizer.is_fast` returns `True`, `num_proc` must be `None/1` to benefit from the fast tokenizers' parallelism (the fast tokenizers are implemented in Rust, and Rust multi-threading doesn't work well with Python multi-processing)"
] | 2023-09-13T06:18:34Z
| 2023-09-19T21:54:58Z
| 2023-09-19T21:54:58Z
|
NONE
| null | null | null |
I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever.
Code snippet:
```
raw_datasets.map(
    encode_function,
    batched=False,
    num_proc=args.preprocessing_num_workers,
    load_from_cache_file=not args.overwrite_cache,
    remove_columns=[name for name in raw_datasets["train"].column_names if name not in ["input_ids", "labels", "attention_mask"]],
    desc="Tokenizing data",
)
```
Details:
```
transformers==4.28.0.dev0
datasets==4.28.0.dev0
preprocessing_num_workers==48
```
tokenizer == decapoda-research/llama-7b-hf
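For comparison, a sketch of the batched variant that is usually much faster with fast tokenizers (echoing the advice in the comments; `gpt2` and the data file are stand-ins for illustration):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
raw_datasets = load_dataset("json", data_files={"train": "docs.jsonl"})

def encode_batch(batch):
    # lists of texts at once, so the Rust-backed tokenizer can parallelize internally
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw_datasets.map(
    encode_batch,
    batched=True,   # hand batches of examples to the function
    num_proc=None,  # fast tokenizers multithread in Rust; extra processes can hurt
    remove_columns=raw_datasets["train"].column_names,
    desc="Tokenizing data",
)
```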
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6237/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4362
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4362/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4362/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4362/events
|
https://github.com/huggingface/datasets/pull/4362
| 1,238,680,112
|
PR_kwDODunzps439bkf
| 4,362
|
Update dataset_infos for UDHN/udhr dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leondz",
"id": 121934,
"login": "leondz",
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"repos_url": "https://api.github.com/users/leondz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leondz"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for contributing @leondz.\r\n\r\nThe checksums of the files have changed because more languages have been added:\r\n- the new language codes need to be added to the dataset card (README file)\r\n- I think the dataset version number should also be increased, so that users who had previously cached it, get a new dataset download (with the additional languages)",
"Yep! All done (also fixed the language tags in the README which were iso639-3 instead of the expected bcp47)",
"I guess the language code CI failure is due to languages.json being a subset of bcp47 (see issue #4304), happy to contribute a solution here, e.g. autogeneration of the lang list from the relevant isos and the ietf bcp47 subtag register or full code for validation",
"> Thanks again for your contribution, @leondz.\r\n> \r\n> Yes, I think it is OK to set version 1.0.0 (as previous was 0.0.0).\r\n> \r\n> One of the CI failures is related to dummy data: once you have updated the dataset version, the dummy_data ZIP file should be moved from \"dummy/0.0.0/dummy_data.zip\" to \"dummy/1.0.0/dummy_data.zip\".\r\n\r\nOh, thanks, I missed that one\r\n\r\n\r\n> Other CI failure is related to missing languages in our resources file. This has been addressed in this PR:\r\n> \r\n> * #4371\r\n> \r\n> You should merge master branch into your feature branch to incorporate that fix.\r\n\r\nYeah, I saw this :) I already have the merge, thanks. I'm talking about the longer-term picture: every time another language code comes up (e.g. da-bornholm or es-VE), the json will need updating, because the current approach is non-exhaustive manual whitelisting instead of relying on the established bcp standard."
] | 2022-05-17T13:52:59Z
| 2022-06-08T19:20:11Z
| 2022-06-08T19:11:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4362",
"merged_at": "2022-06-08T19:11:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4362"
}
|
Checksum update to `udhr` for issue #4361
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4362/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4362/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1823
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1823/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1823/events
|
https://github.com/huggingface/datasets/pull/1823
| 802,042,181
|
MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx
| 1,823
|
Add FewRel Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?",
"Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What do you think ?",
"Hi @lhoestq,\r\n\r\nSorry again, the last couple of weeks were a bit busy for me. I am wondering how do you want me to achieve that. Using a custom BuilderConfig which takes in whether it is the regular data or \"pid2name\"? \"pid2name\" is only useful for \"train_wiki\", \"val_nyt\" and \"val_wiki\". So, based on my understanding, it would look like this:\r\n\r\n```python\r\nwiki_data = load_dataset('few_rel','train_wiki')\r\nid2name = load_dataset('few_rel','pid2name')\r\n```\r\nand this will be handled in the multiple configs.\r\n\r\n\r\nA better alternative could be providing name of the relationship in only \"train_wiki\", \"val_nyt\" and \"val_wiki\" as an extra feature in the dataset, and doing away with \"pid2name\" entirely. I'll only download pid2name if any of those datasets are requested, and then during generation I'll return the list with the dataset under \"names\" feature. How does this sound?\r\n\r\nEDIT:\r\nThere is one issue with the second approach, the entire pid2name is saved with all three datasets - \"train_wiki\", \"val_nyt\" and \"val_wiki\" ([see code below](https://github.com/huggingface/datasets/pull/1823#issuecomment-786402026)). In dummy data, I can address this by manually editing the pid2name to contain only a few id-name pairs, those matching with the examples in the corresponding example file. But this seems to be inefficient for the entire dataset - storing the same file in multiple places.",
"Okay, I apologize, I guess I finally understand what is required.\r\n\r\nBasically, using:\r\n\r\n```python\r\nfew_rel = load_dataset('few_rel')\r\n```\r\nshould give all the files. This seems difficult since \"pid2name\" has a different format. Any suggestions on this?",
"Yes that's it, sorry if that wasn't clear !",
"Hi @lhoestq,\n\nSince pid2name has different features from the rest of the files, how will I add them to the same config?\n\nDo we want to exclude pid2name totally and add \"names\" to every example?",
"If I understand correctly each sample in the \"default\" config has one relation, and each relation has corresponding names in pid2name.\r\nWould it be possible to also include the names in the \"default\" configuration for each sample ? The names of one sample can be retrieved using the relation id no ?",
"Yes, that can be done. But for some files, the name is already given instead of ID. Only \"train_wiki\", \"val_wiki\", \"val_nyc\" have IDs. For others, I can set the names equal to a list of key.",
"I think that's fine as long as we mention this processing explicitly in the dataset card.",
"Hi @lhoestq,\r\n\r\nI have added the changes. Please let me know in case of any remaining issues.\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq,\r\n\r\nThanks for fixing it and approving :)"
] | 2021-02-05T10:22:03Z
| 2021-03-01T11:56:20Z
| 2021-03-01T10:21:39Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1823.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1823",
"merged_at": "2021-03-01T10:21:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1823.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1823"
}
|
Hi,
This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757.
I wasn't sure how to add `pid2name` along with the dataset, so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `"relation"` in the dataset. Additionally, for `pubmed_unsupervised`, I kept `"relation":""` in the dictionary.
Please recommend better alternatives, if any.
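For illustration, loading the two configurations as proposed would look like this (mirroring the snippet from the review thread; names in the final merged version may differ):
```python
from datasets import load_dataset

wiki_data = load_dataset("few_rel", "train_wiki")  # examples, each with a "relation" key
id2name = load_dataset("few_rel", "pid2name")      # relation id -> name mapping
```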
Thanks,
Gunjan
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1823/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4031
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4031/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4031/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4031/events
|
https://github.com/huggingface/datasets/issues/4031
| 1,182,415,124
|
I_kwDODunzps5GejkU
| 4,031
|
Cannot load the dataset conll2012_ontonotesv5
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8326473?v=4",
"events_url": "https://api.github.com/users/cathyxl/events{/privacy}",
"followers_url": "https://api.github.com/users/cathyxl/followers",
"following_url": "https://api.github.com/users/cathyxl/following{/other_user}",
"gists_url": "https://api.github.com/users/cathyxl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cathyxl",
"id": 8326473,
"login": "cathyxl",
"node_id": "MDQ6VXNlcjgzMjY0NzM=",
"organizations_url": "https://api.github.com/users/cathyxl/orgs",
"received_events_url": "https://api.github.com/users/cathyxl/received_events",
"repos_url": "https://api.github.com/users/cathyxl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cathyxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cathyxl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cathyxl"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Hi @cathyxl, thanks for reporting.\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists."
] | 2022-03-27T07:38:23Z
| 2022-03-28T06:58:31Z
| 2022-03-28T06:31:18Z
|
NONE
| null | null | null |
## Describe the bug
Cannot load the dataset conll2012_ontonotesv5
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test")
print(dataset)
```
## Expected results
The dataset should download successfully.
## Actual results
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
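As explained in the comment thread, the fix at the time was to install `datasets` from GitHub (`pip install git+https://github.com/huggingface/datasets#egg=datasets`) and force the data files to be re-downloaded:
```python
from datasets import load_dataset

ds = load_dataset("conll2012_ontonotesv5", "english_v4", split="test", download_mode="force_redownload")
```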
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4031/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4031/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5671/events
|
https://github.com/huggingface/datasets/issues/5671
| 1,640,840,012
|
I_kwDODunzps5hzTtM
| 5,671
|
How to use `load_dataset('glue', 'cola')`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4",
"events_url": "https://api.github.com/users/makinzm/events{/privacy}",
"followers_url": "https://api.github.com/users/makinzm/followers",
"following_url": "https://api.github.com/users/makinzm/following{/other_user}",
"gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/makinzm",
"id": 40193664,
"login": "makinzm",
"node_id": "MDQ6VXNlcjQwMTkzNjY0",
"organizations_url": "https://api.github.com/users/makinzm/orgs",
"received_events_url": "https://api.github.com/users/makinzm/received_events",
"repos_url": "https://api.github.com/users/makinzm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makinzm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/makinzm"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to import `transformers` but it's no longer the case, so you could also simply update `datasets` and `transformers` won't be imported",
"Thank you for advising me to update these libraries versions.\r\n\r\nI can implement codes using `datasets==2.10.1` and `transformers==4.27.3`"
] | 2023-03-26T09:40:34Z
| 2023-03-28T07:43:44Z
| 2023-03-28T07:43:43Z
|
NONE
| null | null | null |
### Describe the bug
I'm new to using HuggingFace datasets, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
---------------------------------------------------------------------------
InvalidVersion Traceback (most recent call last)
File <timed exec>:1
(Omit because of long error message)
File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version)
195 match = self._regex.search(version)
196 if not match:
--> 197 raise InvalidVersion(f"Invalid version: '{version}'")
199 # Store the parsed out pieces of the version
200 self._version = _Version(
201 epoch=int(match.group("epoch")) if match.group("epoch") else 0,
202 release=tuple(int(i) for i in match.group("release").split(".")),
(...)
208 local=_parse_local_version(match.group("local")),
209 )
InvalidVersion: Invalid version: '0.10.1,<0.11'
```
- You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb)
### Steps to reproduce the bug
- This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup)
1. cd `/DockerImage` and command `docker build . -t week0`
2. cd `/` and command `docker-compose up`
3. Run `experimental_notebooks/data_exploration.ipynb`
----
Just to be sure, I wrote down the Dockerfile and requirements.txt:
- Dockerfile
```Dockerfile
FROM python:3.8
WORKDIR /root/working
RUN apt-get update && \
apt-get install -y python3-dev python3-pip python3-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt
CMD ["bash"]
```
- requirements.txt
```txt
pytorch-lightning==1.2.10
datasets==1.6.2
transformers==4.5.1
scikit-learn==0.24.2
```
### Expected behavior
`load_dataset('glue', 'cola')` runs without any errors.
### Environment info
I already wrote it above (see the Dockerfile and requirements.txt).
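For reference, the versions later confirmed working in the comment thread were:
```txt
datasets==2.10.1
transformers==4.27.3
```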
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5671/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1540
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1540/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1540/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1540/events
|
https://github.com/huggingface/datasets/pull/1540
| 765,357,702
|
MDExOlB1bGxSZXF1ZXN0NTM4OTQ1NDc2
| 1,540
|
added TTC4900: A Benchmark Data for Turkish Text Categorization dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq, can you help with creating dummy_data?\r\n",
"Hi @yavuzKomecoglu did you manage to build the dummy data ?",
"> Hi @yavuzKomecoglu did you manage to build the dummy data ?\r\n\r\nHi, sorry for the return. I've created dummy_data.zip manually.",
"> Nice thank you !\r\n> \r\n> Before we merge can you fill the two sections of the dataset card I suggested ?\r\n> And also remove one remaining print statement\r\n\r\nI updated your suggestions. Thank you very much for your support.",
"I think you accidentally pushed the readme of another dataset (name_to_nation).\r\nI removed it so you have to `git pull`\r\n\r\nBecause of that I guess your changes about the ttc4900 was not included.\r\nFeel free to ping me once they're added\r\n\r\n\r\n",
"> I think you accidentally pushed the readme of another dataset (name_to_nation).\r\n> I removed it so you have to `git pull`\r\n> \r\n> Because of that I guess your changes about the ttc4900 was not included.\r\n> Feel free to ping me once they're added\r\n\r\nI did `git pull` and updated readme **ttc4900**.",
"merging since the Ci is fixed on master"
] | 2020-12-13T12:43:33Z
| 2020-12-18T10:09:01Z
| 2020-12-18T10:09:01Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1540",
"merged_at": "2020-12-18T10:09:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1540"
}
|
This PR adds the TTC4900 dataset, a Turkish text categorization dataset created by me and @basakbuluz.
Homepage: [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900)
Point of Contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1540/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1540/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5754
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5754/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5754/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5754/events
|
https://github.com/huggingface/datasets/pull/5754
| 1,668,755,035
|
PR_kwDODunzps5OWozh
| 5,754
|
Minor tqdm fixes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004592 / 0.011008 (-0.006416) | 0.097239 / 0.038508 (0.058731) | 0.028609 / 0.023109 (0.005499) | 0.309225 / 0.275898 (0.033327) | 0.340015 / 0.323480 (0.016535) | 0.004857 / 0.007986 (-0.003129) | 0.004649 / 0.004328 (0.000320) | 0.074770 / 0.004250 (0.070520) | 0.038351 / 0.037052 (0.001299) | 0.313360 / 0.258489 (0.054871) | 0.350256 / 0.293841 (0.056416) | 0.030770 / 0.128546 (-0.097776) | 0.011591 / 0.075646 (-0.064055) | 0.322444 / 0.419271 (-0.096828) | 0.043704 / 0.043533 (0.000171) | 0.311790 / 0.255139 (0.056651) | 0.339183 / 0.283200 (0.055984) | 0.088041 / 0.141683 (-0.053642) | 1.490649 / 1.452155 (0.038494) | 1.561789 / 1.492716 (0.069072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208984 / 0.018006 (0.190978) | 0.406105 / 0.000490 (0.405616) | 0.003152 / 0.000200 (0.002952) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022622 / 0.037411 (-0.014790) | 0.095819 / 0.014526 (0.081294) | 0.105132 / 0.176557 (-0.071424) | 0.165684 / 0.737135 (-0.571451) | 0.106706 / 0.296338 (-0.189632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426126 / 0.215209 (0.210917) | 4.233864 / 2.077655 (2.156209) | 1.918727 / 1.504120 (0.414607) | 1.729905 / 1.541195 (0.188710) | 1.760342 / 1.468490 
(0.291852) | 0.695449 / 4.584777 (-3.889328) | 3.413531 / 3.745712 (-0.332181) | 1.904557 / 5.269862 (-3.365305) | 1.270604 / 4.565676 (-3.295072) | 0.083018 / 0.424275 (-0.341257) | 0.012760 / 0.007607 (0.005152) | 0.523991 / 0.226044 (0.297947) | 5.236132 / 2.268929 (2.967204) | 2.360959 / 55.444624 (-53.083665) | 1.996533 / 6.876477 (-4.879943) | 2.072934 / 2.142072 (-0.069138) | 0.804133 / 4.805227 (-4.001094) | 0.150976 / 6.500664 (-6.349688) | 0.065503 / 0.075469 (-0.009966) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211828 / 1.841788 (-0.629960) | 13.657743 / 8.074308 (5.583435) | 13.887148 / 10.191392 (3.695756) | 0.145996 / 0.680424 (-0.534428) | 0.016562 / 0.534201 (-0.517639) | 0.380359 / 0.579283 (-0.198924) | 0.388698 / 0.434364 (-0.045666) | 0.440373 / 0.540337 (-0.099965) | 0.531753 / 1.386936 (-0.855183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.004569 / 0.011008 (-0.006439) | 0.076239 / 0.038508 (0.037731) | 0.028462 / 0.023109 (0.005352) | 0.365540 / 0.275898 (0.089642) | 0.398242 / 0.323480 (0.074762) | 0.005785 / 0.007986 (-0.002200) | 0.003346 / 0.004328 (-0.000982) | 0.076296 / 0.004250 (0.072046) | 0.039853 / 0.037052 (0.002800) | 0.367684 / 0.258489 (0.109195) | 0.409570 / 0.293841 (0.115730) | 0.030536 / 0.128546 (-0.098010) | 0.011534 / 0.075646 (-0.064112) | 0.084962 / 0.419271 (-0.334309) | 0.042708 / 0.043533 (-0.000825) | 0.344058 / 0.255139 (0.088919) | 0.389096 / 0.283200 (0.105897) | 0.090559 / 0.141683 (-0.051124) | 1.507101 / 1.452155 (0.054946) | 1.563977 / 1.492716 (0.071260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228740 / 0.018006 (0.210734) | 0.396890 / 0.000490 (0.396400) | 0.000392 / 0.000200 (0.000192) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025052 / 0.037411 (-0.012360) | 0.099951 / 0.014526 (0.085426) | 0.106847 / 0.176557 (-0.069710) | 0.156666 / 0.737135 (-0.580469) | 0.110344 / 0.296338 (-0.185994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442363 / 0.215209 (0.227154) | 4.429571 / 2.077655 (2.351917) | 2.076501 / 1.504120 (0.572381) | 1.875226 / 1.541195 (0.334031) | 1.909093 / 1.468490 (0.440603) | 0.703047 / 4.584777 (-3.881730) | 3.457036 / 3.745712 (-0.288676) | 2.866648 / 5.269862 (-2.403214) | 1.524430 / 4.565676 (-3.041246) | 0.083687 / 0.424275 (-0.340588) | 0.012251 / 0.007607 (0.004643) | 0.543945 / 0.226044 (0.317901) | 5.440559 / 2.268929 (3.171630) | 2.522924 / 55.444624 (-52.921700) | 2.188770 / 6.876477 (-4.687707) | 2.249632 / 2.142072 (0.107559) | 0.813499 / 4.805227 (-3.991728) | 0.152861 / 6.500664 (-6.347803) | 0.067189 / 0.075469 (-0.008280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284255 / 1.841788 (-0.557533) | 14.207864 / 8.074308 (6.133556) | 14.279691 / 10.191392 (4.088299) | 0.167027 / 0.680424 (-0.513396) | 0.016455 / 0.534201 (-0.517746) | 0.380798 / 0.579283 (-0.198485) | 0.390013 / 0.434364 (-0.044351) | 0.445493 / 0.540337 (-0.094845) | 0.526278 / 1.386936 (-0.860658) |\n\n</details>\n</details>\n\n\n"
] | 2023-04-14T18:15:14Z
| 2023-04-20T15:27:58Z
| 2023-04-20T15:21:00Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5754",
"merged_at": "2023-04-20T15:21:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5754"
}
|
`GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (I missed these bars in https://github.com/huggingface/datasets/pull/5560).
Also, this PR modifies the single-proc `save_to_disk` to fix the TQDM bar not accumulating progress in the multi-shard setting (again, a bug introduced by me in the linked PR 😎)
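For context, the context-manager pattern in question looks like this (a generic `tqdm` illustration, not the library's actual code):
```python
from tqdm import tqdm

# Using the bar as a context manager guarantees it is closed even if the
# body raises, so no half-finished bar lingers after an error.
with tqdm(total=100, desc="Generating", unit=" examples") as pbar:
    for _ in range(100):
        pbar.update(1)
```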
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5754/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5754/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3023
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3023/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3023/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3023/events
|
https://github.com/huggingface/datasets/pull/3023
| 1,015,923,031
|
PR_kwDODunzps4srQ4i
| 3,023
|
Fix typo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qqaatw",
"id": 24835382,
"login": "qqaatw",
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qqaatw"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-10-05T06:06:11Z
| 2021-10-05T11:56:55Z
| 2021-10-05T11:56:55Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3023",
"merged_at": "2021-10-05T11:56:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3023"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3023/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3023/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4639
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4639/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4639/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4639/events
|
https://github.com/huggingface/datasets/issues/4639
| 1,295,367,322
|
I_kwDODunzps5NNbya
| 4,639
|
Add HaGRID -- HAnd Gesture Recognition Image Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] | null |
[] | 2022-07-06T07:41:32Z
| 2022-07-06T07:41:32Z
| null |
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset
- **Description:** We introduce HaGRID (HAnd Gesture Recognition Image Dataset), a large image dataset for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. The proposed dataset makes it possible to build HGR systems for use in video conferencing services (Zoom, Skype, Discord, Jazz, etc.), home automation systems, the automotive sector, etc.
- **Paper:** https://arxiv.org/abs/2206.08219
- **Data:** https://github.com/hukenovs/hagrid
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4639/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4639/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5115
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5115/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5115/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5115/events
|
https://github.com/huggingface/datasets/pull/5115
| 1,409,250,020
|
PR_kwDODunzps5Az9Pm
| 5,115
|
Fix iter_batches
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I also ran the code in https://github.com/huggingface/datasets/issues/5111 and it works fine now :)",
"This is ready for review :)"
] | 2022-10-14T12:06:14Z
| 2022-10-14T15:02:15Z
| 2022-10-14T14:59:58Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5115.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5115",
"merged_at": "2022-10-14T14:59:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5115.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5115"
}
|
The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size smaller than `max_chunksize`; as a result, `iter_batches` can return batches smaller than the `batch_size` specified by the user (see the sketch at the end of this description).
Therefore batched `map` couldn't always use batches of the right size, e.g. this fails because it runs only on one batch of one element:
```python
from datasets import Dataset, concatenate_datasets
ds = concatenate_datasets([Dataset.from_dict({"a": [i]}) for i in range(10)])
ds2 = ds.map(lambda _: {}, batched=True)
assert list(ds2) == list(ds)
```
This was introduced in https://github.com/huggingface/datasets/pull/5030
Close https://github.com/huggingface/datasets/issues/5111
This will require a patch release along with https://github.com/huggingface/datasets/pull/5113
TODO:
- [x] fix tests
- [x] add more tests
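For intuition, here is a minimal self-contained sketch of the re-chunking idea (a hypothetical helper, not the actual patch), assuming `pyarrow>=8.0.0`:
```python
import pyarrow as pa

def iter_fixed_size_batches(table: pa.Table, batch_size: int):
    """Yield tables of exactly `batch_size` rows (the last may be smaller)."""
    buffer, buffered_rows = [], 0
    for batch in table.to_reader(max_chunksize=batch_size):
        buffer.append(batch)
        buffered_rows += batch.num_rows
        while buffered_rows >= batch_size:
            combined = pa.Table.from_batches(buffer).combine_chunks()
            yield combined.slice(0, batch_size)
            rest = combined.slice(batch_size)
            buffer, buffered_rows = rest.to_batches(), rest.num_rows
    if buffered_rows:
        yield pa.Table.from_batches(buffer)

# Ten 1-row chunks: to_reader(max_chunksize=3) passes them through unmerged,
# which is exactly the situation described above
table = pa.concat_tables([pa.table({"a": [i]}) for i in range(10)])
print([t.num_rows for t in iter_fixed_size_batches(table, 3)])  # [3, 3, 3, 1]
```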
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5115/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5115/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6440
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6440/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6440/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6440/events
|
https://github.com/huggingface/datasets/issues/6440
| 2,004,509,301
|
I_kwDODunzps53emJ1
| 6,440
|
`.map` not hashing under python 3.9
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/changyeli",
"id": 9058204,
"login": "changyeli",
"node_id": "MDQ6VXNlcjkwNTgyMDQ=",
"organizations_url": "https://api.github.com/users/changyeli/orgs",
"received_events_url": "https://api.github.com/users/changyeli/received_events",
"repos_url": "https://api.github.com/users/changyeli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changyeli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/changyeli"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Tried to upgrade Python to 3.11 - still get this message. A partial solution is to NOT use `num_proc` at all. It will be considerably longer to finish the job.",
"Hi! The `model = torch.compile(model)` line is problematic for our hashing logic. We would have to merge https://github.com/huggingface/datasets/pull/5867 to support hashing `torch.compile`-ed models/functions. \r\n\r\nI've started refactoring the hashing logic and plan to incorporate a fix for `torch.compile` as part of it, so this should be addressed soon (probably this or next week). "
] | 2023-11-21T15:14:54Z
| 2023-11-28T16:29:33Z
| 2023-11-28T16:29:33Z
|
NONE
| null | null | null |
### Describe the bug
The `.map` function cannot hash the mapped function under Python 3.9. I tried [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message:
`Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
### Steps to reproduce the bug
```python
import torch
from datasets import load_dataset, Audio
from transformers import AutoProcessor, WhisperForConditionalGeneration

def map_to_pred(batch):
"""
Perform inference on an audio batch
Parameters:
batch (dict): A dictionary containing audio data and other related information.
Returns:
dict: The input batch dictionary with added prediction and transcription fields.
"""
audio = batch['audio']
input_features = processor(
audio['array'], sampling_rate=audio['sampling_rate'], return_tensors="pt").input_features
input_features = input_features.to('cuda')
with torch.no_grad():
predicted_ids = model.generate(input_features)
preds = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
batch['prediction'] = processor.tokenizer._normalize(preds)
batch["transcription"] = processor.tokenizer._normalize(batch['transcription'])
return batch
MODEL_CARD = "openai/whisper-small"
MODEL_NAME = MODEL_CARD.rsplit('/', maxsplit=1)[-1]
model = WhisperForConditionalGeneration.from_pretrained(MODEL_CARD)
processor = AutoProcessor.from_pretrained(
MODEL_CARD, language="english", task="transcribe")
model = torch.compile(model)
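# NOTE: wrapping the model with torch.compile is what defeats `datasets`'
# fingerprinting of map_to_pred here (see the maintainer's reply in the thread)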
dt = load_dataset("audiofolder", data_dir=config['DATA']['dataset'], split="test")
dt = dt.cast_column("audio", Audio(sampling_rate=16000))
result = dt.map(map_to_pred, num_proc=16)
```
### Expected behavior
The function is hashed, the processed dataset is cached, and inference starts.
### Environment info
- `transformers` version: 4.35.0
- Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
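As a stopgap (per the first comment in the thread), dropping `num_proc` partially works around the problem, at the cost of a considerably longer run:
```python
# Single-process map: slower, but a partial workaround per the thread
result = dt.map(map_to_pred)
```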
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6440/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6440/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1962
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1962/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1962/events
|
https://github.com/huggingface/datasets/pull/1962
| 818,089,156
|
MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4
| 1,962
|
Fix unused arguments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq Re-added the arg. The ConnectionError in CI seems unrelated to this PR (the same test fails on master as well).",
"Thanks !\r\nI'm re-running the CI, maybe this was an issue with circleCI",
"Looks all good now, merged :)"
] | 2021-02-28T02:47:07Z
| 2021-03-11T02:18:17Z
| 2021-03-03T16:37:50Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1962.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1962",
"merged_at": "2021-03-03T16:37:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1962.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1962"
}
|
I noticed that some args in the codebase are not used, so I used Pylance to find all such occurrences and fixed them.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1962/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/122
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/122/comments
|
https://api.github.com/repos/huggingface/datasets/issues/122/events
|
https://github.com/huggingface/datasets/pull/122
| 618,813,182
|
MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3
| 122
|
Final cleanup of readme and metrics
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-15T09:00:52Z
| 2021-09-03T19:40:09Z
| 2020-05-15T09:02:22Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/122",
"merged_at": "2020-05-15T09:02:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/122"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/122/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/122/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/5816
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5816/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5816/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5816/events
|
https://github.com/huggingface/datasets/pull/5816
| 1,694,590,856
|
PR_kwDODunzps5Ps4t9
| 5,816
|
Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007862 / 0.011353 (-0.003491) | 0.005747 / 0.011008 (-0.005261) | 0.106818 / 0.038508 (0.068310) | 0.036630 / 0.023109 (0.013521) | 0.344218 / 0.275898 (0.068320) | 0.398803 / 0.323480 (0.075324) | 0.006187 / 0.007986 (-0.001799) | 0.005686 / 0.004328 (0.001358) | 0.078568 / 0.004250 (0.074318) | 0.051786 / 0.037052 (0.014734) | 0.361736 / 0.258489 (0.103247) | 0.396323 / 0.293841 (0.102482) | 0.037943 / 0.128546 (-0.090603) | 0.013957 / 0.075646 (-0.061689) | 0.366782 / 0.419271 (-0.052490) | 0.054700 / 0.043533 (0.011167) | 0.349692 / 0.255139 (0.094553) | 0.366481 / 0.283200 (0.083281) | 0.117394 / 0.141683 (-0.024289) | 1.593156 / 1.452155 (0.141001) | 1.708864 / 1.492716 (0.216148) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229529 / 0.018006 (0.211523) | 0.490531 / 0.000490 (0.490042) | 0.002934 / 0.000200 (0.002734) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028074 / 0.037411 (-0.009337) | 0.122321 / 0.014526 (0.107795) | 0.129120 / 0.176557 (-0.047436) | 0.188413 / 0.737135 (-0.548722) | 0.138983 / 0.296338 (-0.157355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479350 / 0.215209 (0.264141) | 4.926201 / 2.077655 (2.848546) | 2.265557 / 1.504120 (0.761437) | 2.014580 / 1.541195 (0.473386) | 2.120517 / 1.468490 
(0.652027) | 0.795334 / 4.584777 (-3.789443) | 4.509754 / 3.745712 (0.764042) | 4.328313 / 5.269862 (-0.941548) | 2.153304 / 4.565676 (-2.412373) | 0.102942 / 0.424275 (-0.321333) | 0.053504 / 0.007607 (0.045896) | 0.609392 / 0.226044 (0.383347) | 6.114048 / 2.268929 (3.845119) | 2.773306 / 55.444624 (-52.671318) | 2.443434 / 6.876477 (-4.433042) | 2.612005 / 2.142072 (0.469932) | 0.950435 / 4.805227 (-3.854792) | 0.194081 / 6.500664 (-6.306583) | 0.074513 / 0.075469 (-0.000956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402897 / 1.841788 (-0.438891) | 18.263033 / 8.074308 (10.188724) | 16.579809 / 10.191392 (6.388417) | 0.212319 / 0.680424 (-0.468104) | 0.020468 / 0.534201 (-0.513733) | 0.494850 / 0.579283 (-0.084433) | 0.483790 / 0.434364 (0.049426) | 0.572073 / 0.540337 (0.031735) | 0.684353 / 1.386936 (-0.702583) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009732 / 0.011353 (-0.001621) | 0.005901 / 0.011008 (-0.005107) | 0.084568 / 0.038508 (0.046060) | 0.038743 / 0.023109 (0.015634) | 0.431323 / 0.275898 (0.155425) | 0.472124 / 0.323480 (0.148644) | 0.006255 / 0.007986 (-0.001731) | 0.005892 / 0.004328 (0.001563) | 0.081913 / 0.004250 (0.077662) | 0.055560 / 0.037052 (0.018507) | 0.442857 / 0.258489 (0.184368) | 0.481887 / 0.293841 (0.188046) | 0.040730 / 0.128546 (-0.087816) | 0.014339 / 0.075646 (-0.061307) | 0.099258 / 0.419271 (-0.320013) | 0.054692 / 0.043533 (0.011159) | 0.436323 / 0.255139 (0.181184) | 0.461046 / 0.283200 (0.177846) | 0.125972 / 0.141683 (-0.015710) | 1.673173 / 1.452155 (0.221018) | 1.781364 / 1.492716 (0.288648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271450 / 0.018006 (0.253444) | 0.514484 / 0.000490 (0.513994) | 0.000455 / 0.000200 (0.000255) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036104 / 0.037411 (-0.001308) | 0.143306 / 0.014526 (0.128780) | 0.151105 / 0.176557 (-0.025451) | 0.210737 / 0.737135 (-0.526399) | 0.151404 / 0.296338 (-0.144934) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.573613 / 0.215209 (0.358404) | 5.828222 / 2.077655 (3.750567) | 2.993028 / 1.504120 (1.488908) | 2.617900 / 1.541195 (1.076706) | 2.754673 / 1.468490 (1.286183) | 1.010624 / 4.584777 (-3.574152) | 4.971261 / 3.745712 (1.225549) | 4.382017 / 5.269862 (-0.887845) | 1.971894 / 4.565676 (-2.593782) | 0.104404 / 0.424275 (-0.319871) | 0.014595 / 0.007607 (0.006988) | 0.657684 / 0.226044 (0.431639) | 6.566151 / 2.268929 (4.297222) | 3.221378 / 55.444624 (-52.223246) | 2.809402 / 6.876477 (-4.067075) | 2.882426 / 2.142072 (0.740354) | 1.006134 / 4.805227 (-3.799093) | 0.204469 / 6.500664 (-6.296196) | 0.078147 / 0.075469 (0.002678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574768 / 1.841788 (-0.267020) | 18.193335 / 8.074308 (10.119027) | 17.275353 / 10.191392 (7.083961) | 0.166890 / 0.680424 (-0.513534) | 0.020612 / 0.534201 (-0.513589) | 0.496179 / 0.579283 (-0.083104) | 0.507824 / 0.434364 (0.073460) | 0.620984 / 0.540337 (0.080647) | 0.749727 / 1.386936 (-0.637209) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006534 / 0.011353 (-0.004819) | 0.004456 / 0.011008 (-0.006553) | 0.097978 / 0.038508 (0.059470) | 0.027614 / 0.023109 (0.004505) | 0.309833 / 0.275898 (0.033935) | 0.337006 / 0.323480 (0.013526) | 0.004986 / 0.007986 (-0.002999) | 0.004521 / 0.004328 (0.000193) | 0.075053 / 0.004250 (0.070803) | 0.037095 / 0.037052 (0.000043) | 0.305430 / 0.258489 (0.046941) | 0.345298 / 0.293841 (0.051457) | 0.029784 / 0.128546 (-0.098762) | 0.011449 / 0.075646 (-0.064197) | 0.323346 / 0.419271 (-0.095925) | 0.042188 / 0.043533 (-0.001345) | 0.318653 / 0.255139 (0.063514) | 0.333799 / 0.283200 (0.050599) | 0.088194 / 0.141683 (-0.053488) | 1.511012 / 1.452155 (0.058857) | 1.578205 / 1.492716 (0.085489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229695 / 0.018006 (0.211689) | 0.413276 / 0.000490 (0.412786) | 0.009142 / 0.000200 (0.008942) | 0.000537 / 0.000054 (0.000482) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024327 / 0.037411 (-0.013084) | 0.097953 / 0.014526 (0.083427) | 0.105551 / 0.176557 (-0.071005) | 0.169397 / 0.737135 (-0.567738) | 0.109784 / 0.296338 (-0.186554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417713 / 0.215209 (0.202504) | 4.190703 / 2.077655 (2.113048) | 1.873504 / 1.504120 (0.369384) | 1.664540 / 1.541195 (0.123346) | 1.704539 / 1.468490 
(0.236049) | 0.699840 / 4.584777 (-3.884937) | 3.480605 / 3.745712 (-0.265107) | 1.844229 / 5.269862 (-3.425633) | 1.155793 / 4.565676 (-3.409883) | 0.083013 / 0.424275 (-0.341262) | 0.012414 / 0.007607 (0.004807) | 0.518357 / 0.226044 (0.292313) | 5.186136 / 2.268929 (2.917207) | 2.329263 / 55.444624 (-53.115361) | 1.991395 / 6.876477 (-4.885081) | 2.074563 / 2.142072 (-0.067509) | 0.801388 / 4.805227 (-4.003839) | 0.152236 / 6.500664 (-6.348428) | 0.067414 / 0.075469 (-0.008055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197290 / 1.841788 (-0.644497) | 13.666537 / 8.074308 (5.592229) | 13.017190 / 10.191392 (2.825798) | 0.142109 / 0.680424 (-0.538314) | 0.016321 / 0.534201 (-0.517880) | 0.378434 / 0.579283 (-0.200849) | 0.381101 / 0.434364 (-0.053263) | 0.444113 / 0.540337 (-0.096225) | 0.521448 / 1.386936 (-0.865488) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006273 / 0.011353 (-0.005080) | 0.004408 / 0.011008 (-0.006600) | 0.077100 / 0.038508 (0.038592) | 0.027361 / 0.023109 (0.004251) | 0.358170 / 0.275898 (0.082272) | 0.390125 / 0.323480 (0.066646) | 0.004736 / 0.007986 (-0.003250) | 0.004663 / 0.004328 (0.000334) | 0.077626 / 0.004250 (0.073376) | 0.037103 / 0.037052 (0.000051) | 0.360044 / 0.258489 (0.101555) | 0.411539 / 0.293841 (0.117698) | 0.030173 / 0.128546 (-0.098373) | 0.011618 / 0.075646 (-0.064028) | 0.086036 / 0.419271 (-0.333235) | 0.039077 / 0.043533 (-0.004456) | 0.382223 / 0.255139 (0.127084) | 0.384817 / 0.283200 (0.101618) | 0.094591 / 0.141683 (-0.047092) | 1.494961 / 1.452155 (0.042807) | 1.583769 / 1.492716 (0.091053) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227467 / 0.018006 (0.209460) | 0.396648 / 0.000490 (0.396159) | 0.000382 / 0.000200 (0.000182) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025346 / 0.037411 (-0.012065) | 0.102086 / 0.014526 (0.087560) | 0.108570 / 0.176557 (-0.067986) | 0.158777 / 0.737135 (-0.578359) | 0.112885 / 0.296338 (-0.183453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460731 / 0.215209 (0.245522) | 4.556450 / 2.077655 (2.478795) | 2.258185 / 1.504120 (0.754065) | 2.122584 / 1.541195 (0.581389) | 2.224638 / 1.468490 (0.756148) | 0.691909 / 4.584777 (-3.892868) | 3.482634 / 3.745712 (-0.263078) | 2.772837 / 5.269862 (-2.497024) | 1.533897 / 4.565676 (-3.031780) | 0.083025 / 0.424275 (-0.341250) | 0.012629 / 0.007607 (0.005022) | 0.548397 / 0.226044 (0.322352) | 5.492005 / 2.268929 (3.223077) | 2.669841 / 55.444624 (-52.774784) | 2.366947 / 6.876477 (-4.509529) | 2.496795 / 2.142072 (0.354722) | 0.804868 / 4.805227 (-4.000359) | 0.151686 / 6.500664 (-6.348978) | 0.068333 / 0.075469 (-0.007136) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320414 / 1.841788 (-0.521374) | 14.367567 / 8.074308 (6.293258) | 14.047702 / 10.191392 (3.856310) | 0.129087 / 0.680424 (-0.551337) | 0.016658 / 0.534201 (-0.517543) | 0.381949 / 0.579283 (-0.197335) | 0.390105 / 0.434364 (-0.044258) | 0.445947 / 0.540337 (-0.094390) | 0.531074 / 1.386936 (-0.855862) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-03T18:34:18Z
| 2023-05-04T14:31:55Z
| 2023-05-04T14:24:49Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5816.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5816",
"merged_at": "2023-05-04T14:24:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5816.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5816"
}
|
Preserve the `stopping_strategy` in `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved (from multiple sources) with probabilities.
Fix #5812
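
For context, a minimal sketch of the kind of pipeline affected (toy data; per the description above, shuffling previously dropped the configured strategy when `shard_data_sources` rebuilt the iterable):

```python
from datasets import Dataset, interleave_datasets

# two toy iterable datasets standing in for the multiple sources
d1 = Dataset.from_dict({"text": ["a", "b", "c"]}).to_iterable_dataset()
d2 = Dataset.from_dict({"text": ["x", "y", "z"]}).to_iterable_dataset()

mixed = interleave_datasets(
    [d1, d2],
    probabilities=[0.8, 0.2],
    seed=42,
    stopping_strategy="all_exhausted",  # the strategy that must be preserved
)

# before this fix, shuffling fell back to the default "first_exhausted"
shuffled = mixed.shuffle(seed=0, buffer_size=8)
print(list(shuffled.take(4)))
```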
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5816/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5816/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/505
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/505/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/505/comments
|
https://api.github.com/repos/huggingface/datasets/issues/505/events
|
https://github.com/huggingface/datasets/pull/505
| 678,791,400
|
MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4
| 505
|
tmp_file referenced before assignment
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17853685?v=4",
"events_url": "https://api.github.com/users/avloss/events{/privacy}",
"followers_url": "https://api.github.com/users/avloss/followers",
"following_url": "https://api.github.com/users/avloss/following{/other_user}",
"gists_url": "https://api.github.com/users/avloss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avloss",
"id": 17853685,
"login": "avloss",
"node_id": "MDQ6VXNlcjE3ODUzNjg1",
"organizations_url": "https://api.github.com/users/avloss/orgs",
"received_events_url": "https://api.github.com/users/avloss/received_events",
"repos_url": "https://api.github.com/users/avloss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avloss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avloss"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)",
"I'm closing this one as I created the other PR."
] | 2020-08-13T23:27:33Z
| 2020-08-14T13:42:46Z
| 2020-08-14T13:42:46Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/505.diff",
"html_url": "https://github.com/huggingface/datasets/pull/505",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/505.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/505"
}
|
Just learning about this library - so I might not have set up all the flags correctly, but I was getting this error about "tmp_file".
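
For anyone hitting the same message: "referenced before assignment" is Python's `UnboundLocalError`, raised when a local name is read before any assignment to it has run, typically because the assignment is guarded by a condition that wasn't met. A minimal sketch of the pattern (illustrative names only, not the actual `datasets` code):

```python
def process(keep_in_memory: bool):
    if not keep_in_memory:
        tmp_file = open("cache.arrow", "wb")  # only assigned on this branch
    try:
        pass  # ... write data ...
    finally:
        tmp_file.close()  # read before assignment when keep_in_memory=True

process(keep_in_memory=True)
# UnboundLocalError: local variable 'tmp_file' referenced before assignment
```

Initializing `tmp_file = None` up front and guarding the cleanup with `if tmp_file is not None:` is one way to avoid the crash.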
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/505/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/505/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1365
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1365/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1365/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1365/events
|
https://github.com/huggingface/datasets/pull/1365
| 760,188,457
|
MDExOlB1bGxSZXF1ZXN0NTM1MDYxNTI2
| 1,365
|
Add Mkqa dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cceyda",
"id": 15624271,
"login": "cceyda",
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"repos_url": "https://api.github.com/users/cceyda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cceyda"
}
|
[] |
closed
| false
| null |
[] | null |
[
"the `RemoteDatasetTest ` error pf the CI is fixed on master so it's fine",
"merging since the CI is fixed on master"
] | 2020-12-09T10:06:33Z
| 2020-12-10T15:37:56Z
| 2020-12-10T15:37:56Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1365",
"merged_at": "2020-12-10T15:37:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1365"
}
|
# MKQA: Multilingual Knowledge Questions & Answers Dataset
Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉
There are no official data splits, so I added just a `train` split.
Differently from the original:
- the `answer:type` field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)
- the `answer:entity` field has a default value of the empty string `''` (since this key is not available for all entries in the original)
- `answer:alias` has a default value of `[]`
(A short usage sketch follows the checklist below.)
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
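
A minimal usage sketch (the `mkqa` name and `train` split follow this PR's description; treat the field layout as an assumption and inspect `ds.features` for the authoritative schema):

```python
from datasets import load_dataset

ds = load_dataset("mkqa", split="train")  # only a train split is provided
print(ds.features)  # authoritative schema, including the answer-type ClassLabel

# ClassLabel stores integers, so categories can be mapped back to readable
# names via the feature's int2str / names attributes (the exact nesting of
# the answer fields may differ from this sketch; check ds.features first).
```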
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1365/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1365/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4530
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4530/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4530/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4530/events
|
https://github.com/huggingface/datasets/pull/4530
| 1,276,884,962
|
PR_kwDODunzps458n_S
| 4,530
|
Add AudioFolder packaged loader
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq @mariosasko I don't know what to do with the test, do you have any ideas? :)",
"also it's passed in `pyarrow_latest_WIN`",
"If the error only happens on 3.6, maybe #4460 can help ^^' It seems to work in 3.7 on the windows CI\r\n\r\n> inferring labels is not the default behavior (drop_labels is set to True in config)\r\n\r\nI think it a missed opportunity to have a consistent API between imagefolder and audiofolder, since they do everything the same way. Can you give more details why you think we should drop the labels by default ?",
"Considering audio classification in audio is not as common as image classification in image, I'm ok with having different config defaults as long as they are properly documented (check [Papers With Code](https://paperswithcode.com/datasets) for stats and compare the classification numbers to the other tasks, do this for both modalities)\r\n\r\nAlso, WDYT about creating a generic folder loader that ImageFolder and AudioFolder then subclass to avoid having to update both of them when there is something to update/fix?",
"@lhoestq I think it doesn't change the API itself, it just doesn't infer labels by default, but you can **still** set `drop_labels=False` to `load_dataset` and the labels will be inferred. \r\nSuppose that one has data structured as follows:\r\n```\r\ndata/\r\n train/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n test/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n```\r\nIf users load this dataset with `load_dataset(\"audiofolder\", data_dir=\"data\")` (the most native way), they will get a `label` feature that will always be equal to 0 (= \"audio\"). To mitigate this, they will have to always specify `load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=True)` explicitly and I believe it's not convenient. \r\n\r\nAt the same time, `label` column can be added just as easy as adding one argument:` load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=False)`. As classification task is not as common, I think it should require more symbols to be added to the code :D \r\n\r\nBut this is definitely should be explained in the docs, which I've forgotten to update... I'll add this section soon.\r\n\r\nAlso +to the generic loader, will work on it. \r\n\r\n",
"If a metadata.jsonl file is present, then it doesn't have to infer the labels I agree. Note that this is already the case for imagefolder ;) in your case `load_dataset(\"audiofolder\", data_dir=\"data\")` won't return labels !\r\n\r\nLabels are only inferred if there are no metadata.jsonl",
"Feel free to merge the `main` branch into yours after updating your fork of `datasets`: https://github.com/huggingface/datasets/issues/4629\r\n\r\nThis should fix some errors in the CI",
"@mariosasko could you please review this PR again? :)\r\n\r\nmost of the tests for AutoFolder (base class for AudioFolder and ImageFolder) are now basically copied from Image/AudioFolder (their tests are also almost identical too) and adapted to test other methods. it should be refactored but i think this is not that important for now and might be done in the future PR, wdyt?",
"@mariosasko thank you for the review! I'm sorry I accidentally asked for the review again, ignore it."
] | 2022-06-20T12:54:02Z
| 2022-08-22T14:36:49Z
| 2022-08-22T14:20:40Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4530",
"merged_at": "2022-08-22T14:20:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4530"
}
|
Will close #3964.
AudioFolder is almost identical to ImageFolder, except that inferring labels is not the default behavior (`drop_labels` is set to `True` in the config); the option of inferring them is still available, though.
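Concretely, with this default (a minimal sketch; `data/` stands for any local folder laid out the AudioFolder way):

```python
from datasets import load_dataset

# default in this PR: no `label` column is inferred from directory names
ds = load_dataset("audiofolder", data_dir="data")

# opt back in to label inference for audio classification datasets
ds_labeled = load_dataset("audiofolder", data_dir="data", drop_labels=False)
```

(The review thread below debates whether this default should match ImageFolder's.)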
Something weird is happening with `test_data_files_with_metadata_and_archives` when `streaming` is `True`. Here is the log from the CI:
```
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/features/audio.py:237: in _decode_non_mp3_path_like
array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/util/decorators.py:88: in inner_f
return f(*args, **kwargs)
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:176: in load
raise (exc)
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:155: in load
context = sf.SoundFile(path)
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:629: in __init__
self._file = self._open(file, mode_int, closefd)
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:1184: in _open
"Error opening {0!r}: ".format(self.name))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
err = 72
prefix = "Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: "
def _error_check(err, prefix=""):
"""Pretty-print a numerical error code if there is an error."""
if err != 0:
err_str = _snd.sf_error_number(err)
> raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
E RuntimeError: Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: Error in WAV file. No 'data' chunk marker.
```
I wasn't able to reproduce this locally until I created the same test environment (i.e. with `pip install .[tests]`) with Python 3.6. The same env with Python 3.8 passes the test! I didn't manage to figure out what's wrong; I also tried simply replacing the test wav file and still got the same error. The versions of `soundfile`, `librosa` and `libsndfile` are identical. Might it be something with zip compression? Sounds weird, but I don't have any other ideas...
TODO:
- [x] align with #4622
- [x] documentation
- [x] tests for AutoFolder?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4530/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4530/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6192
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6192/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6192/events
|
https://github.com/huggingface/datasets/pull/6192
| 1,871,911,640
|
PR_kwDODunzps5ZDGnI
| 6,192
|
Set minimal fsspec version requirement to 2023.1.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005972 / 0.011353 (-0.005381) | 0.003636 / 0.011008 (-0.007372) | 0.080254 / 0.038508 (0.041746) | 0.059564 / 0.023109 (0.036455) | 0.310615 / 0.275898 (0.034717) | 0.359307 / 0.323480 (0.035827) | 0.003408 / 0.007986 (-0.004578) | 0.002941 / 0.004328 (-0.001388) | 0.063699 / 0.004250 (0.059449) | 0.046072 / 0.037052 (0.009020) | 0.318670 / 0.258489 (0.060181) | 0.369677 / 0.293841 (0.075836) | 0.026995 / 0.128546 (-0.101552) | 0.007954 / 0.075646 (-0.067693) | 0.261667 / 0.419271 (-0.157604) | 0.045167 / 0.043533 (0.001634) | 0.314276 / 0.255139 (0.059137) | 0.348871 / 0.283200 (0.065672) | 0.021748 / 0.141683 (-0.119935) | 1.438598 / 1.452155 (-0.013557) | 1.530119 / 1.492716 (0.037403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196894 / 0.018006 (0.178888) | 0.445757 / 0.000490 (0.445267) | 0.002842 / 0.000200 (0.002642) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024923 / 0.037411 (-0.012488) | 0.075186 / 0.014526 (0.060661) | 0.087193 / 0.176557 (-0.089364) | 0.147496 / 0.737135 (-0.589639) | 0.087083 / 0.296338 (-0.209255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423545 / 0.215209 (0.208336) | 4.187927 / 2.077655 (2.110273) | 2.008656 / 1.504120 (0.504536) | 1.791313 / 1.541195 (0.250119) | 1.849836 / 1.468490 
(0.381346) | 0.499458 / 4.584777 (-4.085318) | 2.983206 / 3.745712 (-0.762506) | 2.801005 / 5.269862 (-2.468856) | 1.886207 / 4.565676 (-2.679469) | 0.057343 / 0.424275 (-0.366932) | 0.006666 / 0.007607 (-0.000941) | 0.483948 / 0.226044 (0.257904) | 4.874818 / 2.268929 (2.605890) | 2.439393 / 55.444624 (-53.005231) | 2.049861 / 6.876477 (-4.826616) | 2.217050 / 2.142072 (0.074977) | 0.589760 / 4.805227 (-4.215467) | 0.125298 / 6.500664 (-6.375366) | 0.061123 / 0.075469 (-0.014347) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234721 / 1.841788 (-0.607067) | 18.193756 / 8.074308 (10.119448) | 13.682835 / 10.191392 (3.491443) | 0.129345 / 0.680424 (-0.551078) | 0.016589 / 0.534201 (-0.517612) | 0.332355 / 0.579283 (-0.246928) | 0.358408 / 0.434364 (-0.075955) | 0.382044 / 0.540337 (-0.158293) | 0.535403 / 1.386936 (-0.851533) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006193 / 0.011353 (-0.005160) | 0.003674 / 0.011008 (-0.007335) | 0.062481 / 0.038508 (0.023973) | 0.062096 / 0.023109 (0.038987) | 0.449592 / 0.275898 (0.173694) | 0.479245 / 0.323480 (0.155765) | 0.004793 / 0.007986 (-0.003193) | 0.002896 / 0.004328 (-0.001433) | 0.062887 / 0.004250 (0.058636) | 0.050049 / 0.037052 (0.012997) | 0.454940 / 0.258489 (0.196451) | 0.486115 / 0.293841 (0.192274) | 0.028585 / 0.128546 (-0.099961) | 0.007954 / 0.075646 (-0.067692) | 0.067744 / 0.419271 (-0.351528) | 0.040473 / 0.043533 (-0.003060) | 0.448408 / 0.255139 (0.193269) | 0.472423 / 0.283200 (0.189223) | 0.020549 / 0.141683 (-0.121133) | 1.563618 / 1.452155 (0.111463) | 1.520149 / 1.492716 (0.027432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226604 / 0.018006 (0.208598) | 0.417615 / 0.000490 (0.417126) | 0.003386 / 0.000200 (0.003186) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027264 / 0.037411 (-0.010147) | 0.081709 / 0.014526 (0.067184) | 0.091793 / 0.176557 (-0.084763) | 0.145559 / 0.737135 (-0.591576) | 0.091869 / 0.296338 (-0.204469) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462917 / 0.215209 (0.247708) | 4.629512 / 2.077655 (2.551857) | 2.555715 / 1.504120 (1.051595) | 2.388064 / 1.541195 (0.846870) | 2.458320 / 1.468490 (0.989830) | 0.511615 / 4.584777 (-4.073162) | 3.124566 / 3.745712 (-0.621146) | 2.839190 / 5.269862 (-2.430672) | 1.894551 / 4.565676 (-2.671126) | 0.059565 / 0.424275 (-0.364710) | 0.006481 / 0.007607 (-0.001126) | 0.532023 / 0.226044 (0.305979) | 5.361507 / 2.268929 (3.092579) | 2.982594 / 55.444624 (-52.462031) | 2.644870 / 6.876477 (-4.231606) | 2.831476 / 2.142072 (0.689404) | 0.607381 / 4.805227 (-4.197846) | 0.126067 / 6.500664 (-6.374597) | 0.062130 / 0.075469 (-0.013339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350442 / 1.841788 (-0.491345) | 18.829553 / 8.074308 (10.755245) | 14.796701 / 10.191392 (4.605309) | 0.145393 / 0.680424 (-0.535031) | 0.018218 / 0.534201 (-0.515983) | 0.335500 / 0.579283 (-0.243783) | 0.359190 / 0.434364 (-0.075174) | 0.388377 / 0.540337 (-0.151960) | 0.534994 / 1.386936 (-0.851942) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006741 / 0.011353 (-0.004612) | 0.004097 / 0.011008 (-0.006911) | 0.084513 / 0.038508 (0.046005) | 0.074216 / 0.023109 (0.051107) | 0.352481 / 0.275898 (0.076583) | 0.394806 / 0.323480 (0.071326) | 0.005603 / 0.007986 (-0.002383) | 0.003482 / 0.004328 (-0.000847) | 0.065165 / 0.004250 (0.060914) | 0.054065 / 0.037052 (0.017013) | 0.359399 / 0.258489 (0.100910) | 0.409776 / 0.293841 (0.115935) | 0.030997 / 0.128546 (-0.097550) | 0.008717 / 0.075646 (-0.066929) | 0.288692 / 0.419271 (-0.130579) | 0.052372 / 0.043533 (0.008840) | 0.353867 / 0.255139 (0.098728) | 0.391212 / 0.283200 (0.108012) | 0.024033 / 0.141683 (-0.117650) | 1.496552 / 1.452155 (0.044398) | 1.567267 / 1.492716 (0.074550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294074 / 0.018006 (0.276067) | 0.595421 / 0.000490 (0.594931) | 0.003826 / 0.000200 (0.003626) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028676 / 0.037411 (-0.008736) | 0.082064 / 0.014526 (0.067538) | 0.542399 / 0.176557 (0.365842) | 0.217188 / 0.737135 (-0.519947) | 0.099364 / 0.296338 (-0.196975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384282 / 0.215209 (0.169073) | 3.832204 / 2.077655 (1.754550) | 1.842500 / 1.504120 (0.338380) | 1.668192 / 1.541195 (0.126997) | 1.745207 / 1.468490 
(0.276717) | 0.481881 / 4.584777 (-4.102896) | 3.677819 / 3.745712 (-0.067893) | 3.329062 / 5.269862 (-1.940799) | 2.056882 / 4.565676 (-2.508795) | 0.056898 / 0.424275 (-0.367377) | 0.007624 / 0.007607 (0.000016) | 0.459712 / 0.226044 (0.233667) | 4.611100 / 2.268929 (2.342171) | 2.370244 / 55.444624 (-53.074381) | 2.032756 / 6.876477 (-4.843721) | 2.336056 / 2.142072 (0.193984) | 0.583503 / 4.805227 (-4.221725) | 0.135041 / 6.500664 (-6.365623) | 0.062245 / 0.075469 (-0.013224) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303894 / 1.841788 (-0.537894) | 20.315185 / 8.074308 (12.240876) | 14.388779 / 10.191392 (4.197387) | 0.169060 / 0.680424 (-0.511364) | 0.018609 / 0.534201 (-0.515592) | 0.395140 / 0.579283 (-0.184143) | 0.418231 / 0.434364 (-0.016133) | 0.461496 / 0.540337 (-0.078842) | 0.630298 / 1.386936 (-0.756638) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006999 / 0.011353 (-0.004354) | 0.004197 / 0.011008 (-0.006812) | 0.064524 / 0.038508 (0.026016) | 0.078791 / 0.023109 (0.055682) | 0.397563 / 0.275898 (0.121665) | 0.423056 / 0.323480 (0.099576) | 0.005697 / 0.007986 (-0.002288) | 0.003592 / 0.004328 (-0.000736) | 0.066178 / 0.004250 (0.061928) | 0.058114 / 0.037052 (0.021062) | 0.398619 / 0.258489 (0.140130) | 0.435496 / 0.293841 (0.141655) | 0.032758 / 0.128546 (-0.095788) | 0.008677 / 0.075646 (-0.066970) | 0.071359 / 0.419271 (-0.347913) | 0.048636 / 0.043533 (0.005103) | 0.389762 / 0.255139 (0.134623) | 0.412109 / 0.283200 (0.128910) | 0.023511 / 0.141683 (-0.118172) | 1.514768 / 1.452155 (0.062613) | 1.580163 / 1.492716 (0.087446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.370491 / 0.018006 (0.352485) | 0.529751 / 0.000490 (0.529261) | 0.016959 / 0.000200 (0.016759) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033361 / 0.037411 (-0.004051) | 0.091610 / 0.014526 (0.077084) | 0.106642 / 0.176557 (-0.069915) | 0.160906 / 0.737135 (-0.576229) | 0.106894 / 0.296338 (-0.189444) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429932 / 0.215209 (0.214723) | 4.276459 / 2.077655 (2.198804) | 2.268518 / 1.504120 (0.764398) | 2.092512 / 1.541195 (0.551317) | 2.182218 / 1.468490 (0.713728) | 0.494464 / 4.584777 (-4.090313) | 3.750731 / 3.745712 (0.005019) | 3.352370 / 5.269862 (-1.917492) | 2.105630 / 4.565676 (-2.460046) | 0.058465 / 0.424275 (-0.365810) | 0.007449 / 0.007607 (-0.000158) | 0.506896 / 0.226044 (0.280851) | 5.070201 / 2.268929 (2.801272) | 2.758128 / 55.444624 (-52.686496) | 2.408378 / 6.876477 (-4.468099) | 2.690633 / 2.142072 (0.548561) | 0.595662 / 4.805227 (-4.209565) | 0.134355 / 6.500664 (-6.366309) | 0.060113 / 0.075469 (-0.015356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.380413 / 1.841788 (-0.461375) | 20.691210 / 8.074308 (12.616901) | 15.682282 / 10.191392 (5.490890) | 0.165887 / 0.680424 (-0.514536) | 0.020541 / 0.534201 (-0.513660) | 0.397846 / 0.579283 (-0.181437) | 0.425374 / 0.434364 (-0.008990) | 0.476261 / 0.540337 (-0.064076) | 0.648617 / 1.386936 (-0.738319) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008787 / 0.011353 (-0.002566) | 0.007569 / 0.011008 (-0.003439) | 0.103918 / 0.038508 (0.065410) | 0.083347 / 0.023109 (0.060238) | 0.441838 / 0.275898 (0.165940) | 0.420202 / 0.323480 (0.096722) | 0.007295 / 0.007986 (-0.000690) | 0.005366 / 0.004328 (0.001037) | 0.082659 / 0.004250 (0.078409) | 0.059711 / 0.037052 (0.022658) | 0.401821 / 0.258489 (0.143332) | 0.432906 / 0.293841 (0.139065) | 0.048662 / 0.128546 (-0.079885) | 0.014091 / 0.075646 (-0.061555) | 0.352583 / 0.419271 (-0.066689) | 0.064739 / 0.043533 (0.021206) | 0.410890 / 0.255139 (0.155751) | 0.443450 / 0.283200 (0.160251) | 0.035817 / 0.141683 (-0.105866) | 1.754687 / 1.452155 (0.302532) | 1.887338 / 1.492716 (0.394622) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209440 / 0.018006 (0.191434) | 0.519641 / 0.000490 (0.519152) | 0.005726 / 0.000200 (0.005526) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031027 / 0.037411 (-0.006384) | 0.097503 / 0.014526 (0.082977) | 0.106985 / 0.176557 (-0.069572) | 0.178235 / 0.737135 (-0.558900) | 0.108110 / 0.296338 (-0.188228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.594325 / 0.215209 (0.379116) | 6.159414 / 2.077655 (4.081759) | 2.664892 / 1.504120 (1.160772) | 2.363355 / 1.541195 (0.822160) | 2.410754 / 1.468490 
(0.942264) | 0.842557 / 4.584777 (-3.742220) | 5.112059 / 3.745712 (1.366347) | 4.633152 / 5.269862 (-0.636709) | 2.965891 / 4.565676 (-1.599785) | 0.097922 / 0.424275 (-0.326353) | 0.008602 / 0.007607 (0.000995) | 0.773029 / 0.226044 (0.546985) | 7.462314 / 2.268929 (5.193386) | 3.584776 / 55.444624 (-51.859848) | 2.752375 / 6.876477 (-4.124102) | 2.976345 / 2.142072 (0.834272) | 1.049423 / 4.805227 (-3.755804) | 0.212001 / 6.500664 (-6.288663) | 0.074095 / 0.075469 (-0.001374) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.577905 / 1.841788 (-0.263883) | 23.280931 / 8.074308 (15.206623) | 21.017946 / 10.191392 (10.826554) | 0.228746 / 0.680424 (-0.451678) | 0.027877 / 0.534201 (-0.506324) | 0.469173 / 0.579283 (-0.110110) | 0.567614 / 0.434364 (0.133250) | 0.545041 / 0.540337 (0.004704) | 0.754743 / 1.386936 (-0.632194) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008958 / 0.011353 (-0.002395) | 0.005077 / 0.011008 (-0.005931) | 0.083990 / 0.038508 (0.045482) | 0.078586 / 0.023109 (0.055476) | 0.482164 / 0.275898 (0.206266) | 0.525575 / 0.323480 (0.202095) | 0.006031 / 0.007986 (-0.001955) | 0.003922 / 0.004328 (-0.000407) | 0.084547 / 0.004250 (0.080296) | 0.064539 / 0.037052 (0.027487) | 0.501256 / 0.258489 (0.242767) | 0.531985 / 0.293841 (0.238144) | 0.050438 / 0.128546 (-0.078109) | 0.014004 / 0.075646 (-0.061642) | 0.091269 / 0.419271 (-0.328003) | 0.060825 / 0.043533 (0.017292) | 0.492573 / 0.255139 (0.237434) | 0.517060 / 0.283200 (0.233861) | 0.033576 / 0.141683 (-0.108107) | 1.775719 / 1.452155 (0.323564) | 1.866865 / 1.492716 (0.374149) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225026 / 0.018006 (0.207020) | 0.510715 / 0.000490 (0.510225) | 0.005791 / 0.000200 (0.005591) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032795 / 0.037411 (-0.004616) | 0.109206 / 0.014526 (0.094680) | 0.121441 / 0.176557 (-0.055115) | 0.179735 / 0.737135 (-0.557401) | 0.115825 / 0.296338 (-0.180514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633259 / 0.215209 (0.418050) | 6.298084 / 2.077655 (4.220430) | 2.892604 / 1.504120 (1.388484) | 2.570858 / 1.541195 (1.029663) | 2.611441 / 1.468490 (1.142951) | 0.897801 / 4.584777 (-3.686976) | 5.185863 / 3.745712 (1.440151) | 4.656897 / 5.269862 (-0.612965) | 3.078575 / 4.565676 (-1.487101) | 0.100563 / 0.424275 (-0.323712) | 0.008368 / 0.007607 (0.000761) | 0.749152 / 0.226044 (0.523108) | 7.687484 / 2.268929 (5.418556) | 3.689238 / 55.444624 (-51.755387) | 2.896779 / 6.876477 (-3.979698) | 3.158688 / 2.142072 (1.016615) | 1.083490 / 4.805227 (-3.721737) | 0.216994 / 6.500664 (-6.283670) | 0.074053 / 0.075469 (-0.001416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.732812 / 1.841788 (-0.108976) | 23.952127 / 8.074308 (15.877819) | 22.078140 / 10.191392 (11.886748) | 0.229491 / 0.680424 (-0.450933) | 0.032070 / 0.534201 (-0.502131) | 0.503344 / 0.579283 (-0.075939) | 0.588489 / 0.434364 (0.154125) | 0.550199 / 0.540337 (0.009861) | 0.778203 / 1.386936 (-0.608733) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007569 / 0.011353 (-0.003784) | 0.004447 / 0.011008 (-0.006561) | 0.098573 / 0.038508 (0.060064) | 0.081743 / 0.023109 (0.058634) | 0.379912 / 0.275898 (0.104013) | 0.411203 / 0.323480 (0.087723) | 0.004492 / 0.007986 (-0.003494) | 0.005627 / 0.004328 (0.001298) | 0.075974 / 0.004250 (0.071724) | 0.062512 / 0.037052 (0.025459) | 0.386971 / 0.258489 (0.128482) | 0.433299 / 0.293841 (0.139458) | 0.035935 / 0.128546 (-0.092611) | 0.009845 / 0.075646 (-0.065801) | 0.342940 / 0.419271 (-0.076331) | 0.061343 / 0.043533 (0.017810) | 0.381984 / 0.255139 (0.126845) | 0.417921 / 0.283200 (0.134721) | 0.028469 / 0.141683 (-0.113214) | 1.758472 / 1.452155 (0.306317) | 1.847768 / 1.492716 (0.355051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234297 / 0.018006 (0.216291) | 0.520020 / 0.000490 (0.519531) | 0.007375 / 0.000200 (0.007175) | 0.000767 / 0.000054 (0.000713) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032738 / 0.037411 (-0.004673) | 0.097656 / 0.014526 (0.083130) | 0.112476 / 0.176557 (-0.064080) | 0.179222 / 0.737135 (-0.557913) | 0.113638 / 0.296338 (-0.182700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453677 / 0.215209 (0.238467) | 4.528143 / 2.077655 (2.450489) | 2.243874 / 1.504120 (0.739754) | 2.051546 / 1.541195 (0.510351) | 2.196050 / 1.468490 
(0.727560) | 0.567345 / 4.584777 (-4.017432) | 4.133591 / 3.745712 (0.387879) | 3.855286 / 5.269862 (-1.414576) | 2.393496 / 4.565676 (-2.172180) | 0.066567 / 0.424275 (-0.357708) | 0.009038 / 0.007607 (0.001431) | 0.549166 / 0.226044 (0.323122) | 5.472767 / 2.268929 (3.203839) | 2.788012 / 55.444624 (-52.656612) | 2.426132 / 6.876477 (-4.450345) | 2.684856 / 2.142072 (0.542784) | 0.680198 / 4.805227 (-4.125029) | 0.157782 / 6.500664 (-6.342882) | 0.073000 / 0.075469 (-0.002469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.622435 / 1.841788 (-0.219352) | 22.965715 / 8.074308 (14.891407) | 16.626903 / 10.191392 (6.435511) | 0.197156 / 0.680424 (-0.483268) | 0.025599 / 0.534201 (-0.508602) | 0.495550 / 0.579283 (-0.083733) | 0.466575 / 0.434364 (0.032211) | 0.565862 / 0.540337 (0.025525) | 0.793835 / 1.386936 (-0.593102) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007721 / 0.011353 (-0.003632) | 0.004652 / 0.011008 (-0.006356) | 0.076636 / 0.038508 (0.038127) | 0.082183 / 0.023109 (0.059074) | 0.474665 / 0.275898 (0.198767) | 0.511593 / 0.323480 (0.188113) | 0.006240 / 0.007986 (-0.001746) | 0.003750 / 0.004328 (-0.000578) | 0.076939 / 0.004250 (0.072689) | 0.063333 / 0.037052 (0.026281) | 0.476469 / 0.258489 (0.217980) | 0.512514 / 0.293841 (0.218674) | 0.037802 / 0.128546 (-0.090744) | 0.009975 / 0.075646 (-0.065671) | 0.084190 / 0.419271 (-0.335081) | 0.056705 / 0.043533 (0.013172) | 0.475429 / 0.255139 (0.220290) | 0.496414 / 0.283200 (0.213215) | 0.026039 / 0.141683 (-0.115644) | 1.796059 / 1.452155 (0.343905) | 1.867461 / 1.492716 (0.374745) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285219 / 0.018006 (0.267213) | 0.506311 / 0.000490 (0.505821) | 0.018545 / 0.000200 (0.018345) | 0.000142 / 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037832 / 0.037411 (0.000420) | 0.110437 / 0.014526 (0.095911) | 0.122953 / 0.176557 (-0.053604) | 0.187049 / 0.737135 (-0.550087) | 0.123539 / 0.296338 (-0.172800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508120 / 0.215209 (0.292911) | 5.082836 / 2.077655 (3.005182) | 2.800411 / 1.504120 (1.296291) | 2.579457 / 1.541195 (1.038262) | 2.645945 / 1.468490 (1.177455) | 0.578574 / 4.584777 (-4.006203) | 4.163401 / 3.745712 (0.417689) | 3.858575 / 5.269862 (-1.411286) | 2.389892 / 4.565676 (-2.175785) | 0.068639 / 0.424275 (-0.355636) | 0.008779 / 0.007607 (0.001172) | 0.598925 / 0.226044 (0.372880) | 5.987147 / 2.268929 (3.718219) | 3.361791 / 55.444624 (-52.082833) | 2.910425 / 6.876477 (-3.966051) | 3.156849 / 2.142072 (1.014776) | 0.690945 / 4.805227 (-4.114283) | 0.157441 / 6.500664 (-6.343223) | 0.071596 / 0.075469 (-0.003873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.672763 / 1.841788 (-0.169025) | 23.599525 / 8.074308 (15.525217) | 17.520087 / 10.191392 (7.328695) | 0.169174 / 0.680424 (-0.511250) | 0.023470 / 0.534201 (-0.510731) | 0.469234 / 0.579283 (-0.110050) | 0.470020 / 0.434364 (0.035656) | 0.579949 / 0.540337 (0.039611) | 0.771353 / 1.386936 (-0.615583) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-29T15:23:41Z
| 2023-08-30T14:01:56Z
| 2023-08-30T13:51:32Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6192.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6192",
"merged_at": "2023-08-30T13:51:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6192.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6192"
}
|
Fix https://github.com/huggingface/datasets/issues/6141
Colab installs 2023.6.0, so we should be good 🙂
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6192/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3432
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3432/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3432/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3432/events
|
https://github.com/huggingface/datasets/pull/3432
| 1,079,910,769
|
PR_kwDODunzps4v1NGS
| 3,432
|
Correctly indent builder config in dataset script docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-14T15:39:47Z
| 2021-12-14T17:35:17Z
| 2021-12-14T17:35:17Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3432",
"merged_at": "2021-12-14T17:35:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3432"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3432/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3432/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6336
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6336/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6336/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6336/events
|
https://github.com/huggingface/datasets/pull/6336
| 1,956,827,232
|
PR_kwDODunzps5dgy0w
| 6,336
|
unpin-fsspec
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6336). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006202 / 0.011353 (-0.005151) | 0.003627 / 0.011008 (-0.007381) | 0.080643 / 0.038508 (0.042135) | 0.057135 / 0.023109 (0.034026) | 0.315853 / 0.275898 (0.039955) | 0.348503 / 0.323480 (0.025023) | 0.004762 / 0.007986 (-0.003224) | 0.002884 / 0.004328 (-0.001445) | 0.063208 / 0.004250 (0.058958) | 0.046777 / 0.037052 (0.009725) | 0.321426 / 0.258489 (0.062937) | 0.362128 / 0.293841 (0.068287) | 0.027494 / 0.128546 (-0.101052) | 0.007931 / 0.075646 (-0.067715) | 0.262262 / 0.419271 (-0.157009) | 0.044330 / 0.043533 (0.000797) | 0.310504 / 0.255139 (0.055366) | 0.339409 / 0.283200 (0.056209) | 0.021030 / 0.141683 (-0.120652) | 1.405333 / 1.452155 (-0.046822) | 1.493497 / 1.492716 (0.000781) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225431 / 0.018006 (0.207425) | 0.451723 / 0.000490 (0.451233) | 0.007763 / 0.000200 (0.007563) | 0.000310 / 0.000054 (0.000256) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023381 / 0.037411 (-0.014031) | 0.074183 / 0.014526 (0.059657) | 0.084003 / 0.176557 (-0.092553) | 0.143628 / 0.737135 (-0.593507) | 0.084543 / 0.296338 (-0.211796) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393062 / 0.215209 (0.177853) | 3.905649 / 2.077655 (1.827994) | 1.923155 / 1.504120 (0.419035) | 1.751554 / 1.541195 (0.210359) | 1.816141 / 1.468490 
(0.347651) | 0.502789 / 4.584777 (-4.081988) | 3.006149 / 3.745712 (-0.739564) | 2.979645 / 5.269862 (-2.290216) | 1.877408 / 4.565676 (-2.688269) | 0.057544 / 0.424275 (-0.366731) | 0.006733 / 0.007607 (-0.000874) | 0.468469 / 0.226044 (0.242425) | 4.695595 / 2.268929 (2.426667) | 2.367238 / 55.444624 (-53.077387) | 2.041035 / 6.876477 (-4.835442) | 2.087396 / 2.142072 (-0.054676) | 0.586866 / 4.805227 (-4.218361) | 0.125616 / 6.500664 (-6.375049) | 0.060535 / 0.075469 (-0.014934) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244753 / 1.841788 (-0.597035) | 17.652902 / 8.074308 (9.578594) | 13.733195 / 10.191392 (3.541803) | 0.143741 / 0.680424 (-0.536683) | 0.016775 / 0.534201 (-0.517426) | 0.335487 / 0.579283 (-0.243797) | 0.350292 / 0.434364 (-0.084072) | 0.388744 / 0.540337 (-0.151594) | 0.536630 / 1.386936 (-0.850306) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006008 / 0.011353 (-0.005345) | 0.003708 / 0.011008 (-0.007301) | 0.062504 / 0.038508 (0.023996) | 0.058570 / 0.023109 (0.035461) | 0.450549 / 0.275898 (0.174651) | 0.467768 / 0.323480 (0.144288) | 0.004955 / 0.007986 (-0.003031) | 0.002903 / 0.004328 (-0.001426) | 0.062778 / 0.004250 (0.058528) | 0.048750 / 0.037052 (0.011698) | 0.439848 / 0.258489 (0.181359) | 0.471780 / 0.293841 (0.177939) | 0.028472 / 0.128546 (-0.100074) | 0.008221 / 0.075646 (-0.067425) | 0.068325 / 0.419271 (-0.350946) | 0.040612 / 0.043533 (-0.002921) | 0.435530 / 0.255139 (0.180391) | 0.458992 / 0.283200 (0.175792) | 0.020143 / 0.141683 (-0.121539) | 1.479101 / 1.452155 (0.026947) | 1.507408 / 1.492716 (0.014692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207723 / 0.018006 (0.189717) | 0.406596 / 0.000490 (0.406106) | 0.004431 / 0.000200 (0.004231) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027037 / 0.037411 (-0.010374) | 0.081576 / 0.014526 (0.067050) | 0.091177 / 0.176557 (-0.085379) | 0.146191 / 0.737135 (-0.590944) | 0.092485 / 0.296338 (-0.203854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456676 / 0.215209 (0.241467) | 4.556214 / 2.077655 (2.478559) | 2.500146 / 1.504120 (0.996026) | 2.325175 / 1.541195 (0.783981) | 2.421023 / 1.468490 (0.952533) | 0.512135 / 4.584777 (-4.072641) | 3.167070 / 3.745712 (-0.578642) | 2.897697 / 5.269862 (-2.372165) | 1.881974 / 4.565676 (-2.683702) | 0.058453 / 0.424275 (-0.365823) | 0.006515 / 0.007607 (-0.001092) | 0.530742 / 0.226044 (0.304698) | 5.304943 / 2.268929 (3.036014) | 2.928824 / 55.444624 (-52.515800) | 2.598023 / 6.876477 (-4.278454) | 2.758496 / 2.142072 (0.616423) | 0.601777 / 4.805227 (-4.203450) | 0.126701 / 6.500664 (-6.373964) | 0.061808 / 0.075469 (-0.013661) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.357844 / 1.841788 (-0.483943) | 17.887666 / 8.074308 (9.813358) | 14.561904 / 10.191392 (4.370512) | 0.146788 / 0.680424 (-0.533636) | 0.018277 / 0.534201 (-0.515924) | 0.343168 / 0.579283 (-0.236115) | 0.382220 / 0.434364 (-0.052144) | 0.401234 / 0.540337 (-0.139104) | 0.546246 / 1.386936 (-0.840690) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008919 / 0.011353 (-0.002434) | 0.006110 / 0.011008 (-0.004898) | 0.110554 / 0.038508 (0.072046) | 0.075705 / 0.023109 (0.052596) | 0.391235 / 0.275898 (0.115336) | 0.458331 / 0.323480 (0.134851) | 0.007489 / 0.007986 (-0.000497) | 0.003744 / 0.004328 (-0.000585) | 0.078124 / 0.004250 (0.073874) | 0.057244 / 0.037052 (0.020192) | 0.393251 / 0.258489 (0.134762) | 0.460153 / 0.293841 (0.166312) | 0.047245 / 0.128546 (-0.081301) | 0.014086 / 0.075646 (-0.061560) | 0.421272 / 0.419271 (0.002001) | 0.067668 / 0.043533 (0.024135) | 0.397325 / 0.255139 (0.142186) | 0.432683 / 0.283200 (0.149483) | 0.039086 / 0.141683 (-0.102596) | 1.764898 / 1.452155 (0.312744) | 1.848820 / 1.492716 (0.356104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.258163 / 0.018006 (0.240156) | 0.498655 / 0.000490 (0.498165) | 0.014959 / 0.000200 (0.014759) | 0.000465 / 0.000054 (0.000410) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028889 / 0.037411 (-0.008522) | 0.091568 / 0.014526 (0.077042) | 0.102700 / 0.176557 (-0.073857) | 0.173580 / 0.737135 (-0.563555) | 0.108763 / 0.296338 (-0.187576) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.610147 / 0.215209 (0.394938) | 5.851239 / 2.077655 (3.773584) | 2.467471 / 1.504120 (0.963351) | 2.117189 / 1.541195 (0.575995) | 2.197947 / 1.468490 
(0.729457) | 0.851736 / 4.584777 (-3.733041) | 5.163183 / 3.745712 (1.417471) | 5.039564 / 5.269862 (-0.230297) | 3.067215 / 4.565676 (-1.498462) | 0.098593 / 0.424275 (-0.325682) | 0.008646 / 0.007607 (0.001038) | 0.788397 / 0.226044 (0.562352) | 7.340837 / 2.268929 (5.071909) | 3.511611 / 55.444624 (-51.933013) | 2.767479 / 6.876477 (-4.108998) | 2.687368 / 2.142072 (0.545296) | 1.046387 / 4.805227 (-3.758841) | 0.215902 / 6.500664 (-6.284763) | 0.072939 / 0.075469 (-0.002530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.512795 / 1.841788 (-0.328992) | 22.086131 / 8.074308 (14.011823) | 20.235550 / 10.191392 (10.044158) | 0.240381 / 0.680424 (-0.440043) | 0.029171 / 0.534201 (-0.505030) | 0.465123 / 0.579283 (-0.114160) | 0.569260 / 0.434364 (0.134896) | 0.540967 / 0.540337 (0.000629) | 0.764006 / 1.386936 (-0.622930) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011024 / 0.011353 (-0.000329) | 0.005915 / 0.011008 (-0.005094) | 0.076455 / 0.038508 (0.037947) | 0.087842 / 0.023109 (0.064733) | 0.471732 / 0.275898 (0.195834) | 0.513666 / 0.323480 (0.190186) | 0.007062 / 0.007986 (-0.000924) | 0.004013 / 0.004328 (-0.000315) | 0.076016 / 0.004250 (0.071766) | 0.061296 / 0.037052 (0.024244) | 0.487277 / 0.258489 (0.228788) | 0.508185 / 0.293841 (0.214344) | 0.049963 / 0.128546 (-0.078583) | 0.013774 / 0.075646 (-0.061873) | 0.089376 / 0.419271 (-0.329895) | 0.067502 / 0.043533 (0.023969) | 0.471283 / 0.255139 (0.216144) | 0.507365 / 0.283200 (0.224165) | 0.033638 / 0.141683 (-0.108045) | 1.785544 / 1.452155 (0.333390) | 1.878765 / 1.492716 (0.386048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230462 / 0.018006 (0.212456) | 0.502458 / 0.000490 (0.501968) | 0.005987 / 0.000200 (0.005787) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031588 / 0.037411 (-0.005824) | 0.113566 / 0.014526 (0.099040) | 0.115734 / 0.176557 (-0.060822) | 0.174162 / 0.737135 (-0.562974) | 0.121574 / 0.296338 (-0.174764) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.662837 / 0.215209 (0.447628) | 6.420327 / 2.077655 (4.342672) | 3.033522 / 1.504120 (1.529402) | 2.728294 / 1.541195 (1.187099) | 2.790621 / 1.468490 (1.322131) | 0.852478 / 4.584777 (-3.732299) | 5.033637 / 3.745712 (1.287925) | 4.543152 / 5.269862 (-0.726709) | 2.980261 / 4.565676 (-1.585415) | 0.102444 / 0.424275 (-0.321831) | 0.008362 / 0.007607 (0.000755) | 0.786868 / 0.226044 (0.560823) | 7.887665 / 2.268929 (5.618737) | 4.010614 / 55.444624 (-51.434010) | 3.220715 / 6.876477 (-3.655762) | 3.317316 / 2.142072 (1.175244) | 1.098137 / 4.805227 (-3.707090) | 0.218309 / 6.500664 (-6.282355) | 0.078182 / 0.075469 (0.002713) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696740 / 1.841788 (-0.145047) | 23.762454 / 8.074308 (15.688146) | 21.802645 / 10.191392 (11.611253) | 0.233654 / 0.680424 (-0.446770) | 0.032911 / 0.534201 (-0.501290) | 0.511760 / 0.579283 (-0.067524) | 0.586299 / 0.434364 (0.151935) | 0.583704 / 0.540337 (0.043367) | 0.780762 / 1.386936 (-0.606174) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-23T10:16:46Z
| 2023-10-23T10:28:46Z
| 2023-10-23T10:17:48Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6336.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6336",
"merged_at": "2023-10-23T10:17:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6336.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6336"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6336/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6336/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1423
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1423/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1423/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1423/events
|
https://github.com/huggingface/datasets/pull/1423
| 760,712,421
|
MDExOlB1bGxSZXF1ZXN0NTM1NDk3OTk5
| 1,423
|
Imppres
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aclifton314",
"id": 53267795,
"login": "aclifton314",
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aclifton314"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Feel free to ping me once you're ready for another review :) ",
"For sure! Gonna work on this now!",
"I incorporated all the changes but when I go to rebase I get the following error:\r\n```python\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git rebase upstream/master\r\nerror: cannot rebase: You have unstaged changes.\r\nerror: Please commit or stash them.\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git stash\r\nSaved working directory and index state WIP on imppres: 51736236 Incorporated secondary sets as configurations instead of splits.\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git rebase upstream/master\r\nCONFLICT (add/add): Merge conflict in datasets/wiki_movies/wiki_movies.py\r\nAuto-merging datasets/wiki_movies/wiki_movies.py\r\nCONFLICT (add/add): Merge conflict in datasets/wiki_movies/dataset_infos.json\r\nAuto-merging datasets/wiki_movies/dataset_infos.json\r\nCONFLICT (add/add): Merge conflict in datasets/wiki_movies/README.md\r\nAuto-merging datasets/wiki_movies/README.md\r\nerror: could not apply 04d08587... Created wiki_movies dataset.\r\nResolve all conflicts manually, mark them as resolved with\r\n\"git add/rm <conflicted_files>\", then run \"git rebase --continue\".\r\nYou can instead skip this commit: run \"git rebase --skip\".\r\nTo abort and get back to the state before \"git rebase\", run \"git rebase --abort\".\r\nCould not apply 04d08587... Created wiki_movies dataset.\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git branch\r\n* (no branch, rebasing imppres)\r\n imppres\r\n logiqa_en\r\n master\r\n wiki_movies\r\n wiki_movies_htl\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git checkout imppres \r\ndatasets/wiki_movies/README.md: needs merge\r\ndatasets/wiki_movies/dataset_infos.json: needs merge\r\ndatasets/wiki_movies/wiki_movies.py: needs merge\r\nerror: you need to resolve your current index first\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ \r\n```",
"I think it's because the current branch includes changes about wiki_movies.\r\n\r\nCan you create a new branch from `master` and create another PR please ?",
"I get this response when I try to switch to master:\r\n```\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git checkout master\r\ndatasets/wiki_movies/README.md: needs merge\r\ndatasets/wiki_movies/dataset_infos.json: needs merge\r\ndatasets/wiki_movies/wiki_movies.py: needs merge\r\nerror: you need to resolve your current index first\r\n```",
"Maybe you have to remove the changes in wiki_movies before checkout to master\r\n```\r\ngit stash\r\n```\r\n\r\nshould do the job",
"Here is what I get:\r\n```\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git stash\r\ndatasets/wiki_movies/README.md: needs merge\r\ndatasets/wiki_movies/dataset_infos.json: needs merge\r\ndatasets/wiki_movies/wiki_movies.py: needs merge\r\n```",
"Ok I see\r\nLooks like you're in a `merge` process.\r\nYou can abort it with `git reset --merge`\r\n\r\nThen `git checkout master` should work",
"So close! I got the new branch made and went through all the tests. When I went to push, I got the following:\r\n```\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git push -u origin imppres\r\nUsername for 'https://github.com': aclifton314\r\nPassword for 'https://aclifton314@github.com': \r\nTo https://github.com/aclifton314/datasets\r\n ! [rejected] imppres -> imppres (non-fast-forward)\r\nerror: failed to push some refs to 'https://github.com/aclifton314/datasets'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```",
"after a rebase you need to `git push --force`",
"Done!"
] | 2020-12-09T22:14:12Z
| 2020-12-17T18:27:14Z
| 2020-12-17T18:27:14Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1423.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1423",
"merged_at": "2020-12-17T18:27:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1423.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1423"
}
|
2nd PR ever! Hopefully I'm starting to get the hang of this. This is for the IMPPRES dataset. Please let me know of any corrections or changes that need to be made.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1423/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1423/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3902
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3902/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3902/events
|
https://github.com/huggingface/datasets/issues/3902
| 1,167,403,377
|
I_kwDODunzps5FlSlx
| 3,902
|
Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3166852?v=4",
"events_url": "https://api.github.com/users/arunasank/events{/privacy}",
"followers_url": "https://api.github.com/users/arunasank/followers",
"following_url": "https://api.github.com/users/arunasank/following{/other_user}",
"gists_url": "https://api.github.com/users/arunasank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arunasank",
"id": 3166852,
"login": "arunasank",
"node_id": "MDQ6VXNlcjMxNjY4NTI=",
"organizations_url": "https://api.github.com/users/arunasank/orgs",
"received_events_url": "https://api.github.com/users/arunasank/received_events",
"repos_url": "https://api.github.com/users/arunasank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arunasank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arunasank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arunasank"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`",
"Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).\r\n\r\nIn order to fix this, you should update `fsspec` from within the \"problematic\" Python virtual env:\r\n```\r\npip install -U \"fsspec[http]>=2021.05.0\"",
"I'm closing this issue, @arunasank.\r\n\r\nFeel free to re-open it if the problem persists. ",
"from lightgbm import LGBMModel,LGBMClassifier, plot_importance\r\nafter importing lib getting (partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) error, can help me",
"@deepakmahtha I think you are not using `datasets`: this is the GitHub repository of Hugging Face Datasets.\r\n\r\nIf you are using `lightgbm`, you should report the issue to their repository instead.\r\n\r\nAnyway, we have proposed a possible fix just in a comment above: to update fsspec.\r\nhttps://github.com/huggingface/datasets/issues/3902#issuecomment-1066517824"
] | 2022-03-12T21:22:03Z
| 2023-02-09T14:53:49Z
| 2022-03-22T07:10:41Z
|
NONE
| null | null | null |
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
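As a quick diagnostic (a minimal sketch, not part of the original report), one can print the `fsspec` version the failing interpreter actually sees; the maintainers' suggested fix in the comments is then to upgrade `fsspec` inside that same environment:
```python
# Hypothetical diagnostic; assumes Python 3.8+ so importlib.metadata is stdlib.
import importlib.metadata

print("fsspec version:", importlib.metadata.version("fsspec"))
# Suggested fix from the comments, run in this same environment:
#   pip install -U "fsspec[http]>=2021.05.0"
```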
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3902/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4623
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4623/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4623/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4623/events
|
https://github.com/huggingface/datasets/issues/4623
| 1,293,042,894
|
I_kwDODunzps5NEkTO
| 4,623
|
Loading MNIST as PyTorch Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56592797?v=4",
"events_url": "https://api.github.com/users/jameschapman19/events{/privacy}",
"followers_url": "https://api.github.com/users/jameschapman19/followers",
"following_url": "https://api.github.com/users/jameschapman19/following{/other_user}",
"gists_url": "https://api.github.com/users/jameschapman19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jameschapman19",
"id": 56592797,
"login": "jameschapman19",
"node_id": "MDQ6VXNlcjU2NTkyNzk3",
"organizations_url": "https://api.github.com/users/jameschapman19/orgs",
"received_events_url": "https://api.github.com/users/jameschapman19/received_events",
"repos_url": "https://api.github.com/users/jameschapman19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jameschapman19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameschapman19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jameschapman19"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ",
"So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ",
"This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```",
"Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!"
] | 2022-07-04T11:33:10Z
| 2022-07-04T14:40:50Z
| null |
NONE
| null | null | null |
## Describe the bug
Conversion of the MNIST dataset to PyTorch tensors fails with an AttributeError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
dataset[0]
print()
```
## Expected results
Expected a torch tensor for the image and the label
## Actual results
```
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module>
dataset[0]
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__
return self._getitem(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem
formatted_output = format_table(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested
mapped = [
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested
return function(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
python-BaseException
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Windows-10-10.0.22579-SP0
- Python version: 3.9.2
- PyArrow version: 8.0.0
- Pandas version: 1.4.1
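For reference, the workaround described in the comments, assembled into a runnable sketch (the transform function name is illustrative):
```python
# Sketch of the with_transform workaround: convert PIL images to numpy arrays
# so the torch formatter can tensorize them.
import numpy as np
import torch
from datasets import load_dataset

dataset = load_dataset("mnist", split="train")

def transform_func(examples):
    # Replace each PIL image with a numpy array in the batch of examples
    examples["image"] = [np.array(img) for img in examples["image"]]
    return examples

dataset = dataset.with_transform(transform_func)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=1)
batch = next(iter(dataloader))
print(batch["image"].shape, batch["label"])
```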
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4623/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4623/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2234
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2234/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2234/events
|
https://github.com/huggingface/datasets/pull/2234
| 860,442,246
|
MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3
| 2,234
|
Fix bash snippet formatting in ADD_NEW_DATASET.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-04-17T16:01:08Z
| 2021-04-19T10:57:31Z
| 2021-04-19T07:51:36Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2234",
"merged_at": "2021-04-19T07:51:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2234"
}
|
This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2234/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2234/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4030
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4030/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4030/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4030/events
|
https://github.com/huggingface/datasets/pull/4030
| 1,182,157,056
|
PR_kwDODunzps41FxjE
| 4,030
|
Use a constant for the articles regex in SQuAD v2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-26T23:06:30Z
| 2022-04-12T16:30:45Z
| 2022-04-12T11:00:24Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4030",
"merged_at": "2022-04-12T11:00:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4030"
}
|
The main reason for doing this is to make it possible to change the articles list when using another language, for example. It's not the most elegant solution, but at least it makes the metric more extensible with no drawbacks (see the sketch below).
BTW, what would be the best way to make this more generic (i.e., SQuAD in other languages)? Maybe accept a regex as an optional param, with the current value as the default? Similarly for SQuAD v1 (could the two metrics share code?).
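A minimal sketch of what this change enables (the constant name is an assumption -- the actual name in the PR may differ):
```python
import re

# Hypothetical module-level constant for the default English articles pattern.
ARTICLES_REGEX = re.compile(r"\b(a|an|the)\b", re.UNICODE)

def remove_articles(text: str, articles_regex: re.Pattern = ARTICLES_REGEX) -> str:
    # With the regex factored out, another language can pass its own pattern,
    # e.g. re.compile(r"\b(un|una|el|la|los|las)\b") for Spanish.
    return articles_regex.sub(" ", text)

print(remove_articles("the quick brown fox"))  # articles replaced by spaces
```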
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4030/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4030/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2791
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2791/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2791/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2791/events
|
https://github.com/huggingface/datasets/pull/2791
| 968,360,314
|
MDExOlB1bGxSZXF1ZXN0NzEwNDgxNDAy
| 2,791
|
Fix typo in cnn_dailymail
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42531544?v=4",
"events_url": "https://api.github.com/users/omaralsayed/events{/privacy}",
"followers_url": "https://api.github.com/users/omaralsayed/followers",
"following_url": "https://api.github.com/users/omaralsayed/following{/other_user}",
"gists_url": "https://api.github.com/users/omaralsayed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omaralsayed",
"id": 42531544,
"login": "omaralsayed",
"node_id": "MDQ6VXNlcjQyNTMxNTQ0",
"organizations_url": "https://api.github.com/users/omaralsayed/orgs",
"received_events_url": "https://api.github.com/users/omaralsayed/received_events",
"repos_url": "https://api.github.com/users/omaralsayed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omaralsayed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omaralsayed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omaralsayed"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-12T08:38:42Z
| 2021-08-12T11:17:59Z
| 2021-08-12T11:17:59Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2791",
"merged_at": "2021-08-12T11:17:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2791"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2791/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2791/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3630
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3630/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3630/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3630/events
|
https://github.com/huggingface/datasets/issues/3630
| 1,114,578,625
|
I_kwDODunzps5Cbx7B
| 3,630
|
DuplicatedKeysError of NewsQA dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StevenTang1998",
"id": 37647985,
"login": "StevenTang1998",
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StevenTang1998"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm fixing it. "
] | 2022-01-26T03:05:49Z
| 2022-02-14T08:37:19Z
| 2022-02-14T08:37:19Z
|
NONE
| null | null | null |
After processing the dataset following the official [NewsQA](https://github.com/Maluuba/newsqa) repository, I used `datasets` to load it:
```
a = load_dataset('newsqa', data_dir='news')
```
and the following error occurred:
```
Using custom data configuration default-data_dir=news
Downloading and preparing dataset newsqa/default to /root/.cache/huggingface/datasets/newsqa/default-data_dir=news/1.0.0/b0b23e22d94a3d352ad9d75aff2b71375264a122fae301463079ee8595e05ab9...
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1084, in _prepare_split
writer.write(example, key)
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 442, in write
self.check_duplicate_keys()
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 1694, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 595, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 684, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1086, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 524, in finalize
self.check_duplicate_keys()
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story
Keys should be unique and deterministic in nature
```
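For context, a hedged sketch of the usual fix for this kind of error in a dataset script: derive the key from something unique per example (e.g. the story id plus the question index) rather than the story path alone. Field names below are assumptions, not the actual newsqa script:
```python
def _generate_examples(stories):
    """Illustrative sketch only -- not the actual newsqa loading script."""
    # A single CNN story file carries several questions, so the story path
    # alone is not a unique key; combine it with the question index instead.
    for story in stories:
        for question_idx, question in enumerate(story.get("questions", [])):
            key = f"{story['storyId']}_{question_idx}"  # unique and deterministic
            yield key, {"story_id": story["storyId"], "question": question}
```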
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3630/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3630/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1966
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1966/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1966/events
|
https://github.com/huggingface/datasets/pull/1966
| 819,101,253
|
MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0
| 1,966
|
Fix metrics collision in separate multiprocessed experiments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Since the failure was originally intermittent, there is no 100% telling that the problem is gone. \r\nBut if my artificial race condition setup https://github.com/huggingface/datasets/issues/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.\r\n\r\nThank you for taking care of this, @lhoestq - locking can be very tricky to do right!"
] | 2021-03-01T17:45:18Z
| 2021-03-02T13:05:45Z
| 2021-03-02T13:05:44Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1966",
"merged_at": "2021-03-02T13:05:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1966"
}
|
As noticed in #1942, there's an issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.
Indeed, there is a time span in Metric._finalize() where process 0 loses its lock before re-acquiring it. This is bad since the lock of process 0 tells the other processes whether the corresponding cache file is available for writing/reading/deleting: we end up with one metric cache colliding with another. This can raise FileNotFound errors when a metric tries to read a cache file that the second, conflicting metric has already deleted.
To fix that, I made sure that the lock file of process 0 stays acquired from the cache file creation to the end of the metric computation (see the sketch below). This way the other metrics can simply sample a new hashing name in order to avoid the collision.
Finally, I added missing tests for separate experiments in a distributed setup.
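A minimal sketch of this locking pattern using the `filelock` package (illustrative only, not the actual `Metric` internals):
```python
from filelock import FileLock, Timeout

def acquire_metric_cache(cache_file_template, max_attempts=100):
    """Keep sampling cache file names until one whose lock is free is found."""
    for attempt in range(max_attempts):
        cache_file = cache_file_template.format(attempt)  # e.g. "/tmp/metric-{}.arrow"
        lock = FileLock(cache_file + ".lock")
        try:
            # Non-blocking acquire: a held lock means another experiment owns this cache file.
            lock.acquire(timeout=0)
            # Keep the lock held until the metric computation is finalized.
            return cache_file, lock
        except Timeout:
            continue  # collision with a concurrent experiment: sample a new name
    raise RuntimeError("Could not find a free metric cache file")
```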
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1966/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1813
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1813/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1813/events
|
https://github.com/huggingface/datasets/pull/1813
| 800,435,973
|
MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz
| 1,813
|
Support future datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-03T15:26:49Z
| 2021-02-05T10:33:48Z
| 2021-02-05T10:33:47Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1813.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1813",
"merged_at": "2021-02-05T10:33:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1813.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1813"
}
|
If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version.
However, when trying to load a dataset that is only available on master, users currently have to specify `script_version="master"` in `load_dataset` to make it work.
Instead, we could automatically get the dataset script from master in this case.
I added this feature in this PR.
I also added a warning if a dataset is not available at the version of the local installation of `datasets` but is loaded from master:
```python
>>> load_dataset("silicone", "dyda_da")
Couldn't find file locally at silicone/silicone.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/silicone/silicone.py.
The file was picked from the master branch on github instead at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/silicone/silicone.py.
Downloading and preparing dataset silicone/dyda_da (download: 8.46 MiB, generated: 9.39 MiB, post-processed: Unknown size, total: 17.86 MiB) to /Users/quentinlhoest/.cache/huggingface/datasets/silicone/dyda_da/1.0.0/d41d8c0b73c6df035b1369c45774418f0051163ea689b5502b8bda783adf6342...
...
```
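A minimal sketch of the fallback behavior (names are illustrative, not the actual `datasets` internals):
```python
import requests

def resolve_script_url(dataset_name: str, local_version: str) -> str:
    """Try the script pinned to the installed version first, then fall back to master."""
    base = "https://raw.githubusercontent.com/huggingface/datasets/{ref}/datasets/{name}/{name}.py"
    for ref in (local_version, "master"):
        url = base.format(ref=ref, name=dataset_name)
        if requests.head(url).status_code == 200:
            if ref == "master":
                print(f"Couldn't find {dataset_name} at version {local_version}; using master instead.")
            return url
    raise FileNotFoundError(f"No script found for dataset {dataset_name}")
```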
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1813/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3436
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3436/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3436/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3436/events
|
https://github.com/huggingface/datasets/pull/3436
| 1,081,068,139
|
PR_kwDODunzps4v5FE3
| 3,436
|
Add the OneStopQa dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28459495?v=4",
"events_url": "https://api.github.com/users/OmerShubi/events{/privacy}",
"followers_url": "https://api.github.com/users/OmerShubi/followers",
"following_url": "https://api.github.com/users/OmerShubi/following{/other_user}",
"gists_url": "https://api.github.com/users/OmerShubi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OmerShubi",
"id": 28459495,
"login": "OmerShubi",
"node_id": "MDQ6VXNlcjI4NDU5NDk1",
"organizations_url": "https://api.github.com/users/OmerShubi/orgs",
"received_events_url": "https://api.github.com/users/OmerShubi/received_events",
"repos_url": "https://api.github.com/users/OmerShubi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OmerShubi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OmerShubi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OmerShubi"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-15T13:53:31Z
| 2021-12-17T14:32:00Z
| 2021-12-17T13:25:29Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3436",
"merged_at": "2021-12-17T13:25:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3436"
}
|
Adding OneStopQA, a multiple-choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3436/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3436/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2646
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2646/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2646/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2646/events
|
https://github.com/huggingface/datasets/issues/2646
| 944,379,954
|
MDU6SXNzdWU5NDQzNzk5NTQ=
| 2,646
|
downloading of yahoo_answers_topics dataset failed
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66781249?v=4",
"events_url": "https://api.github.com/users/vikrant7k/events{/privacy}",
"followers_url": "https://api.github.com/users/vikrant7k/followers",
"following_url": "https://api.github.com/users/vikrant7k/following{/other_user}",
"gists_url": "https://api.github.com/users/vikrant7k/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikrant7k",
"id": 66781249,
"login": "vikrant7k",
"node_id": "MDQ6VXNlcjY2NzgxMjQ5",
"organizations_url": "https://api.github.com/users/vikrant7k/orgs",
"received_events_url": "https://api.github.com/users/vikrant7k/received_events",
"repos_url": "https://api.github.com/users/vikrant7k/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikrant7k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikrant7k/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikrant7k"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 \r\n\r\nFeel free to try again today, now that the quota was reset",
"Fixed once data URL was replaced:\r\n- #4023"
] | 2021-07-14T12:31:05Z
| 2022-08-04T08:28:24Z
| 2022-08-04T08:28:24Z
|
NONE
| null | null | null |
## Describe the bug
I get the error `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` when I try to download the yahoo_answers_topics dataset.
## Steps to reproduce the bug
```python
self.dataset = load_dataset(
    'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')
```
## Expected results
The dataset downloads and loads successfully.
## Actual results
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
```
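For reference, a commonly suggested stopgap at the time was to skip verification entirely (hedged: this disables checksum checks, so only use it if you trust the source and the quota issue doesn't apply):
```python
from datasets import load_dataset

dataset = load_dataset(
    "yahoo_answers_topics",
    split="train[:90%]",
    ignore_verifications=True,  # skips checksum verification; use with care
)
```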
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2646/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2646/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2059
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2059/events
|
https://github.com/huggingface/datasets/issues/2059
| 832,579,156
|
MDU6SXNzdWU4MzI1NzkxNTY=
| 2,059
|
Error while following docs to load the `ted_talks_iwslt` dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4",
"events_url": "https://api.github.com/users/ekdnam/events{/privacy}",
"followers_url": "https://api.github.com/users/ekdnam/followers",
"following_url": "https://api.github.com/users/ekdnam/following{/other_user}",
"gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ekdnam",
"id": 40426312,
"login": "ekdnam",
"node_id": "MDQ6VXNlcjQwNDI2MzEy",
"organizations_url": "https://api.github.com/users/ekdnam/orgs",
"received_events_url": "https://api.github.com/users/ekdnam/received_events",
"repos_url": "https://api.github.com/users/ekdnam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ekdnam"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] | null |
[
"@skyprince999 as you authored the PR for this dataset, any comments?",
"This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)"
] | 2021-03-16T09:12:19Z
| 2021-03-16T18:00:31Z
| 2021-03-16T18:00:07Z
|
NONE
| null | null | null |
I am currently trying to load the `ted_talks_iwslt` dataset in Google Colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error attached below.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
730 hash=hash,
731 features=features,
--> 732 **config_kwargs,
733 )
734
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
927
928 def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
930 # Batch size used by the ArrowWriter
931 # It defines the number of samples that are kept in memory before writing them
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
241 name,
242 custom_features=features,
--> 243 **config_kwargs,
244 )
245
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
338 config_kwargs["version"] = self.VERSION
--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
340
341 # otherwise use the config_kwargs to overwrite the attributes
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
219 description=description,
220 version=datasets.Version("1.1.0", ""),
--> 221 **kwargs,
222 )
223
TypeError: __init__() got multiple values for keyword argument 'version'
```
How to resolve this?
PS: Thanks a lot @huggingface team for creating this great library!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5624/events
|
https://github.com/huggingface/datasets/issues/5624
| 1,617,400,192
|
I_kwDODunzps5gZ5GA
| 5,624
|
glue datasets returning -1 for test split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4",
"events_url": "https://api.github.com/users/lithafnium/events{/privacy}",
"followers_url": "https://api.github.com/users/lithafnium/followers",
"following_url": "https://api.github.com/users/lithafnium/following{/other_user}",
"gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lithafnium",
"id": 8939967,
"login": "lithafnium",
"node_id": "MDQ6VXNlcjg5Mzk5Njc=",
"organizations_url": "https://api.github.com/users/lithafnium/orgs",
"received_events_url": "https://api.github.com/users/lithafnium/received_events",
"repos_url": "https://api.github.com/users/lithafnium/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lithafnium"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answered: https://huggingface.co/datasets/glue/discussions/5#63907885937867f0cb3cde31\r\n> The test labels are not public.\r\n>\r\n> Note this dataset belongs to a benchmark: people send their predictions for the test split to GLUE (https://gluebenchmark.com/) and then they get a score in their leaderboard...\r\n"
] | 2023-03-09T14:47:18Z
| 2023-03-09T16:49:29Z
| 2023-03-09T16:49:29Z
|
NONE
| null | null | null |
### Describe the bug
Downloading any dataset from GLUE yields -1 as the class label for the test split. Train and validation have regular 0/1 class labels. The same -1 labels also appear in the dataset card online.
### Steps to reproduce the bug
```
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
for example in dataset["test"]:
    # every label in the test split prints as -1
    print(example["label"])
```
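A quicker way to confirm the pattern (hedged sketch, continuing from the snippet above):
```python
from collections import Counter

print(Counter(dataset["train"]["label"]))  # regular 0/1 labels
print(Counter(dataset["test"]["label"]))   # every label is -1
```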
### Expected behavior
The test labels should be 0/1 instead of -1.
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4830
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4830/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4830/events
|
https://github.com/huggingface/datasets/pull/4830
| 1,336,177,937
|
PR_kwDODunzps49Cdro
| 4,830
|
Fix task tags in dataset cards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 2022-08-11T16:06:06Z
| 2022-08-11T16:37:27Z
| 2022-08-11T16:23:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"merged_at": "2022-08-11T16:23:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4830/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1015
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1015/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1015/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1015/events
|
https://github.com/huggingface/datasets/pull/1015
| 755,508,841
|
MDExOlB1bGxSZXF1ZXN0NTMxMjA2MTgy
| 1,015
|
add hard dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zaidalyafeai",
"id": 15667714,
"login": "zaidalyafeai",
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zaidalyafeai"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks @sumanthd17 that fixed it. "
] | 2020-12-02T18:27:36Z
| 2020-12-03T15:03:54Z
| 2020-12-03T15:03:54Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1015",
"merged_at": "2020-12-03T15:03:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1015"
}
|
Hotel reviews in the Arabic language.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1015/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1015/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2153
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2153/events
|
https://github.com/huggingface/datasets/issues/2153
| 846,181,502
|
MDU6SXNzdWU4NDYxODE1MDI=
| 2,153
|
load_dataset ignoring features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GuillemGSubies",
"id": 37592763,
"login": "GuillemGSubies",
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GuillemGSubies"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201",
"Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.",
"Hi :) We're indeed working on tutorials that we will add to the docs !"
] | 2021-03-31T08:30:09Z
| 2022-10-05T13:29:12Z
| 2022-10-05T13:29:12Z
|
NONE
| null | null | null |
First of all, I'm sorry if this is a repeated issue or if the changes are already in master; I searched and didn't find anything.
I'm using datasets 1.5.0.

As you can see, when I load the dataset, the ClassLabels are ignored; I have to cast the dataset (as sketched after the snippet below) to make it work.
Code to reproduce:
```python
import datasets
data_location = "/data/prueba_multiclase"
features = datasets.Features(
{"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])}
)
dataset = datasets.load_dataset(
"csv", data_files=data_location, delimiter="\t", features=features
)
```
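For reference, a minimal sketch of the cast workaround (hedged: in recent versions of `datasets`, `cast` returns a new dataset; older versions may require casting each split separately):
```python
# Workaround until the fix lands: cast the loaded dataset to the intended features.
dataset = dataset.cast(features)
# If `cast` isn't available on the DatasetDict in your version, cast per split:
# dataset["train"] = dataset["train"].cast(features)
print(dataset["train"].features["label"])  # now a ClassLabel feature
```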
Dataset I used:
[prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped)
Thank you! ❤️
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/700
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/700/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/700/comments
|
https://api.github.com/repos/huggingface/datasets/issues/700/events
|
https://github.com/huggingface/datasets/pull/700
| 713,450,295
|
MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz
| 700
|
Add rouge-2 in rouge_types for metric calculation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"events_url": "https://api.github.com/users/Shashi456/events{/privacy}",
"followers_url": "https://api.github.com/users/Shashi456/followers",
"following_url": "https://api.github.com/users/Shashi456/following{/other_user}",
"gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shashi456",
"id": 18056781,
"login": "Shashi456",
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"organizations_url": "https://api.github.com/users/Shashi456/orgs",
"received_events_url": "https://api.github.com/users/Shashi456/received_events",
"repos_url": "https://api.github.com/users/Shashi456/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shashi456"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Indeed there's currently a mismatch between the description and what it rouge actually returns.\r\nThanks for proposing this fix :) \r\n\r\nI think it's better to return rouge 1-2-L.\r\nWas there a reason to only include rouge 1 and rouge L @thomwolf ? ",
"rougeLsum is also missing, could you add it ?",
"Adding `RougeLSum` would fix https://github.com/huggingface/datasets/issues/617",
"I am opening a PR with both of them right now actually :)",
"Also the format of the output isn't exactly ideal, It's usually only the F-1 score that is cared about. \r\n\r\nFormatting the output to reflect how `ROUGE-1-5-5` (the perl version thats usually used and pyrouge is a wrapper over it), would be better.\r\n\r\n",
"I'll close this since you seem to have already added it in another PR. Sorry for the delay in responding to you @lhoestq.",
"What do you mean by \"Formatting the output to reflect how ROUGE-1-5-5\" @Shashi456 ?",
"I like the idea of returning all the scores for two reason:\r\n- Rouge's aggregator does sampling and therefore it returns \"low\" \"mid\" and \"high\" scores\r\n- It is interesting to have the precision and recall to see how the F1 score was computed\r\nBut I understand your point that returning only the F1 score makes sense since it's the one that's always used ",
"@thomwolf the scores now returned look like this:\r\n```\r\n{'rouge1': AggregateScore(low=Score(precision=0.16620308156871524, recall=0.18219819615984395, fmeasure=0.16226017699359463), mid=Score(precision=0.17274338501705871, recall=0.1890957812369246, fmeasure=0.16823877588620403), high=Score(precision=0.17934569582981455, recall=0.1965626706042028, fmeasure=0.17491509794856058)), \r\n'rouge2': AggregateScore(low=Score(precision=0.12478835737689957, recall=0.1362113231755514, fmeasure=0.12055941950062395), mid=Score(precision=0.1303967602691664, recall=0.1423747229852964, fmeasure=0.1258363976151122), high=Score(precision=0.13654527560789362, recall=0.1488071465116122, fmeasure=0.13184989406704056)), \r\n'rougeL': AggregateScore(low=Score(precision=0.16568068818352072, recall=0.1811919016674486, fmeasure=0.1614784523482225), mid=Score(precision=0.17156684723552357, recall=0.1879777628247058, fmeasure=0.16720699286250762), high=Score(precision=0.17788847350584547, recall=0.1948899838530898, fmeasure=0.17316501523379826))}\r\n```\r\n\r\nWhile when computed through the perl rouge script, it looks like:\r\n```\r\nROUGE-1 Average_R: 0.34775 (95%-conf.int. 0.34546 - 0.35025)\r\nROUGE-1 Average_P: 0.19381 (95%-conf.int. 0.19246 - 0.19538)\r\nROUGE-1 Average_F: 0.24070 (95%-conf.int. 0.23925 - 0.24230)\r\n---------------------------------------------\r\nROUGE-2 Average_R: 0.07160 (95%-conf.int. 0.07010 - 0.07298)\r\nROUGE-2 Average_F: 0.04845 (95%-conf.int. 0.04741 - 0.04942)\r\n---------------------------------------------\r\nROUGE-L Average_R: 0.26404 (95%-conf.int. 0.26215 - 0.26598)\r\nROUGE-L Average_P: 0.14696 (95%-conf.int. 0.14576 - 0.14815)\r\nROUGE-L Average_F: 0.18245 (95%-conf.int. 0.18120 - 0.18367)\r\n```\r\nwhile the wrapper returns the much more readable:\r\n```\r\n[2020-07-30 18:13:38,556 INFO] Rouges at step 13000 \r\n>> ROUGE-F(1/2/3/l): 43.43/20.42/39.78 \r\nROUGE-R(1/2/3/l): 53.91/25.34/49.32\r\n```\r\n\r\nThe formatting allows for easy reading, and although \"low\", \"mid\", \"high\" make sense, this is more concise and effective. \r\n\r\nOne way of changing this might be to return a dictionary that returns values like `rouge_1_precision`, `rouge_1_F1`, `rouge_1_recall`, and maybe also having the ability to get the values you are interested in and keeping `recall` and `F1` as default.",
"cc: @lhoestq ",
"Ok I see.\r\nI think it's also important to follow one of the existing output format (there are already too many different formats, let's try not to add another different one)\r\nI'd still stick with the current format and not transform the output of the python implementation of rouge since it's already widely used.\r\nWhat do you think ?",
"Maybe we could convert the dataclasses in dictionnaries, would that help @Shashi456 ?",
"@thomwolf yeah I think that would help. I initially didn't understand the high low mid categories. Dictionaries could help in this case I guess, and if we allow the user to choose what they want i.e F1 and precision or recall."
] | 2020-10-02T08:36:45Z
| 2020-10-02T11:08:49Z
| 2020-10-02T09:59:05Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/700",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/700"
}
|
The description of the ROUGE metric says,
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
predictions: list of predictions to score. Each predictions
should be a string with tokens separated by spaces.
references: list of reference for each prediction. Each
reference should be a string with tokens separated by spaces.
Returns:
rouge1: rouge_1 f1,
rouge2: rouge_2 f1,
rougeL: rouge_l f1,
rougeLsum: rouge_l precision
"""
```
but the `rouge_types` argument defaults to `rouge_types = ["rouge1", "rougeL"]`. This PR adds `rouge2` to the default list so that it matches the description card (a hedged usage sketch follows below).
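In the meantime, `rouge2` can be requested explicitly (a sketch assuming the metric's `compute` accepts `rouge_types`, as the default value suggests):
```python
from datasets import load_metric

rouge = load_metric("rouge")
results = rouge.compute(
    predictions=["hello there", "general kenobi"],
    references=["hello there", "general kenobi"],
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)
print(results["rouge2"].mid.fmeasure)  # AggregateScore -> mid Score -> F1
```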
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/700/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/700/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3934
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3934/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3934/events
|
https://github.com/huggingface/datasets/pull/3934
| 1,170,292,492
|
PR_kwDODunzps40ftiC
| 3,934
|
Create MAUVE metric card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-15T21:36:07Z
| 2022-03-18T17:38:14Z
| 2022-03-18T17:34:13Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3934.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3934",
"merged_at": "2022-03-18T17:34:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3934.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3934"
}
|
Proposing a MAUVE metric card
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3934/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6439
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6439/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6439/events
|
https://github.com/huggingface/datasets/issues/6439
| 2,002,916,514
|
I_kwDODunzps53YhSi
| 6,439
|
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AntreasAntoniou",
"id": 10792502,
"login": "AntreasAntoniou",
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AntreasAntoniou"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-11-20T20:07:23Z
| 2023-11-20T20:07:37Z
| null |
NONE
| null | null | null |
### Describe the bug
I am working with a dataset I am trying to publish.
The path is Antreas/TALI.
It's a fairly large dataset, and contains images, video, audio and text.
I have been having multiple problems when the dataset is downloaded using the `load_dataset` function -- even with 64 workers it takes more than 7 days to process.
With `snapshot_download` it takes 12 hours, and that includes the dataset preparation done by calling `load_dataset` on the downloaded parquet file paths.
Find the script I am using below:
```python
import multiprocessing as mp
import pathlib
from typing import Optional
import datasets
from rich import print
from tqdm import tqdm
def download_dataset_via_hub(
dataset_name: str,
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
):
import huggingface_hub as hf_hub
download_folder = hf_hub.snapshot_download(
repo_id=dataset_name,
repo_type="dataset",
cache_dir=dataset_download_path,
resume_download=True,
max_workers=num_download_workers,
ignore_patterns=[],
)
return pathlib.Path(download_folder) / "data"
def load_dataset_via_hub(
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
dataset_name: Optional[str] = None,
):
from datasets import Features, Image, Sequence, Value
dataset_path = download_dataset_via_hub(
dataset_download_path=dataset_download_path,
num_download_workers=num_download_workers,
dataset_name=dataset_name,
)
# Build lists of parquet file paths for the train, validation and test splits
train_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "train" in file.as_posix()
]
val_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "val" in file.as_posix()
]
test_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "test" in file.as_posix()
]
print(
f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set"
)
data_files = {
"test": test_files,
"val": val_files,
"train": train_files,
}
features = Features(
{
"image": Image(
decode=True
), # Set `decode=True` if you want to decode the images, otherwise `decode=False`
"image_url": Value("string"),
"item_idx": Value("int64"),
"wit_features": Sequence(
{
"attribution_passes_lang_id": Value("bool"),
"caption_alt_text_description": Value("string"),
"caption_reference_description": Value("string"),
"caption_title_and_reference_description": Value("string"),
"context_page_description": Value("string"),
"context_section_description": Value("string"),
"hierarchical_section_title": Value("string"),
"is_main_image": Value("bool"),
"language": Value("string"),
"page_changed_recently": Value("bool"),
"page_title": Value("string"),
"page_url": Value("string"),
"section_title": Value("string"),
}
),
"wit_idx": Value("int64"),
"youtube_title_text": Value("string"),
"youtube_description_text": Value("string"),
"youtube_video_content": Value("binary"),
"youtube_video_starting_time": Value("string"),
"youtube_subtitle_text": Value("string"),
"youtube_video_size": Value("int64"),
"youtube_video_file_path": Value("string"),
}
)
dataset = datasets.load_dataset(
"parquet" if dataset_name is None else dataset_name,
data_files=data_files,
features=features,
num_proc=1,
cache_dir=dataset_download_path / "cache",
)
return dataset
if __name__ == "__main__":
dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/")
dataset = load_dataset_via_hub(dataset_cache, dataset_name="Antreas/TALI")[
"test"
]
for sample in tqdm(dataset):
print(list(sample.keys()))
```
Also, streaming this dataset has been a painfully slow process. Streaming the train set takes 15m to start, and streaming the test and val sets takes 3 hours to start!
### Steps to reproduce the bug
1. Run the code I provided to get a sense of how fast the snapshot + manual loading route is.
2. Run `datasets.load_dataset("Antreas/TALI")` to get a sense of the speed of that operation.
3. You should now have an appreciation of how long these things take.
### Expected behavior
The `load_dataset` function should be at least as fast as the Hugging Face `snapshot_download` function at downloading dataset files, not 20 times slower.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6439/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5470
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5470/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5470/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5470/events
|
https://github.com/huggingface/datasets/pull/5470
| 1,558,542,611
|
PR_kwDODunzps5InLw9
| 5,470
|
Update dataset card creation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to your PR - feel free to merge :)",
"Haha thanks, you read my mind :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008332 / 0.011353 (-0.003021) | 0.004556 / 0.011008 (-0.006452) | 0.102239 / 0.038508 (0.063731) | 0.029332 / 0.023109 (0.006222) | 0.296189 / 0.275898 (0.020291) | 0.355746 / 0.323480 (0.032266) | 0.007705 / 0.007986 (-0.000281) | 0.003488 / 0.004328 (-0.000840) | 0.079142 / 0.004250 (0.074891) | 0.034980 / 0.037052 (-0.002073) | 0.307460 / 0.258489 (0.048971) | 0.345944 / 0.293841 (0.052103) | 0.033815 / 0.128546 (-0.094731) | 0.011603 / 0.075646 (-0.064044) | 0.322097 / 0.419271 (-0.097175) | 0.043753 / 0.043533 (0.000220) | 0.296706 / 0.255139 (0.041567) | 0.323195 / 0.283200 (0.039996) | 0.092295 / 0.141683 (-0.049388) | 1.542556 / 1.452155 (0.090401) | 1.571896 / 1.492716 (0.079180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191075 / 0.018006 (0.173069) | 0.407394 / 0.000490 (0.406905) | 0.002033 / 0.000200 (0.001833) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023175 / 0.037411 (-0.014236) | 0.094774 / 0.014526 (0.080248) | 0.105782 / 0.176557 (-0.070775) | 0.146608 / 0.737135 (-0.590528) | 0.107519 / 0.296338 (-0.188819) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421516 / 0.215209 (0.206306) | 4.201091 / 2.077655 (2.123436) | 1.880285 / 1.504120 (0.376165) | 1.676333 / 1.541195 (0.135139) | 1.734301 / 1.468490 
(0.265811) | 0.688504 / 4.584777 (-3.896273) | 3.370289 / 3.745712 (-0.375423) | 3.127661 / 5.269862 (-2.142201) | 1.562570 / 4.565676 (-3.003106) | 0.081687 / 0.424275 (-0.342588) | 0.012334 / 0.007607 (0.004727) | 0.524125 / 0.226044 (0.298080) | 5.245595 / 2.268929 (2.976667) | 2.332622 / 55.444624 (-53.112002) | 1.973212 / 6.876477 (-4.903265) | 2.006507 / 2.142072 (-0.135565) | 0.807126 / 4.805227 (-3.998101) | 0.148254 / 6.500664 (-6.352411) | 0.064240 / 0.075469 (-0.011229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206880 / 1.841788 (-0.634907) | 13.854877 / 8.074308 (5.780569) | 13.806772 / 10.191392 (3.615380) | 0.144380 / 0.680424 (-0.536044) | 0.028492 / 0.534201 (-0.505709) | 0.393854 / 0.579283 (-0.185429) | 0.402210 / 0.434364 (-0.032154) | 0.462138 / 0.540337 (-0.078199) | 0.537480 / 1.386936 (-0.849456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004529 / 0.011008 (-0.006479) | 0.077925 / 0.038508 (0.039417) | 0.027824 / 0.023109 (0.004715) | 0.342288 / 0.275898 (0.066390) | 0.375071 / 0.323480 (0.051591) | 0.004889 / 0.007986 (-0.003097) | 0.003353 / 0.004328 (-0.000975) | 0.076198 / 0.004250 (0.071947) | 0.037797 / 0.037052 (0.000744) | 0.347834 / 0.258489 (0.089345) | 0.384200 / 0.293841 (0.090359) | 0.032184 / 0.128546 (-0.096362) | 0.011674 / 0.075646 (-0.063972) | 0.086242 / 0.419271 (-0.333029) | 0.044465 / 0.043533 (0.000932) | 0.341712 / 0.255139 (0.086573) | 0.366908 / 0.283200 (0.083709) | 0.091526 / 0.141683 (-0.050156) | 1.495798 / 1.452155 (0.043643) | 1.571700 / 1.492716 (0.078984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221962 / 0.018006 (0.203955) | 0.393095 / 0.000490 (0.392605) | 0.000385 / 0.000200 (0.000185) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.099278 / 0.014526 (0.084753) | 0.105940 / 0.176557 (-0.070617) | 0.141334 / 0.737135 (-0.595802) | 0.110898 / 0.296338 (-0.185440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446150 / 0.215209 (0.230941) | 4.471441 / 2.077655 (2.393786) | 2.124864 / 1.504120 (0.620744) | 1.909950 / 1.541195 (0.368755) | 1.970085 / 1.468490 (0.501595) | 0.706711 / 4.584777 (-3.878066) | 3.380336 / 3.745712 (-0.365376) | 1.866106 / 5.269862 (-3.403756) | 1.160657 / 4.565676 (-3.405019) | 0.082786 / 0.424275 (-0.341489) | 0.012470 / 0.007607 (0.004862) | 0.537620 / 0.226044 (0.311575) | 5.390588 / 2.268929 (3.121659) | 2.539137 / 55.444624 (-52.905488) | 2.191867 / 6.876477 (-4.684610) | 2.236212 / 2.142072 (0.094139) | 0.810756 / 4.805227 (-3.994471) | 0.150933 / 6.500664 (-6.349731) | 0.066141 / 0.075469 (-0.009328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271595 / 1.841788 (-0.570193) | 13.840013 / 8.074308 (5.765705) | 13.334443 / 10.191392 (3.143051) | 0.150096 / 0.680424 (-0.530328) | 0.016919 / 0.534201 (-0.517282) | 0.375534 / 0.579283 (-0.203749) | 0.387203 / 0.434364 (-0.047161) | 0.463500 / 0.540337 (-0.076838) | 0.553496 / 1.386936 (-0.833440) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-26T17:57:51Z
| 2023-01-27T16:27:00Z
| 2023-01-27T16:20:10Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5470",
"merged_at": "2023-01-27T16:20:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5470"
}
|
Encourages users to create a dataset card directly on the Hub with the new metadata UI and the dataset card template import, instead of telling users to manually create and upload one.
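A hedged sketch of the workflow the updated docs point users to, assuming `huggingface_hub`'s card utilities (`username/my-dataset` is a placeholder repo id):
```python
from huggingface_hub import DatasetCard, DatasetCardData

# fill in the structured metadata, then render the default dataset card template
card_data = DatasetCardData(language="en", license="mit")
card = DatasetCard.from_template(card_data)
card.push_to_hub("username/my-dataset")
```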
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5470/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5470/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6211/events
|
https://github.com/huggingface/datasets/pull/6211
| 1,880,265,906
|
PR_kwDODunzps5Ze-pv
| 6,211
|
Fix empty splitinfo json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007756 / 0.011353 (-0.003597) | 0.004733 / 0.011008 (-0.006275) | 0.095874 / 0.038508 (0.057366) | 0.081957 / 0.023109 (0.058848) | 0.426430 / 0.275898 (0.150532) | 0.457670 / 0.323480 (0.134190) | 0.004448 / 0.007986 (-0.003537) | 0.004956 / 0.004328 (0.000627) | 0.074195 / 0.004250 (0.069945) | 0.061101 / 0.037052 (0.024048) | 0.435134 / 0.258489 (0.176645) | 0.457245 / 0.293841 (0.163404) | 0.034945 / 0.128546 (-0.093601) | 0.010028 / 0.075646 (-0.065618) | 0.350724 / 0.419271 (-0.068548) | 0.064433 / 0.043533 (0.020901) | 0.417882 / 0.255139 (0.162743) | 0.445087 / 0.283200 (0.161887) | 0.027576 / 0.141683 (-0.114107) | 1.824066 / 1.452155 (0.371912) | 1.957568 / 1.492716 (0.464852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238568 / 0.018006 (0.220562) | 0.505289 / 0.000490 (0.504799) | 0.003527 / 0.000200 (0.003327) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032839 / 0.037411 (-0.004572) | 0.096708 / 0.014526 (0.082182) | 0.112100 / 0.176557 (-0.064456) | 0.177215 / 0.737135 (-0.559920) | 0.111273 / 0.296338 (-0.185066) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475200 / 0.215209 (0.259991) | 4.725737 / 2.077655 (2.648082) | 2.414672 / 1.504120 (0.910552) | 2.196357 / 1.541195 (0.655162) | 2.329298 / 1.468490 
(0.860808) | 0.575258 / 4.584777 (-4.009519) | 4.343630 / 3.745712 (0.597918) | 3.837665 / 5.269862 (-1.432196) | 2.497970 / 4.565676 (-2.067706) | 0.066467 / 0.424275 (-0.357808) | 0.008680 / 0.007607 (0.001073) | 0.569923 / 0.226044 (0.343878) | 5.634230 / 2.268929 (3.365302) | 2.959222 / 55.444624 (-52.485402) | 2.535954 / 6.876477 (-4.340523) | 2.804844 / 2.142072 (0.662771) | 0.682000 / 4.805227 (-4.123227) | 0.158193 / 6.500664 (-6.342471) | 0.072315 / 0.075469 (-0.003154) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.578148 / 1.841788 (-0.263639) | 22.993419 / 8.074308 (14.919110) | 16.524477 / 10.191392 (6.333085) | 0.169415 / 0.680424 (-0.511009) | 0.021520 / 0.534201 (-0.512681) | 0.455970 / 0.579283 (-0.123313) | 0.489022 / 0.434364 (0.054658) | 0.535656 / 0.540337 (-0.004682) | 0.802341 / 1.386936 (-0.584595) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008002 / 0.011353 (-0.003351) | 0.005577 / 0.011008 (-0.005431) | 0.087803 / 0.038508 (0.049295) | 0.091285 / 0.023109 (0.068176) | 0.500514 / 0.275898 (0.224616) | 0.549770 / 0.323480 (0.226290) | 0.006125 / 0.007986 (-0.001861) | 0.004031 / 0.004328 (-0.000297) | 0.077941 / 0.004250 (0.073691) | 0.071419 / 0.037052 (0.034367) | 0.497570 / 0.258489 (0.239081) | 0.542454 / 0.293841 (0.248613) | 0.040827 / 0.128546 (-0.087719) | 0.011029 / 0.075646 (-0.064617) | 0.088788 / 0.419271 (-0.330484) | 0.056970 / 0.043533 (0.013438) | 0.523934 / 0.255139 (0.268795) | 0.552507 / 0.283200 (0.269308) | 0.029794 / 0.141683 (-0.111889) | 1.817778 / 1.452155 (0.365623) | 1.955843 / 1.492716 (0.463126) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246992 / 0.018006 (0.228986) | 0.467879 / 0.000490 (0.467390) | 0.005439 / 0.000200 (0.005239) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037774 / 0.037411 (0.000363) | 0.109332 / 0.014526 (0.094806) | 0.120103 / 0.176557 (-0.056454) | 0.185259 / 0.737135 (-0.551876) | 0.126189 / 0.296338 (-0.170149) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492856 / 0.215209 (0.277646) | 5.033209 / 2.077655 (2.955554) | 2.885551 / 1.504120 (1.381431) | 2.480304 / 1.541195 (0.939109) | 2.579092 / 1.468490 (1.110602) | 0.557671 / 4.584777 (-4.027106) | 4.352765 / 3.745712 (0.607053) | 4.039124 / 5.269862 (-1.230738) | 2.534342 / 4.565676 (-2.031335) | 0.067267 / 0.424275 (-0.357008) | 0.008891 / 0.007607 (0.001284) | 0.591592 / 0.226044 (0.365547) | 5.939982 / 2.268929 (3.671053) | 3.258389 / 55.444624 (-52.186235) | 2.843899 / 6.876477 (-4.032578) | 3.074217 / 2.142072 (0.932144) | 0.695065 / 4.805227 (-4.110162) | 0.156917 / 6.500664 (-6.343747) | 0.070185 / 0.075469 (-0.005284) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.586716 / 1.841788 (-0.255072) | 23.405837 / 8.074308 (15.331529) | 17.200851 / 10.191392 (7.009459) | 0.170073 / 0.680424 (-0.510351) | 0.023345 / 0.534201 (-0.510856) | 0.459192 / 0.579283 (-0.120091) | 0.477419 / 0.434364 (0.043055) | 0.558581 / 0.540337 (0.018244) | 0.814373 / 1.386936 (-0.572563) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003661 / 0.011008 (-0.007348) | 0.081753 / 0.038508 (0.043245) | 0.061275 / 0.023109 (0.038166) | 0.316278 / 0.275898 (0.040380) | 0.350783 / 0.323480 (0.027303) | 0.004694 / 0.007986 (-0.003291) | 0.003003 / 0.004328 (-0.001326) | 0.062877 / 0.004250 (0.058627) | 0.046985 / 0.037052 (0.009933) | 0.315698 / 0.258489 (0.057208) | 0.364607 / 0.293841 (0.070766) | 0.027365 / 0.128546 (-0.101181) | 0.008016 / 0.075646 (-0.067631) | 0.261379 / 0.419271 (-0.157893) | 0.045173 / 0.043533 (0.001640) | 0.313499 / 0.255139 (0.058360) | 0.339383 / 0.283200 (0.056184) | 0.020855 / 0.141683 (-0.120828) | 1.429851 / 1.452155 (-0.022303) | 1.506112 / 1.492716 (0.013396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194872 / 0.018006 (0.176866) | 0.451951 / 0.000490 (0.451462) | 0.002790 / 0.000200 (0.002590) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024331 / 0.037411 (-0.013081) | 0.073156 / 0.014526 (0.058630) | 0.084054 / 0.176557 (-0.092502) | 0.145656 / 0.737135 (-0.591480) | 0.084998 / 0.296338 (-0.211340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391324 / 0.215209 (0.176115) | 3.898406 / 2.077655 (1.820751) | 1.891175 / 1.504120 (0.387055) | 1.698738 / 1.541195 (0.157543) | 1.774324 / 1.468490 
(0.305834) | 0.495129 / 4.584777 (-4.089648) | 3.027027 / 3.745712 (-0.718685) | 2.821423 / 5.269862 (-2.448439) | 1.870761 / 4.565676 (-2.694915) | 0.057029 / 0.424275 (-0.367246) | 0.006715 / 0.007607 (-0.000892) | 0.465801 / 0.226044 (0.239757) | 4.650891 / 2.268929 (2.381962) | 2.425097 / 55.444624 (-53.019527) | 2.134731 / 6.876477 (-4.741745) | 2.312854 / 2.142072 (0.170781) | 0.589668 / 4.805227 (-4.215559) | 0.124673 / 6.500664 (-6.375991) | 0.060887 / 0.075469 (-0.014582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243622 / 1.841788 (-0.598166) | 18.501640 / 8.074308 (10.427332) | 13.853099 / 10.191392 (3.661707) | 0.130255 / 0.680424 (-0.550168) | 0.016824 / 0.534201 (-0.517377) | 0.332297 / 0.579283 (-0.246986) | 0.360346 / 0.434364 (-0.074018) | 0.388598 / 0.540337 (-0.151739) | 0.527551 / 1.386936 (-0.859385) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006181 / 0.011353 (-0.005172) | 0.003688 / 0.011008 (-0.007320) | 0.063395 / 0.038508 (0.024887) | 0.062531 / 0.023109 (0.039422) | 0.446565 / 0.275898 (0.170667) | 0.485224 / 0.323480 (0.161744) | 0.004982 / 0.007986 (-0.003004) | 0.002961 / 0.004328 (-0.001367) | 0.063124 / 0.004250 (0.058874) | 0.050234 / 0.037052 (0.013182) | 0.449731 / 0.258489 (0.191242) | 0.487293 / 0.293841 (0.193452) | 0.028528 / 0.128546 (-0.100018) | 0.008210 / 0.075646 (-0.067436) | 0.069520 / 0.419271 (-0.349751) | 0.041026 / 0.043533 (-0.002507) | 0.451370 / 0.255139 (0.196231) | 0.469151 / 0.283200 (0.185951) | 0.021076 / 0.141683 (-0.120607) | 1.439185 / 1.452155 (-0.012970) | 1.492634 / 1.492716 (-0.000082) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235932 / 0.018006 (0.217926) | 0.430070 / 0.000490 (0.429581) | 0.007347 / 0.000200 (0.007147) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026102 / 0.037411 (-0.011309) | 0.081333 / 0.014526 (0.066807) | 0.090111 / 0.176557 (-0.086446) | 0.144578 / 0.737135 (-0.592557) | 0.091961 / 0.296338 (-0.204378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455761 / 0.215209 (0.240552) | 4.536345 / 2.077655 (2.458690) | 2.496833 / 1.504120 (0.992713) | 2.323325 / 1.541195 (0.782130) | 2.388364 / 1.468490 (0.919873) | 0.512010 / 4.584777 (-4.072767) | 3.106268 / 3.745712 (-0.639444) | 2.879224 / 5.269862 (-2.390637) | 1.893859 / 4.565676 (-2.671818) | 0.059131 / 0.424275 (-0.365144) | 0.006763 / 0.007607 (-0.000844) | 0.528205 / 0.226044 (0.302161) | 5.296649 / 2.268929 (3.027720) | 2.933787 / 55.444624 (-52.510838) | 2.598258 / 6.876477 (-4.278218) | 2.768195 / 2.142072 (0.626123) | 0.597430 / 4.805227 (-4.207797) | 0.125865 / 6.500664 (-6.374799) | 0.061684 / 0.075469 (-0.013785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.341194 / 1.841788 (-0.500594) | 18.948225 / 8.074308 (10.873917) | 14.912340 / 10.191392 (4.720948) | 0.146905 / 0.680424 (-0.533519) | 0.017952 / 0.534201 (-0.516249) | 0.332299 / 0.579283 (-0.246984) | 0.362733 / 0.434364 (-0.071631) | 0.388278 / 0.540337 (-0.152060) | 0.546436 / 1.386936 (-0.840500) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008314 / 0.011353 (-0.003038) | 0.004904 / 0.011008 (-0.006105) | 0.097486 / 0.038508 (0.058978) | 0.074627 / 0.023109 (0.051518) | 0.396395 / 0.275898 (0.120497) | 0.440519 / 0.323480 (0.117039) | 0.005964 / 0.007986 (-0.002022) | 0.004203 / 0.004328 (-0.000126) | 0.079998 / 0.004250 (0.075747) | 0.055158 / 0.037052 (0.018106) | 0.415439 / 0.258489 (0.156950) | 0.476101 / 0.293841 (0.182260) | 0.044761 / 0.128546 (-0.083785) | 0.013966 / 0.075646 (-0.061680) | 0.351279 / 0.419271 (-0.067993) | 0.067250 / 0.043533 (0.023717) | 0.414310 / 0.255139 (0.159171) | 0.458104 / 0.283200 (0.174904) | 0.033678 / 0.141683 (-0.108005) | 1.730539 / 1.452155 (0.278385) | 1.840013 / 1.492716 (0.347297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272708 / 0.018006 (0.254702) | 0.593563 / 0.000490 (0.593074) | 0.005153 / 0.000200 (0.004953) | 0.000179 / 0.000054 (0.000125) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029595 / 0.037411 (-0.007816) | 0.087994 / 0.014526 (0.073469) | 0.106066 / 0.176557 (-0.070491) | 0.180491 / 0.737135 (-0.556644) | 0.103707 / 0.296338 (-0.192631) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.566711 / 0.215209 (0.351502) | 5.589034 / 2.077655 (3.511380) | 2.364034 / 1.504120 (0.859914) | 2.119050 / 1.541195 (0.577855) | 2.103823 / 1.468490 
(0.635333) | 0.819906 / 4.584777 (-3.764871) | 5.178464 / 3.745712 (1.432752) | 4.433986 / 5.269862 (-0.835875) | 2.825470 / 4.565676 (-1.740207) | 0.096907 / 0.424275 (-0.327368) | 0.008573 / 0.007607 (0.000966) | 0.677607 / 0.226044 (0.451563) | 6.811090 / 2.268929 (4.542162) | 3.140923 / 55.444624 (-52.303701) | 2.492251 / 6.876477 (-4.384225) | 2.660231 / 2.142072 (0.518158) | 0.980573 / 4.805227 (-3.824655) | 0.209028 / 6.500664 (-6.291636) | 0.079413 / 0.075469 (0.003944) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.578861 / 1.841788 (-0.262926) | 22.518269 / 8.074308 (14.443961) | 21.335916 / 10.191392 (11.144524) | 0.211311 / 0.680424 (-0.469113) | 0.033216 / 0.534201 (-0.500985) | 0.473266 / 0.579283 (-0.106017) | 0.581650 / 0.434364 (0.147286) | 0.522442 / 0.540337 (-0.017895) | 0.729039 / 1.386936 (-0.657897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008349 / 0.011353 (-0.003003) | 0.005856 / 0.011008 (-0.005152) | 0.077855 / 0.038508 (0.039347) | 0.080608 / 0.023109 (0.057499) | 0.512533 / 0.275898 (0.236635) | 0.551862 / 0.323480 (0.228382) | 0.007004 / 0.007986 (-0.000982) | 0.004147 / 0.004328 (-0.000181) | 0.086625 / 0.004250 (0.082374) | 0.065962 / 0.037052 (0.028910) | 0.545590 / 0.258489 (0.287101) | 0.586313 / 0.293841 (0.292472) | 0.048719 / 0.128546 (-0.079827) | 0.014997 / 0.075646 (-0.060649) | 0.089510 / 0.419271 (-0.329761) | 0.060936 / 0.043533 (0.017404) | 0.498455 / 0.255139 (0.243316) | 0.535460 / 0.283200 (0.252260) | 0.034624 / 0.141683 (-0.107059) | 1.717401 / 1.452155 (0.265246) | 1.808772 / 1.492716 (0.316056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.379504 / 0.018006 (0.361497) | 0.601756 / 0.000490 (0.601266) | 0.061740 / 0.000200 (0.061540) | 0.000497 / 0.000054 (0.000442) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031215 / 0.037411 (-0.006196) | 0.097501 / 0.014526 (0.082975) | 0.117434 / 0.176557 (-0.059122) | 0.166014 / 0.737135 (-0.571121) | 0.116466 / 0.296338 (-0.179873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699444 / 0.215209 (0.484235) | 6.329332 / 2.077655 (4.251678) | 3.072812 / 1.504120 (1.568693) | 2.729878 / 1.541195 (1.188683) | 2.933785 / 1.468490 (1.465295) | 0.935858 / 4.584777 (-3.648919) | 5.532532 / 3.745712 (1.786820) | 4.677139 / 5.269862 (-0.592722) | 2.963527 / 4.565676 (-1.602149) | 0.099661 / 0.424275 (-0.324614) | 0.009095 / 0.007607 (0.001488) | 0.751158 / 0.226044 (0.525114) | 7.652588 / 2.268929 (5.383660) | 3.802005 / 55.444624 (-51.642619) | 3.163126 / 6.876477 (-3.713351) | 3.401125 / 2.142072 (1.259052) | 0.998627 / 4.805227 (-3.806600) | 0.203310 / 6.500664 (-6.297354) | 0.073827 / 0.075469 (-0.001642) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.662989 / 1.841788 (-0.178799) | 23.777818 / 8.074308 (15.703510) | 20.855378 / 10.191392 (10.663986) | 0.279892 / 0.680424 (-0.400532) | 0.029303 / 0.534201 (-0.504898) | 0.473681 / 0.579283 (-0.105602) | 0.579148 / 0.434364 (0.144784) | 0.546931 / 0.540337 (0.006593) | 0.769740 / 1.386936 (-0.617196) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-04T13:13:53Z
| 2023-09-04T14:58:34Z
| 2023-09-04T14:47:17Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6211.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6211",
"merged_at": "2023-09-04T14:47:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6211.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6211"
}
|
If a split is empty, then the JSON split info should mention num_bytes = 0 and num_examples = 0.
Until now they were omitted, because the JSON dumps ignore the fields that are equal to the default values.
This is needed in datasets-server, since we parse this information for the viewer.
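A rough illustration of the serialization behavior being fixed, using a hypothetical `SplitInfoSketch` dataclass rather than the actual `datasets` internals:
```python
from dataclasses import asdict, dataclass

@dataclass
class SplitInfoSketch:
    name: str = ""
    num_bytes: int = 0
    num_examples: int = 0

info = SplitInfoSketch(name="test")  # an empty split

# dumping only the non-default fields drops the zeros entirely...
non_default = {k: v for k, v in asdict(info).items() if v != getattr(SplitInfoSketch, k)}
print(non_default)  # {'name': 'test'}

# ...whereas always emitting them keeps the values parsers rely on
print(asdict(info))  # {'name': 'test', 'num_bytes': 0, 'num_examples': 0}
```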
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6211/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2669/events
|
https://github.com/huggingface/datasets/issues/2669
| 946,982,998
|
MDU6SXNzdWU5NDY5ODI5OTg=
| 2,669
|
Metric kwargs are not passed to underlying external metric f1_score
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weighted\", \"binary\"} or None, default=\"binary\"`.\r\n\r\nSecond, you should take into account that all additional metric-specific argument should be passed in the method `compute` (and not in the method `load_metric`). You can find more information in our documentation: https://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\nSo for example, if you would like to calculate the macro-averaged F1 score, you should use:\r\n```python\r\nimport datasets\r\n\r\nf1 = datasets.load_metric(\"f1\", keep_in_memory=True)\r\nf1.add_batch(predictions=[0,2,3], references=[1, 2, 3])\r\nf1.compute(average=\"macro\")\r\n```",
"Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. I assume that that's for specific initialisation arguments whereas `average` is for the function itself."
] | 2021-07-18T08:32:31Z
| 2021-07-18T18:36:05Z
| 2021-07-18T11:19:04Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to reproduce the bug
```python
import datasets
f1 = datasets.load_metric("f1", keep_in_memory=True, average="min")
f1.add_batch(predictions=[0,2,3], references=[1, 2, 3])
f1.compute()
```
## Expected results
No error, because `average="min"` should be passed correctly to f1_score in sklearn.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute
"f1": f1_score(
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score
return fbeta_score(y_true, y_pred, beta=1, labels=labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score
_, _, f, _ = precision_recall_fscore_support(y_true, y_pred,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support
labels = _check_set_wise_labels(y_true, y_pred, average, labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels
raise ValueError("Target is %s but average='binary'. Please "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2669/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1247
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1247/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1247/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1247/events
|
https://github.com/huggingface/datasets/pull/1247
| 758,431,640
|
MDExOlB1bGxSZXF1ZXN0NTMzNjA1NzE2
| 1,247
|
Adding indonlu dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6518504?v=4",
"events_url": "https://api.github.com/users/yasirabd/events{/privacy}",
"followers_url": "https://api.github.com/users/yasirabd/followers",
"following_url": "https://api.github.com/users/yasirabd/following{/other_user}",
"gists_url": "https://api.github.com/users/yasirabd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yasirabd",
"id": 6518504,
"login": "yasirabd",
"node_id": "MDQ6VXNlcjY1MTg1MDQ=",
"organizations_url": "https://api.github.com/users/yasirabd/orgs",
"received_events_url": "https://api.github.com/users/yasirabd/received_events",
"repos_url": "https://api.github.com/users/yasirabd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yasirabd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yasirabd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yasirabd"
}
|
[] |
closed
| false
| null |
[] | null |
[
"looks like this PR includes changes about many files other than the ones for IndoNLU\r\nCould you create another branch and another PR please ?",
"> looks like this PR includes changes about many files other than the ones for IndoNLU\r\n> Could you create another branch and another PR please ?\r\n\r\nOkay I'll make it"
] | 2020-12-07T11:38:45Z
| 2020-12-08T14:11:50Z
| 2020-12-08T14:11:50Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1247",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1247"
}
|
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets.
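A hedged usage sketch, assuming the loader was eventually published under the `indonlu` name with one config per task (`emot` is used here as an illustrative task name):
```python
from datasets import load_dataset

emot = load_dataset("indonlu", "emot")  # emotion classification task
print(emot["train"][0])
```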
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1247/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1247/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1760/events
|
https://github.com/huggingface/datasets/pull/1760
| 791,110,857
|
MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0
| 1,760
|
More tags
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Conll has `multilingual` but is only tagged as `en`",
"good catch, that was a bad copy paste x)"
] | 2021-01-21T13:50:10Z
| 2021-01-22T09:40:01Z
| 2021-01-22T09:40:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"merged_at": "2021-01-22T09:40:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1760"
}
|
Since Hub v2 is going to be released soon, I figured it would be great to add the missing tags, at least for some of the reference datasets listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1760/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4972
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4972/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4972/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4972/events
|
https://github.com/huggingface/datasets/pull/4972
| 1,371,443,306
|
PR_kwDODunzps4-3VVF
| 4,972
|
Fix map batched with torch output
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-13T13:16:34Z
| 2022-09-20T09:42:02Z
| 2022-09-20T09:39:33Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4972.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4972",
"merged_at": "2022-09-20T09:39:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4972.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4972"
}
|
Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2
Currently, batched `map` fails if the map function returns a torch tensor.
I fixed it for torch, tf, jax and pandas Series.
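A minimal sketch of the failure mode this fixes (the dataset contents and function name are illustrative):
```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [0.0, 1.0, 2.0, 3.0]})

def double(batch):
    # Returning a torch.Tensor from a batched map used to raise a TypeError
    # when the result was written back to the Arrow table.
    return {"x": torch.as_tensor(batch["x"]) * 2}

ds = ds.map(double, batched=True)
print(ds[0])  # {'x': 0.0}
```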
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4972/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4972/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5165
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5165/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5165/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5165/events
|
https://github.com/huggingface/datasets/issues/5165
| 1,423,616,677
|
I_kwDODunzps5U2qql
| 5,165
|
Memory explosion when trying to access 4d tensors in datasets cast to torch or np
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2022-10-26T08:14:47Z
| 2022-10-26T08:14:47Z
| null |
MEMBER
| null | null | null |
### Describe the bug
When trying to access an item by index in a `datasets.Dataset` cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or higher) tensors.
### Steps to reproduce the bug
MWE:
```python
from datasets import load_dataset
import numpy as np

def create_4d_tensor(item):
    i = item["num_nodes"]
    item["x_big"] = np.random.rand(i, 2 * i, i // 2, 1) + 1  # we create a big 4d tensor
    return item

if __name__ == "__main__":
    dataset = load_dataset(path="graphs-datasets/PROTEINS")
    # This works
    print(dataset["train"].format)
    print(dataset["train"][0].keys())
    dataset = dataset.map(
        create_4d_tensor,
        batched=False,
        writer_batch_size=100,
    )
    # This works
    print(dataset["train"].format)
    print(dataset["train"][0].keys())
    dataset.set_format("torch")
    print(dataset["train"].format)
    # This gets killed :(
    print(dataset["train"][0].keys())
```
The problem likely comes from `format_table` [here](https://cs.github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/src/datasets/arrow_dataset.py#L2328)
### Expected behavior
No memory explosion when trying to access dataset items after cast.
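In the meantime, a possible mitigation (a sketch reusing the MWE's column names; untested against this exact dataset) is to format only the columns actually needed as tensors:
```python
# Convert only the columns that are needed as tensors; x_big is returned
# as a plain (unformatted) object instead of being converted on access.
dataset.set_format("torch", columns=["num_nodes"], output_all_columns=True)
print(dataset["train"][0].keys())
```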
### Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5165/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5165/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/106
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/106/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/106/comments
|
https://api.github.com/repos/huggingface/datasets/issues/106/events
|
https://github.com/huggingface/datasets/pull/106
| 618,361,418
|
MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3
| 106
|
Add data dir test command
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Nice - I think we can merge this. I will update the checksums for `wikihow` then as well"
] | 2020-05-14T16:18:39Z
| 2020-05-14T16:49:11Z
| 2020-05-14T16:49:10Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/106",
"merged_at": "2020-05-14T16:49:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/106"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/106/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/106/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/4906
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4906/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4906/events
|
https://github.com/huggingface/datasets/issues/4906
| 1,353,223,925
|
I_kwDODunzps5QqI71
| 4,906
|
Can't import datasets: AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4",
"events_url": "https://api.github.com/users/OPterminator/events{/privacy}",
"followers_url": "https://api.github.com/users/OPterminator/followers",
"following_url": "https://api.github.com/users/OPterminator/following{/other_user}",
"gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OPterminator",
"id": 63536981,
"login": "OPterminator",
"node_id": "MDQ6VXNlcjYzNTM2OTgx",
"organizations_url": "https://api.github.com/users/OPterminator/orgs",
"received_events_url": "https://api.github.com/users/OPterminator/received_events",
"repos_url": "https://api.github.com/users/OPterminator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OPterminator"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (`nlp` name is no longer in use and is out of date).",
"i am also facing this issue\r\n\r\n\r\n```\r\n----> 1 import datasets\r\n 3 dataset = datasets.load_dataset(\"ucberkeley-dlab/measuring-hate-speech\", \"binary\")\r\n 4 df = dataset[\"train\"].to_pandas()\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/__init__.py:52\r\n 50 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled\r\n 51 from .info import DatasetInfo, MetricInfo\r\n---> 52 from .inspect import (\r\n 53 get_dataset_config_info,\r\n 54 get_dataset_config_names,\r\n 55 get_dataset_infos,\r\n 56 get_dataset_split_names,\r\n 57 inspect_dataset,\r\n 58 inspect_metric,\r\n 59 list_datasets,\r\n 60 list_metrics,\r\n 61 )\r\n 62 from .iterable_dataset import IterableDataset\r\n 63 from .load import load_dataset, load_dataset_builder, load_from_disk, load_metric\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/inspect.py:30\r\n 28 from .download.streaming_download_manager import StreamingDownloadManager\r\n...\r\n---> 16 logger = datasets.utils.logging.get_logger(__name__)\r\n 19 if datasets.config.PYARROW_VERSION.major >= 7:\r\n 21 def pa_table_to_pylist(table):\r\n```",
"I am facing the same question. And this happens when i installing `evaluate` package while `jupyter notebook` running. I'm not sure if the error occured because of trying to import the package installed when the notebook is running. Surpringly when i stop the notebook and rerun, the issue has been solved itself. Hope this will be helpful : )",
"I also got this error.\r\nIt helped me to find the python process and kill it, then restart the kernel and the error disappeared.",
"> I also got this error. It helped me to find the python process and kill it, then restart the kernel and the error disappeared.\r\n\r\nYes!",
"> I am facing the same question. And this happens when i installing `evaluate` package while `jupyter notebook` running. I'm not sure if the error occured because of trying to import the package installed when the notebook is running. Surpringly when i stop the notebook and rerun, the issue has been solved itself. Hope this will be helpful : )\r\n\r\nThank you! :)"
] | 2022-08-28T02:23:24Z
| 2023-10-27T20:08:28Z
| 2022-10-03T12:22:50Z
|
NONE
| null | null | null |
## Describe the bug
Not able to import `datasets`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import os
os.environ["WANDB_API_KEY"] = "0" ## to silence warning
import numpy as np
import random
import sklearn
import matplotlib.pyplot as plt
import pandas as pd
import sys
import tensorflow as tf
import plotly.express as px
import transformers
import tokenizers
import nlp as nlp
import utils
import datasets
```
## Expected results
The import should work normally.
## Actual results
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-b3b5b0b62103> in <module>
13 import nlp as nlp
14 import utils
---> 15 import datasets
~\anaconda3\lib\site-packages\datasets\__init__.py in <module>
44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled
45 from .info import DatasetInfo, MetricInfo
---> 46 from .inspect import (
47 get_dataset_config_info,
48 get_dataset_config_names,
~\anaconda3\lib\site-packages\datasets\inspect.py in <module>
28 from .download.streaming_download_manager import StreamingDownloadManager
29 from .info import DatasetInfo
---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory
31 from .utils.file_utils import relative_to_absolute_path
32 from .utils.logging import get_logger
~\anaconda3\lib\site-packages\datasets\load.py in <module>
53 from .iterable_dataset import IterableDataset
54 from .metric import Metric
---> 55 from .packaged_modules import (
56 _EXTENSION_TO_MODULE,
57 _MODULE_SUPPORTS_METADATA,
~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module>
4 from typing import List
5
----> 6 from .csv import csv
7 from .imagefolder import imagefolder
8 from .json import json
~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module>
13
14
---> 15 logger = datasets.utils.logging.get_logger(__name__)
16
17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"]
AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
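As a quick check — a sketch using only the standard library — you can locate which files Python would import for `datasets` and `utils` without executing them; if either points into the project directory instead of `site-packages`, a local file is shadowing the installed package and should be renamed:
```python
import importlib.util

# Locate the file Python would import for each name without executing it;
# if either origin is inside your project directory rather than
# site-packages, rename that local file.
for name in ("datasets", "utils"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "not found")
```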
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.8
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
|
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4906/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1079
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1079/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1079/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1079/events
|
https://github.com/huggingface/datasets/pull/1079
| 756,652,427
|
MDExOlB1bGxSZXF1ZXN0NTMyMTY4Nzky
| 1,079
|
nkjp-ner
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1654113?v=4",
"events_url": "https://api.github.com/users/abecadel/events{/privacy}",
"followers_url": "https://api.github.com/users/abecadel/followers",
"following_url": "https://api.github.com/users/abecadel/following{/other_user}",
"gists_url": "https://api.github.com/users/abecadel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abecadel",
"id": 1654113,
"login": "abecadel",
"node_id": "MDQ6VXNlcjE2NTQxMTM=",
"organizations_url": "https://api.github.com/users/abecadel/orgs",
"received_events_url": "https://api.github.com/users/abecadel/received_events",
"repos_url": "https://api.github.com/users/abecadel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abecadel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abecadel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abecadel"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-03T22:47:26Z
| 2020-12-04T09:42:06Z
| 2020-12-04T09:42:06Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1079",
"merged_at": "2020-12-04T09:42:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1079"
}
|
- **Name:** *nkjp-ner*
- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*
- **Data:** *https://klejbenchmark.com/tasks/*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1079/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1079/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3455
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3455/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3455/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3455/events
|
https://github.com/huggingface/datasets/issues/3455
| 1,084,599,650
|
I_kwDODunzps5Apa1i
| 3,455
|
Easier information editing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borgr",
"id": 6416600,
"login": "borgr",
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"repos_url": "https://api.github.com/users/borgr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borgr"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! I guess you are talking about the dataset cards that are in this repository on github ?\r\n\r\nI think github allows to submit a PR even for 1 line though the `Edit file` button on the page of the dataset card.\r\n\r\nMaybe let's mention this in `CONTRIBUTING.md` ?",
"We now host all the datasets on the HF Hub, where you can easily edit them through UI (for single file changes) or Git workflow (for single/multiple file changes)"
] | 2021-12-20T10:10:43Z
| 2023-07-25T15:36:14Z
| 2023-07-25T15:36:14Z
|
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
It requires a lot of effort to improve a datasheet.
**Describe the solution you'd like**
A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit that code directly from the site, without cloning, branching, makefiles, etc.).
**Describe alternatives you've considered**
The current UX requires the full 8-step contribution process even when one just wishes to change a line, fix a typo, etc.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3455/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3455/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5687
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5687/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5687/events
|
https://github.com/huggingface/datasets/issues/5687
| 1,647,009,018
|
I_kwDODunzps5iK1z6
| 5,687
|
Document compressing data files before uploading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] | null |
[
"Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`, `.json`, `.jsonl`, and `txt`, we recommend compressing them before uploading to the Hub. These file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of supported file extensions.",
"Hi @stevhliu, thanks for your suggestion.\r\n\r\nI agree it is a good opportunity to mention that audio/image file formats are also supported.\r\n\r\nNit:\r\nI would not mention .zip, .rar after \"text, audio, and image data extensions\". Those are \"compression\" extensions and not \"text, audio, and image data extensions\".\r\n\r\nWhat about something similar to:\r\n> We support many text, audio, and image data extensions such as `.csv`, `.mp3`, and `.jpg` among many others. For text data extensions like `.csv`, `.json`, `.jsonl`, and `.txt`, we recommend compressing them before uploading to the Hub (to `.zip` or `.gz` file extension for example). \r\n>\r\n> Note that text file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of tracked file extensions by default.\r\n\r\nNote that for compressions I have mentioned:\r\n- gz, to compress individual files\r\n- zip, to compress and archive multiple files; zip is preferred rather than tar because it supports streaming out of the box",
"Perfect, thanks for making the distinction between compression and data extensions!"
] | 2023-03-30T06:41:07Z
| 2023-04-19T07:25:59Z
| 2023-04-19T07:25:59Z
|
MEMBER
| null | null | null |
In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are too large, Git will fail to commit/upload them.
I think for those file extensions (.csv, .json, .jsonl, .txt), we should instead recommend **compressing** the data files (using ZIP, for example) before uploading them to the Hub, as in the sketch below.
- Compressed files are tracked by Git LFS in our default `.gitattributes` file
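For instance, a minimal sketch of compressing a CSV file before upload (file names are illustrative; `.gz` and `.zip` are both tracked by Git LFS by default):
```python
import gzip
import shutil

# Compress data.csv into data.csv.gz so that Git LFS tracks it by default.
with open("data.csv", "rb") as src, gzip.open("data.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```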
What do you think?
CC: @stevhliu
See related issue:
- https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5687/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3162
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3162/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3162/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3162/events
|
https://github.com/huggingface/datasets/issues/3162
| 1,035,462,136
|
I_kwDODunzps49t-X4
| 3,162
|
`datasets-cli test` should work with datasets without scripts
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"> It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not).\r\n> \r\n> I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!\r\n\r\nwhy don't you try to share that info with people, so you can also save some days.",
"Hi ! You can run the command if you download the repository\r\n```\r\ngit clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest\r\n```\r\nand run the command\r\n```\r\ndatasets-cli test DataMeasurementsTest/DataMeasurementsTest.py\r\n```\r\n\r\n(though on my side it doesn't manage to download the data since the dataset is private ^^)",
"> Hi ! You can run the command if you download the repository\r\n> \r\n> ```\r\n> git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest\r\n> ```\r\n> \r\n> and run the command\r\n> \r\n> ```\r\n> datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py\r\n> ```\r\n> \r\n> (though on my side it doesn't manage to download the data since the dataset is private ^^)\r\n\r\nHi! Thanks for the info. \r\ngit cannot find the repository. Do you know if they have depreciated these tests and created a new one?",
"I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`",
"> I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`\r\n\r\nyour example repo and this page `https://huggingface.co/docs/datasets/add_dataset.html` helped me to solve.. thanks a lot"
] | 2021-10-25T18:52:30Z
| 2021-11-25T16:04:29Z
| null |
NONE
| null | null | null |
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not).
I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3162/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3162/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/233
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/233/comments
|
https://api.github.com/repos/huggingface/datasets/issues/233/events
|
https://github.com/huggingface/datasets/issues/233
| 630,432,132
|
MDU6SXNzdWU2MzA0MzIxMzI=
| 233
|
Fail to download C4 English corpus
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4",
"events_url": "https://api.github.com/users/donggyukimc/events{/privacy}",
"followers_url": "https://api.github.com/users/donggyukimc/followers",
"following_url": "https://api.github.com/users/donggyukimc/following{/other_user}",
"gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donggyukimc",
"id": 16605764,
"login": "donggyukimc",
"node_id": "MDQ6VXNlcjE2NjA1NzY0",
"organizations_url": "https://api.github.com/users/donggyukimc/orgs",
"received_events_url": "https://api.github.com/users/donggyukimc/received_events",
"repos_url": "https://api.github.com/users/donggyukimc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donggyukimc"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You can find more info on beam datasets [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md).\r\n\r\nOur goal in the future is to make available an already-processed version of C4 (as we do for wikipedia for example) so that users without apache beam runtimes can load it.",
"@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 
)\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?",
"I have the same problem as @prashant-kikani",
"Looks like a bug in the dataset script, can you open an issue ?",
"I see the same issue as @prashant-kikani. I'm using `datasets` version 1.2.0 to download C4."
] | 2020-06-04T01:06:38Z
| 2021-01-08T07:17:32Z
| 2020-06-08T09:16:59Z
|
NONE
| null | null | null |
I ran the following code to download the C4 English corpus:
```
import nlp

dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner',
                           data_dir='/mypath')
```
and I met the following failure:
```
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/.cache/huggingface/datasets/c4/en/2.3.0...
Traceback (most recent call last):
File "download_corpus.py", line 38, in <module>
, data_dir='/home/adam/data/corpus/en/c4')
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 420, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 816, in _download_and_prepare
dl_manager, verify_infos=False, pipeline=pipeline,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 457, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/datasets/c4/f545de9f63300d8d02a6795e2eb34e140c47e62a803f572ac5599e170ee66ecc/c4.py", line 175, in _split_generators
dl_manager.download_checksums(_CHECKSUMS_URL)
AttributeError: 'DownloadManager' object has no attribute 'download_checksums'
```
Can I get any advice?
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/233/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6311
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6311/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6311/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6311/events
|
https://github.com/huggingface/datasets/issues/6311
| 1,949,304,993
|
I_kwDODunzps50MAih
| 6,311
|
cast_column to Sequence with length=4 raises an exception in datasets/table.py:2146
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neiblegy",
"id": 16574677,
"login": "neiblegy",
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neiblegy"
}
|
[] |
open
| false
| null |
[] | null |
[
"Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in https://github.com/huggingface/datasets/pull/6283 (should be part of the next release).",
"> Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in #6283 (should be part of the next release).\r\n\r\ni encounter another exception while cast_column to type `Sequence(feature={\"points\": Array2D(shape=(-1, 2), dtype=\"int64\"), \"label\": ClassLabel(num_classes=num_classes, names=names)})`\r\n\r\nwhile my data like this: '{\"points\": [[0.6,0.6], [0.7,0.7], [0.8,0.8]], \"label\": \"A1\"}'\r\n\r\nhere is the backtrace info:\r\n\r\n```\r\n out = func(dataset, *args, **kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2110, in cast_column\r\n return self.cast(features)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 2055, in cast\r\n dataset = dataset.map(\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 592, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 557, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3097, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3474, in _map_single\r\n batch = apply_function_on_filtered_inputs(\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3353, in apply_function_on_filtered_inputs\r\n processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2329, in table_cast\r\n return cast_table_to_schema(table, schema)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2288, in cast_table_to_schema\r\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2288, in <listcomp>\r\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 1831, in wrapper\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 1831, in <listcomp>\r\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2073, in cast_array_to_feature\r\n arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2073, in <listcomp>\r\n arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 1833, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2095, in cast_array_to_feature\r\n casted_values = _c(array.values, feature.feature)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 
1833, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 2144, in cast_array_to_feature\r\n return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 1833, in wrapper\r\n return func(array, *args, **kwargs)\r\n File \"/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py\", line 1967, in array_cast\r\n return pa_type.wrap_array(array)\r\n File \"pyarrow/types.pxi\", line 1369, in pyarrow.lib.BaseExtensionType.wrap_array\r\nTypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: double>>, got list<item: double>\r\n```\r\nand i print(array) in datasets/table.py:1967 indeed get 2D list. is that same issue in #6283 ?\r\n\r\nbesides this, hugging face datasets seems don't naturally support multi-labels which means `Sequence(ClassLabel)` illegal if data is [\"label1\", \"label2\"]. so i have to define a class derived from `ClassLabel`, like this:\r\n\r\n```\r\nclass AisClassLabels(ClassLabel):\r\n def encode_example(self, example_data):\r\n if self.num_classes is None:\r\n raise ValueError(\r\n \"Trying to use ClassLabel feature with undefined number of class. \"\r\n \"Please set ClassLabel.names or num_classes.\"\r\n )\r\n if not isinstance(example_data, list):\r\n example_data = [example_data]\r\n\r\n for i in range(len(example_data)):\r\n if isinstance(example_data[i], str):\r\n example_data[i] = self.str2int(example_data[i])\r\n if not -1 <= example_data[i] < self.num_classes:\r\n raise ValueError(f\"Class label {example_data:d} greater than configured num_classes {self.num_classes}\")\r\n return example_data\r\n```\r\nand it works well in my case. but is there any recommend way to implement multi-labels?",
"`Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: double>>, got list<item: double>`\r\nif i change `Array2D(shape=(-1, 2), dtype=\"int64\")` to `Sequence(Value(\"int64\"))` , every thing goes well. but my data is 2D int list",
"i test Sequence(ClassLabel) is ok if one column is label list. but it is not ok in nested column such as `Sequence(feature= {\"points\": Sequence(Value(\"int32\")), \"label\": Sequence(ClassLabel(num_classes....)))`. in this case i need override ClassLabels. encode_example as i given above."
] | 2023-10-18T09:38:05Z
| 2023-10-20T10:17:43Z
| null |
NONE
| null | null | null |
### Describe the bug
I load a dataset from a local CSV file which has 187,383,612 examples, then use `map` to generate new columns for testing.
Here is my code:
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value

def add_new_path(example):
    example["ais_bbox"] = [100, 100, 200, 200]
    example["ais_image_path"] = os.path.join("images", example["image_path"]) if example["image_path"] else ""
    return example

ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1749/")
hf_ds = ais_dataset.map(add_new_path, batched=False, num_proc=32)
ds = hf_ds.cast_column("ais_bbox", Sequence(Value("int32"), length=4))
```
and `cast_column` raises an exception:
```
Casting the dataset: 3%|███▉
...
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2110, in cast_column
return self.cast(features)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2055, in cast
dataset = dataset.map(
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2329, in table_cast
return cast_table_to_schema(table, schema)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2145, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
list<item: int64>
to
Sequence(feature=Value(dtype='int32', id=None), length=4, id=None)
```
I checked the source code and added some debug output in `datasets/table.py:2092`:
```
2091        if feature.length > -1:
2092            if feature.length * len(array) == len(array.values):
2093                return pa.FixedSizeListArray.from_arrays(_c(array.values, feature.feature), feature.length)
2094            print(len(array))
2095            print(len(array.values))
```
My `feature.length` is 4, but `feature.length * len(array) == len(array.values)` is false:
`print(len(array))` gives 262
`print(len(array.values))` gives 4000
Then I used `for item in array` to print each item and got 262 copies of `[100, 100, 200, 200]`,
and `for item in array.values` printed 4000 int32 values, i.e. 1000 copies of `[100, 100, 200, 200]` flattened.
I'm wondering whether `chunk.values` for each `chunk` in `array.chunks` may return the values of all chunks rather than of the single chunk? But from the PyArrow docs it seems `chunk.values` should be that chunk's values only.
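For what it's worth, a minimal PyArrow sketch (data is illustrative) shows one way the two lengths can legitimately differ: on a sliced `ListArray`, `.values` returns the full underlying child array and ignores the slice offset, while `flatten()` honors it:
```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4], [5, 6]])
sliced = arr.slice(1, 2)    # a view over the last two lists
print(len(sliced))          # 2
print(len(sliced.values))   # 6 -- .values ignores the slice offset
print(sliced.flatten())     # values [3, 4, 5, 6] -- flatten() honors the offset
```
If the chunks here are slices over a shared buffer, that could explain `len(array) == 262` vs `len(array.values) == 4000`.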
### Steps to reproduce the bug
code provided above.
### Expected behavior
`feature.length * len(array) == len(array.values)` should be true, and no exception should be raised.
### Environment info
- Python version: 3.9
- Platform: x86_64
- `datasets` version: 2.14.4
- PyArrow version: 13.0.0 or 10.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6311/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6311/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/472
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/472/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/472/comments
|
https://api.github.com/repos/huggingface/datasets/issues/472/events
|
https://github.com/huggingface/datasets/pull/472
| 672,000,745
|
MDExOlB1bGxSZXF1ZXN0NDYyMTE1MjA4
| 472
|
add crd3 dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This PR was already approved by @lhoestq in #456 . This one just make style to remove some typos"
] | 2020-08-03T11:15:02Z
| 2020-08-03T11:22:10Z
| 2020-08-03T11:22:09Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/472",
"merged_at": "2020-08-03T11:22:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/472"
}
|
Opening a new PR for the CRD3 dataset (ACL 2020) to fix the CircleCI problems.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/472/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/472/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5830
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5830/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5830/events
|
https://github.com/huggingface/datasets/pull/5830
| 1,701,451,399
|
PR_kwDODunzps5QEFEi
| 5,830
|
Debug windows #2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HyukjinKwon",
"id": 6477701,
"login": "HyukjinKwon",
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HyukjinKwon"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-05-09T06:40:34Z
| 2023-05-09T06:40:47Z
| 2023-05-09T06:40:47Z
|
NONE
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5830/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6138
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6138/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6138/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6138/events
|
https://github.com/huggingface/datasets/pull/6138
| 1,844,952,496
|
PR_kwDODunzps5XoH2V
| 6,138
|
Ignore CI lint rule violation in Pickler.memoize
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.003890 / 0.011008 (-0.007118) | 0.084044 / 0.038508 (0.045536) | 0.071893 / 0.023109 (0.048784) | 0.346926 / 0.275898 (0.071028) | 0.397487 / 0.323480 (0.074007) | 0.004065 / 0.007986 (-0.003921) | 0.003218 / 0.004328 (-0.001111) | 0.064670 / 0.004250 (0.060420) | 0.052414 / 0.037052 (0.015362) | 0.355413 / 0.258489 (0.096924) | 0.398894 / 0.293841 (0.105053) | 0.030763 / 0.128546 (-0.097783) | 0.008590 / 0.075646 (-0.067056) | 0.286857 / 0.419271 (-0.132415) | 0.051126 / 0.043533 (0.007593) | 0.346125 / 0.255139 (0.090986) | 0.395673 / 0.283200 (0.112474) | 0.025766 / 0.141683 (-0.115917) | 1.466238 / 1.452155 (0.014084) | 1.543117 / 1.492716 (0.050400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213210 / 0.018006 (0.195204) | 0.451981 / 0.000490 (0.451491) | 0.003784 / 0.000200 (0.003585) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027756 / 0.037411 (-0.009655) | 0.082446 / 0.014526 (0.067920) | 0.095414 / 0.176557 (-0.081142) | 0.151812 / 0.737135 (-0.585323) | 0.096296 / 0.296338 (-0.200042) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383729 / 0.215209 (0.168520) | 3.835126 / 2.077655 (1.757471) | 1.891972 / 1.504120 (0.387852) | 1.719934 / 1.541195 (0.178739) | 1.899980 / 1.468490 
(0.431490) | 0.488741 / 4.584777 (-4.096036) | 3.634120 / 3.745712 (-0.111592) | 3.243314 / 5.269862 (-2.026547) | 2.028382 / 4.565676 (-2.537294) | 0.057355 / 0.424275 (-0.366920) | 0.007717 / 0.007607 (0.000110) | 0.459835 / 0.226044 (0.233790) | 4.591793 / 2.268929 (2.322864) | 2.346861 / 55.444624 (-53.097764) | 2.067357 / 6.876477 (-4.809120) | 2.254954 / 2.142072 (0.112882) | 0.587016 / 4.805227 (-4.218211) | 0.133918 / 6.500664 (-6.366746) | 0.060311 / 0.075469 (-0.015158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250016 / 1.841788 (-0.591772) | 19.674333 / 8.074308 (11.600025) | 14.522764 / 10.191392 (4.331372) | 0.145741 / 0.680424 (-0.534683) | 0.018593 / 0.534201 (-0.515608) | 0.392833 / 0.579283 (-0.186450) | 0.408194 / 0.434364 (-0.026170) | 0.455164 / 0.540337 (-0.085174) | 0.622722 / 1.386936 (-0.764214) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006583 / 0.011353 (-0.004770) | 0.004008 / 0.011008 (-0.007000) | 0.064688 / 0.038508 (0.026180) | 0.074969 / 0.023109 (0.051860) | 0.360504 / 0.275898 (0.084606) | 0.396926 / 0.323480 (0.073446) | 0.005190 / 0.007986 (-0.002796) | 0.003363 / 0.004328 (-0.000966) | 0.064372 / 0.004250 (0.060122) | 0.054428 / 0.037052 (0.017376) | 0.361204 / 0.258489 (0.102715) | 0.400917 / 0.293841 (0.107077) | 0.031117 / 0.128546 (-0.097429) | 0.008406 / 0.075646 (-0.067241) | 0.069655 / 0.419271 (-0.349617) | 0.048582 / 0.043533 (0.005049) | 0.365396 / 0.255139 (0.110257) | 0.381344 / 0.283200 (0.098145) | 0.023809 / 0.141683 (-0.117874) | 1.472926 / 1.452155 (0.020772) | 1.547298 / 1.492716 (0.054582) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276912 / 0.018006 (0.258906) | 0.449096 / 0.000490 (0.448607) | 0.018921 / 0.000200 (0.018721) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030237 / 0.037411 (-0.007174) | 0.088610 / 0.014526 (0.074084) | 0.101529 / 0.176557 (-0.075027) | 0.154070 / 0.737135 (-0.583065) | 0.103471 / 0.296338 (-0.192867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416047 / 0.215209 (0.200838) | 4.152374 / 2.077655 (2.074719) | 2.111181 / 1.504120 (0.607061) | 1.943582 / 1.541195 (0.402387) | 2.031729 / 1.468490 (0.563239) | 0.486740 / 4.584777 (-4.098037) | 3.631547 / 3.745712 (-0.114165) | 3.251202 / 5.269862 (-2.018660) | 2.041272 / 4.565676 (-2.524405) | 0.057287 / 0.424275 (-0.366988) | 0.007303 / 0.007607 (-0.000304) | 0.491027 / 0.226044 (0.264982) | 4.906757 / 2.268929 (2.637829) | 2.581694 / 55.444624 (-52.862931) | 2.250996 / 6.876477 (-4.625481) | 2.441771 / 2.142072 (0.299698) | 0.600714 / 4.805227 (-4.204514) | 0.133233 / 6.500664 (-6.367431) | 0.060856 / 0.075469 (-0.014613) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340062 / 1.841788 (-0.501725) | 19.973899 / 8.074308 (11.899591) | 14.347381 / 10.191392 (4.155989) | 0.166651 / 0.680424 (-0.513773) | 0.018691 / 0.534201 (-0.515510) | 0.393580 / 0.579283 (-0.185703) | 0.409425 / 0.434364 (-0.024939) | 0.474409 / 0.540337 (-0.065929) | 0.649423 / 1.386936 (-0.737514) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006593 / 0.011353 (-0.004760) | 0.004123 / 0.011008 (-0.006885) | 0.084424 / 0.038508 (0.045916) | 0.076867 / 0.023109 (0.053758) | 0.309149 / 0.275898 (0.033251) | 0.348572 / 0.323480 (0.025092) | 0.005463 / 0.007986 (-0.002523) | 0.003440 / 0.004328 (-0.000889) | 0.064604 / 0.004250 (0.060353) | 0.053920 / 0.037052 (0.016868) | 0.345221 / 0.258489 (0.086732) | 0.363209 / 0.293841 (0.069368) | 0.031209 / 0.128546 (-0.097337) | 0.008690 / 0.075646 (-0.066956) | 0.288851 / 0.419271 (-0.130421) | 0.052239 / 0.043533 (0.008707) | 0.308643 / 0.255139 (0.053504) | 0.346407 / 0.283200 (0.063207) | 0.023935 / 0.141683 (-0.117748) | 1.469207 / 1.452155 (0.017052) | 1.532855 / 1.492716 (0.040138) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290885 / 0.018006 (0.272879) | 0.580561 / 0.000490 (0.580071) | 0.004698 / 0.000200 (0.004498) | 0.000286 / 0.000054 (0.000231) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028015 / 0.037411 (-0.009396) | 0.081172 / 0.014526 (0.066646) | 0.096822 / 0.176557 (-0.079735) | 0.151355 / 0.737135 (-0.585781) | 0.098017 / 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384069 / 0.215209 (0.168859) | 3.828635 / 2.077655 (1.750980) | 1.829311 / 1.504120 (0.325192) | 1.672520 / 1.541195 (0.131325) | 1.743944 / 1.468490 
(0.275453) | 0.481594 / 4.584777 (-4.103183) | 3.556204 / 3.745712 (-0.189509) | 3.279499 / 5.269862 (-1.990363) | 2.033243 / 4.565676 (-2.532434) | 0.056525 / 0.424275 (-0.367750) | 0.007717 / 0.007607 (0.000109) | 0.466815 / 0.226044 (0.240771) | 4.657022 / 2.268929 (2.388094) | 2.438600 / 55.444624 (-53.006024) | 2.097999 / 6.876477 (-4.778478) | 2.263122 / 2.142072 (0.121049) | 0.636001 / 4.805227 (-4.169226) | 0.147727 / 6.500664 (-6.352937) | 0.059293 / 0.075469 (-0.016176) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243111 / 1.841788 (-0.598677) | 19.558379 / 8.074308 (11.484071) | 14.141017 / 10.191392 (3.949625) | 0.169840 / 0.680424 (-0.510583) | 0.017912 / 0.534201 (-0.516289) | 0.391325 / 0.579283 (-0.187958) | 0.417169 / 0.434364 (-0.017195) | 0.457129 / 0.540337 (-0.083209) | 0.629907 / 1.386936 (-0.757029) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006687 / 0.011353 (-0.004666) | 0.004165 / 0.011008 (-0.006844) | 0.064738 / 0.038508 (0.026230) | 0.077286 / 0.023109 (0.054177) | 0.364236 / 0.275898 (0.088338) | 0.393228 / 0.323480 (0.069748) | 0.005451 / 0.007986 (-0.002535) | 0.003547 / 0.004328 (-0.000781) | 0.065761 / 0.004250 (0.061510) | 0.056526 / 0.037052 (0.019474) | 0.365523 / 0.258489 (0.107034) | 0.403331 / 0.293841 (0.109490) | 0.030900 / 0.128546 (-0.097646) | 0.008757 / 0.075646 (-0.066889) | 0.070961 / 0.419271 (-0.348311) | 0.048394 / 0.043533 (0.004861) | 0.365908 / 0.255139 (0.110769) | 0.381197 / 0.283200 (0.097998) | 0.022940 / 0.141683 (-0.118743) | 1.487909 / 1.452155 (0.035754) | 1.532931 / 1.492716 (0.040215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317506 / 0.018006 (0.299500) | 0.513391 / 0.000490 (0.512902) | 0.005464 / 0.000200 (0.005264) | 0.000214 / 0.000054 (0.000159) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032289 / 0.037411 (-0.005122) | 0.090157 / 0.014526 (0.075631) | 0.103514 / 0.176557 (-0.073043) | 0.158236 / 0.737135 (-0.578899) | 0.106554 / 0.296338 (-0.189784) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406455 / 0.215209 (0.191246) | 4.061563 / 2.077655 (1.983908) | 2.082201 / 1.504120 (0.578081) | 1.914433 / 1.541195 (0.373238) | 2.039342 / 1.468490 (0.570852) | 0.478444 / 4.584777 (-4.106333) | 3.599755 / 3.745712 (-0.145957) | 3.294453 / 5.269862 (-1.975409) | 2.028519 / 4.565676 (-2.537158) | 0.056118 / 0.424275 (-0.368157) | 0.007325 / 0.007607 (-0.000282) | 0.493177 / 0.226044 (0.267132) | 4.926218 / 2.268929 (2.657289) | 2.605033 / 55.444624 (-52.839591) | 2.239933 / 6.876477 (-4.636544) | 2.454210 / 2.142072 (0.312137) | 0.571905 / 4.805227 (-4.233322) | 0.133251 / 6.500664 (-6.367413) | 0.062422 / 0.075469 (-0.013047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352752 / 1.841788 (-0.489036) | 20.265109 / 8.074308 (12.190801) | 14.293064 / 10.191392 (4.101672) | 0.169267 / 0.680424 (-0.511157) | 0.018607 / 0.534201 (-0.515594) | 0.393655 / 0.579283 (-0.185628) | 0.402132 / 0.434364 (-0.032232) | 0.477566 / 0.540337 (-0.062772) | 0.651773 / 1.386936 (-0.735163) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-10T11:03:15Z
| 2023-08-10T11:31:45Z
| 2023-08-10T11:22:56Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6138.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6138",
"merged_at": "2023-08-10T11:22:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6138.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6138"
}
|
This PR ignores the violation of the lint rule E721 in `Pickler.memoize`.
The lint rule violation was introduced in this PR:
- #3182
@lhoestq is there a reason you did not use `isinstance` instead?
As a hotfix, we just ignore the violation of the lint rule.
Fix #6136.
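For readers unfamiliar with the rule, here is a minimal illustrative sketch of what E721 flags and how `isinstance` differs; this code is not from the PR:
```
# Illustrative sketch only -- not code from this PR.
obj = {}

# Rule E721 flags exact type comparisons done with `==`;
# the hotfix keeps such a line and silences the warning inline:
if type(obj) == dict:  # noqa: E721
    print("exact type match")

# The alternative raised in the question above. Note it also matches
# subclasses of dict, so it is not always a drop-in replacement:
if isinstance(obj, dict):
    print("isinstance match (includes dict subclasses)")
```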
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6138/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6138/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3373
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3373/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3373/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3373/events
|
https://github.com/huggingface/datasets/issues/3373
| 1,070,406,391
|
I_kwDODunzps4_zRr3
| 3,373
|
Support streaming zipped CSV dataset repo by passing only repo name
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2021-12-03T09:48:24Z
| 2021-12-16T18:03:31Z
| 2021-12-16T18:03:31Z
|
MEMBER
| null | null | null |
Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`:
```
ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab"
ds = load_dataset(ds_name, split="train", streaming=True, use_auth_token=True)
item = next(iter(ds))
```
Currently, it raises a `FileNotFoundError` because the internally built URL lacks a glob pattern (there is no "\*" after "zip://", i.e. "zip://\*"):
```
'zip://::https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip'
```
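For reference, a minimal workaround sketch under the assumption that the archive contains CSV files: passing the zip URL explicitly as `data_files` to the packaged `csv` builder sidesteps the missing-glob URL. This non-streaming fallback is illustrative only and is not the streaming behavior the issue requests:
```
# Hedged workaround sketch -- assumes the zip holds CSV files and that you
# have access to the (gated) repo; this is an explicit data_files fallback,
# not the requested streaming-by-repo-name behavior.
from datasets import load_dataset

url = (
    "https://huggingface.co/datasets/bigscience-catalogue-data/"
    "vietnamese_poetry_from_fsoft_ai_lab/resolve/"
    "e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip"
)
ds = load_dataset("csv", data_files={"train": url}, split="train", use_auth_token=True)
print(next(iter(ds)))
```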
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3373/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3373/timeline
| null |
completed
| false
|