| url (stringlengths 58–61) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 72–75) | comments_url (stringlengths 67–70) | events_url (stringlengths 65–68) | html_url (stringlengths 46–51) | id (int64, 600M–2.05B) | node_id (stringlengths 18–32) | number (int64, 2–6.51k) | title (stringlengths 1–290) | user (dict) | labels (listlengths 0–4) | state (stringclasses, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (listlengths 0–4) | milestone (dict) | comments (listlengths 0–30) | created_at (timestamp[ns, tz=UTC]) | updated_at (timestamp[ns, tz=UTC]) | closed_at (timestamp[ns, tz=UTC]) | author_association (stringclasses, 3 values) | active_lock_reason (float64) | draft (float64, 0–1, nullable) | pull_request (dict) | body (stringlengths 0–228k, nullable) | reactions (dict) | timeline_url (stringlengths 67–70) | performed_via_github_app (float64) | state_reason (stringclasses, 3 values) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1088
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1088/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1088/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1088/events
|
https://github.com/huggingface/datasets/pull/1088
| 756,822,017
|
MDExOlB1bGxSZXF1ZXN0NTMyMzAyNjIz
| 1,088
|
add xquad_r dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manandey",
"id": 6687858,
"login": "manandey",
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"repos_url": "https://api.github.com/users/manandey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manandey"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-04T05:45:55Z
| 2020-12-04T10:58:13Z
| 2020-12-04T10:47:01Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1088.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1088",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1088.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1088"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1088/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1088/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/4814
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4814/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4814/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4814/events
|
https://github.com/huggingface/datasets/issues/4814
| 1,333,356,230
|
I_kwDODunzps5PeWbG
| 4,814
|
Support CSV as metadata file format in AudioFolder/ImageFolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[] | 2022-08-09T14:36:49Z
| 2022-08-31T11:59:08Z
| 2022-08-31T11:59:08Z
|
CONTRIBUTOR
| null | null | null |
Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets.
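A minimal sketch of the requested CSV layout, assuming it mirrors the existing JSON Lines metadata format (a required `file_name` column plus arbitrary metadata columns; the folder and column names below are illustrative):
```python
# Illustrative folder layout (names are hypothetical):
#
#   my_dataset/train/metadata.csv:
#       file_name,caption
#       0001.png,a photo of a cat
#       0002.png,a photo of a dog
#   my_dataset/train/0001.png
#   my_dataset/train/0002.png
from datasets import load_dataset

# ImageFolder would join metadata.csv to the images on the `file_name` column,
# just as it already does for metadata.jsonl.
dataset = load_dataset("imagefolder", data_dir="my_dataset")
print(dataset["train"][0])  # {'image': <PIL.Image ...>, 'caption': 'a photo of a cat'}
```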
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4814/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4814/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6456
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6456/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6456/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6456/events
|
https://github.com/huggingface/datasets/pull/6456
| 2,015,186,090
|
PR_kwDODunzps5gmDJY
| 6,456
|
Don't require trust_remote_code in inspect_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005705 / 0.011353 (-0.005648) | 0.003536 / 0.011008 (-0.007473) | 0.062852 / 0.038508 (0.024343) | 0.053902 / 0.023109 (0.030793) | 0.239465 / 0.275898 (-0.036433) | 0.270829 / 0.323480 (-0.052651) | 0.004052 / 0.007986 (-0.003934) | 0.002775 / 0.004328 (-0.001554) | 0.048475 / 0.004250 (0.044225) | 0.039430 / 0.037052 (0.002377) | 0.244318 / 0.258489 (-0.014171) | 0.277539 / 0.293841 (-0.016302) | 0.027637 / 0.128546 (-0.100909) | 0.010875 / 0.075646 (-0.064771) | 0.208839 / 0.419271 (-0.210432) | 0.036984 / 0.043533 (-0.006549) | 0.246355 / 0.255139 (-0.008784) | 0.271200 / 0.283200 (-0.011999) | 0.020636 / 0.141683 (-0.121047) | 1.078472 / 1.452155 (-0.373683) | 1.155701 / 1.492716 (-0.337015) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100971 / 0.018006 (0.082965) | 0.310996 / 0.000490 (0.310507) | 0.000218 / 0.000200 (0.000018) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019300 / 0.037411 (-0.018111) | 0.060625 / 0.014526 (0.046099) | 0.073778 / 0.176557 (-0.102778) | 0.120280 / 0.737135 (-0.616855) | 0.075288 / 0.296338 (-0.221051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289838 / 0.215209 (0.074629) | 2.859492 / 2.077655 (0.781837) | 1.528478 / 1.504120 (0.024358) | 1.417911 / 1.541195 (-0.123283) | 1.444227 / 
1.468490 (-0.024263) | 0.566799 / 4.584777 (-4.017978) | 2.402526 / 3.745712 (-1.343186) | 2.805241 / 5.269862 (-2.464620) | 1.798572 / 4.565676 (-2.767104) | 0.062920 / 0.424275 (-0.361355) | 0.004995 / 0.007607 (-0.002612) | 0.340688 / 0.226044 (0.114644) | 3.347967 / 2.268929 (1.079039) | 1.898464 / 55.444624 (-53.546160) | 1.604784 / 6.876477 (-5.271693) | 1.648864 / 2.142072 (-0.493209) | 0.642242 / 4.805227 (-4.162985) | 0.117567 / 6.500664 (-6.383097) | 0.041911 / 0.075469 (-0.033558) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949099 / 1.841788 (-0.892689) | 12.367323 / 8.074308 (4.293015) | 10.694238 / 10.191392 (0.502846) | 0.143424 / 0.680424 (-0.537000) | 0.014569 / 0.534201 (-0.519632) | 0.289127 / 0.579283 (-0.290156) | 0.270490 / 0.434364 (-0.163874) | 0.326470 / 0.540337 (-0.213867) | 0.432223 / 1.386936 (-0.954713) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005380 / 0.011353 (-0.005973) | 0.003582 / 0.011008 (-0.007426) | 0.049341 / 0.038508 (0.010833) | 0.053274 / 0.023109 (0.030165) | 0.284319 / 0.275898 (0.008421) | 0.334248 / 0.323480 (0.010768) | 0.004032 / 0.007986 (-0.003953) | 0.002682 / 0.004328 (-0.001646) | 0.048317 / 0.004250 (0.044067) | 0.040157 / 0.037052 (0.003105) | 0.284594 / 0.258489 (0.026105) | 0.341567 / 0.293841 (0.047726) | 0.029639 / 0.128546 (-0.098908) | 0.010780 / 0.075646 (-0.064867) | 0.057990 / 0.419271 (-0.361282) | 0.032730 / 0.043533 (-0.010803) | 0.290328 / 0.255139 (0.035189) | 0.298563 / 0.283200 (0.015363) | 0.018546 / 0.141683 (-0.123137) | 1.143157 / 1.452155 (-0.308998) | 1.191391 / 1.492716 (-0.301326) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093802 / 0.018006 (0.075796) | 0.312771 / 0.000490 (0.312282) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021867 / 0.037411 (-0.015544) | 0.069064 / 0.014526 (0.054538) | 0.082270 / 0.176557 (-0.094287) | 0.120222 / 0.737135 (-0.616913) | 0.084628 / 0.296338 (-0.211710) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295505 / 0.215209 (0.080296) | 2.891105 / 2.077655 (0.813450) | 1.619480 / 1.504120 (0.115360) | 1.498290 / 1.541195 (-0.042905) | 1.547896 / 1.468490 (0.079406) | 0.575188 / 4.584777 (-4.009589) | 2.434426 / 3.745712 (-1.311286) | 2.899286 / 5.269862 (-2.370576) | 1.806085 / 4.565676 (-2.759591) | 0.063660 / 0.424275 (-0.360616) | 0.004933 / 0.007607 (-0.002674) | 0.348274 / 0.226044 (0.122229) | 3.447900 / 2.268929 (1.178971) | 1.956237 / 55.444624 (-53.488387) | 1.680416 / 6.876477 (-5.196061) | 1.732307 / 2.142072 (-0.409766) | 0.668428 / 4.805227 (-4.136799) | 0.119161 / 6.500664 (-6.381503) | 0.041694 / 0.075469 (-0.033775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973730 / 1.841788 (-0.868058) | 12.082452 / 8.074308 (4.008144) | 10.624836 / 10.191392 (0.433444) | 0.144027 / 0.680424 (-0.536397) | 0.014830 / 0.534201 (-0.519370) | 0.289946 / 0.579283 (-0.289337) | 0.281939 / 0.434364 (-0.152424) | 0.325639 / 0.540337 (-0.214699) | 0.551690 / 1.386936 (-0.835246) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005279 / 0.011353 (-0.006074) | 0.003506 / 0.011008 (-0.007502) | 0.062579 / 0.038508 (0.024071) | 0.052809 / 0.023109 (0.029700) | 0.274693 / 0.275898 (-0.001205) | 0.283917 / 0.323480 (-0.039563) | 0.003950 / 0.007986 (-0.004036) | 0.002772 / 0.004328 (-0.001557) | 0.048127 / 0.004250 (0.043877) | 0.037771 / 0.037052 (0.000719) | 0.280595 / 0.258489 (0.022106) | 0.292310 / 0.293841 (-0.001531) | 0.027890 / 0.128546 (-0.100656) | 0.010771 / 0.075646 (-0.064875) | 0.207285 / 0.419271 (-0.211987) | 0.036179 / 0.043533 (-0.007354) | 0.253617 / 0.255139 (-0.001522) | 0.276107 / 0.283200 (-0.007093) | 0.018253 / 0.141683 (-0.123430) | 1.112219 / 1.452155 (-0.339936) | 1.166756 / 1.492716 (-0.325960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095159 / 0.018006 (0.077152) | 0.306097 / 0.000490 (0.305608) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019056 / 0.037411 (-0.018355) | 0.060445 / 0.014526 (0.045919) | 0.073553 / 0.176557 (-0.103004) | 0.120306 / 0.737135 (-0.616829) | 0.075613 / 0.296338 (-0.220725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277839 / 0.215209 (0.062630) | 2.761037 / 2.077655 (0.683382) | 1.508524 / 1.504120 (0.004404) | 1.368994 / 1.541195 (-0.172201) | 1.415961 / 
1.468490 (-0.052529) | 0.570490 / 4.584777 (-4.014287) | 2.356355 / 3.745712 (-1.389357) | 2.806626 / 5.269862 (-2.463235) | 1.757849 / 4.565676 (-2.807827) | 0.063504 / 0.424275 (-0.360771) | 0.005021 / 0.007607 (-0.002586) | 0.338880 / 0.226044 (0.112836) | 3.290947 / 2.268929 (1.022018) | 1.818238 / 55.444624 (-53.626386) | 1.529970 / 6.876477 (-5.346507) | 1.557085 / 2.142072 (-0.584987) | 0.645352 / 4.805227 (-4.159876) | 0.123066 / 6.500664 (-6.377598) | 0.043387 / 0.075469 (-0.032082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974512 / 1.841788 (-0.867276) | 11.976411 / 8.074308 (3.902103) | 10.361084 / 10.191392 (0.169692) | 0.127171 / 0.680424 (-0.553253) | 0.014091 / 0.534201 (-0.520110) | 0.288608 / 0.579283 (-0.290675) | 0.261886 / 0.434364 (-0.172478) | 0.331632 / 0.540337 (-0.208705) | 0.437002 / 1.386936 (-0.949934) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005129 / 0.011353 (-0.006224) | 0.003490 / 0.011008 (-0.007518) | 0.049005 / 0.038508 (0.010497) | 0.054077 / 0.023109 (0.030968) | 0.276653 / 0.275898 (0.000755) | 0.298752 / 0.323480 (-0.024728) | 0.003979 / 0.007986 (-0.004007) | 0.002625 / 0.004328 (-0.001703) | 0.047951 / 0.004250 (0.043701) | 0.040969 / 0.037052 (0.003916) | 0.279879 / 0.258489 (0.021390) | 0.306244 / 0.293841 (0.012403) | 0.029025 / 0.128546 (-0.099522) | 0.010450 / 0.075646 (-0.065197) | 0.056846 / 0.419271 (-0.362426) | 0.033476 / 0.043533 (-0.010057) | 0.273340 / 0.255139 (0.018201) | 0.294783 / 0.283200 (0.011584) | 0.019105 / 0.141683 (-0.122578) | 1.126389 / 1.452155 (-0.325766) | 1.183369 / 1.492716 (-0.309348) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094995 / 0.018006 (0.076989) | 0.306984 / 0.000490 (0.306495) | 0.000224 / 0.000200 (0.000024) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021880 / 0.037411 (-0.015532) | 0.069674 / 0.014526 (0.055148) | 0.082191 / 0.176557 (-0.094366) | 0.120956 / 0.737135 (-0.616179) | 0.083843 / 0.296338 (-0.212495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295139 / 0.215209 (0.079929) | 2.860520 / 2.077655 (0.782865) | 1.578892 / 1.504120 (0.074772) | 1.451003 / 1.541195 (-0.090192) | 1.483099 / 1.468490 (0.014609) | 0.550491 / 4.584777 (-4.034286) | 2.430352 / 3.745712 (-1.315360) | 2.874468 / 5.269862 (-2.395393) | 1.741474 / 4.565676 (-2.824202) | 0.062563 / 0.424275 (-0.361712) | 0.004962 / 0.007607 (-0.002645) | 0.343747 / 0.226044 (0.117703) | 3.419046 / 2.268929 (1.150118) | 1.943774 / 55.444624 (-53.500851) | 1.650989 / 6.876477 (-5.225488) | 1.704083 / 2.142072 (-0.437990) | 0.645447 / 4.805227 (-4.159780) | 0.125105 / 6.500664 (-6.375559) | 0.041319 / 0.075469 (-0.034150) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959708 / 1.841788 (-0.882079) | 12.235906 / 8.074308 (4.161598) | 10.575402 / 10.191392 (0.384010) | 0.143619 / 0.680424 (-0.536805) | 0.015517 / 0.534201 (-0.518684) | 0.285231 / 0.579283 (-0.294052) | 0.281549 / 0.434364 (-0.152815) | 0.326649 / 0.540337 (-0.213689) | 0.565706 / 1.386936 (-0.821230) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-28T19:47:07Z
| 2023-11-30T10:40:23Z
| 2023-11-30T10:34:12Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6456",
"merged_at": "2023-11-30T10:34:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6456"
}
|
Don't require `trust_remote_code` in the (deprecated) `inspect_dataset` (it defeats its purpose).
(Not super important, but we might as well keep it until the next major release.)
This is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448
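As a usage note, a minimal sketch of the call this change unblocks: `inspect_dataset` only copies a dataset's loading script to a local path so it can be read before being trusted, which is why requiring `trust_remote_code` there was circular (the dataset name and path below are illustrative):
```python
from datasets import inspect_dataset

# Copies the dataset's loading script (e.g. squad.py) into ./inspected so it
# can be reviewed without being executed.
inspect_dataset("squad", local_path="./inspected")
```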
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6456/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6456/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1572/events
|
https://github.com/huggingface/datasets/pull/1572
| 767,008,470
|
MDExOlB1bGxSZXF1ZXN0NTM5ODU5OTgx
| 1,572
|
add Gnad10 dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-14T23:15:02Z
| 2021-09-17T16:54:37Z
| 2020-12-16T16:52:30Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1572",
"merged_at": "2020-12-16T16:52:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1572"
}
|
Reference: [PR #1317](https://github.com/huggingface/datasets/pull/1317)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1572/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1572/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/186
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/186/comments
|
https://api.github.com/repos/huggingface/datasets/issues/186/events
|
https://github.com/huggingface/datasets/issues/186
| 623,595,180
|
MDU6SXNzdWU2MjM1OTUxODA=
| 186
|
Weird-ish: Not creating unique caches for different phases
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zphang",
"id": 1668462,
"login": "zphang",
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"repos_url": "https://api.github.com/users/zphang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zphang"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon",
"Good catch, it looks fixed.\r\n"
] | 2020-05-23T06:40:58Z
| 2020-05-23T20:22:18Z
| 2020-05-23T20:22:17Z
|
NONE
| null | null | null |
Sample code:
```python
import nlp
dataset = nlp.load_dataset('boolq')
def func1(x):
return x
def func2(x):
return None
train_output = dataset["train"].map(func1)
valid_output = dataset["validation"].map(func1)
print()
print(len(train_output), len(valid_output))
# Output: 9427 9427
```
The `map` method in both cases seems to be pointing to the same cache, so the latter call, based on the validation data, returns the processed train data from the cache.
What's weird is that the following doesn't seem to be an issue:
```python
train_output = dataset["train"].map(func2)
valid_output = dataset["validation"].map(func2)
print()
print(len(train_output), len(valid_output))
# 9427 3270
```
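A minimal workaround sketch, assuming the current `datasets` API (not the upstream fix, which per the replies below landed on master): the `cache_file_name` parameter of `map` pins each call to its own cache file, so the two splits can never collide.
```python
from datasets import load_dataset

def func1(x):
    return x

dataset = load_dataset("boolq")
# Give each split's map() call its own cache file explicitly.
train_output = dataset["train"].map(func1, cache_file_name="cache-train.arrow")
valid_output = dataset["validation"].map(func1, cache_file_name="cache-valid.arrow")
print(len(train_output), len(valid_output))  # 9427 3270
```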
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/186/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/186/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1095
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1095/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1095/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1095/events
|
https://github.com/huggingface/datasets/pull/1095
| 756,934,964
|
MDExOlB1bGxSZXF1ZXN0NTMyMzk0Nzgy
| 1,095
|
Add TupleInf Open IE Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46804938?v=4",
"events_url": "https://api.github.com/users/mattbui/events{/privacy}",
"followers_url": "https://api.github.com/users/mattbui/followers",
"following_url": "https://api.github.com/users/mattbui/following{/other_user}",
"gists_url": "https://api.github.com/users/mattbui/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mattbui",
"id": 46804938,
"login": "mattbui",
"node_id": "MDQ6VXNlcjQ2ODA0OTM4",
"organizations_url": "https://api.github.com/users/mattbui/orgs",
"received_events_url": "https://api.github.com/users/mattbui/received_events",
"repos_url": "https://api.github.com/users/mattbui/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mattbui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mattbui/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mattbui"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Errors are in the CI are not related to this PR (RemoteDatasetError)\r\nthe CI is fixed on master so it's fine ",
"@lhoestq Added the dataset card. Please let me know if more information needs to be added."
] | 2020-12-04T09:08:07Z
| 2020-12-04T15:40:54Z
| 2020-12-04T15:40:54Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1095.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1095",
"merged_at": "2020-12-04T15:40:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1095.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1095"
}
|
For more information: https://allenai.org/data/tuple-ie
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1095/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1095/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1940
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1940/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1940/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1940/events
|
https://github.com/huggingface/datasets/issues/1940
| 815,770,012
|
MDU6SXNzdWU4MTU3NzAwMTI=
| 1,940
|
Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francisco-perez-sorrosal",
"id": 918006,
"login": "francisco-perez-sorrosal",
"node_id": "MDQ6VXNlcjkxODAwNg==",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francisco-perez-sorrosal"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for the report !\r\n\r\nCurrently we don't have a way to let the user easily disable this behavior.\r\nHowever I agree that we should support stateful processing functions, ideally by removing `does_function_return_dict`.\r\n\r\nWe needed this function in order to know whether the `map` functions needs to write data or not. if `does_function_return_dict` returns False then we don't write anything.\r\n\r\nInstead of checking the output of the processing function outside of the for loop that iterates through the dataset to process it, we can check the output of the first processed example and at that point decide if we need to write data or not.\r\n\r\nTherefore it's definitely possible to fix this unwanted behavior, any contribution going into this direction is welcome :)",
"Thanks @mariosasko for the PR!"
] | 2021-02-24T19:18:56Z
| 2021-03-23T15:26:49Z
| 2021-03-23T15:26:49Z
|
CONTRIBUTOR
| null | null | null |
Hi there!
In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function takes an extra argument to maintain a counter of the number of dataset rows/examples already selected for each class, which are the ones I want to keep in the end:
```python
def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter):
label = int(example['label'])
current_counter = counter.get(label, 0)
if current_counter < per_class_limit:
counter[label] = current_counter + 1
return True
return False
```
At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this:
```python
...
kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()}
datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs)
...
```
The problem is that passing a stateful container (the counter) provokes a side effect in the new filtered dataset obtained. This is due to the fact that at some point in `filter()`, `map()`'s function `does_function_return_dict` is invoked at line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290).
When this occurs, the state of the counter is modified by the effects of the function call on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (marked as `test_inputs` and `test_indices` respectively). This happens out of the control of the user (who, for example, can't reset the state of the counter before continuing the execution), provoking in the end an undesired side effect in the results obtained.
In my case, the resulting dataset (even though the counter results are OK) lacks an instance of classes 0 and 1 (which happen to be the classes of the first two examples of my dataset). The rest of the classes in my dataset contain the right number of examples, as they were not affected by the `does_function_return_dict` call.
I've debugged my code extensively and made a workaround myself, hardcoding the necessary stuff (basically putting `update_data=True` at line 1290), and then I obtain the results I expected without the side effect.
Is there a way to avoid that call to `does_function_return_dict` at `map()`'s line 1290? (e.g. extracting the required information that `does_function_return_dict` returns without making the test calls to the user function on dataset rows 0 and 1)
Thanks in advance,
Francisco Perez-Sorrosal
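A stateless workaround sketch for the snippet above (not the library's eventual fix): compute the indices to keep in plain Python, where no hidden test calls can touch the counter, then materialize the subset with `Dataset.select`. The names `datasets` and `train_examples_per_class_limit` are taken from the snippet:
```python
import collections

def per_class_indices(dataset, per_class_limit: int):
    # Walk the label column once, keeping at most per_class_limit rows per class.
    counter = collections.Counter()
    keep = []
    for i, label in enumerate(dataset["label"]):
        if counter[int(label)] < per_class_limit:
            counter[int(label)] += 1
            keep.append(i)
    return keep

indices = per_class_indices(datasets["train"], train_examples_per_class_limit)
datasets["train"] = datasets["train"].select(indices)
```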
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1940/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1940/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2240
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2240/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2240/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2240/events
|
https://github.com/huggingface/datasets/pull/2240
| 862,537,856
|
MDExOlB1bGxSZXF1ZXN0NjE5MDkyODc5
| 2,240
|
Clarify how to load wikihow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-04-20T08:02:58Z
| 2021-04-21T09:54:57Z
| 2021-04-21T09:54:57Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2240.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2240",
"merged_at": "2021-04-21T09:54:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2240.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2240"
}
|
Explain more clearly how to load the dataset in the manual download instructions.
In relation to #2239.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2240/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2240/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1337
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1337/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1337/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1337/events
|
https://github.com/huggingface/datasets/pull/1337
| 759,710,482
|
MDExOlB1bGxSZXF1ZXN0NTM0NjY3NDUz
| 1,337
|
Add spanish billion words
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57645283?v=4",
"events_url": "https://api.github.com/users/mariagrandury/events{/privacy}",
"followers_url": "https://api.github.com/users/mariagrandury/followers",
"following_url": "https://api.github.com/users/mariagrandury/following{/other_user}",
"gists_url": "https://api.github.com/users/mariagrandury/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariagrandury",
"id": 57645283,
"login": "mariagrandury",
"node_id": "MDQ6VXNlcjU3NjQ1Mjgz",
"organizations_url": "https://api.github.com/users/mariagrandury/orgs",
"received_events_url": "https://api.github.com/users/mariagrandury/received_events",
"repos_url": "https://api.github.com/users/mariagrandury/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariagrandury/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariagrandury/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariagrandury"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. I've made a new clean PR (#1347)."
] | 2020-12-08T19:18:02Z
| 2020-12-08T22:59:38Z
| 2020-12-08T21:15:27Z
|
CONTRIBUTOR
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1337.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1337",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1337.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1337"
}
|
Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources on the web.
The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB); the tests using dummy data pass, but my laptop isn't able to run them on the real data (I left it running for over 8 hours and it didn't finish).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1337/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1337/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2773
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2773/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2773/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2773/events
|
https://github.com/huggingface/datasets/issues/2773
| 963,730,497
|
MDU6SXNzdWU5NjM3MzA0OTc=
| 2,773
|
Remove dataset_infos.json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] |
open
| false
| null |
[] | null |
[] | 2021-08-09T07:43:19Z
| 2021-08-09T07:43:19Z
| null |
MEMBER
| null | null | null |
**Is your feature request related to a problem? Please describe.**
As discussed, some of the info in the `dataset_infos.json` is redundant, and we could keep it only in the README file.
Other fields could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]",...
However, there are others that do not seem very meaningful in the README, like the checksums.
**Describe the solution you'd like**
Open a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and/or which information to be kept.
cc: @julien-c @lhoestq
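For concreteness, an abbreviated sketch of what a `dataset_infos.json` typically holds (field names as generated by the library; all values below are illustrative placeholders):
```python
dataset_infos = {
    "default": {
        "description": "...",
        "download_size": 1_000_000,   # size fields: natural fits for the README
        "dataset_size": 5_000_000,
        "size_in_bytes": 6_000_000,
        "splits": {
            "train": {"num_bytes": 4_000_000, "num_examples": 10_000},
        },
        "download_checksums": {       # less meaningful in a README
            "https://example.com/data.zip": {
                "num_bytes": 1_000_000,
                "checksum": "sha256-placeholder",
            },
        },
    }
}
```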
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2773/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2773/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5526
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5526/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5526/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5526/events
|
https://github.com/huggingface/datasets/pull/5526
| 1,580,488,133
|
PR_kwDODunzps5JwVol
| 5,526
|
Allow loading/saving of FAISS index using fsspec
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the quick review! I updated the code with your suggestion",
"Thanks for the quick review @albertvillanova! I updated the code with your suggestions",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008577 / 0.011353 (-0.002776) | 0.005714 / 0.011008 (-0.005294) | 0.114718 / 0.038508 (0.076210) | 0.039799 / 0.023109 (0.016690) | 0.387530 / 0.275898 (0.111632) | 0.395739 / 0.323480 (0.072259) | 0.006775 / 0.007986 (-0.001211) | 0.006280 / 0.004328 (0.001952) | 0.086470 / 0.004250 (0.082220) | 0.054424 / 0.037052 (0.017371) | 0.361989 / 0.258489 (0.103500) | 0.424678 / 0.293841 (0.130837) | 0.043081 / 0.128546 (-0.085465) | 0.013903 / 0.075646 (-0.061743) | 0.397625 / 0.419271 (-0.021647) | 0.059789 / 0.043533 (0.016256) | 0.375195 / 0.255139 (0.120056) | 0.403724 / 0.283200 (0.120524) | 0.121470 / 0.141683 (-0.020213) | 1.734496 / 1.452155 (0.282341) | 1.820479 / 1.492716 (0.327763) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239672 / 0.018006 (0.221665) | 0.499373 / 0.000490 (0.498883) | 0.005034 / 0.000200 (0.004834) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033000 / 0.037411 (-0.004411) | 0.130930 / 0.014526 (0.116404) | 0.151690 / 0.176557 (-0.024866) | 0.211839 / 0.737135 (-0.525296) | 0.148727 / 0.296338 (-0.147612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480592 / 0.215209 (0.265382) | 4.809700 / 2.077655 (2.732046) | 2.232414 / 1.504120 (0.728294) | 2.035432 / 1.541195 (0.494237) | 2.115991 / 1.468490 
(0.647501) | 0.817841 / 4.584777 (-3.766936) | 4.718035 / 3.745712 (0.972323) | 4.107102 / 5.269862 (-1.162759) | 2.166838 / 4.565676 (-2.398839) | 0.102207 / 0.424275 (-0.322068) | 0.014686 / 0.007607 (0.007079) | 0.599922 / 0.226044 (0.373877) | 5.985840 / 2.268929 (3.716912) | 2.769199 / 55.444624 (-52.675425) | 2.427095 / 6.876477 (-4.449382) | 2.586666 / 2.142072 (0.444593) | 0.987650 / 4.805227 (-3.817578) | 0.199419 / 6.500664 (-6.301245) | 0.076710 / 0.075469 (0.001240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.454509 / 1.841788 (-0.387278) | 18.267849 / 8.074308 (10.193541) | 16.701880 / 10.191392 (6.510488) | 0.204225 / 0.680424 (-0.476199) | 0.020295 / 0.534201 (-0.513906) | 0.504254 / 0.579283 (-0.075029) | 0.535071 / 0.434364 (0.100707) | 0.611825 / 0.540337 (0.071488) | 0.697289 / 1.386936 (-0.689647) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009141 / 0.011353 (-0.002211) | 0.005987 / 0.011008 (-0.005021) | 0.092003 / 0.038508 (0.053495) | 0.043239 / 0.023109 (0.020130) | 0.400425 / 0.275898 (0.124527) | 0.464849 / 0.323480 (0.141369) | 0.008256 / 0.007986 (0.000270) | 0.006251 / 0.004328 (0.001923) | 0.095263 / 0.004250 (0.091013) | 0.057899 / 0.037052 (0.020847) | 0.402899 / 0.258489 (0.144410) | 0.477411 / 0.293841 (0.183570) | 0.044122 / 0.128546 (-0.084424) | 0.014158 / 0.075646 (-0.061489) | 0.116354 / 0.419271 (-0.302917) | 0.061045 / 0.043533 (0.017512) | 0.411635 / 0.255139 (0.156497) | 0.466281 / 0.283200 (0.183082) | 0.129423 / 0.141683 (-0.012260) | 1.799790 / 1.452155 (0.347635) | 2.004578 / 1.492716 (0.511862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224012 / 0.018006 (0.206006) | 0.502972 / 0.000490 (0.502482) | 0.003560 / 0.000200 (0.003360) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034794 / 0.037411 (-0.002618) | 0.139646 / 0.014526 (0.125120) | 0.144330 / 0.176557 (-0.032226) | 0.202528 / 0.737135 (-0.534607) | 0.151561 / 0.296338 (-0.144777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504343 / 0.215209 (0.289133) | 5.050690 / 2.077655 (2.973035) | 2.433107 / 1.504120 (0.928987) | 2.197443 / 1.541195 (0.656248) | 2.331225 / 1.468490 (0.862734) | 0.834066 / 4.584777 (-3.750711) | 4.837648 / 3.745712 (1.091936) | 4.105672 / 5.269862 (-1.164189) | 2.281557 / 4.565676 (-2.284120) | 0.102257 / 0.424275 (-0.322018) | 0.014425 / 0.007607 (0.006818) | 0.629290 / 0.226044 (0.403245) | 6.251513 / 2.268929 (3.982585) | 2.959012 / 55.444624 (-52.485613) | 2.570031 / 6.876477 (-4.306446) | 2.657525 / 2.142072 (0.515453) | 1.002861 / 4.805227 (-3.802367) | 0.199326 / 6.500664 (-6.301338) | 0.078428 / 0.075469 (0.002958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.579587 / 1.841788 (-0.262201) | 18.567509 / 8.074308 (10.493201) | 17.162144 / 10.191392 (6.970752) | 0.193460 / 0.680424 (-0.486964) | 0.020819 / 0.534201 (-0.513382) | 0.501929 / 0.579283 (-0.077354) | 0.508039 / 0.434364 (0.073675) | 0.582656 / 0.540337 (0.042319) | 0.693624 / 1.386936 (-0.693312) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-10T23:37:14Z
| 2023-03-27T15:26:46Z
| 2023-03-27T15:18:20Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5526.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5526",
"merged_at": "2023-03-27T15:18:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5526.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5526"
}
|
Fixes #5428
Allow loading/saving of FAISS index using fsspec:
1. Simply use BufferedIOWriter/Reader to read/write indices on an fsspec stream (a minimal sketch follows below).
2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense.
I can work on the documentation once the code changes are approved.
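For reference, a minimal sketch of the BufferedIOWriter/Reader idea, assuming the `faiss`, `fsspec`, and `numpy` packages; this mirrors FAISS's Python IO wrappers and fsspec's built-in `memory://` filesystem (standing in for any remote filesystem), not the exact PR code:

```python
import faiss
import fsspec
import numpy as np

index = faiss.IndexFlatL2(8)  # toy index over 8-dim vectors
index.add(np.random.rand(16, 8).astype("float32"))

# Save: wrap the fsspec file's write() in FAISS's buffered IO writer.
with fsspec.open("memory://index.faiss", "wb") as f:
    writer = faiss.BufferedIOWriter(faiss.PyCallbackIOWriter(f.write))
    faiss.write_index(index, writer)
    del writer  # flush the buffer before the stream closes

# Load: wrap read() in the matching buffered IO reader.
with fsspec.open("memory://index.faiss", "rb") as f:
    reader = faiss.BufferedIOReader(faiss.PyCallbackIOReader(f.read))
    restored = faiss.read_index(reader)

print(restored.ntotal)  # 16
```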
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5526/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5526/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6376
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6376/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6376/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6376/events
|
https://github.com/huggingface/datasets/issues/6376
| 1,973,927,468
|
I_kwDODunzps51p74s
| 6,376
|
Caching problem when deleting a dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting! Can you also share the error message printed in step 5?",
"I did not store it at the time but I'll try to re-do a mwe next week to get it again",
"I haven't managed to reproduce this issue using a [notebook](https://colab.research.google.com/drive/1m6eduYun7pFTkigrCJAFgw0BghlbvXIL?usp=sharing) that follows the steps to reproduce the bug. So, I'm closing it.\r\n\r\nBut feel free to re-open it if you have a better reproducer."
] | 2023-11-02T10:15:58Z
| 2023-12-04T16:53:34Z
| 2023-12-04T16:53:33Z
|
MEMBER
| null | null | null |
### Describe the bug
Pushing a dataset with n + m features to a repo that was deleted, but previously contained n features, will fail.
### Steps to reproduce the bug
1. Create a dataset with n features per row
2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)`
3. Go on the hub, delete the repo at `YOUR_PATH`
4. Update your local dataset to have n + m features per row
5. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)` will fail because of a mismatch in the number of features (a rough sketch follows this list)
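A rough reproduction sketch, with a hypothetical repo id and token placeholder (step 3, deleting the repo, has to be done by hand on the Hub):

```python
# Hypothetical sketch; "user/some-repo" and the token are placeholders.
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})                # n features
ds.push_to_hub("user/some-repo", token="hf_...")     # step 2
# ... delete user/some-repo on the Hub by hand (step 3) ...
ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})   # n + m features
ds.push_to_hub("user/some-repo", token="hf_...")     # step 5: fails on features mismatch
```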
### Expected behavior
Step 5 should work, or display a message indicating that the cache has not been cleared.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6376/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6376/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5558
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5558/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5558/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5558/events
|
https://github.com/huggingface/datasets/pull/5558
| 1,593,655,815
|
PR_kwDODunzps5KcF5E
| 5,558
|
Remove instructions for `ffmpeg` system package installation on Colab
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014525 / 0.011353 (0.003172) | 0.006871 / 0.011008 (-0.004137) | 0.135577 / 0.038508 (0.097069) | 0.039620 / 0.023109 (0.016511) | 0.499829 / 0.275898 (0.223931) | 0.571000 / 0.323480 (0.247520) | 0.009726 / 0.007986 (0.001740) | 0.005654 / 0.004328 (0.001325) | 0.104732 / 0.004250 (0.100482) | 0.046849 / 0.037052 (0.009796) | 0.486667 / 0.258489 (0.228178) | 0.543611 / 0.293841 (0.249770) | 0.056414 / 0.128546 (-0.072133) | 0.019974 / 0.075646 (-0.055672) | 0.484878 / 0.419271 (0.065606) | 0.059244 / 0.043533 (0.015711) | 0.490046 / 0.255139 (0.234907) | 0.517427 / 0.283200 (0.234227) | 0.114692 / 0.141683 (-0.026991) | 1.935935 / 1.452155 (0.483780) | 1.990253 / 1.492716 (0.497537) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271008 / 0.018006 (0.253002) | 0.610964 / 0.000490 (0.610474) | 0.013423 / 0.000200 (0.013223) | 0.000523 / 0.000054 (0.000468) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031940 / 0.037411 (-0.005472) | 0.130755 / 0.014526 (0.116229) | 0.146616 / 0.176557 (-0.029941) | 0.239386 / 0.737135 (-0.497749) | 0.146612 / 0.296338 (-0.149726) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675383 / 0.215209 (0.460174) | 6.656828 / 2.077655 (4.579174) | 2.741231 / 1.504120 (1.237111) | 2.232921 / 1.541195 (0.691726) | 2.172116 / 1.468490 
(0.703626) | 1.221623 / 4.584777 (-3.363154) | 5.683653 / 3.745712 (1.937941) | 5.344137 / 5.269862 (0.074275) | 2.969670 / 4.565676 (-1.596006) | 0.142107 / 0.424275 (-0.282168) | 0.015808 / 0.007607 (0.008201) | 0.767366 / 0.226044 (0.541321) | 8.059605 / 2.268929 (5.790676) | 3.333535 / 55.444624 (-52.111089) | 2.669619 / 6.876477 (-4.206857) | 2.652989 / 2.142072 (0.510917) | 1.526397 / 4.805227 (-3.278830) | 0.265609 / 6.500664 (-6.235055) | 0.082759 / 0.075469 (0.007290) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631086 / 1.841788 (-0.210701) | 18.701351 / 8.074308 (10.627043) | 22.843802 / 10.191392 (12.652410) | 0.240134 / 0.680424 (-0.440290) | 0.046683 / 0.534201 (-0.487518) | 0.576488 / 0.579283 (-0.002795) | 0.650123 / 0.434364 (0.215759) | 0.661190 / 0.540337 (0.120853) | 0.759563 / 1.386936 (-0.627373) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009883 / 0.011353 (-0.001470) | 0.006692 / 0.011008 (-0.004316) | 0.098550 / 0.038508 (0.060042) | 0.035188 / 0.023109 (0.012078) | 0.463535 / 0.275898 (0.187637) | 0.472762 / 0.323480 (0.149282) | 0.007199 / 0.007986 (-0.000787) | 0.007961 / 0.004328 (0.003632) | 0.093140 / 0.004250 (0.088890) | 0.051752 / 0.037052 (0.014700) | 0.453412 / 0.258489 (0.194922) | 0.502741 / 0.293841 (0.208900) | 0.056006 / 0.128546 (-0.072540) | 0.020164 / 0.075646 (-0.055482) | 0.116828 / 0.419271 (-0.302444) | 0.067205 / 0.043533 (0.023672) | 0.442715 / 0.255139 (0.187576) | 0.472525 / 0.283200 (0.189326) | 0.122767 / 0.141683 (-0.018915) | 1.881366 / 1.452155 (0.429212) | 1.978786 / 1.492716 (0.486069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284180 / 0.018006 (0.266174) | 0.601556 / 0.000490 (0.601067) | 0.008455 / 0.000200 (0.008255) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033515 / 0.037411 (-0.003896) | 0.136407 / 0.014526 (0.121881) | 0.143341 / 0.176557 (-0.033215) | 0.225394 / 0.737135 (-0.511741) | 0.153343 / 0.296338 (-0.142995) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688202 / 0.215209 (0.472993) | 6.576502 / 2.077655 (4.498847) | 2.839175 / 1.504120 (1.335055) | 2.481152 / 1.541195 (0.939957) | 2.617227 / 1.468490 (1.148736) | 1.314854 / 4.584777 (-3.269922) | 5.805950 / 3.745712 (2.060238) | 3.188930 / 5.269862 (-2.080932) | 2.141719 / 4.565676 (-2.423957) | 0.145069 / 0.424275 (-0.279206) | 0.014567 / 0.007607 (0.006960) | 0.780000 / 0.226044 (0.553955) | 7.898016 / 2.268929 (5.629088) | 3.549060 / 55.444624 (-51.895564) | 2.856569 / 6.876477 (-4.019907) | 3.117719 / 2.142072 (0.975647) | 1.512560 / 4.805227 (-3.292668) | 0.262689 / 6.500664 (-6.237975) | 0.085979 / 0.075469 (0.010509) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623550 / 1.841788 (-0.218238) | 19.597063 / 8.074308 (11.522755) | 21.293369 / 10.191392 (11.101977) | 0.263780 / 0.680424 (-0.416643) | 0.027289 / 0.534201 (-0.506912) | 0.560361 / 0.579283 (-0.018922) | 0.646288 / 0.434364 (0.211924) | 0.712699 / 0.540337 (0.172361) | 0.818332 / 1.386936 (-0.568604) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-21T15:13:36Z
| 2023-03-01T13:46:04Z
| 2023-02-23T13:50:27Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5558",
"merged_at": "2023-02-23T13:50:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5558"
}
|
Colab now runs Ubuntu 20.04, which already ships `ffmpeg` at the required version (>4).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5558/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5558/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3034
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3034/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3034/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3034/events
|
https://github.com/huggingface/datasets/issues/3034
| 1,016,759,202
|
I_kwDODunzps48moOi
| 3,034
|
Errors loading dataset using fs = a gcsfs.GCSFileSystem
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74556552?v=4",
"events_url": "https://api.github.com/users/dconatha/events{/privacy}",
"followers_url": "https://api.github.com/users/dconatha/followers",
"following_url": "https://api.github.com/users/dconatha/following{/other_user}",
"gists_url": "https://api.github.com/users/dconatha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dconatha",
"id": 74556552,
"login": "dconatha",
"node_id": "MDQ6VXNlcjc0NTU2NTUy",
"organizations_url": "https://api.github.com/users/dconatha/orgs",
"received_events_url": "https://api.github.com/users/dconatha/received_events",
"repos_url": "https://api.github.com/users/dconatha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dconatha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dconatha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dconatha"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[] | 2021-10-05T20:07:08Z
| 2021-10-05T20:26:39Z
| null |
NONE
| null | null | null |
## Describe the bug
Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here...
Basically what seems to be happening is that since `datasets` saves datasets as folders, and folders aren't "real objects" in GCS, gcsfs raises a 404 error. There are workarounds if you use gcsfs directly to download the files, but as is, I can't get `load_from_disk` to work.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load some dataset
dataset = load_dataset("squad", split="train")
# save it to gcs
import gcsfs
fs = gcsfs.GCSFileSystem(project="my-gs-project")
dataset.save_to_disk("gs://my-bucket/squad", fs=fs)
# try to load it from gcs
from datasets import load_from_disk
dataset2 = load_from_disk("my-bucket/squad", fs=fs)
```
## Expected results
`dataset2` would be a copy of `dataset` but loaded from my bucket.
## Actual results
Long traceback but essentially it's a 404 error from gcsfs saying the object `my-bucket/squad` doesn't exist when this is called:
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L977
This is because there is no actual object called `my-bucket/squad`, there are objects called `my-bucket/squad/dataset.arrow`, etc.
Note that *this* works fine, since it's explicitly saying "download all the objects with this prefix":
```python
fs.download(src_dataset_path + "/*", dataset_path.as_posix(), recursive=True)
```
For example, I can do a workaround this way:
```python
import tempfile
with tempfile.TemporaryDirectory() as temppath:
fs.download("gs://my-bucket/squad/*", temppath)
dataset2 = load_from_disk(temppath)
```
It's unclear to me whether it's `gcsfs`'s responsibility to say "hey, that's a folder, not a file; I should fetch the objects inside it rather than the object itself", or whether that's `datasets`'s responsibility... I'm leaning towards the latter, since you never load a dataset from a single file with this function/method, only from a dataset folder? A hypothetical check is sketched below.
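A hypothetical sketch of what such a check on the `datasets` side could look like (`isdir` and `download` are part of the fsspec filesystem interface; this is not the actual library code):

```python
import gcsfs

fs = gcsfs.GCSFileSystem(project="my-gs-project")
src, dst = "my-bucket/squad", "/tmp/squad"
if fs.isdir(src):
    # a dataset "folder": fetch the objects under the prefix
    fs.download(src + "/*", dst, recursive=True)
else:
    # a single object
    fs.download(src, dst)
```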
Another minor thing that should maybe should be rolled into this bug...
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L968
These fail if you pass in a `gs://` path, e.g.
```python
dataset2 = load_from_disk("gs://my-bucket/squad", fs=fs)
```
Because at this point, `dataset_info_path` is `gs:/my-bucket/squad/dataset_info.json`, gcsfs throws a:
```
Invalid bucket name: 'gs:'
```
error, most likely because the `//` in the `gs://` prefix is collapsed during path joining (see the sketch below).
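A tiny sketch of how the bucket name ends up as `'gs:'`, assuming the path is joined with `pathlib` (plain standard-library behavior):

```python
from pathlib import Path

print(Path("gs://my-bucket/squad") / "dataset_info.json")
# gs:/my-bucket/squad/dataset_info.json  <- the "//" is collapsed,
# so gcsfs parses "gs:" as the bucket name
```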
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: macOS Big Sur 11.6
- Python version: 3.7.12
- PyArrow version: 5.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3034/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3034/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/721/events
|
https://github.com/huggingface/datasets/issues/721
| 718,647,147
|
MDU6SXNzdWU3MTg2NDcxNDc=
| 721
|
feat(dl_manager): add support for ftp downloads
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
}
|
[] |
closed
| false
| null |
[] | null |
[
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset.\r\n\r\nTo make the download_manager work with a custom downloader, you can call `download_manager.download_custom` instead of `download_manager.download_and_extract`. The expected arguments are the following:\r\n```\r\nurl_or_urls: url or `list`/`dict` of urls to download and extract. Each\r\n url is a `str`.\r\ncustom_download: Callable with signature (src_url: str, dst_path: str) -> Any\r\n as for example `tf.io.gfile.copy`, that lets you download from google storage\r\n```\r\n",
"Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ?",
"Downloading an `ftp` file is as simple as:\r\n```python\r\nimport urllib \r\nurllib.urlretrieve('ftp://server/path/to/file', 'file')\r\n```\r\n\r\nI believe this should be supported by the library, as its not using any dependency and is trivial amount of code.",
"I know its unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722\r\nSo its possible to understand the interaction of the download component with the ftp download ability",
"Awesome ! I'll take a look :)",
"@AmitMY Can you now download the Phoenix2014 Dataset?",
"@hoanganhpham1006 yes.\r\nSee pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.\r\nThere's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption.",
"The problem which I have now is that this dataset seems does not allow to download? Can you share it with me pls",
"The dataset loader is not yet ready, because of that issue.\r\nIf you want to just download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and its available over https)",
"Got it, thank you so much!",
"FTP downloads are supported."
] | 2020-10-10T15:50:20Z
| 2022-02-15T10:44:44Z
| 2022-02-15T10:44:43Z
|
CONTRIBUTOR
| null | null | null |
I am working on a new dataset (#302) and encountered a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so some extra work is probably needed.
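For what it's worth, downloading the file itself needs nothing beyond the standard library, since `urllib` handles `ftp://` URLs natively (a minimal sketch; the host must still serve FTP):

```python
import urllib.request

_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
urllib.request.urlretrieve(_URL, "phoenix-2014-T.v3.tar.gz")
```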
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/721/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5876
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5876/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5876/events
|
https://github.com/huggingface/datasets/issues/5876
| 1,717,978,985
|
I_kwDODunzps5mZkdp
| 5,876
|
Incompatibility with DataLab
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4",
"events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}",
"followers_url": "https://api.github.com/users/helpmefindaname/followers",
"following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}",
"gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/helpmefindaname",
"id": 26192135,
"login": "helpmefindaname",
"node_id": "MDQ6VXNlcjI2MTkyMTM1",
"organizations_url": "https://api.github.com/users/helpmefindaname/orgs",
"received_events_url": "https://api.github.com/users/helpmefindaname/received_events",
"repos_url": "https://api.github.com/users/helpmefindaname/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions",
"type": "User",
"url": "https://api.github.com/users/helpmefindaname"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystem before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?",
"I think we should use clobber and show a warning if it overwrote a registered filesystem indeed ! This way the user can re-register the filesystems if needed. Though they should probably be compatible (and maybe do the exact same thing) so I wouldn't de-register the `datasets` filesystems"
] | 2023-05-20T01:39:11Z
| 2023-05-25T06:42:34Z
| 2023-05-25T06:42:34Z
|
NONE
| null | null | null |
### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, because each registers filesystems in `fsspec` and expects that those filesystems have not been registered before.
When running the code below, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module>
from datalabs.arrow_dataset import concatenate_datasets, Dataset
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module>
from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module>
from datalabs.features import (
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module>
from datalabs.features.audio import Audio
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module>
from datalabs.utils.streaming_download_manager import xopen
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module>
from datalabs.filesystems import COMPRESSION_FILESYSTEMS
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module>
fsspec.register_implementation(fs_class.protocol, fs_class)
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation
raise ValueError(
ValueError: Name (bz2) already in the registry and clobber is False
```
I think a simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This tells the registry to discard previous registrations. It should work, as the DataLab filesystems are copies of the `datasets` filesystems. However, I don't know whether this is guaranteed to be compatible with other libraries that might use the same protocols.
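A minimal sketch of the proposed change, using fsspec's public `register_implementation` (the `COMPRESSION_FILESYSTEMS` import mirrors the registration loop visible in the traceback above):

```python
import fsspec
from datasets.filesystems import COMPRESSION_FILESYSTEMS

for fs_class in COMPRESSION_FILESYSTEMS:
    # clobber=True replaces an already-registered protocol
    # instead of raising ValueError
    fsspec.register_implementation(fs_class.protocol, fs_class, clobber=True)
```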
I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425) as ideally the issue is solved in both libraries the same way. Otherwise, it could lead to different behaviors depending on which library gets imported first.
### Steps to reproduce the bug
1. Run `pip install datalabs==0.4.15 datasets==2.12.0`
2. Run the following python code:
```
import datalabs
import datasets
```
### Expected behavior
It should be possible to import both libraries without getting a ValueError.
### Environment info
datalabs==0.4.15
datasets==2.12.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5876/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1282
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1282/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1282/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1282/events
|
https://github.com/huggingface/datasets/pull/1282
| 759,208,335
|
MDExOlB1bGxSZXF1ZXN0NTM0MjQ4NzI5
| 1,282
|
add thaiqa_squad
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cstorm125",
"id": 15519308,
"login": "cstorm125",
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cstorm125"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-08T08:14:38Z
| 2020-12-08T18:36:18Z
| 2020-12-08T18:36:18Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1282",
"merged_at": "2020-12-08T18:36:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1282"
}
|
The example format is a little different from SQuAD: since `thaiqa` always has one answer per question, I added a check that converts answers to lists if they are not already lists, to future-proof questions that might later have multiple answers (see the sketch below).
`thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
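A sketch of the normalization described above (illustrative, not the exact loader code): scalar answer fields are wrapped in lists so the schema matches SQuAD's `answers: {text: [...], answer_start: [...]}` layout.

```python
# Illustrative helper, not the exact loader code.
def normalize_answers(answer):
    texts, starts = answer["text"], answer["answer_start"]
    if not isinstance(texts, list):  # single answer -> wrap in lists
        texts, starts = [texts], [starts]
    return {"text": texts, "answer_start": starts}

print(normalize_answers({"text": "ΰΈΰΈ£ΰΈΈΰΈΰΉΰΈΰΈ", "answer_start": 42}))
# {'text': ['ΰΈΰΈ£ΰΈΈΰΈΰΉΰΈΰΈ'], 'answer_start': [42]}
```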
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1282/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1282/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3960
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3960/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3960/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3960/events
|
https://github.com/huggingface/datasets/issues/3960
| 1,173,148,884
|
I_kwDODunzps5F7NTU
| 3,960
|
Load local dataset error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4",
"events_url": "https://api.github.com/users/TXacs/events{/privacy}",
"followers_url": "https://api.github.com/users/TXacs/followers",
"following_url": "https://api.github.com/users/TXacs/following{/other_user}",
"gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TXacs",
"id": 60869411,
"login": "TXacs",
"node_id": "MDQ6VXNlcjYwODY5NDEx",
"organizations_url": "https://api.github.com/users/TXacs/orgs",
"received_events_url": "https://api.github.com/users/TXacs/received_events",
"repos_url": "https://api.github.com/users/TXacs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TXacs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TXacs"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n```\r\n\r\n\r\nLet us know if that resolves the issue.",
"> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n> ```\r\n> \r\n> Let us know if that resolves the issue.\r\n\r\nSorry, replied late.\r\nThanks a lot! It's worked for me. But it seems much slower than before, and now gets stuck.....\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\nResolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 437283.97it/s]\r\nResolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 89094.29it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nDownloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...\r\nDownloading data files #0: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82289.56obj/s]\r\nDownloading data files #1: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 73559.11obj/s]\r\nDownloading data files #2: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81600.46obj/s]\r\nDownloading data files #3: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79691.56obj/s]\r\nDownloading data files #4: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82341.37obj/s]\r\nDownloading data files #5: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 75784.46obj/s]\r\nDownloading data files #6: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81466.18obj/s]\r\nDownloading data files #7: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82320.27obj/s]\r\nDownloading data files #8: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 78094.00obj/s]\r\nDownloading data files #9: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84057.59obj/s]\r\nDownloading data files #10: 
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 83082.31obj/s]\r\nDownloading data files #11: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79944.21obj/s]\r\nDownloading data files #12: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84569.77obj/s]\r\nDownloading data files #13: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84949.63obj/s]\r\nDownloading data files #14: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 80666.53obj/s]\r\nDownloading data files #15: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80072/80072 [00:01<00:00, 76723.20obj/s]\r\n^[[Bloading data files #8: 94%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 75061/80073 [00:00<00:00, 82609.89obj/s]\r\nDownloading data files #9: 85%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 68120/80073 [00:00<00:00, 83868.54obj/s]\r\nDownloading data files #9: 96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 76784/80073 [00:00<00:00, 84722.34obj/s]\r\nDownloading data files #10: 75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 59995/80073 [00:00<00:00, 84148.19obj/s]\r\nDownloading data files #10: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77412/80073 [00:00<00:00, 85724.53obj/s]\r\nDownloading data files #11: 71%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57032/80073 [00:00<00:00, 79930.58obj/s]\r\nDownloading data files #11: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 73277/80073 [00:00<00:00, 78091.27obj/s]\r\nDownloading data files #12: 86%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 69125/80073 [00:00<00:00, 84723.02obj/s]\r\nDownloading data files #12: 97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77803/80073 [00:00<00:00, 85351.59obj/s]\r\nDownloading data files #13: 75%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 60356/80073 [00:00<00:00, 84833.35obj/s]\r\nDownloading data files #13: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77368/80073 [00:00<00:00, 84475.10obj/s]\r\nDownloading data files #14: 72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57751/80073 [00:00<00:00, 80727.33obj/s]\r\nDownloading data files #14: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 74022/80073 [00:00<00:00, 78703.16obj/s]\r\nDownloading data files #15: 
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 62724/80072 [00:00<00:00, 78387.33obj/s]\r\nDownloading data files #15: 99%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 78933/80072 [00:01<00:00, 79353.63obj/s]\r\n```",
"Wait a long time, it completed. I don't know why it's so slow...",
"You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nThanksοΌIt's worked well.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nI find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.\r\nFirst loading, it costs about 20 min in my servers.\r\n```\r\nreal\t19m23.023s\r\nuser\t21m18.360s\r\nsys\t7m59.080s\r\n```\r\n\r\nSecond reusing, it costs about 15 min in my servers.\r\n```\r\nreal\t15m20.735s\r\nuser\t12m22.979s\r\nsys\t5m46.960s\r\n```\r\n\r\nI think it's too much slow, is there other method to make it faster?",
"And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n```python\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n labels = torch.tensor([example[\"labels\"] for example in examples])\r\n return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n```\r\nHow to know the keys of example?",
"Loading the image files slowly, is it because the multiple processes load files at the same time?",
"Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs. \r\n\r\n> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> \r\n> ```python\r\n> def collate_fn(examples):\r\n> pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> ```\r\n> \r\n> How to know the keys of example?\r\n\r\nWhat do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\n",
"> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.\r\n> \r\n> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> > ```python\r\n> > def collate_fn(examples):\r\n> > pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> > labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> > return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > How to know the keys of example?\r\n> \r\n> What do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\nThanks for your reply!\r\n\r\n1. I did not record the second output, so I run it again. \r\n```\r\n(merak) txacs@master:/dat/txacs/test$ time python test.py \r\nResolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 469497.89it/s]\r\nResolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 70123.73it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nReusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)\r\n100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:10<00:00, 5.37s/it]\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 1281167\r\n })\r\n validation: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 50000\r\n })\r\n})\r\n\r\nreal\t10m10.413s\r\nuser\t9m33.195s\r\nsys\t2m47.528s\r\n```\r\nAlthough it cost less time than the last, but still slowly.\r\n\r\n2. Sorry, forgive my poor statement. I solved it, updating to new script 'run_image_classification.py'.",
"Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"Λ`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.",
"> Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"Λ`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.\r\n\r\nSounds good! The main position, which costs long time, is from program starting to `\"Resolving data files\"`. I hope you can solve it early, thanks!",
"I'm getting this problem. Script has been stuck at this part for the past 15 or so minutes:\r\n \r\n`Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββ| 107/107 [00:00<00:00, 472.74it/s]`\r\n\r\nI had everything working fine on an AWS EC2 node with a single GPU. Then I created an image based on the single GPU machine, and spun up a new one with 4 GPUs, so I got all of the training data ready at .cache. \r\n\r\nTurned off all checks with `verification_mode='no_checks'`. Logged in with huggingface-cli again just to be sure.\r\n\r\nInterrupting shows the code is stuck here:\r\n\r\n```\r\nFile \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 66, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n```\r\n\r\nIs it just going to take a while or am I going to run out of money? :sweat_smile: \r\n\r\nedit: ping @mariosasko "
] | 2022-03-18T03:32:49Z
| 2023-08-02T17:12:20Z
| null |
NONE
| null | null | null |
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3960/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3960/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3214
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3214/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3214/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3214/events
|
https://github.com/huggingface/datasets/issues/3214
| 1,044,924,050
|
I_kwDODunzps4-SEaS
| 3,214
|
Add ACAV100M Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
open
| false
| null |
[] | null |
[] | 2021-11-04T15:59:58Z
| 2021-12-08T12:00:30Z
| null |
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** *ACAV100M*
- **Description:** *contains 100 million videos with high audio-visual correspondence, ideal for self-supervised video representation learning.*
- **Paper:** *https://arxiv.org/abs/2101.10803*
- **Data:** *https://github.com/sangho-vision/acav100m*
- **Motivation:** *The largest dataset (to date) for audio-visual learning.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3214/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3214/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3076
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3076/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3076/events
|
https://github.com/huggingface/datasets/issues/3076
| 1,026,113,484
|
I_kwDODunzps49KT_M
| 3,076
|
Error when loading a metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2021-10-14T08:29:27Z
| 2021-10-14T09:14:55Z
| 2021-10-14T09:14:55Z
|
MEMBER
| null | null | null |
## Describe the bug
As reported by @sgugger, after last release, exception is thrown when loading a metric.
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("squad_v2")
```
## Actual results
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-1-e612a8cab787> in <module>
1 from datasets import load_metric
----> 2 metric = load_metric("squad_v2")
d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs)
1336 )
1337 revision = script_version
-> 1338 metric_module = metric_module_factory(
1339 path, revision=revision, download_config=download_config, download_mode=download_mode
1340 ).module_path
d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs)
1237 if not isinstance(e1, FileNotFoundError):
1238 raise e1 from None
-> 1239 raise FileNotFoundError(
1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. "
1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either."
FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either.
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3076/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1388
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1388/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1388/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1388/events
|
https://github.com/huggingface/datasets/pull/1388
| 760,373,136
|
MDExOlB1bGxSZXF1ZXN0NTM1MjE1Nzk2
| 1,388
|
hind_encorp
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahul-art",
"id": 56379013,
"login": "rahul-art",
"node_id": "MDQ6VXNlcjU2Mzc5MDEz",
"organizations_url": "https://api.github.com/users/rahul-art/orgs",
"received_events_url": "https://api.github.com/users/rahul-art/received_events",
"repos_url": "https://api.github.com/users/rahul-art/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahul-art"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-09T14:22:59Z
| 2020-12-09T14:46:51Z
| 2020-12-09T14:46:37Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1388.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1388",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1388.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1388"
}
|
Resubmit of the hind_encorp file changes.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1388/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1388/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6284
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6284/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6284/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6284/events
|
https://github.com/huggingface/datasets/issues/6284
| 1,929,551,712
|
I_kwDODunzps5zAp9g
| 6,284
|
Add Belebele multiple-choice machine reading comprehension (MRC) dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajveer43",
"id": 64583161,
"login": "rajveer43",
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajveer43"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"This dataset is already available on the Hub: https://huggingface.co/datasets/facebook/belebele.\r\n"
] | 2023-10-06T06:58:03Z
| 2023-10-06T13:26:51Z
| 2023-10-06T13:26:51Z
|
NONE
| null | null | null |
### Feature request
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.
Please refer to paper for more details, [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884).
## Composition
- 900 questions per language variant
- 488 distinct passages, each with 1-2 associated questions.
- For each question, there are 4 multiple-choice answers, exactly 1 of which is correct.
- 122 language/language variants (including English).
- 900 x 122 = 109,800 total questions.
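Since the dataset is already hosted on the Hub (see the comment above), a minimal loading sketch looks like this; the config name `eng_Latn` is an assumption based on FLORES-200 language codes, so check the dataset card for the exact list of 122 configs:
```python
from datasets import load_dataset

# "facebook/belebele" per the maintainer comment; "eng_Latn" is an assumed
# config name following FLORES-200 codes.
belebele = load_dataset("facebook/belebele", "eng_Latn")
print(belebele)
```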
### Motivation
official repo https://github.com/facebookresearch/belebele
### Your contribution
-
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6284/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6284/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4898
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4898/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4898/events
|
https://github.com/huggingface/datasets/issues/4898
| 1,351,851,254
|
I_kwDODunzps5Qk5z2
| 4,898
|
Dataset Viewer issue for timit_asr
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4",
"events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}",
"followers_url": "https://api.github.com/users/InayatUllah932/followers",
"following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}",
"gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/InayatUllah932",
"id": 91126978,
"login": "InayatUllah932",
"node_id": "MDQ6VXNlcjkxMTI2OTc4",
"organizations_url": "https://api.github.com/users/InayatUllah932/orgs",
"received_events_url": "https://api.github.com/users/InayatUllah932/received_events",
"repos_url": "https://api.github.com/users/InayatUllah932/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions",
"type": "User",
"url": "https://api.github.com/users/InayatUllah932"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface/datasets ",
"Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https://huggingface.co/datasets/timit_asr\r\n> The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1",
"Maybe a better error message for datasets that need manual downloading? @severo \r\n\r\nMaybe we can raise a specific excpetion as done from `load_dataset`...",
"Yes, ideally something like https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L81\r\n",
"The preview is now disabled (and a descriptive warning is displayed) for datasets requiring manual download. See:\r\n\r\n\r\n"
] | 2022-08-26T07:12:05Z
| 2022-10-03T12:40:28Z
| 2022-10-03T12:40:27Z
|
NONE
| null | null | null |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4898/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3949
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3949/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3949/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3949/events
|
https://github.com/huggingface/datasets/pull/3949
| 1,171,467,981
|
PR_kwDODunzps40jia-
| 3,949
|
Remove GLEU metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-16T19:35:31Z
| 2022-04-12T20:43:26Z
| 2022-04-12T20:37:09Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3949",
"merged_at": "2022-04-12T20:37:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3949"
}
|
Remove the GLEU metric as it is not actually implemented.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 1,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3949/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3949/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4022/events
|
https://github.com/huggingface/datasets/pull/4022
| 1,180,816,682
|
PR_kwDODunzps41BNeA
| 4,022
|
Replace dbpedia_14 data url
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-25T13:47:21Z
| 2022-03-25T15:03:37Z
| 2022-03-25T14:58:49Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4022",
"merged_at": "2022-03-25T14:58:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4022"
}
|
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4022/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4022/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1324
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1324/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1324/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1324/events
|
https://github.com/huggingface/datasets/issues/1324
| 759,587,864
|
MDU6SXNzdWU3NTk1ODc4NjQ=
| 1,324
|
β Sharing ElasticSearch indexed dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] | null |
[
"Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nThe saved dataset can later be retrieved using:\r\n```python\r\n>>> loaded_dataset = datasets.Dataset.load_from_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nAlso, I'd recommend posting your question directly in the issue section of the [elasticsearch repo](https://github.com/elastic/elasticsearch)",
"Hi @SBrandeis,\n\nThanks a lot for picking up my request. \n\nMaybe I can clarify my use-case with a bit of context. Say I have the IMDb dataset. I create an ES index on it. Now I can save and reload the dataset from disk normally. Once I reload the dataset, it is easy to retrieve the ES index on my machine. I was wondering: is there a way I can share the (now) indexed version of the IMDb dataset with my colleagues without requiring them to re-index it?\n\nThanks a lot in advance for your consideration.\n\nBest,\n\nPietro",
"Thanks for the clarification.\r\n\r\nI am not familiar with ElasticSearch, but if I understand well you're trying to migrate your data along with the ES index.\r\nMy advice would be to check out ES documentation, for instance, this might help you: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html\r\n\r\nLet me know if it helps"
] | 2020-12-08T16:25:58Z
| 2020-12-22T07:50:56Z
| null |
NONE
| null | null | null |
Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering
- how can I know where it has been saved?
- how can I share the indexed dataset with others?
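For reference, a minimal sketch of the setup in question, using the standard `add_elasticsearch_index` API (dataset, column and index name are illustrative):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# The index is created inside the ES server at localhost:9200, not in the
# dataset's Arrow files -- which is why save_to_disk() alone doesn't share it.
ds.add_elasticsearch_index("text", host="localhost", port="9200", es_index_name="imdb_text")
scores, examples = ds.get_nearest_examples("text", "great movie", k=5)
```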
I tried to dig into the docs, but could not find anything about that.
Thank you very much for your help.
Best,
Pietro
Edit: apologies for the wrong label
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1324/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1324/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6159
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6159/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6159/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6159/events
|
https://github.com/huggingface/datasets/issues/6159
| 1,855,691,512
|
I_kwDODunzps5um5r4
| 6,159
|
Add `BoundingBox` feature
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2023-08-17T20:49:51Z
| 2023-08-17T20:49:51Z
| null |
CONTRIBUTOR
| null | null | null |
... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explained [here](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/)), so we need to decide which one to support (or maybe all of them).
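For reference, the storage-optimal workaround mentioned above can be sketched like this (the pascal_voc-style `[x_min, y_min, x_max, y_max]` format is chosen only for illustration):
```python
from datasets import Dataset, Features, Sequence, Value

# Current workaround: store each box as a fixed-length float sequence.
features = Features({
    "image_id": Value("int64"),
    "bbox": Sequence(Value("float32"), length=4),
})
ds = Dataset.from_dict(
    {"image_id": [0, 1], "bbox": [[10.0, 20.0, 50.0, 80.0], [5.0, 5.0, 30.0, 30.0]]},
    features=features,
)
print(ds.features["bbox"])
```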
cc @NielsRogge @severo
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6159/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6159/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1618/events
|
https://github.com/huggingface/datasets/issues/1618
| 772,248,730
|
MDU6SXNzdWU3NzIyNDg3MzA=
| 1,618
|
Can't filter language:EN on https://huggingface.co/datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidefiocco",
"id": 4547987,
"login": "davidefiocco",
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidefiocco"
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc'ing @mapmeld ",
"Full language list is now deployed to https://huggingface.co/datasets ! Recommend close",
"Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime."
] | 2020-12-21T15:23:23Z
| 2020-12-22T17:17:00Z
| 2020-12-22T17:16:09Z
|
NONE
| null | null | null |
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected; am I missing something? I'd expect English to be selectable in the language widget. The problem is reproducible on both Mozilla Firefox and MS Edge:

|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1618/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1618/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5357
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5357/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5357/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5357/events
|
https://github.com/huggingface/datasets/pull/5357
| 1,495,029,602
|
PR_kwDODunzps5FXNyR
| 5,357
|
Support torch dataloader without torch formatting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Need some more time to fix the tests, especially with pickle",
"> And I actually don't quite understand the idea - what's the motivation behind making only IterableDataset compatible with torch DataLoader without setting the format explicitly?\r\n\r\nSetting the format to pytorch = set the output types of the dataset to be pytorch tensors. However sometimes your dataset is not made of tensors but you still want to be able to use a pytorch DataLoader",
"A bit more context. \r\n\r\nThe arrow-backed `Dataset` supports `DataLoader(ds)` (even if the format is not \"torch\"), and we want to be able to do the same with `IterableDataset` for consistency. However, this is when the PyTorch internals come into play - an iterable dataset needs to be an instance of `torch.utils.data.IterableDataset` due to [this](https://github.com/pytorch/pytorch/blob/abc54f93145830b502400faa92bec86e05422fbd/torch/utils/data/dataloader.py#L276) check (notice there is no check for the map-style version). Hence the explicit subclassing in this PR.",
"Exactly :) Btw I just took your comments into account @polinaeterna , so feel free to review again",
"@lhoestq just checking, does this change still preserve the fix to the \"data duplicate when setting num_works > 1 with streaming data\" issue from before?\r\n\r\nhttps://github.com/huggingface/datasets/issues/3423",
"Yes :)"
] | 2022-12-13T19:39:24Z
| 2023-01-04T12:45:40Z
| 2022-12-15T19:15:54Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5357",
"merged_at": "2022-12-15T19:15:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5357"
}
|
In https://github.com/huggingface/datasets/pull/5084 we make the torch formatting consistent with the map-style datasets formatting: a torch formatted iterable dataset will yield torch tensors.
The previous behavior of the torch formatting for iterable datasets was simply to make the iterable dataset inherit from `torch.utils.data.Dataset` so it would work in a torch DataLoader. However, ideally an unformatted dataset should also work with a DataLoader. To fix that, `datasets.IterableDataset` should inherit from `torch.utils.data.IterableDataset`.
Since we don't want to import torch on startup, I created this PR to dynamically make the `datasets.IterableDataset` class inherit from the torch one when a `datasets.IterableDataset` is instantiated, provided PyTorch is available.
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("c4", "en", streaming=True, split="train")
>>> import torch.utils.data
>>> isinstance(ds, torch.utils.data.IterableDataset)
True
>>> dataloader = torch.utils.data.DataLoader(ds, batch_size=32, num_workers=4)
>>> for example in dataloader:
...     ...
```
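For intuition, here is a minimal standalone sketch of the dynamic-inheritance trick (an illustration of the idea, not the actual `datasets` code; in practice the created subclass would be cached rather than rebuilt per instantiation):
```python
class MyIterableDataset:
    def __new__(cls, *args, **kwargs):
        try:
            import torch.utils.data as tud
            # Build a subclass that also inherits from torch's IterableDataset
            # so DataLoader's isinstance check passes -- without importing
            # torch at module import time.
            cls = type(cls.__name__, (cls, tud.IterableDataset), {})
        except ImportError:
            pass  # torch not installed: the plain class works everywhere else
        return object.__new__(cls)

    def __init__(self, data):
        self.data = data

    def __iter__(self):
        yield from self.data
```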
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5357/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5357/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3910/events
|
https://github.com/huggingface/datasets/pull/3910
| 1,168,579,694
|
PR_kwDODunzps40aAiX
| 3,910
|
Fix text loader to split only on universal newlines
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3910). All of your documentation changes will be reflected on that endpoint.",
"Looks like the test needs to be updated for windows ^^'",
"I don't think this is the same issue as in https://github.com/oscar-corpus/corpus/issues/18, where the OSCAR metadata has line offsets that use only `\\n` as the newline marker to count lines, not `\\r\\n` or `\\r`.\r\n\r\nIt looks like the OSCAR data loader is opening the data files with `gzip.open` directly and I don't think this text loader is used, but I'm not familiar with a lot of `datasets` internals so I could be mistaken?",
"You are right @adrianeboyd.\r\n\r\nThis PR fixes #3729.\r\n\r\nAdditionally, this PR is somehow related to the OSCAR issue. However, the OSCAR issue have multiple root causes: one is the offset initialization (as you pointed out); other is similar to this case: Unicode newlines are not properly handled.\r\n\r\nI will make a change proposal for OSCAR this afternoon.",
"@lhoestq I'm working on fixing the Windows tests on my Windows machine...",
"I finally changed the approach in order to avoid having \"\\r\\n\" and \"\\r\" line breaks in Python `str` read from files on Windows/old Macintosh machines."
] | 2022-03-14T15:54:58Z
| 2022-03-15T16:16:11Z
| 2022-03-15T16:16:09Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3910",
"merged_at": "2022-03-15T16:16:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3910"
}
|
Currently, the `text` loader splits lines on a superset of universal newlines, which also includes Unicode line boundaries. See: https://docs.python.org/3/library/stdtypes.html#str.splitlines
However, the expected behavior is to split lines only on universal newlines: "\n", "\r\n" and "\r".
See: oscar-corpus/corpus#18
Fix #3729.
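To illustrate the difference, a small sketch (U+2028 is a Unicode LINE SEPARATOR, one of the extra boundaries `str.splitlines` breaks on):
```python
import re

text = "line1\u2028line2\nline3\r\nline4"

# str.splitlines() also breaks on Unicode line boundaries such as U+2028:
print(text.splitlines())              # ['line1', 'line2', 'line3', 'line4']

# Splitting only on universal newlines keeps U+2028 inside the first line:
print(re.split(r"\r\n|\r|\n", text))  # ['line1\u2028line2', 'line3', 'line4']
```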
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3910/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3910/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6285
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6285/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6285/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6285/events
|
https://github.com/huggingface/datasets/issues/6285
| 1,932,306,325
|
I_kwDODunzps5zLKeV
| 6,285
|
TypeError: expected str, bytes or os.PathLike object, not dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal"
}
|
[] |
open
| false
| null |
[] | null |
[
"You should be able to load the images by modifying the `load_dataset` call like this:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_dir=\"/content/datasets/PotholeDetectionYOLOv8-1\")\r\n```\r\n\r\nThe `imagefolder` builder expects the image files to be in `path/label/image_file` (e.g. .`.../train/dog/image_1.jpg`), so the solution for the labels in your case is to create metadata files (one for each split; as explained [here](https://huggingface.co/docs/datasets/image_dataset#imagefolder)) that map the images to their labels.",
"> You should be able to load the images by modifying the `load_dataset` call like this:\r\n> \r\n> ```python\r\n> dataset = load_dataset(\"imagefolder\", data_dir=\"/content/datasets/PotholeDetectionYOLOv8-1\")\r\n> ```\r\n> \r\n> The `imagefolder` builder expects the image files to be in `path/label/image_file` (e.g. .`.../train/dog/image_1.jpg`), so the solution for the labels in your case is to create metadata files (one for each split; as explained [here](https://huggingface.co/docs/datasets/image_dataset#imagefolder)) that map the images to their labels.\r\n\r\nI tried like this but only uploads images and not labels, Andyrasika/potholes-dataset",
"As explained in my previous comment, you need to define metadata files to load the labels or update the paths to be in the format `train/label/image` (`train- image /n -labels` is not supported by the loader).",
"I downloaded my file after annotating using roboflow . It gives train-\r\nimages, labels , test- images, labels , valid- images, labels . I hope it\r\ngives you an idea of the dataset . Please advise on this dataset\r\n\r\nOn Tue, Oct 10, 2023 at 18:12 Mario Ε aΕ‘ko ***@***.***> wrote:\r\n\r\n> As explained in my previous comment, you need to define metadata files to\r\n> load the labels or update the paths to be in the format train/label/image\r\n> (train- image /n -labels is not supported by the loader).\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6285#issuecomment-1755335215>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNN56FWWTSBYTSTUWHLX6U7CVAVCNFSM6AAAAAA5YHCSTGVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONJVGMZTKMRRGU>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 2023-10-09T04:56:26Z
| 2023-10-10T13:17:33Z
| null |
NONE
| null | null | null |
### Describe the bug
My dataset is laid out as `train/`, `valid/` and `test/` folders, each containing `images` and `labels` subfolders, and I tried this code:
```
from datasets import load_dataset
data_files = {
"train": "/content/datasets/PotholeDetectionYOLOv8-1/train/",
"validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/",
"test": "/content/datasets/PotholeDetectionYOLOv8-1/test/"
}
dataset = load_dataset("imagefolder", data_dir=data_files)
dataset
```
got error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-29-2ef1926f73d9>](https://localhost:8080/#) in <cell line: 8>()
6 "test": "/content/datasets/PotholeDetectionYOLOv8-1/test/"
7 }
----> 8 dataset = load_dataset("imagefolder", data_dir=data_files)
9 dataset
6 frames
[/usr/lib/python3.10/pathlib.py](https://localhost:8080/#) in _parse_args(cls, args)
576 parts += a._parts
577 else:
--> 578 a = os.fspath(a)
579 if isinstance(a, str):
580 # Force-cast str subclasses to str (issue #21127)
TypeError: expected str, bytes or os.PathLike object, not dict
```
### Steps to reproduce the bug
as shared above
### Expected behavior
Load both images and labels; currently only the images are uploaded in my dataset:
- https://huggingface.co/datasets/Andyrasika/potholes-dataset
### Environment info
colab pro
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6285/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6285/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3132
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3132/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3132/events
|
https://github.com/huggingface/datasets/issues/3132
| 1,032,505,430
|
I_kwDODunzps49ishW
| 3,132
|
Support Audio feature in streaming mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2021-10-21T13:32:18Z
| 2021-11-12T14:13:04Z
| 2021-11-12T14:13:04Z
|
MEMBER
| null | null | null |
Currently, Audio feature is only supported for non-streaming datasets.
Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
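Once supported, usage should mirror the non-streaming case; a sketch of the intended behavior (dataset name and column are illustrative):
```python
from datasets import load_dataset

ds = load_dataset("common_voice", "en", split="train", streaming=True)
sample = next(iter(ds))  # the audio file would be fetched and decoded lazily
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```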
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3132/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2260
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2260/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2260/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2260/events
|
https://github.com/huggingface/datasets/pull/2260
| 866,961,697
|
MDExOlB1bGxSZXF1ZXN0NjIyNzMwODYx
| 2,260
|
GooAQ dataset added
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for adding this one !\r\nThe download manager does support downloading files on git lfs via their github url. No need for a manual download option ;)"
] | 2021-04-25T09:26:48Z
| 2021-05-07T08:36:17Z
| 2021-05-07T08:36:17Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2260.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2260",
"merged_at": "2021-05-07T08:36:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2260.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2260"
}
|
@lhoestq here the dataset is stored with Git LFS. Should I add an option for manually downloading the dataset using `git lfs pull` after cloning the repo, or can we accommodate this in the current `download_and_extract`?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2260/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2260/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6134
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6134/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6134/events
|
https://github.com/huggingface/datasets/issues/6134
| 1,844,535,142
|
I_kwDODunzps5t8V9m
| 6,134
|
`datasets` cannot be installed alongside `apache-beam`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4",
"events_url": "https://api.github.com/users/boyleconnor/events{/privacy}",
"followers_url": "https://api.github.com/users/boyleconnor/followers",
"following_url": "https://api.github.com/users/boyleconnor/following{/other_user}",
"gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/boyleconnor",
"id": 6520892,
"login": "boyleconnor",
"node_id": "MDQ6VXNlcjY1MjA4OTI=",
"organizations_url": "https://api.github.com/users/boyleconnor/orgs",
"received_events_url": "https://api.github.com/users/boyleconnor/received_events",
"repos_url": "https://api.github.com/users/boyleconnor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/boyleconnor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I noticed that this is actually covered by issue #5613, which for some reason I didn't see when I searched the issues in this repo the first time."
] | 2023-08-10T06:54:32Z
| 2023-09-01T03:19:49Z
| 2023-08-10T15:22:10Z
|
NONE
| null | null | null |
### Describe the bug
If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), both appear to install successfully; however, actually doing something such as importing the `load_dataset` method from `datasets` results in a crash.
I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, but the latest version of `multiprocess` (0.70.15), on which `datasets` depends, requires `dill>=0.3.7`. This forces the dependency resolver to use an older version of `multiprocess`, which makes `datasets` crash since it doesn't appear to be compatible with those older versions.
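A quick way to surface the clashing pins in a broken environment (a minimal sketch using only the standard library):

```python
# Print the resolved versions and each package's declared `dill` requirement;
# assumes datasets, apache-beam, multiprocess and dill are all installed.
from importlib.metadata import requires, version

for pkg in ("datasets", "apache-beam", "multiprocess", "dill"):
    print(pkg, version(pkg))

for pkg in ("apache-beam", "multiprocess"):
    print(pkg, [req for req in requires(pkg) if req.startswith("dill")])
```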
### Steps to reproduce the bug
See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug.
In some environments, I have been able to reproduce the bug by running the following in Bash:
```bash
$ pip install datasets apache-beam
```
then the following in a Python shell:
```python
from datasets import load_dataset
```
Here is my stacktrace from running on Google Colab:
<details>
<summary>stacktrace</summary>
```
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.14.4"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
64
65 from . import config
---> 66 from .arrow_reader import ArrowReader
67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
68 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
28 import pyarrow.parquet as pq
29
---> 30 from .download.download_config import DownloadConfig
31 from .naming import _split_re, filenames_for_dataset_split
32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables
[/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module>
7
8 from .download_config import DownloadConfig
----> 9 from .download_manager import DownloadManager, DownloadMode
10 from .streaming_download_manager import StreamingDownloadManager
[/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module>
33 from ..utils.info_utils import get_size_checksum_dict
34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm
---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str
36 from .download_config import DownloadConfig
37
[/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module>
38 import dill
39 import multiprocess
---> 40 import multiprocess.pool
41 import numpy as np
42 from packaging import version
[/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module>
607 #
608
--> 609 class ThreadPool(Pool):
610
611 from .dummy import Process
[/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool()
609 class ThreadPool(Pool):
610
--> 611 from .dummy import Process
612
613 def __init__(self, processes=None, initializer=None, initargs=()):
[/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module>
85 #
86
---> 87 class Condition(threading._Condition):
88 # XXX
89 if sys.version_info < (3, 0):
AttributeError: module 'threading' has no attribute '_Condition'
```
</details>
I've also found that attempting to install `datasets` and `apache-beam` together in certain environments (e.g. via pip inside a conda env) simply causes pip to hang indefinitely.
### Expected behavior
I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`.
### Environment info
Google Colab
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6134/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2509
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2509/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2509/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2509/events
|
https://github.com/huggingface/datasets/pull/2509
| 922,846,035
|
MDExOlB1bGxSZXF1ZXN0NjcxNjcyMzU5
| 2,509
|
Fix fingerprint when moving cache dir
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Windows, why are you doing this to me ?",
"Thanks @lhoestq, I'm starting reviewing this PR.",
"Yea issues on windows are about long paths, not long filenames.\r\nWe can make sure the lock filenames are not too long, but not for the paths",
"Took your suggestions into account @albertvillanova :)"
] | 2021-06-16T16:45:09Z
| 2021-06-21T15:05:04Z
| 2021-06-21T15:05:03Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2509.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2509",
"merged_at": "2021-06-21T15:05:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2509.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2509"
}
|
The fingerprint of a dataset changes if the cache directory is moved.
I fixed that by setting the fingerprint to be the hash of:
- the relative cache dir (dataset_name/version/config_id)
- the requested split
Close #2496
I had to fix an issue with the filelock filename that was too long (>255 characters). It prevented the tests from running on my machine. I just added `hash_filename_if_too_long` in case this happens, to avoid filenames longer than 255 characters.
We usually have long filenames for filelocks because they are named after the path being locked. If the path is a cache directory with long directory names, the filelock filename can end up being very long.
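A minimal sketch of the "hash the filename if it is too long" idea (illustrative only; not the PR's actual implementation):

```python
# Replace an over-long lock filename with a short, stable digest,
# keeping the directory part of the path intact.
import hashlib
import os

MAX_FILENAME_LENGTH = 255

def hash_filename_if_too_long(path: str) -> str:
    directory, filename = os.path.split(path)
    if len(filename) <= MAX_FILENAME_LENGTH:
        return path
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()[:32]
    return os.path.join(directory, digest + ".lock")
```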
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2509/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2509/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5140
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5140/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5140/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5140/events
|
https://github.com/huggingface/datasets/pull/5140
| 1,415,075,530
|
PR_kwDODunzps5BHTNq
| 5,140
|
Make the KeyHasher FIPS compliant
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22592860?v=4",
"events_url": "https://api.github.com/users/vvalouch/events{/privacy}",
"followers_url": "https://api.github.com/users/vvalouch/followers",
"following_url": "https://api.github.com/users/vvalouch/following{/other_user}",
"gists_url": "https://api.github.com/users/vvalouch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vvalouch",
"id": 22592860,
"login": "vvalouch",
"node_id": "MDQ6VXNlcjIyNTkyODYw",
"organizations_url": "https://api.github.com/users/vvalouch/orgs",
"received_events_url": "https://api.github.com/users/vvalouch/received_events",
"repos_url": "https://api.github.com/users/vvalouch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vvalouch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvalouch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vvalouch"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-10-19T14:25:52Z
| 2022-11-07T16:20:43Z
| 2022-11-07T16:20:43Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5140",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5140"
}
|
MD5 is not FIPS compliant, thus I am proposing this minimal change to make the `datasets` package FIPS compliant.
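For illustration, a FIPS-friendly key hasher could look like this (a sketch, not the PR's diff; SHA-256 is FIPS approved, and on Python 3.9+ `hashlib.md5(..., usedforsecurity=False)` can also keep MD5 for non-security uses where the OpenSSL build honors that flag):

```python
# Sketch of a FIPS-compliant key hasher based on SHA-256 instead of MD5.
import hashlib

def hash_key(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()

print(hash_key(b"some-cache-key"))
```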
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5140/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5140/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4207
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4207/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4207/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4207/events
|
https://github.com/huggingface/datasets/pull/4207
| 1,213,604,615
|
PR_kwDODunzps42rmbK
| 4,207
|
[Minor edit] Fix typo in class name
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-04-24T09:49:37Z
| 2022-05-05T13:17:47Z
| 2022-05-05T13:17:47Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4207.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4207",
"merged_at": "2022-05-05T13:17:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4207.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4207"
}
|
Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4207/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4207/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/491
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/491/comments
|
https://api.github.com/repos/huggingface/datasets/issues/491/events
|
https://github.com/huggingface/datasets/issues/491
| 676,486,275
|
MDU6SXNzdWU2NzY0ODYyNzU=
| 491
|
No 0.4.0 release on GitHub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I did the release on github, and updated the doc :)\r\nSorry for the delay",
"Thanks!"
] | 2020-08-10T23:59:57Z
| 2020-08-11T16:50:07Z
| 2020-08-11T16:50:07Z
|
CONTRIBUTOR
| null | null | null |
0.4.0 was released on PyPI, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) still displays version 0.3.0, and there's no tag to easily clone the 0.4.0 version of the repo.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/491/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5727
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5727/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5727/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5727/events
|
https://github.com/huggingface/datasets/issues/5727
| 1,661,536,363
|
I_kwDODunzps5jCQhr
| 5,727
|
load_dataset fails with FileNotFound error on Windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122648572?v=4",
"events_url": "https://api.github.com/users/joelkowalewski/events{/privacy}",
"followers_url": "https://api.github.com/users/joelkowalewski/followers",
"following_url": "https://api.github.com/users/joelkowalewski/following{/other_user}",
"gists_url": "https://api.github.com/users/joelkowalewski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joelkowalewski",
"id": 122648572,
"login": "joelkowalewski",
"node_id": "U_kgDOB093_A",
"organizations_url": "https://api.github.com/users/joelkowalewski/orgs",
"received_events_url": "https://api.github.com/users/joelkowalewski/received_events",
"repos_url": "https://api.github.com/users/joelkowalewski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joelkowalewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joelkowalewski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joelkowalewski"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! Can you please paste the entire error stack trace, not only the last few lines?",
"`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1762 verification_mode = VerificationMode(\r\n 1763 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS\r\n 1764 )\r\n 1766 # Create a dataset builder\r\n-> 1767 builder_instance = load_dataset_builder(\r\n 1768 path=path,\r\n 1769 name=name,\r\n 1770 data_dir=data_dir,\r\n 1771 data_files=data_files,\r\n 1772 cache_dir=cache_dir,\r\n 1773 features=features,\r\n 1774 download_config=download_config,\r\n 1775 download_mode=download_mode,\r\n 1776 revision=revision,\r\n 1777 use_auth_token=use_auth_token,\r\n 1778 storage_options=storage_options,\r\n 1779 **config_kwargs,\r\n 1780 )\r\n 1782 # Return iterable dataset in case of streaming\r\n 1783 if streaming:\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1498, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, storage_options, **config_kwargs)\r\n 1496 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1497 download_config.use_auth_token = use_auth_token\r\n-> 1498 dataset_module = dataset_module_factory(\r\n 1499 path,\r\n 1500 revision=revision,\r\n 1501 download_config=download_config,\r\n 1502 download_mode=download_mode,\r\n 1503 data_dir=data_dir,\r\n 1504 data_files=data_files,\r\n 1505 )\r\n 1507 # Get dataset builder class from the processing script\r\n 1508 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1211, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1209 raise e1 from None\r\n 1210 if isinstance(e1, FileNotFoundError):\r\n-> 1211 raise FileNotFoundError(\r\n 1212 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1213 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1214 ) from None\r\n 1215 raise e1 from None\r\n 1216 else:`",
"Okay, this is the issue:\r\n```\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: \r\n'C:\\\\Users\\\\...\\\\.cache\\\\huggingface'\r\n``` \r\n\r\nI don't remember seeing this error before.\r\n\r\nI guess it could happen in a multi-process environment if one of the processes deletes the `datasets` cache as the other one is loading a dataset (with `load_dataset`), so make sure that's not the case. Also, you can disable the Windows max path length limit (if enabled), but this is most likely not the problem.",
"Closing due to inactivity."
] | 2023-04-10T23:21:12Z
| 2023-07-21T14:08:20Z
| 2023-07-21T14:08:19Z
|
NONE
| null | null | null |
### Describe the bug
Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps:
(1) create conda environment
(2) activate environment
(3) install with: `conda install -c huggingface -c conda-forge datasets`
Then
```
from datasets import load_dataset
# this or any other example from the website fails with the FileNotFoundError
glue = load_dataset("glue", "ax")
```
**Below I have pasted the error omitting the full path**:
```
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified:
'C:\\Users\\...\\.cache\\huggingface'
```
### Steps to reproduce the bug
On Windows 10
(1) create a minimal conda environment (with just Python)
(2) activate the environment
(3) install datasets with: `conda install -c huggingface -c conda-forge datasets`
(4) import `load_dataset` and follow the example usage from any dataset card.
### Expected behavior
The expected behavior is to load the file into the Python session running on my machine without error.
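One diagnostic worth trying (a sketch, not a confirmed fix): redirect the cache to a short, writable path before importing `datasets`, in case the default `.cache\huggingface` location is the problem:

```python
# "C:\\hf_cache" is an arbitrary short path chosen for illustration.
import os
os.environ["HF_DATASETS_CACHE"] = "C:\\hf_cache"

from datasets import load_dataset
glue = load_dataset("glue", "ax")
```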
### Environment info
```
# Name Version Build Channel
aiohttp 3.8.4 py311ha68e1ae_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attrs 22.2.0 pyh71513ae_0 conda-forge
aws-c-auth 0.6.26 h1262f0c_1 conda-forge
aws-c-cal 0.5.21 h7cda486_2 conda-forge
aws-c-common 0.8.14 hcfcfb64_0 conda-forge
aws-c-compression 0.2.16 h8a79959_5 conda-forge
aws-c-event-stream 0.2.20 h5f78564_4 conda-forge
aws-c-http 0.7.6 h2545be9_0 conda-forge
aws-c-io 0.13.19 h0d2781e_3 conda-forge
aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge
aws-c-s3 0.2.7 h8113e7b_1 conda-forge
aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge
aws-checksums 0.1.14 h8a79959_5 conda-forge
aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge
aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge
brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
c-ares 1.19.0 h2bbff1b_0
ca-certificates 2023.01.10 haa95532_0
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311h7d9ee11_3 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cryptography 40.0.1 py311h28e9c30_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
datasets 2.11.0 py_0 huggingface
dill 0.3.6 pyhd8ed1ab_1 conda-forge
filelock 3.11.0 pyhd8ed1ab_0 conda-forge
frozenlist 1.3.3 py311ha68e1ae_0 conda-forge
fsspec 2023.4.0 pyh1a96a4e_0 conda-forge
gflags 2.2.2 ha925a31_1004 conda-forge
glog 0.6.0 h4797de2_0 conda-forge
huggingface_hub 0.13.4 py_0 huggingface
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.3.0 pyha770c72_0 conda-forge
importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge
intel-openmp 2023.0.0 h57928b3_25922 conda-forge
krb5 1.20.1 heb0366b_0 conda-forge
libabseil 20230125.0 cxx17_h63175ca_1 conda-forge
libarrow 11.0.0 h04c43f8_13_cpu conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge
libbrotlidec 1.0.9 hcfcfb64_8 conda-forge
libbrotlienc 1.0.9 hcfcfb64_8 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libcrc32c 1.1.2 h0e60522_0 conda-forge
libcurl 7.88.1 h68f0423_1 conda-forge
libexpat 2.5.0 h63175ca_1 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge
libgrpc 1.52.1 h32da247_1 conda-forge
libhwloc 2.9.0 h51c2c0f_0 conda-forge
libiconv 1.17 h8ffe710_0 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
libprotobuf 3.21.12 h12be248_0 conda-forge
libsqlite 3.40.0 hcfcfb64_0 conda-forge
libssh2 1.10.0 h9a1e1f7_3 conda-forge
libthrift 0.18.1 h9ce19ad_0 conda-forge
libutf8proc 2.8.0 h82a8f57_0 conda-forge
libxml2 2.10.3 hc3477c8_6 conda-forge
libzlib 1.2.13 hcfcfb64_4 conda-forge
lz4-c 1.9.4 hcfcfb64_0 conda-forge
mkl 2022.1.0 h6a75c08_874 conda-forge
multidict 6.0.4 py311ha68e1ae_0 conda-forge
multiprocess 0.70.14 py311ha68e1ae_3 conda-forge
numpy 1.24.2 py311h0b4df5a_0 conda-forge
openssl 3.1.0 hcfcfb64_0 conda-forge
orc 1.8.3 hada7b9e_0 conda-forge
packaging 23.0 pyhd8ed1ab_0 conda-forge
pandas 2.0.0 py311hf63dbb6_0 conda-forge
parquet-cpp 1.5.1 2 conda-forge
pip 23.0.1 pyhd8ed1ab_0 conda-forge
pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge
pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyh0701188_6 conda-forge
python 3.11.3 h2628c8c_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge
python_abi 3.11 3_cp311 conda-forge
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pyyaml 6.0 py311ha68e1ae_5 conda-forge
re2 2023.02.02 h63175ca_0 conda-forge
requests 2.28.2 pyhd8ed1ab_1 conda-forge
setuptools 67.6.1 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.1.10 hfb803bf_0 conda-forge
tbb 2021.8.0 h91493d7_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
tqdm 4.65.0 pyhd8ed1ab_1 conda-forge
typing-extensions 4.5.0 hd8ed1ab_0 conda-forge
typing_extensions 4.5.0 pyha770c72_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
ucrt 10.0.22621.0 h57928b3_0 conda-forge
urllib3 1.26.15 pyhd8ed1ab_0 conda-forge
vc 14.3 hb6edc58_10 conda-forge
vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge
xxhash 0.8.1 hcfcfb64_0 conda-forge
xz 5.2.10 h8cc25b3_1
yaml 0.2.5 h8ffe710_2 conda-forge
yarl 1.8.2 py311ha68e1ae_0 conda-forge
zipp 3.15.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 hcfcfb64_4 conda-forge
zstd 1.5.4 hd43e919_0
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5727/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5727/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3689
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3689/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3689/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3689/events
|
https://github.com/huggingface/datasets/pull/3689
| 1,127,422,478
|
PR_kwDODunzps4yPnp7
| 3,689
|
Fix streaming for servers not supporting HTTP range requests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Does it mean that huge files might end up being downloaded? It would go against the purpose of streaming, I think. At least, this fallback should be an option that could be disabled",
"Yes, it is against the purpose of streaming, but streaming is not possible if the server does not allow HTTP range requests.\n\nWe have two options: either we download the file or we throw an error.",
"I think we simply cannot fallback to downloading the file if streaming fails without the user being aware of it. Some options: \r\n- make the fallback optional (using an env var? or a function param)\r\n- use the fallback only if the dataset size is under some threshold (provided we have the data in the DatasetInfo) -> it's the option I use in `datasets-preview-backend` ([here](https://github.com/huggingface/datasets-preview-backend/blob/48ac19e49c19809763e8d640986bf2c3d792faed/src/datasets_preview_backend/models/typed_row.py#L40) and [here](https://github.com/huggingface/datasets-preview-backend/blob/aa86c5493b275c9e2dbae7dab7bd469da5773a41/src/datasets_preview_backend/models/split.py#L31-L37))\r\n- throw an exception and let the user decide what to do\r\n",
"IMO in general we should throw an exception and ask the user to not use streaming mode in that case.\r\n\r\nYour second point is also interesting but I feel like it could be confusing for users sometimes: it doesn't feel natural that the streaming-ability should depend on the size of the file.",
"Sure, I think we should just throw an exception\r\n",
"Current behavior is already throwing an Exception:\r\n```\r\nValueError: Cannot seek streaming HTTP file\r\n```\r\n\r\nWe could customize the exception class and/or the exception message.",
"I'm not sure we really need to change anything. I opened the issue https://github.com/huggingface/datasets/issues/3677 because discovery was streamable and is not anymore (according to my test suite in https://github.com/huggingface/datasets-preview-backend): I was not sure if it was due to some regression in the library, or to some change in the dataset itself.",
"I'm wondering why it worked before and it is no longer working...",
"> We could customize the exception class and/or the exception message.\r\n\r\nYup a message that says that the host doesn't support streaming because it doesn't support HTTP Range requests would be useful !",
"DONE, @lhoestq. "
] | 2022-02-08T15:41:05Z
| 2022-02-10T16:51:25Z
| 2022-02-10T16:51:25Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3689.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3689",
"merged_at": "2022-02-10T16:51:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3689.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3689"
}
|
Some servers do not support HTTP range requests, which are required to stream some file formats (like ZIP).
~~This PR implements a workaround for those cases, by downloading the files locally into a temporary directory (cleaned up by the OS once the process is finished).~~
This PR raises a custom error explaining that streaming is not possible because the data host server does not support HTTP range requests.
Fix #3677.
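For reference, a rough way to check whether a host honors range requests (a sketch; the PR's actual detection logic may differ):

```python
# Probe the first byte: compliant servers answer 206 Partial Content
# and/or advertise "Accept-Ranges: bytes".
import requests

def supports_range_requests(url: str) -> bool:
    response = requests.get(url, headers={"Range": "bytes=0-0"}, stream=True, timeout=10)
    return response.status_code == 206 or response.headers.get("Accept-Ranges") == "bytes"
```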
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3689/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3689/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3116
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3116/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3116/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3116/events
|
https://github.com/huggingface/datasets/pull/3116
| 1,031,270,611
|
PR_kwDODunzps4tbr6g
| 3,116
|
Update doc links to point to new docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] | null |
[] | 2021-10-20T11:00:47Z
| 2021-10-22T08:29:28Z
| 2021-10-22T08:26:45Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3116.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3116",
"merged_at": "2021-10-22T08:26:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3116.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3116"
}
|
This PR:
* updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template)
* fixes some broken links in the `.rst` files (fixed with the `make linkcheck` tool)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3116/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3116/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6224
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6224/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6224/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6224/events
|
https://github.com/huggingface/datasets/pull/6224
| 1,886,043,692
|
PR_kwDODunzps5Zym3j
| 6,224
|
Ignore `dataset_info.json` in data files resolution
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009450 / 0.011353 (-0.001903) | 0.007339 / 0.011008 (-0.003669) | 0.110150 / 0.038508 (0.071641) | 0.087794 / 0.023109 (0.064685) | 0.472099 / 0.275898 (0.196201) | 0.476622 / 0.323480 (0.153142) | 0.005057 / 0.007986 (-0.002929) | 0.005262 / 0.004328 (0.000933) | 0.103059 / 0.004250 (0.098808) | 0.069815 / 0.037052 (0.032763) | 0.489377 / 0.258489 (0.230888) | 0.547087 / 0.293841 (0.253247) | 0.048883 / 0.128546 (-0.079663) | 0.019192 / 0.075646 (-0.056454) | 0.410865 / 0.419271 (-0.008407) | 0.076215 / 0.043533 (0.032682) | 0.484825 / 0.255139 (0.229686) | 0.519035 / 0.283200 (0.235835) | 0.042030 / 0.141683 (-0.099653) | 1.909630 / 1.452155 (0.457475) | 2.120869 / 1.492716 (0.628153) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267600 / 0.018006 (0.249594) | 0.619135 / 0.000490 (0.618645) | 0.005897 / 0.000200 (0.005697) | 0.000142 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033265 / 0.037411 (-0.004146) | 0.104476 / 0.014526 (0.089950) | 0.129199 / 0.176557 (-0.047358) | 0.196898 / 0.737135 (-0.540238) | 0.118852 / 0.296338 (-0.177487) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.598908 / 0.215209 (0.383699) | 6.263096 / 2.077655 (4.185441) | 2.672134 / 1.504120 (1.168014) | 2.428706 / 1.541195 (0.887511) | 2.431651 / 1.468490 
(0.963161) | 0.918465 / 4.584777 (-3.666312) | 5.667857 / 3.745712 (1.922145) | 5.113696 / 5.269862 (-0.156166) | 3.276805 / 4.565676 (-1.288872) | 0.101829 / 0.424275 (-0.322446) | 0.010224 / 0.007607 (0.002617) | 0.741547 / 0.226044 (0.515502) | 7.517002 / 2.268929 (5.248073) | 3.546353 / 55.444624 (-51.898272) | 2.845956 / 6.876477 (-4.030521) | 3.172777 / 2.142072 (1.030705) | 1.153485 / 4.805227 (-3.651742) | 0.225758 / 6.500664 (-6.274906) | 0.084333 / 0.075469 (0.008864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.704645 / 1.841788 (-0.137143) | 27.044110 / 8.074308 (18.969801) | 24.653837 / 10.191392 (14.462445) | 0.235452 / 0.680424 (-0.444971) | 0.029285 / 0.534201 (-0.504916) | 0.576122 / 0.579283 (-0.003161) | 0.626263 / 0.434364 (0.191899) | 0.600201 / 0.540337 (0.059864) | 0.838406 / 1.386936 (-0.548530) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013754 / 0.011353 (0.002401) | 0.005954 / 0.011008 (-0.005054) | 0.089766 / 0.038508 (0.051258) | 0.096126 / 0.023109 (0.073017) | 0.556455 / 0.275898 (0.280557) | 0.579302 / 0.323480 (0.255822) | 0.009222 / 0.007986 (0.001236) | 0.006128 / 0.004328 (0.001800) | 0.099725 / 0.004250 (0.095475) | 0.075642 / 0.037052 (0.038589) | 0.556645 / 0.258489 (0.298156) | 0.615898 / 0.293841 (0.322057) | 0.057728 / 0.128546 (-0.070818) | 0.016746 / 0.075646 (-0.058900) | 0.098053 / 0.419271 (-0.321219) | 0.066676 / 0.043533 (0.023143) | 0.534156 / 0.255139 (0.279017) | 0.590020 / 0.283200 (0.306820) | 0.038782 / 0.141683 (-0.102901) | 1.952301 / 1.452155 (0.500146) | 2.104255 / 1.492716 (0.611539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305945 / 0.018006 (0.287939) | 0.643915 / 0.000490 (0.643426) | 0.006268 / 0.000200 (0.006068) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039891 / 0.037411 (0.002479) | 0.117888 / 0.014526 (0.103363) | 0.134230 / 0.176557 (-0.042326) | 0.212544 / 0.737135 (-0.524591) | 0.128858 / 0.296338 (-0.167480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.718165 / 0.215209 (0.502955) | 7.023867 / 2.077655 (4.946212) | 3.391344 / 1.504120 (1.887224) | 3.021248 / 1.541195 (1.480053) | 3.010217 / 1.468490 (1.541727) | 0.932608 / 4.584777 (-3.652169) | 5.787536 / 3.745712 (2.041824) | 5.221305 / 5.269862 (-0.048557) | 3.282552 / 4.565676 (-1.283125) | 0.105486 / 0.424275 (-0.318789) | 0.009800 / 0.007607 (0.002193) | 0.839358 / 0.226044 (0.613314) | 8.279712 / 2.268929 (6.010784) | 4.118466 / 55.444624 (-51.326158) | 3.407738 / 6.876477 (-3.468739) | 3.632538 / 2.142072 (1.490466) | 1.109673 / 4.805227 (-3.695555) | 0.216541 / 6.500664 (-6.284123) | 0.094031 / 0.075469 (0.018562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.983979 / 1.841788 (0.142191) | 27.125882 / 8.074308 (19.051573) | 24.714002 / 10.191392 (14.522610) | 0.264417 / 0.680424 (-0.416007) | 0.034783 / 0.534201 (-0.499418) | 0.533304 / 0.579283 (-0.045979) | 0.647798 / 0.434364 (0.213434) | 0.588680 / 0.540337 (0.048343) | 0.854250 / 1.386936 (-0.532686) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006664 / 0.011353 (-0.004689) | 0.004164 / 0.011008 (-0.006844) | 0.085192 / 0.038508 (0.046684) | 0.073578 / 0.023109 (0.050469) | 0.356379 / 0.275898 (0.080481) | 0.389381 / 0.323480 (0.065902) | 0.005527 / 0.007986 (-0.002459) | 0.003488 / 0.004328 (-0.000840) | 0.065640 / 0.004250 (0.061390) | 0.055013 / 0.037052 (0.017960) | 0.358002 / 0.258489 (0.099513) | 0.400663 / 0.293841 (0.106822) | 0.030937 / 0.128546 (-0.097609) | 0.008838 / 0.075646 (-0.066808) | 0.287488 / 0.419271 (-0.131784) | 0.051503 / 0.043533 (0.007971) | 0.353945 / 0.255139 (0.098806) | 0.388778 / 0.283200 (0.105579) | 0.023346 / 0.141683 (-0.118337) | 1.479621 / 1.452155 (0.027466) | 1.559164 / 1.492716 (0.066448) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245160 / 0.018006 (0.227154) | 0.561890 / 0.000490 (0.561400) | 0.004339 / 0.000200 (0.004139) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028460 / 0.037411 (-0.008952) | 0.082046 / 0.014526 (0.067520) | 0.098005 / 0.176557 (-0.078552) | 0.154171 / 0.737135 (-0.582965) | 0.097632 / 0.296338 (-0.198707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.389993 / 0.215209 (0.174784) | 3.893287 / 2.077655 (1.815632) | 1.885668 / 1.504120 (0.381549) | 1.715055 / 1.541195 (0.173860) | 1.778008 / 1.468490 
(0.309518) | 0.482818 / 4.584777 (-4.101959) | 3.572153 / 3.745712 (-0.173559) | 3.267666 / 5.269862 (-2.002196) | 2.088394 / 4.565676 (-2.477282) | 0.056961 / 0.424275 (-0.367314) | 0.007784 / 0.007607 (0.000177) | 0.466586 / 0.226044 (0.240542) | 4.652505 / 2.268929 (2.383576) | 2.491392 / 55.444624 (-52.953233) | 2.127600 / 6.876477 (-4.748877) | 2.296778 / 2.142072 (0.154705) | 0.582332 / 4.805227 (-4.222895) | 0.134372 / 6.500664 (-6.366292) | 0.061737 / 0.075469 (-0.013732) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253647 / 1.841788 (-0.588140) | 19.802353 / 8.074308 (11.728045) | 14.262815 / 10.191392 (4.071423) | 0.169489 / 0.680424 (-0.510935) | 0.018108 / 0.534201 (-0.516093) | 0.391711 / 0.579283 (-0.187572) | 0.406169 / 0.434364 (-0.028195) | 0.456728 / 0.540337 (-0.083609) | 0.633538 / 1.386936 (-0.753398) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006661 / 0.011353 (-0.004692) | 0.004181 / 0.011008 (-0.006827) | 0.064945 / 0.038508 (0.026437) | 0.073965 / 0.023109 (0.050856) | 0.406549 / 0.275898 (0.130651) | 0.441568 / 0.323480 (0.118089) | 0.005579 / 0.007986 (-0.002407) | 0.003523 / 0.004328 (-0.000805) | 0.065270 / 0.004250 (0.061019) | 0.055596 / 0.037052 (0.018544) | 0.407701 / 0.258489 (0.149212) | 0.444609 / 0.293841 (0.150768) | 0.031749 / 0.128546 (-0.096797) | 0.008680 / 0.075646 (-0.066966) | 0.071154 / 0.419271 (-0.348117) | 0.047376 / 0.043533 (0.003843) | 0.406409 / 0.255139 (0.151270) | 0.420477 / 0.283200 (0.137278) | 0.023707 / 0.141683 (-0.117976) | 1.484516 / 1.452155 (0.032361) | 1.568493 / 1.492716 (0.075777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266534 / 0.018006 (0.248528) | 0.573806 / 0.000490 (0.573316) | 0.006247 / 0.000200 (0.006048) | 0.000165 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033436 / 0.037411 (-0.003976) | 0.091947 / 0.014526 (0.077421) | 0.105556 / 0.176557 (-0.071000) | 0.162094 / 0.737135 (-0.575041) | 0.107879 / 0.296338 (-0.188459) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429126 / 0.215209 (0.213917) | 4.281329 / 2.077655 (2.203675) | 2.295406 / 1.504120 (0.791286) | 2.123336 / 1.541195 (0.582141) | 2.190804 / 1.468490 (0.722314) | 0.492972 / 4.584777 (-4.091805) | 3.638485 / 3.745712 (-0.107227) | 3.304576 / 5.269862 (-1.965285) | 2.063694 / 4.565676 (-2.501983) | 0.058549 / 0.424275 (-0.365726) | 0.007591 / 0.007607 (-0.000016) | 0.504268 / 0.226044 (0.278223) | 5.031990 / 2.268929 (2.763061) | 2.773173 / 55.444624 (-52.671451) | 2.430789 / 6.876477 (-4.445688) | 2.699900 / 2.142072 (0.557828) | 0.593220 / 4.805227 (-4.212007) | 0.133710 / 6.500664 (-6.366954) | 0.059840 / 0.075469 (-0.015629) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351158 / 1.841788 (-0.490629) | 20.176310 / 8.074308 (12.102002) | 14.933202 / 10.191392 (4.741810) | 0.169920 / 0.680424 (-0.510503) | 0.020156 / 0.534201 (-0.514045) | 0.397440 / 0.579283 (-0.181843) | 0.409395 / 0.434364 (-0.024969) | 0.471066 / 0.540337 (-0.069271) | 0.642670 / 1.386936 (-0.744266) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-07T14:43:51Z
| 2023-09-07T15:46:10Z
| 2023-09-07T15:37:20Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6224.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6224",
"merged_at": "2023-09-07T15:37:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6224.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6224"
}
|
`save_to_disk` creates this file, but so does [`HuggingFaceDatasetSaver`](https://github.com/gradio-app/gradio/blob/26fef8c7f85a006c7e25cdbed1792df19c512d02/gradio/flagging.py#L214), so ignoring it is needed to avoid issues such as [this one](https://discord.com/channels/879548962464493619/1149295819938349107/1149295819938349107).
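A toy illustration of the kind of exclusion this adds to data files resolution (names and the file list are illustrative, not the library's internals):

```python
# Metadata files written next to the data should not be picked up as data files.
METADATA_FILES_TO_IGNORE = {"dataset_info.json", "state.json"}  # illustrative list

def resolve_data_files(filenames):
    return [f for f in filenames if f.rsplit("/", 1)[-1] not in METADATA_FILES_TO_IGNORE]

print(resolve_data_files(["train.parquet", "dataset_info.json"]))  # ['train.parquet']
```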
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6224/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6224/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4176
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4176/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4176/events
|
https://github.com/huggingface/datasets/issues/4176
| 1,206,515,563
|
I_kwDODunzps5H6fdr
| 4,176
|
Very slow between two operations
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yananchen1989",
"id": 26405281,
"login": "yananchen1989",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yananchen1989"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-04-17T23:52:29Z
| 2022-04-18T00:03:00Z
| 2022-04-18T00:03:00Z
|
NONE
| null | null | null |
Hello, in the processing stage I use two operations. The first one, map + filter, is very fast and uses all the cores, while the second step is very slow and does not use all the cores.
Also, there is a significant lag between them. Am I missing something?
```python
# Step 1: split and filter -- fast, saturates all cores.
raw_datasets = raw_datasets.map(
    split_func,
    batched=False,
    num_proc=args.preprocessing_num_workers,
    load_from_cache_file=not args.overwrite_cache,
    desc="running split para ==>",
).filter(
    lambda example: example["text1"] != "" and example["text2"] != "",
    num_proc=args.preprocessing_num_workers,
    desc="filtering ==>",
)

# Step 2: tokenization -- slow, and does not saturate all cores.
processed_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    num_proc=args.preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=not args.overwrite_cache,
    desc="Running tokenizer on dataset===>",
)
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4176/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5917
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5917/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5917/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5917/events
|
https://github.com/huggingface/datasets/pull/5917
| 1,733,661,588
|
PR_kwDODunzps5RwoRU
| 5,917
|
Refactor extensions
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005673 / 0.011008 (-0.005335) | 0.124034 / 0.038508 (0.085526) | 0.037550 / 0.023109 (0.014441) | 0.331301 / 0.275898 (0.055403) | 0.383542 / 0.323480 (0.060062) | 0.006940 / 0.007986 (-0.001046) | 0.005959 / 0.004328 (0.001631) | 0.084670 / 0.004250 (0.080419) | 0.054214 / 0.037052 (0.017162) | 0.359897 / 0.258489 (0.101408) | 0.383260 / 0.293841 (0.089419) | 0.047642 / 0.128546 (-0.080904) | 0.013902 / 0.075646 (-0.061744) | 0.380232 / 0.419271 (-0.039040) | 0.077790 / 0.043533 (0.034257) | 0.376648 / 0.255139 (0.121509) | 0.387536 / 0.283200 (0.104336) | 0.104644 / 0.141683 (-0.037038) | 1.618560 / 1.452155 (0.166406) | 1.742569 / 1.492716 (0.249853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257218 / 0.018006 (0.239212) | 0.636801 / 0.000490 (0.636311) | 0.000634 / 0.000200 (0.000434) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037874 / 0.037411 (0.000462) | 0.107454 / 0.014526 (0.092928) | 0.117855 / 0.176557 (-0.058702) | 0.204067 / 0.737135 (-0.533068) | 0.134029 / 0.296338 (-0.162310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583657 / 0.215209 (0.368447) | 5.761289 / 2.077655 (3.683635) | 2.280201 / 1.504120 (0.776081) | 2.033442 / 1.541195 (0.492247) | 2.035343 / 1.468490 
(0.566853) | 0.868122 / 4.584777 (-3.716655) | 5.352591 / 3.745712 (1.606879) | 2.432814 / 5.269862 (-2.837047) | 1.560765 / 4.565676 (-3.004911) | 0.098793 / 0.424275 (-0.325482) | 0.017327 / 0.007607 (0.009720) | 0.734676 / 0.226044 (0.508631) | 7.070318 / 2.268929 (4.801390) | 2.972701 / 55.444624 (-52.471924) | 2.442189 / 6.876477 (-4.434288) | 2.604379 / 2.142072 (0.462307) | 1.028853 / 4.805227 (-3.776374) | 0.210390 / 6.500664 (-6.290274) | 0.069329 / 0.075469 (-0.006140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.469586 / 1.841788 (-0.372202) | 16.570305 / 8.074308 (8.495997) | 19.187845 / 10.191392 (8.996453) | 0.219162 / 0.680424 (-0.461262) | 0.026356 / 0.534201 (-0.507845) | 0.447370 / 0.579283 (-0.131913) | 0.555893 / 0.434364 (0.121529) | 0.574958 / 0.540337 (0.034621) | 0.639166 / 1.386936 (-0.747770) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008166 / 0.011353 (-0.003187) | 0.005577 / 0.011008 (-0.005431) | 0.103578 / 0.038508 (0.065070) | 0.040563 / 0.023109 (0.017454) | 0.441996 / 0.275898 (0.166098) | 0.483594 / 0.323480 (0.160114) | 0.007329 / 0.007986 (-0.000657) | 0.004546 / 0.004328 (0.000218) | 0.090471 / 0.004250 (0.086220) | 0.052740 / 0.037052 (0.015688) | 0.442197 / 0.258489 (0.183708) | 0.524310 / 0.293841 (0.230469) | 0.042487 / 0.128546 (-0.086060) | 0.012917 / 0.075646 (-0.062730) | 0.103992 / 0.419271 (-0.315280) | 0.060570 / 0.043533 (0.017037) | 0.441956 / 0.255139 (0.186817) | 0.477084 / 0.283200 (0.193885) | 0.103815 / 0.141683 (-0.037868) | 1.696963 / 1.452155 (0.244809) | 1.747849 / 1.492716 (0.255132) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292465 / 0.018006 (0.274458) | 0.571518 / 0.000490 (0.571028) | 0.000476 / 0.000200 (0.000276) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028697 / 0.037411 (-0.008714) | 0.111671 / 0.014526 (0.097145) | 0.138826 / 0.176557 (-0.037731) | 0.189697 / 0.737135 (-0.547439) | 0.125454 / 0.296338 (-0.170884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.619273 / 0.215209 (0.404064) | 6.138669 / 2.077655 (4.061015) | 2.558622 / 1.504120 (1.054502) | 2.201550 / 1.541195 (0.660356) | 2.279034 / 1.468490 (0.810544) | 0.850752 / 4.584777 (-3.734025) | 5.438185 / 3.745712 (1.692473) | 2.529343 / 5.269862 (-2.740518) | 1.572178 / 4.565676 (-2.993499) | 0.100768 / 0.424275 (-0.323507) | 0.013902 / 0.007607 (0.006295) | 0.726660 / 0.226044 (0.500616) | 7.794918 / 2.268929 (5.525990) | 3.311695 / 55.444624 (-52.132930) | 2.729167 / 6.876477 (-4.147310) | 2.630984 / 2.142072 (0.488911) | 1.018534 / 4.805227 (-3.786693) | 0.194602 / 6.500664 (-6.306062) | 0.070876 / 0.075469 (-0.004593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573005 / 1.841788 (-0.268783) | 17.042710 / 8.074308 (8.968401) | 19.615320 / 10.191392 (9.423928) | 0.229405 / 0.680424 (-0.451019) | 0.027560 / 0.534201 (-0.506641) | 0.447984 / 0.579283 (-0.131299) | 0.598392 / 0.434364 (0.164028) | 0.571769 / 0.540337 (0.031431) | 0.653025 / 1.386936 (-0.733911) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-31T08:33:02Z
| 2023-05-31T13:34:35Z
| 2023-05-31T13:25:57Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5917.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5917",
"merged_at": "2023-05-31T13:25:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5917.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5917"
}
|
Related to:
- #5850
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5917/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5917/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2822
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2822/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2822/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2822/events
|
https://github.com/huggingface/datasets/pull/2822
| 975,744,463
|
MDExOlB1bGxSZXF1ZXN0NzE2ODUxMTAy
| 2,822
|
Add url prefix convention for many compression formats
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for the feedback :) I will also complete the documentation to explain this convention",
"I just added some documentation about how streaming works with chained URLs.\r\n\r\nI will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentation already and to avoid having to resolve conflicts.",
"Merging this one now, next step is resolve the conflicts in #2662 and update the docs for URL chaining :)\r\n\r\nThere is also the glob feature of zip files that I need to add, to be able to do this for example:\r\n```python\r\nload_dataset(\"json\", data_files=\"zip://*::https://foo.bar/archive.zip\")\r\n```"
] | 2021-08-20T16:11:23Z
| 2021-08-23T15:59:16Z
| 2021-08-23T15:59:14Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2822.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2822",
"merged_at": "2021-08-23T15:59:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2822.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2822"
}
|
## Intro
When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`.
In particular, the download manager method `download_and_extract` doesn't return a path to the locally downloaded and extracted file, but instead a chained URL, so that the uncompression can be done when the file is opened. A few examples of chained URLs:
- `gz://file.txt::https://foo.bar/file.txt.gz`
- `bz2://file.txt::https://foo.bar/file.txt.bz2`
- `zip://::https://foo.bar/archive.zip`
- `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`)
This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining
This URL prefixing lets `open` know what kind of uncompression to do in a dataset script when doing
```python
def _generate_examples(self, urlpath):
with open(urlpath) as f:
....
```
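For intuition, here is a minimal standalone `fsspec` sketch of the same chaining convention (the archive URL and member name are made-up examples, not from this PR):
```python
import fsspec

# The chain resolves right to left: fetch archive.zip over HTTPS,
# then open data.csv from inside it via the zip filesystem.
with fsspec.open("zip://data.csv::https://example.com/archive.zip", "rt") as f:
    header = f.readline()
```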
## What it changes
This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786, in which `open` tried to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return the compressed data (the default behavior of the builtin `open`) or the uncompressed data. With uncompression prefixes in the URL, `open` knows directly whether it has to uncompress or not, and also which protocol to use.
## Additional notes
This PR should close https://github.com/huggingface/datasets/issues/2813
It should also close PR https://github.com/huggingface/datasets/pull/2811, since the oscar dataset script won't try to uncompress twice anymore.
Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it made it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit:
```python
load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip")
```
This is the exact same convention as `fsspec`, and it removes all ambiguity.
cc @albertvillanova @lewtun
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2822/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2822/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3822
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3822/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3822/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3822/events
|
https://github.com/huggingface/datasets/issues/3822
| 1,159,395,728
|
I_kwDODunzps5FGvmQ
| 3,822
|
Add Biwi Kinect Head Pose Database
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
}
] | null |
[
"Official dataset location : https://icu.ee.ethz.ch/research/datsets.html\r\nIn the \"Biwi Kinect Head Pose Database\" section, I do not find any information regarding \"Downloading the dataset.\" . Do we mail the authors regarding this ?\r\n\r\nI found the dataset on Kaggle : [Link](https://www.kaggle.com/kmader/biwi-kinect-head-pose-database) , but since π€ does not host any of the datasets, this would require the user to provide their Kaggle username and API key to download. \r\n\r\nAny inputs on how we could proceed ? Thank you.\r\n[ Need your inputs here, @lhoestq or @mariosasko ]",
"Hi @dnaveenr! Thanks for tackling this issue. This link should work: https://data.vision.ee.ethz.ch/cvl/gfanelli/kinect_head_pose_db.tgz",
"#self-assign",
"Added in https://github.com/huggingface/datasets/pull/3903, thanks @dnaveenr !"
] | 2022-03-04T08:48:39Z
| 2022-06-01T13:00:47Z
| 2022-06-01T13:00:47Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** Biwi Kinect Head Pose Database
- **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and RGB images are provided, together with ground truth in the form of the 3D location of the head and its rotation angles.
- **Data:** [Biwi Kinect Head Pose Database](https://icu.ee.ethz.ch/research/datsets.html)
- **Motivation:** Useful pose estimation dataset
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3822/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3822/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1737
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1737/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1737/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1737/events
|
https://github.com/huggingface/datasets/pull/1737
| 785,606,286
|
MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5
| 1,737
|
update link in TLC to be github links
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4",
"events_url": "https://api.github.com/users/chameleonTK/events{/privacy}",
"followers_url": "https://api.github.com/users/chameleonTK/followers",
"following_url": "https://api.github.com/users/chameleonTK/following{/other_user}",
"gists_url": "https://api.github.com/users/chameleonTK/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chameleonTK",
"id": 6429850,
"login": "chameleonTK",
"node_id": "MDQ6VXNlcjY0Mjk4NTA=",
"organizations_url": "https://api.github.com/users/chameleonTK/orgs",
"received_events_url": "https://api.github.com/users/chameleonTK/received_events",
"repos_url": "https://api.github.com/users/chameleonTK/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chameleonTK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chameleonTK/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chameleonTK"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for updating this!"
] | 2021-01-14T02:49:21Z
| 2021-01-14T10:25:24Z
| 2021-01-14T10:25:24Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1737.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1737",
"merged_at": "2021-01-14T10:25:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1737.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1737"
}
|
Based on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1737/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1737/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3207
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3207/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3207/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3207/events
|
https://github.com/huggingface/datasets/issues/3207
| 1,044,496,389
|
I_kwDODunzps4-QcAF
| 3,207
|
CI error: Another metric with the same name already exists in Keras 2.7.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2021-11-04T09:04:11Z
| 2021-11-04T09:30:54Z
| 2021-11-04T09:30:54Z
|
MEMBER
| null | null | null |
## Describe the bug
The release of TensorFlow 2.7.0 contains an incompatibility with Keras. See:
- keras-team/keras#15579
This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3207/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3207/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6175
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6175/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6175/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6175/events
|
https://github.com/huggingface/datasets/pull/6175
| 1,863,592,678
|
PR_kwDODunzps5YnKlx
| 6,175
|
PyArrow 13 CI fixes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006095 / 0.011353 (-0.005258) | 0.003580 / 0.011008 (-0.007429) | 0.080146 / 0.038508 (0.041638) | 0.063445 / 0.023109 (0.040336) | 0.321930 / 0.275898 (0.046032) | 0.397933 / 0.323480 (0.074453) | 0.003455 / 0.007986 (-0.004531) | 0.002856 / 0.004328 (-0.001472) | 0.062938 / 0.004250 (0.058687) | 0.048896 / 0.037052 (0.011843) | 0.333070 / 0.258489 (0.074581) | 0.404485 / 0.293841 (0.110644) | 0.027156 / 0.128546 (-0.101390) | 0.007974 / 0.075646 (-0.067672) | 0.261505 / 0.419271 (-0.157766) | 0.045328 / 0.043533 (0.001795) | 0.311203 / 0.255139 (0.056064) | 0.390006 / 0.283200 (0.106806) | 0.023650 / 0.141683 (-0.118033) | 1.468856 / 1.452155 (0.016701) | 1.503867 / 1.492716 (0.011151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202110 / 0.018006 (0.184103) | 0.436433 / 0.000490 (0.435944) | 0.002278 / 0.000200 (0.002078) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024575 / 0.037411 (-0.012836) | 0.073005 / 0.014526 (0.058479) | 0.083609 / 0.176557 (-0.092947) | 0.144881 / 0.737135 (-0.592254) | 0.083495 / 0.296338 (-0.212844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398911 / 0.215209 (0.183702) | 3.994035 / 2.077655 (1.916381) | 2.056768 / 1.504120 (0.552649) | 1.913242 / 1.541195 (0.372047) | 1.932934 / 1.468490 
(0.464444) | 0.498953 / 4.584777 (-4.085824) | 3.031107 / 3.745712 (-0.714605) | 2.817165 / 5.269862 (-2.452696) | 1.858886 / 4.565676 (-2.706790) | 0.056977 / 0.424275 (-0.367299) | 0.006634 / 0.007607 (-0.000973) | 0.472580 / 0.226044 (0.246536) | 4.738301 / 2.268929 (2.469372) | 2.373938 / 55.444624 (-53.070686) | 2.021057 / 6.876477 (-4.855420) | 2.195419 / 2.142072 (0.053346) | 0.585182 / 4.805227 (-4.220045) | 0.124260 / 6.500664 (-6.376405) | 0.060250 / 0.075469 (-0.015219) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227350 / 1.841788 (-0.614438) | 18.496525 / 8.074308 (10.422216) | 13.946658 / 10.191392 (3.755266) | 0.140024 / 0.680424 (-0.540399) | 0.017077 / 0.534201 (-0.517124) | 0.334415 / 0.579283 (-0.244868) | 0.351118 / 0.434364 (-0.083246) | 0.379556 / 0.540337 (-0.160782) | 0.525064 / 1.386936 (-0.861872) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006176 / 0.011353 (-0.005177) | 0.003648 / 0.011008 (-0.007360) | 0.063461 / 0.038508 (0.024953) | 0.062770 / 0.023109 (0.039660) | 0.448786 / 0.275898 (0.172888) | 0.486490 / 0.323480 (0.163010) | 0.005527 / 0.007986 (-0.002458) | 0.002860 / 0.004328 (-0.001469) | 0.063803 / 0.004250 (0.059553) | 0.049657 / 0.037052 (0.012604) | 0.449625 / 0.258489 (0.191136) | 0.489378 / 0.293841 (0.195537) | 0.028406 / 0.128546 (-0.100140) | 0.008062 / 0.075646 (-0.067584) | 0.068417 / 0.419271 (-0.350854) | 0.040854 / 0.043533 (-0.002678) | 0.461670 / 0.255139 (0.206531) | 0.481622 / 0.283200 (0.198423) | 0.021018 / 0.141683 (-0.120665) | 1.450328 / 1.452155 (-0.001826) | 1.501283 / 1.492716 (0.008567) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269824 / 0.018006 (0.251817) | 0.412296 / 0.000490 (0.411807) | 0.039582 / 0.000200 (0.039382) | 0.000266 / 0.000054 (0.000211) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026436 / 0.037411 (-0.010976) | 0.080633 / 0.014526 (0.066107) | 0.089786 / 0.176557 (-0.086770) | 0.145020 / 0.737135 (-0.592115) | 0.092327 / 0.296338 (-0.204012) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464349 / 0.215209 (0.249140) | 4.630631 / 2.077655 (2.552976) | 2.560527 / 1.504120 (1.056407) | 2.374195 / 1.541195 (0.833000) | 2.424774 / 1.468490 (0.956284) | 0.510428 / 4.584777 (-4.074349) | 3.099805 / 3.745712 (-0.645907) | 2.781096 / 5.269862 (-2.488765) | 1.854276 / 4.565676 (-2.711400) | 0.058102 / 0.424275 (-0.366173) | 0.006365 / 0.007607 (-0.001242) | 0.534082 / 0.226044 (0.308038) | 5.355003 / 2.268929 (3.086074) | 3.012546 / 55.444624 (-52.432078) | 2.665222 / 6.876477 (-4.211255) | 2.821014 / 2.142072 (0.678942) | 0.597733 / 4.805227 (-4.207494) | 0.125433 / 6.500664 (-6.375231) | 0.060802 / 0.075469 (-0.014667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345699 / 1.841788 (-0.496088) | 18.836083 / 8.074308 (10.761774) | 14.895458 / 10.191392 (4.704066) | 0.146843 / 0.680424 (-0.533581) | 0.018082 / 0.534201 (-0.516119) | 0.335729 / 0.579283 (-0.243554) | 0.351013 / 0.434364 (-0.083351) | 0.388435 / 0.540337 (-0.151902) | 0.543826 / 1.386936 (-0.843110) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006593 / 0.011353 (-0.004760) | 0.004089 / 0.011008 (-0.006919) | 0.084753 / 0.038508 (0.046245) | 0.079899 / 0.023109 (0.056790) | 0.311528 / 0.275898 (0.035630) | 0.349722 / 0.323480 (0.026243) | 0.004288 / 0.007986 (-0.003698) | 0.004552 / 0.004328 (0.000224) | 0.065896 / 0.004250 (0.061646) | 0.053813 / 0.037052 (0.016760) | 0.316958 / 0.258489 (0.058469) | 0.367011 / 0.293841 (0.073170) | 0.031082 / 0.128546 (-0.097464) | 0.008684 / 0.075646 (-0.066963) | 0.288003 / 0.419271 (-0.131268) | 0.052560 / 0.043533 (0.009027) | 0.305589 / 0.255139 (0.050450) | 0.349656 / 0.283200 (0.066457) | 0.023857 / 0.141683 (-0.117826) | 1.462360 / 1.452155 (0.010205) | 1.568170 / 1.492716 (0.075454) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272342 / 0.018006 (0.254336) | 0.585108 / 0.000490 (0.584618) | 0.003427 / 0.000200 (0.003227) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030347 / 0.037411 (-0.007064) | 0.086325 / 0.014526 (0.071799) | 0.100958 / 0.176557 (-0.075598) | 0.156534 / 0.737135 (-0.580601) | 0.102506 / 0.296338 (-0.193832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406625 / 0.215209 (0.191416) | 4.065957 / 2.077655 (1.988302) | 2.075867 / 1.504120 (0.571747) | 1.914390 / 1.541195 (0.373196) | 2.013321 / 1.468490 
(0.544831) | 0.486832 / 4.584777 (-4.097945) | 3.545940 / 3.745712 (-0.199772) | 3.323226 / 5.269862 (-1.946635) | 2.067742 / 4.565676 (-2.497934) | 0.057884 / 0.424275 (-0.366391) | 0.007751 / 0.007607 (0.000144) | 0.484923 / 0.226044 (0.258878) | 4.844885 / 2.268929 (2.575956) | 2.569828 / 55.444624 (-52.874796) | 2.224058 / 6.876477 (-4.652419) | 2.485587 / 2.142072 (0.343515) | 0.584311 / 4.805227 (-4.220916) | 0.134984 / 6.500664 (-6.365680) | 0.062164 / 0.075469 (-0.013305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247182 / 1.841788 (-0.594605) | 20.107500 / 8.074308 (12.033192) | 14.194444 / 10.191392 (4.003052) | 0.147134 / 0.680424 (-0.533290) | 0.018062 / 0.534201 (-0.516138) | 0.392029 / 0.579283 (-0.187254) | 0.402991 / 0.434364 (-0.031373) | 0.457600 / 0.540337 (-0.082737) | 0.632553 / 1.386936 (-0.754383) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006920 / 0.011353 (-0.004433) | 0.004257 / 0.011008 (-0.006751) | 0.065233 / 0.038508 (0.026725) | 0.078151 / 0.023109 (0.055042) | 0.389141 / 0.275898 (0.113243) | 0.431518 / 0.323480 (0.108038) | 0.005752 / 0.007986 (-0.002234) | 0.003584 / 0.004328 (-0.000745) | 0.065173 / 0.004250 (0.060922) | 0.059113 / 0.037052 (0.022060) | 0.398225 / 0.258489 (0.139736) | 0.430980 / 0.293841 (0.137139) | 0.032802 / 0.128546 (-0.095744) | 0.008702 / 0.075646 (-0.066945) | 0.071345 / 0.419271 (-0.347926) | 0.048269 / 0.043533 (0.004736) | 0.389264 / 0.255139 (0.134125) | 0.416008 / 0.283200 (0.132809) | 0.024845 / 0.141683 (-0.116838) | 1.499100 / 1.452155 (0.046945) | 1.576397 / 1.492716 (0.083681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296674 / 0.018006 (0.278668) | 0.540108 / 0.000490 (0.539619) | 0.004293 / 0.000200 (0.004093) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034108 / 0.037411 (-0.003303) | 0.092747 / 0.014526 (0.078221) | 0.112203 / 0.176557 (-0.064354) | 0.162728 / 0.737135 (-0.574407) | 0.109955 / 0.296338 (-0.186383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432006 / 0.215209 (0.216797) | 4.297591 / 2.077655 (2.219937) | 2.379645 / 1.504120 (0.875525) | 2.218680 / 1.541195 (0.677485) | 2.314608 / 1.468490 (0.846117) | 0.495562 / 4.584777 (-4.089215) | 3.589787 / 3.745712 (-0.155925) | 3.349593 / 5.269862 (-1.920268) | 2.119893 / 4.565676 (-2.445783) | 0.057976 / 0.424275 (-0.366299) | 0.007612 / 0.007607 (0.000005) | 0.509422 / 0.226044 (0.283378) | 5.101444 / 2.268929 (2.832515) | 2.794532 / 55.444624 (-52.650092) | 2.459033 / 6.876477 (-4.417444) | 2.714424 / 2.142072 (0.572352) | 0.588444 / 4.805227 (-4.216784) | 0.135763 / 6.500664 (-6.364901) | 0.062593 / 0.075469 (-0.012876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.361415 / 1.841788 (-0.480372) | 20.940684 / 8.074308 (12.866376) | 15.161364 / 10.191392 (4.969972) | 0.154243 / 0.680424 (-0.526181) | 0.020305 / 0.534201 (-0.513896) | 0.397438 / 0.579283 (-0.181845) | 0.415047 / 0.434364 (-0.019317) | 0.473250 / 0.540337 (-0.067088) | 0.740681 / 1.386936 (-0.646255) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-23T15:45:53Z
| 2023-08-25T13:15:59Z
| 2023-08-25T13:06:52Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6175.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6175",
"merged_at": "2023-08-25T13:06:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6175.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6175"
}
|
Fixes:
* bumps the PyArrow version check in `cast_array_to_feature` to avoid the offset bug (still not fixed)
* aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always return `datetime64[ns]` objects); see the sketch below
Fix #6173
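To illustrate the second point, a minimal sketch of the new PyArrow behavior (assumes PyArrow >= 13 is installed; the column name is a made-up example):
```python
import pyarrow as pa

# PyArrow 13 no longer coerces timestamps to nanoseconds in to_pandas
# by default, so the flag is needed to keep the datetime64[ns] dtype.
table = pa.table({"ts": pa.array([0], type=pa.timestamp("us"))})
df = table.to_pandas(coerce_temporal_nanoseconds=True)
print(df["ts"].dtype)  # datetime64[ns]
```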
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6175/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6175/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3605
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3605/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3605/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3605/events
|
https://github.com/huggingface/datasets/pull/3605
| 1,108,738,561
|
PR_kwDODunzps4xS9rX
| 3,605
|
Adding Turkic X-WMT evaluation set for machine translation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26018417?v=4",
"events_url": "https://api.github.com/users/mirzakhalov/events{/privacy}",
"followers_url": "https://api.github.com/users/mirzakhalov/followers",
"following_url": "https://api.github.com/users/mirzakhalov/following{/other_user}",
"gists_url": "https://api.github.com/users/mirzakhalov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mirzakhalov",
"id": 26018417,
"login": "mirzakhalov",
"node_id": "MDQ6VXNlcjI2MDE4NDE3",
"organizations_url": "https://api.github.com/users/mirzakhalov/orgs",
"received_events_url": "https://api.github.com/users/mirzakhalov/received_events",
"repos_url": "https://api.github.com/users/mirzakhalov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mirzakhalov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mirzakhalov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mirzakhalov"
}
|
[] |
closed
| false
| null |
[] | null |
[
"hi! Thank you for all the comments! I believe I addressed them all. Let me know if there is anything else",
"Hi there! I was wondering if there is anything else to change before this can be merged",
"@lhoestq Hi! Just a gentle reminder about the steps to merge this one! ",
"Thanks for the heads up ! I think I fixed the last issue with the YAML tags",
"The CI failure is unrelated to this PR and fixed on master, let's merge :)\r\n\r\nThanks a lot !"
] | 2022-01-20T01:40:29Z
| 2022-01-31T09:50:57Z
| 2022-01-31T09:50:57Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3605",
"merged_at": "2022-01-31T09:50:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3605"
}
|
This dataset is a human-translated evaluation set for machine translation, crowdsourced and provided by the [Turkic Interlingua](turkic-interlingua.org) community. It contains evaluation sets for 8 Turkic languages (plus English and Russian), covering 88 language directions. The languages covered are:
- Azerbaijani (az)
- Bashkir (ba)
- English (en)
- Karakalpak (kaa)
- Kazakh (kk)
- Kirghiz (ky)
- Russian (ru)
- Turkish (tr)
- Sakha (sah)
- Uzbek (uz)
More info about the corpus is here: [https://github.com/turkic-interlingua/til-mt/tree/master/xwmt](https://github.com/turkic-interlingua/til-mt/tree/master/xwmt)
A paper describing the test set is here: [https://arxiv.org/abs/2109.04593](https://arxiv.org/abs/2109.04593)
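Once merged, loading one translation direction might look like the sketch below (both the dataset name and the config name are assumptions based on this PR's title, not confirmed by its description):
```python
from datasets import load_dataset

# Hypothetical identifiers: "turkic_xwmt" and the "az-ru" direction
# are illustrative guesses, not taken from this PR.
ds = load_dataset("turkic_xwmt", "az-ru", split="test")
print(ds[0])
```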
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3605/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3605/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4717
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4717/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4717/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4717/events
|
https://github.com/huggingface/datasets/issues/4717
| 1,309,512,483
|
I_kwDODunzps5ODZMj
| 4,717
|
Dataset Viewer issue for LawalAfeez/englishreview-ds-mini
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/69974956?v=4",
"events_url": "https://api.github.com/users/lawalAfeez820/events{/privacy}",
"followers_url": "https://api.github.com/users/lawalAfeez820/followers",
"following_url": "https://api.github.com/users/lawalAfeez820/following{/other_user}",
"gists_url": "https://api.github.com/users/lawalAfeez820/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lawalAfeez820",
"id": 69974956,
"login": "lawalAfeez820",
"node_id": "MDQ6VXNlcjY5OTc0OTU2",
"organizations_url": "https://api.github.com/users/lawalAfeez820/orgs",
"received_events_url": "https://api.github.com/users/lawalAfeez820/received_events",
"repos_url": "https://api.github.com/users/lawalAfeez820/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lawalAfeez820/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lawalAfeez820/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lawalAfeez820"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null |
[
"It's currently working, as far as I understand\r\n\r\nhttps://huggingface.co/datasets/LawalAfeez/englishreview-ds-mini/viewer/LawalAfeez--englishreview-ds-mini/train\r\n\r\n<img width=\"1556\" alt=\"Capture dβeΜcran 2022-07-19 aΜ 09 24 01\" src=\"https://user-images.githubusercontent.com/1676121/179761130-2d7980b9-c0f6-4093-8b1d-f0a3872fef3f.png\">\r\n\r\n---\r\n\r\nWhat was your issue?"
] | 2022-07-19T13:19:39Z
| 2022-07-20T08:32:57Z
| 2022-07-20T08:32:57Z
|
NONE
| null | null | null |
### Link
_No response_
### Description
Unable to view the split data
### Owner
_No response_
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4717/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4717/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/256
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/256/comments
|
https://api.github.com/repos/huggingface/datasets/issues/256/events
|
https://github.com/huggingface/datasets/issues/256
| 635,596,295
|
MDU6SXNzdWU2MzU1OTYyOTU=
| 256
|
[Feature request] Add a feature to dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)",
"Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prior to performing map.\r\n\r\nE.g. \r\n```\r\nnew_info = list of length dataset['train']\r\n\r\ndataset['train'] = dataset['train'].map(lambda x: some_function(x, new_info[index of x]))\r\n\r\ndef some_function(x, new_info_x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x\r\n return x\r\n```\r\nI was thinking to instead create a new field in the arrow dataset so that instance x contains all the necessary information when map function is applied (since I don't have index information to pass to map function).",
"This is what I have so far: \r\n\r\n```\r\nimport pyarrow as pa\r\nfrom nlp.arrow_dataset import Dataset\r\n\r\naug_dataset = dataset['train'][:]\r\naug_dataset['new_info'] = new_info\r\n\r\n#reformat as arrow-table\r\nschema = dataset['train'].schema\r\n\r\n# this line doesn't work:\r\nschema.append(pa.field('new_info', pa.int32()))\r\n\r\ntable = pa.Table.from_pydict(\r\n aug_dataset,\r\n schema=schema\r\n)\r\ndataset['train'] = Dataset(table) \r\n```",
"Maybe you can use `with_indices`?\r\n\r\n```python\r\nnew_info = list of length dataset['train']\r\n\r\ndef some_function(indice, x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x[indice]\r\n return x\r\n\r\ndataset['train'] = dataset['train'].map(some_function, with_indices=True)\r\n```",
"Oh great. That should work. I missed that in the documentation- thanks :) "
] | 2020-06-09T16:38:12Z
| 2020-06-09T16:51:42Z
| 2020-06-09T16:51:42Z
|
NONE
| null | null | null |
Is there a straightforward way to add a field to the arrow_dataset, prior to performing map?
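For reference, a minimal runnable sketch of the `with_indices` approach settled on in the comments above, written against the current `datasets` API:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
new_info = [10, 20, 30]  # one entry per row, aligned with the dataset's indices

# with_indices=True passes the row index as the second argument;
# the returned dict is merged into the example, adding the new column.
ds = ds.map(lambda x, i: {"new_info": new_info[i]}, with_indices=True)
print(ds[0])  # {'text': 'a', 'new_info': 10}
```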
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/256/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/256/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/319
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/319/comments
|
https://api.github.com/repos/huggingface/datasets/issues/319/events
|
https://github.com/huggingface/datasets/issues/319
| 646,792,487
|
MDU6SXNzdWU2NDY3OTI0ODc=
| 319
|
Nested sequences with dicts
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define the list in the feature with a simple list or a tuple (which is also simpler to write).\r\nIn your case, the features could be as follow:\r\n``` python\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": [[{\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n }]],\r\n ...\r\n }),\r\n...\r\n```"
] | 2020-06-27T23:45:17Z
| 2020-07-03T10:22:00Z
| 2020-07-03T10:22:00Z
|
CONTRIBUTOR
| null | null | null |
I'm pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.Sequence(nlp.features.Sequence({key:value,...}))`.
The original data is in this format:
```python
{
'title': "Title of wiki page",
'vertexSet': [
[
{ 'name': "mention_name",
'sent_id': "mention in which sentence",
'pos': ["postion of mention in a sentence"],
'type': "NER_type"},
{another mention}
],
[another entity]
]
...
}
```
So to represent this I've attempted to write:
```
...
features=nlp.Features({
"title": nlp.Value("string"),
"vertexSet": nlp.features.Sequence(nlp.features.Sequence({
"name": nlp.Value("string"),
"sent_id": nlp.Value("int32"),
"pos": nlp.features.Sequence(nlp.Value("int32")),
"type": nlp.Value("string"),
})),
...
}),
...
```
This is giving me the error:
```
pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force"], "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string")))` or `nlp.features.Sequence({key:value,...})`, just not nested sequences with a dict.
If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.
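For completeness, a minimal sketch of the list-based workaround given in the comment above, assuming `Features.encode_example` is available to check that an example encodes cleanly:
```python
import nlp

features = nlp.Features({
    "title": nlp.Value("string"),
    "vertexSet": [[{
        "name": nlp.Value("string"),
        "sent_id": nlp.Value("int32"),
        "pos": nlp.features.Sequence(nlp.Value("int32")),
        "type": nlp.Value("string"),
    }]],
})

example = {
    "title": "Lark Force",
    "vertexSet": [[{"name": "Lark Force", "sent_id": 0, "pos": [0, 2], "type": "ORG"}]],
}
features.encode_example(example)  # no ArrowTypeError with plain-list nesting
```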
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/319/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2030
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2030/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2030/events
|
https://github.com/huggingface/datasets/pull/2030
| 829,110,803
|
MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4
| 2,030
|
Implement Dataset from text
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in pyarrow_1..."
] | 2021-03-11T12:34:50Z
| 2021-03-18T13:29:29Z
| 2021-03-18T13:29:29Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2030",
"merged_at": "2021-03-18T13:29:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2030"
}
|
Implement `Dataset.from_text`.
Analogue to #1943, #1946.
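A minimal usage sketch of the new method; the file path is hypothetical, and each line of the file becomes one row in a single `text` column:
```python
from datasets import Dataset

ds = Dataset.from_text("my_corpus.txt")  # hypothetical path
print(ds.column_names)  # ['text']
```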
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2030/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3369
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3369/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3369/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3369/events
|
https://github.com/huggingface/datasets/issues/3369
| 1,069,587,674
|
I_kwDODunzps4_wJza
| 3,369
|
[Audio] Allow resampling for audio datasets in streaming mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"This requires implementing `cast_column` for iterable datasets, it could be a very nice addition !\r\n\r\n<s>It can also be useful to be able to disable the audio/image decoding for the dataset viewer (see PR https://github.com/huggingface/datasets/pull/3430) cc @severo </s>\r\nEDIT: actually following https://github.com/huggingface/datasets/issues/3145 the dataset viewer might not need it anymore",
"Just to clarify a bit. This feature is **always** needed when using the common voice dataset in streaming mode. So I think it's quite important"
] | 2021-12-02T14:04:57Z
| 2021-12-16T15:55:19Z
| 2021-12-16T15:55:19Z
|
MEMBER
| null | null | null |
Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
However, in streaming mode it currently fails:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
with the following error:
```
AttributeError: 'IterableDataset' object has no attribute 'cast_column'
```
It would be great if we could add such a feature (though I'm not 100% sure how complex this would be).
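For reference, a sketch of the requested usage once `cast_column` is supported on `IterableDataset` (the support that closed this issue):
```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "ab", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = next(iter(ds))  # audio is decoded and resampled to 16 kHz lazily
```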
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3369/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3369/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/519
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/519/comments
|
https://api.github.com/repos/huggingface/datasets/issues/519/events
|
https://github.com/huggingface/datasets/issues/519
| 682,193,882
|
MDU6SXNzdWU2ODIxOTM4ODI=
| 519
|
[BUG] Metrics throwing new error on master since 0.4.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric",
"Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 "
] | 2020-08-19T21:29:15Z
| 2022-06-02T16:41:01Z
| 2020-08-19T22:04:40Z
|
CONTRIBUTOR
| null | null | null |
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch
batch = self.info.features.encode_batch(batch)
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp>
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example
raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
```
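For context, a minimal sketch of the tokenized inputs the `bleu` metric expects (the fix identified in the comments above: lists of tokens rather than raw strings; the token lists here are illustrative):
```python
import nlp

bleu = nlp.load_metric("bleu")
predictions = [["the", "cat", "sat", "on", "the", "mat"]]  # tokenized hypothesis
references = [[["the", "cat", "is", "on", "the", "mat"]]]  # tokenized reference(s) per prediction
print(bleu.compute(predictions=predictions, references=references))
```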
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/519/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/519/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6065
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6065/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6065/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6065/events
|
https://github.com/huggingface/datasets/pull/6065
| 1,819,334,932
|
PR_kwDODunzps5WR8jI
| 6,065
|
Add column type guessing from map return function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for working on this. However, having thought about this issue a bit more, supporting this doesn't seem like a good idea - it's better to be explicit than implicit, according to the Zen of Python π. Also, I don't think many users would use this, so this raises the question of whether this is something we want to maintain.\r\n\r\ncc @lhoestq for the 2nd opinion",
"@mariosasko I was going to quote the Zen of Python in the other direction :) To me, this actually is much more explicit than the current behavior of guessing pyarrow types based on the raw dictionary return values. Explicit typehinting is increasingly the de facto way to deal with this dynamic type serialization - plus it feels like a clearer fit to me than separating out the mapper function from the feature column definition in the call to the actual `.map()`. Another benefit is providing typehinting support for clients that use mypy or other static typecheckers to detect return mismatches.\r\n\r\nBut will leave it to you and @lhoestq to see if it's something you'd like in core versus a support package.",
"I meant that explicitly specifying the target features (the `features` param) is cleaner (easier to track) than relying on type hints.",
"Passing features= to `map()` is richer and more explicit. Also I don't think users would guess that such API exist.\r\n\r\nOther libraries like dask also infer the type from the output or requires the typing to be specified using the `meta` argument",
"Point about discoverability is a fair one, would certainly need some docs around it. All good! Will close this out and keep in our extension utilities."
] | 2023-07-25T00:34:17Z
| 2023-07-26T15:13:45Z
| 2023-07-26T15:13:44Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6065.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6065",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6065.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6065"
}
|
As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset.
This PR provides an alternative approach, which is functionally equivalent to specifying features but a bit cleaner within a larger mapping pipeline. It allows clients to typehint the return variable coming from the mapper function - if we find one of these type annotations specified, and no explicit features have been passed in, we'll try to convert it into a Features map. If the map function runs and casting is unable to succeed, it will raise a DatasetTransformationNotAllowedError that indicates the typehint may be to blame. It works for batched and non-batched mapping functions.
Currently supported column types:
- builtins primitives: string, int, float, bool
- dictionaries, lists (nested and one-deep)
- Optional types and None-Unions (synonymous with optional types)
It's used like:
```python
class DatasetTyped(TypedDict):
texts: list[str]
def dataset_typed_map(batch) -> DatasetTyped:
return {"texts": [text.split() for text in batch["raw_text"]]}
dataset = {"raw_text": ["", "This is a test", "This is another test"]}
with Dataset.from_dict(dataset) as dset:
new_dataset = dset.map(
dataset_typed_map,
batched=True,
batch_size=1,
num_proc=1,
)
```
Open questions:
- Should logging indicate we have automatically guessed these types? Or proceed quietly until we hit an error (as is the current implementation).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6065/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6065/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/686
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/686/comments
|
https://api.github.com/repos/huggingface/datasets/issues/686/events
|
https://github.com/huggingface/datasets/issues/686
| 711,385,739
|
MDU6SXNzdWU3MTEzODU3Mzk=
| 686
|
Dataset browser url is still https://huggingface.co/nlp/viewer/
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)",
"This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!"
] | 2020-09-29T19:21:52Z
| 2021-01-08T18:29:26Z
| 2021-01-08T18:29:26Z
|
CONTRIBUTOR
| null | null | null |
Might be worth updating to https://huggingface.co/datasets/viewer/
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/686/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1252
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1252/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1252/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1252/events
|
https://github.com/huggingface/datasets/pull/1252
| 758,511,388
|
MDExOlB1bGxSZXF1ZXN0NTMzNjczMDcx
| 1,252
|
Add Naver sentiment movie corpus
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jaketae",
"id": 25360440,
"login": "jaketae",
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"repos_url": "https://api.github.com/users/jaketae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jaketae"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-07T13:33:45Z
| 2020-12-08T14:32:33Z
| 2020-12-08T14:21:37Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1252",
"merged_at": "2020-12-08T14:21:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1252"
}
|
Supersedes #1168
> This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1252/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1252/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5850/events
|
https://github.com/huggingface/datasets/pull/5850
| 1,707,678,911
|
PR_kwDODunzps5QZALv
| 5,850
|
Make packaged builders skip non-supported file formats
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5850). All of your documentation changes will be reflected on that endpoint.",
"Good idea. @mariosasko!!!\r\n\r\nPlease note that before this PR, the files are not evenly distributed for archives: `_generate_examples` gets a list of iterators, one for each archive (uncompressed to a directory).",
"This change could create silent problems when loading files with extensions that are not listed here. For example\r\n\r\n```python\r\nload_dataset(\"text\", data_files=[\"20230515.log\"])\r\n```\r\n\r\nwouldn't even log anything to say that the file was ignored.\r\n\r\nMaybe it's possible to do this at data files patterns resolution ?\r\n\r\ne.g. in get_data_patterns_in_dataset_repository / get_data_patterns_locally we could return patterns that include the most common extension",
"@lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nThe solution is to add the .log extension (besides the .txt) as supported by text, independently of where we perform the skip (at pattern resolution or in the builder itself).\r\n\r\nAdditionally, at the time we call for pattern resolution, we do not know the builder class yet, so that we cannot pass specific file extensions. First we call data files pattern resolution, and afterwards we call `infer_module_for_data_files` and then know the builder class.",
"> @lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nNo I simply think it's a bad breaking change to not support\r\n\r\n```python\r\nload_dataset(\"<builder_name>\", data_files=[\"path/to/file_with_unknown_or_no_extension\"])\r\n# or\r\nload_dataset(\"<builder_name>\", data_files=[\"https://url.to/file_with_unknown_or_no_extension\"])\r\n```\r\n\r\nIdk if it's the easiest solution, but maybe it's possible to do the change only when inferring the patterns of dataset repositories. This should avoid this breaking change.\r\n\r\nFor example it could do something like that in `get_data_patterns_locally`\r\n\r\n```python\r\n Input:\r\n\r\n my_dataset_repository/\r\n βββ README.md\r\n βββ banner.png\r\n βββ data0.csv\r\n βββ data1.csv\r\n βββ data2.csv\r\n\r\n Output:\r\n\r\n {\"train\": [\"**.csv\"]}\r\n```\r\n\r\ninstead of \r\n\r\n```python\r\n Output:\r\n\r\n {\"train\": [\"**\"]}\r\n```",
"I agree with @lhoestq - it should still be possible to request parsing a file with a specific builder even if the file's extension is \"invalid\" for the builder, and only ignore non-supported file formats when inferring the patterns.",
"Therefore, if I understand correctly, what you suggest is:\r\n- if the user passes a packaged builder to `load_dataset` (e.g. `load_dataset(\"csv\",...`), then the *passed* `data_files` should not be filtered to remove unsupported extensions. No breaking change in this case\r\n- if the user passes a no-script repo/folder to `load_dataset` (e.g. `load_dataset(\"my_dataset_repository\",...`), then the *inferred* data files should be filtered to remove the extensions that are not supported by the inferred module name builder\r\n - if the user passes `data_files` as well, then I guess these should not be filtered, to avoid any breaking change as in the first case above",
"Yes that would be ideal imo !",
"I think this now fulfills all the requirements.",
"I find it a bit confusing to still be able to pass data_files that are going to be silently ignored based on the value of `only_supported_extensions`. My suggestion was to have the right data files pattern, not to filter a posteriori (sorry if my last message was confusing).\r\n\r\nHaving the right data files pattern would also allow users to inspect what's actually being loaded with\r\n```\r\nload_dataset_builder(...).config.data_files\r\n```\r\nand it would list exactly what data files are used."
] | 2023-05-12T13:52:34Z
| 2023-06-07T12:26:38Z
| null |
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5850",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5850"
}
|
This PR makes packaged builders skip non-supported file formats:
- Csv builder skips non-CSV files
- Analogously for the other builders
Fix #5849.
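A hedged illustration of the proposed behavior, reusing the repository layout from the discussion above (the directory name is hypothetical):
```python
from datasets import load_dataset

# my_dataset_repository/ contains README.md, banner.png, data0.csv, data1.csv, data2.csv;
# with the csv module inferred, only the *.csv files would be loaded.
ds = load_dataset("my_dataset_repository")
```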
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5850/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5850/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1037
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1037/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1037/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1037/events
|
https://github.com/huggingface/datasets/pull/1037
| 755,975,586
|
MDExOlB1bGxSZXF1ZXN0NTMxNTk2NDkx
| 1,037
|
Fix docs indentation issues
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"is this an issue ?",
"Yes @lhoestq, look at the docs site. For example, in https://huggingface.co/docs/datasets/add_dataset.html, look at the indentation in the code block under the sentence:\r\n> Here are the features of the SQuAD dataset for instance, which is taken from the squad dataset loading script:"
] | 2020-12-03T08:21:34Z
| 2020-12-22T16:01:15Z
| 2020-12-22T16:01:15Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1037.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1037",
"merged_at": "2020-12-22T16:01:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1037.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1037"
}
|
Replace tabs with spaces.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1037/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1037/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6421
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6421/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6421/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6421/events
|
https://github.com/huggingface/datasets/pull/6421
| 1,994,451,553
|
PR_kwDODunzps5fgG1h
| 6,421
|
Add pyarrow-hotfix to release docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004755 / 0.011353 (-0.006598) | 0.002683 / 0.011008 (-0.008325) | 0.061701 / 0.038508 (0.023193) | 0.030123 / 0.023109 (0.007013) | 0.238186 / 0.275898 (-0.037712) | 0.266570 / 0.323480 (-0.056910) | 0.002898 / 0.007986 (-0.005088) | 0.002381 / 0.004328 (-0.001948) | 0.048033 / 0.004250 (0.043782) | 0.044529 / 0.037052 (0.007477) | 0.246728 / 0.258489 (-0.011761) | 0.302066 / 0.293841 (0.008225) | 0.024008 / 0.128546 (-0.104539) | 0.006626 / 0.075646 (-0.069020) | 0.202000 / 0.419271 (-0.217272) | 0.056492 / 0.043533 (0.012959) | 0.243417 / 0.255139 (-0.011722) | 0.263947 / 0.283200 (-0.019253) | 0.020481 / 0.141683 (-0.121202) | 1.130635 / 1.452155 (-0.321520) | 1.180570 / 1.492716 (-0.312146) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095541 / 0.018006 (0.077535) | 0.306152 / 0.000490 (0.305662) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018593 / 0.037411 (-0.018818) | 0.063029 / 0.014526 (0.048503) | 0.074312 / 0.176557 (-0.102245) | 0.119882 / 0.737135 (-0.617254) | 0.074066 / 0.296338 (-0.222273) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275409 / 0.215209 (0.060200) | 2.727061 / 2.077655 (0.649407) | 1.415632 / 1.504120 (-0.088488) | 1.294922 / 1.541195 (-0.246273) | 1.341636 / 
1.468490 (-0.126854) | 0.403250 / 4.584777 (-4.181527) | 2.384657 / 3.745712 (-1.361055) | 2.604131 / 5.269862 (-2.665731) | 1.558888 / 4.565676 (-3.006789) | 0.046008 / 0.424275 (-0.378267) | 0.004819 / 0.007607 (-0.002789) | 0.331046 / 0.226044 (0.105002) | 3.340950 / 2.268929 (1.072021) | 1.801077 / 55.444624 (-53.643548) | 1.479162 / 6.876477 (-5.397315) | 1.503713 / 2.142072 (-0.638359) | 0.474931 / 4.805227 (-4.330296) | 0.101869 / 6.500664 (-6.398795) | 0.041946 / 0.075469 (-0.033523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955641 / 1.841788 (-0.886147) | 11.441032 / 8.074308 (3.366724) | 10.267731 / 10.191392 (0.076339) | 0.128735 / 0.680424 (-0.551689) | 0.013942 / 0.534201 (-0.520259) | 0.266620 / 0.579283 (-0.312663) | 0.262334 / 0.434364 (-0.172029) | 0.302713 / 0.540337 (-0.237624) | 0.430323 / 1.386936 (-0.956613) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004670 / 0.011353 (-0.006683) | 0.002671 / 0.011008 (-0.008338) | 0.048949 / 0.038508 (0.010441) | 0.052520 / 0.023109 (0.029411) | 0.272614 / 0.275898 (-0.003284) | 0.292618 / 0.323480 (-0.030862) | 0.004016 / 0.007986 (-0.003969) | 0.002430 / 0.004328 (-0.001899) | 0.048313 / 0.004250 (0.044063) | 0.038647 / 0.037052 (0.001595) | 0.279893 / 0.258489 (0.021404) | 0.305371 / 0.293841 (0.011530) | 0.023710 / 0.128546 (-0.104836) | 0.006999 / 0.075646 (-0.068648) | 0.053315 / 0.419271 (-0.365956) | 0.032417 / 0.043533 (-0.011115) | 0.272066 / 0.255139 (0.016927) | 0.291717 / 0.283200 (0.008518) | 0.018127 / 0.141683 (-0.123556) | 1.173611 / 1.452155 (-0.278544) | 1.183659 / 1.492716 (-0.309057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094831 / 0.018006 (0.076824) | 0.304911 / 0.000490 (0.304421) | 0.000225 / 0.000200 (0.000025) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020948 / 0.037411 (-0.016463) | 0.070255 / 0.014526 (0.055729) | 0.081371 / 0.176557 (-0.095186) | 0.118932 / 0.737135 (-0.618203) | 0.082207 / 0.296338 (-0.214132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294067 / 0.215209 (0.078858) | 2.856981 / 2.077655 (0.779326) | 1.598392 / 1.504120 (0.094273) | 1.479093 / 1.541195 (-0.062102) | 1.509495 / 1.468490 (0.041005) | 0.396303 / 4.584777 (-4.188473) | 2.429077 / 3.745712 (-1.316635) | 2.525037 / 5.269862 (-2.744824) | 1.503332 / 4.565676 (-3.062345) | 0.046191 / 0.424275 (-0.378084) | 0.004858 / 0.007607 (-0.002750) | 0.349528 / 0.226044 (0.123484) | 3.401451 / 2.268929 (1.132522) | 1.989613 / 55.444624 (-53.455012) | 1.664528 / 6.876477 (-5.211949) | 1.669076 / 2.142072 (-0.472997) | 0.467090 / 4.805227 (-4.338137) | 0.098137 / 6.500664 (-6.402527) | 0.040448 / 0.075469 (-0.035021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969578 / 1.841788 (-0.872210) | 12.064705 / 8.074308 (3.990396) | 10.991438 / 10.191392 (0.800046) | 0.130149 / 0.680424 (-0.550275) | 0.015357 / 0.534201 (-0.518844) | 0.266567 / 0.579283 (-0.312717) | 0.270619 / 0.434364 (-0.163744) | 0.305978 / 0.540337 (-0.234359) | 0.411164 / 1.386936 (-0.975772) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009810 / 0.011353 (-0.001543) | 0.005411 / 0.011008 (-0.005598) | 0.111670 / 0.038508 (0.073162) | 0.050288 / 0.023109 (0.027179) | 0.415625 / 0.275898 (0.139727) | 0.479382 / 0.323480 (0.155902) | 0.005104 / 0.007986 (-0.002882) | 0.007122 / 0.004328 (0.002793) | 0.079626 / 0.004250 (0.075375) | 0.079421 / 0.037052 (0.042369) | 0.406722 / 0.258489 (0.148233) | 0.461511 / 0.293841 (0.167670) | 0.053812 / 0.128546 (-0.074734) | 0.014315 / 0.075646 (-0.061331) | 0.389636 / 0.419271 (-0.029636) | 0.111859 / 0.043533 (0.068326) | 0.411703 / 0.255139 (0.156564) | 0.457072 / 0.283200 (0.173872) | 0.039807 / 0.141683 (-0.101876) | 1.744064 / 1.452155 (0.291909) | 1.968321 / 1.492716 (0.475604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.341839 / 0.018006 (0.323833) | 0.628083 / 0.000490 (0.627593) | 0.023787 / 0.000200 (0.023587) | 0.000601 / 0.000054 (0.000547) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034170 / 0.037411 (-0.003241) | 0.091159 / 0.014526 (0.076633) | 0.108993 / 0.176557 (-0.067563) | 0.186906 / 0.737135 (-0.550229) | 0.109753 / 0.296338 (-0.186586) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684138 / 0.215209 (0.468929) | 6.634852 / 2.077655 (4.557198) | 3.102870 / 1.504120 (1.598750) | 2.831023 / 1.541195 (1.289828) | 2.831597 / 1.468490 
(1.363107) | 0.903584 / 4.584777 (-3.681193) | 5.503341 / 3.745712 (1.757629) | 4.970283 / 5.269862 (-0.299579) | 3.139413 / 4.565676 (-1.426264) | 0.109848 / 0.424275 (-0.314427) | 0.008501 / 0.007607 (0.000894) | 0.823815 / 0.226044 (0.597770) | 7.963355 / 2.268929 (5.694426) | 4.002010 / 55.444624 (-51.442614) | 3.229390 / 6.876477 (-3.647087) | 3.166413 / 2.142072 (1.024341) | 1.030313 / 4.805227 (-3.774914) | 0.219394 / 6.500664 (-6.281270) | 0.077760 / 0.075469 (0.002291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580309 / 1.841788 (-0.261479) | 24.279185 / 8.074308 (16.204877) | 22.305293 / 10.191392 (12.113901) | 0.235711 / 0.680424 (-0.444713) | 0.030342 / 0.534201 (-0.503859) | 0.498137 / 0.579283 (-0.081146) | 0.619173 / 0.434364 (0.184809) | 0.529904 / 0.540337 (-0.010434) | 0.822547 / 1.386936 (-0.564389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009375 / 0.011353 (-0.001978) | 0.006009 / 0.011008 (-0.004999) | 0.074080 / 0.038508 (0.035572) | 0.089454 / 0.023109 (0.066345) | 0.473458 / 0.275898 (0.197560) | 0.462558 / 0.323480 (0.139078) | 0.006415 / 0.007986 (-0.001571) | 0.004777 / 0.004328 (0.000448) | 0.076563 / 0.004250 (0.072313) | 0.062793 / 0.037052 (0.025741) | 0.455860 / 0.258489 (0.197371) | 0.485281 / 0.293841 (0.191440) | 0.052966 / 0.128546 (-0.075580) | 0.021600 / 0.075646 (-0.054046) | 0.090407 / 0.419271 (-0.328864) | 0.063951 / 0.043533 (0.020418) | 0.487561 / 0.255139 (0.232422) | 0.479958 / 0.283200 (0.196758) | 0.039263 / 0.141683 (-0.102420) | 1.727215 / 1.452155 (0.275061) | 1.962039 / 1.492716 (0.469323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296267 / 0.018006 (0.278261) | 0.604982 / 0.000490 (0.604493) | 0.007842 / 0.000200 (0.007642) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.097796 / 0.014526 (0.083270) | 0.126034 / 0.176557 (-0.050522) | 0.180873 / 0.737135 (-0.556262) | 0.125410 / 0.296338 (-0.170928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.608278 / 0.215209 (0.393069) | 6.154006 / 2.077655 (4.076351) | 2.822342 / 1.504120 (1.318222) | 2.568263 / 1.541195 (1.027068) | 2.518545 / 1.468490 (1.050055) | 0.863186 / 4.584777 (-3.721591) | 5.367969 / 3.745712 (1.622257) | 4.737691 / 5.269862 (-0.532170) | 2.917620 / 4.565676 (-1.648056) | 0.100731 / 0.424275 (-0.323544) | 0.008611 / 0.007607 (0.001004) | 0.735523 / 0.226044 (0.509479) | 7.552790 / 2.268929 (5.283862) | 3.821835 / 55.444624 (-51.622789) | 2.878259 / 6.876477 (-3.998217) | 2.957686 / 2.142072 (0.815613) | 0.964630 / 4.805227 (-3.840598) | 0.207098 / 6.500664 (-6.293566) | 0.084215 / 0.075469 (0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.711020 / 1.841788 (-0.130768) | 24.034122 / 8.074308 (15.959814) | 21.378504 / 10.191392 (11.187112) | 0.233433 / 0.680424 (-0.446990) | 0.037214 / 0.534201 (-0.496987) | 0.511952 / 0.579283 (-0.067332) | 0.591486 / 0.434364 (0.157123) | 0.606549 / 0.540337 (0.066211) | 0.833773 / 1.386936 (-0.553163) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-15T10:06:44Z
| 2023-11-15T13:49:55Z
| 2023-11-15T13:38:22Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6421",
"merged_at": "2023-11-15T13:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6421"
}
|
Add `pyarrow-hotfix` to release docs.
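For context, the hotfix is applied simply by importing the package before reading untrusted Parquet/IPC data (a sketch based on the `pyarrow-hotfix` docs):
```python
import pyarrow_hotfix  # noqa: F401  - importing it patches pyarrow deserialization (CVE-2023-47248)
```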
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6421/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6421/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4660
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4660/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4660/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4660/events
|
https://github.com/huggingface/datasets/pull/4660
| 1,297,128,387
|
PR_kwDODunzps47AYDq
| 4,660
|
Fix _resolve_single_pattern_locally on Windows with multiple drives
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch ! Sorry I forgot (again) about windows paths when writing this x)"
] | 2022-07-07T09:57:30Z
| 2022-07-07T17:03:36Z
| 2022-07-07T16:52:07Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4660",
"merged_at": "2022-07-07T16:52:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4660"
}
|
Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init__
**kwargs,
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\builder.py:287: in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:761: in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:723: in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:321: in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:239: in _resolve_single_pattern_locally
for filepath in glob_iter
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:242: in <listcomp>
os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pytest-of-runneradmin\\pytest-0\\popen-gw0\\data6\\dataset.parquet'
start = '/'
...
E ValueError: path is on mount 'C:', start on mount 'D:'
```
This PR makes sure that `base_path` is on the same drive as `pattern`.
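For illustration, a self-contained sketch of the underlying `os.path.relpath` behaviour (using `ntpath` so it runs on any OS):
```python
import ntpath  # Windows path semantics, importable on any OS

# relpath refuses to relate paths that live on different drives, which is
# exactly the ValueError shown in the traceback above.
try:
    ntpath.relpath("C:\\Users\\runneradmin\\data\\dataset.parquet", start="D:\\")
except ValueError as e:
    print(e)  # path is on mount 'C:', start on mount 'D:'
```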
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4660/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4660/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2538/events
|
https://github.com/huggingface/datasets/issues/2538
| 927,940,691
|
MDU6SXNzdWU5Mjc5NDA2OTE=
| 2,538
|
Loading partial dataset when debugging
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9061913?v=4",
"events_url": "https://api.github.com/users/reachtarunhere/events{/privacy}",
"followers_url": "https://api.github.com/users/reachtarunhere/followers",
"following_url": "https://api.github.com/users/reachtarunhere/following{/other_user}",
"gists_url": "https://api.github.com/users/reachtarunhere/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/reachtarunhere",
"id": 9061913,
"login": "reachtarunhere",
"node_id": "MDQ6VXNlcjkwNjE5MTM=",
"organizations_url": "https://api.github.com/users/reachtarunhere/orgs",
"received_events_url": "https://api.github.com/users/reachtarunhere/received_events",
"repos_url": "https://api.github.com/users/reachtarunhere/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/reachtarunhere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reachtarunhere/subscriptions",
"type": "User",
"url": "https://api.github.com/users/reachtarunhere"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.\r\nThen when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `\"train[:10]\"`), then it will load the 10 first rows of the train split that you have on disk.\r\n\r\nTherefore, as long as you don't delete your cache, all your calls to `load_dataset` will be very fast. Except the first call that downloads the dataset of course ^^",
"Thatβs a use case for the new streaming feature, no?",
"Hi @reachtarunhere.\r\n\r\nBesides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download/preprocess/cache only some specific split(s), which will definitely speed up your workflow.\r\n\r\nIf this feature is interesting for you, I can ping you once it will be merged into the master branch.",
"Thanks all for responding.\r\n\r\nHey @albertvillanova \r\n\r\nThanks. Yes, I would be interested.\r\n\r\n@lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. There is no repeated downloading so it seems that the cache is working.\r\n\r\nI am not aware of the streaming feature @thomwolf mentioned. So I might need to read up on it.",
"@reshinthadithyan I use the .select function to have a fraction of indices.",
"If I want to create a dataset, containing only the 10 elements of a given dataset (slice it), how do I do that?",
"```python \r\nsmall_ds = ds.select(range(10))\r\n```",
"\r\n\r\n> ```python\r\n> small_ds = ds.select(range(10))\r\n> ```\r\n\r\nThanks, but this doesn't help me to save time during initial loading, right?",
"Indeed by default load_dataset would download and prepare everything as Arrow files. And passing `split=train[:10]` memory maps only the beginning of the full dataset that has been prepared on disk.\r\n\r\nIf you don't want to download everything, you can use streaming : \r\n```python \r\nids = load_dataset(..., streaming=True)\r\nfirst_samples = list(ids[\"train\"].take(10))\r\n```\r\n\r\nTo get a Dataset you can use \r\n```python \r\nds = Dataset.from_generator(ids.take(10).__iter__)\r\n```\r\n\r\nedit: fixed small bug",
"Thanks @lhoestq, but I don't think it is 100% accurate, as it doesn't keep the dataset structure exactly the same.\r\nTo load the full dataset, I do:\r\n```\r\ndata = load_dataset(\"json\", data_files=\"a.json\")\r\ntrain_data = data[\"train\"].shuffle()\r\n```\r\n\r\nBut when I am changing it as per your instructions: \r\n```\r\nids = load_dataset(\"json\", data_files=\"a.json\", streaming=True)\r\ndata = Dataset.from_generator(ids[\"train\"].take(1).__iter__)\r\ntrain_data = data[\"train\"].shuffle()\r\n```\r\nIt throws KeyError.\r\nI need a simple way, like you suggested, to have a subset of a Dataset, which exactly the same attributes.\r\n",
"Whoops I fixed my code sorry\r\n```diff\r\n- ds = Dataset.from_generator(ids[\"train\"].take(10).__iter__)\r\n+ ds = Dataset.from_generator(ids.take(10).__iter__)\r\n```\r\n\r\nin your case that means running\r\n```python\r\ntrain_data = data.shuffle()\r\n```\r\n\r\nwithout `[\"train\"]`"
] | 2021-06-23T07:19:52Z
| 2023-04-19T11:05:38Z
| null |
NONE
| null | null | null |
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute `load_dataset` for the imdb dataset it takes some time, even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues.
Is there a way to load only part of the dataset with `load_dataset`? This would really speed up my workflow.
Something like a debug mode would really help. Thanks!
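For reference, a minimal sketch of the two workarounds discussed in the comments (sliced splits and streaming); `imdb` here just mirrors the dataset mentioned above:
```python
from datasets import load_dataset

# 1. Sliced split: downloads and prepares the dataset once, then only
#    memory-maps the first 100 rows on later calls.
small = load_dataset("imdb", split="train[:100]")

# 2. Streaming: avoids downloading the full dataset entirely.
streamed = load_dataset("imdb", split="train", streaming=True)
first_ten = list(streamed.take(10))
```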
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2538/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5901
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5901/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5901/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5901/events
|
https://github.com/huggingface/datasets/pull/5901
| 1,727,179,016
|
PR_kwDODunzps5Rarux
| 5,901
|
Make prepare_split more robust to errors in metadata dataset_info splits
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008809 / 0.011353 (-0.002544) | 0.005641 / 0.011008 (-0.005367) | 0.124986 / 0.038508 (0.086477) | 0.037311 / 0.023109 (0.014202) | 0.388915 / 0.275898 (0.113017) | 0.430123 / 0.323480 (0.106643) | 0.007447 / 0.007986 (-0.000538) | 0.009593 / 0.004328 (0.005264) | 0.099148 / 0.004250 (0.094898) | 0.052393 / 0.037052 (0.015341) | 0.399779 / 0.258489 (0.141290) | 0.439109 / 0.293841 (0.145268) | 0.043409 / 0.128546 (-0.085137) | 0.016286 / 0.075646 (-0.059360) | 0.431198 / 0.419271 (0.011927) | 0.064932 / 0.043533 (0.021400) | 0.390650 / 0.255139 (0.135511) | 0.432883 / 0.283200 (0.149684) | 0.110978 / 0.141683 (-0.030705) | 1.796121 / 1.452155 (0.343967) | 1.960097 / 1.492716 (0.467381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286292 / 0.018006 (0.268286) | 0.659495 / 0.000490 (0.659005) | 0.008294 / 0.000200 (0.008094) | 0.000485 / 0.000054 (0.000431) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029325 / 0.037411 (-0.008086) | 0.125454 / 0.014526 (0.110928) | 0.136459 / 0.176557 (-0.040097) | 0.221075 / 0.737135 (-0.516060) | 0.140281 / 0.296338 (-0.156058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602401 / 0.215209 (0.387192) | 6.124553 / 2.077655 (4.046898) | 2.453141 / 1.504120 (0.949021) | 2.038611 / 1.541195 (0.497416) | 2.073611 / 1.468490 
(0.605121) | 0.938040 / 4.584777 (-3.646737) | 5.755972 / 3.745712 (2.010260) | 4.450935 / 5.269862 (-0.818926) | 2.337219 / 4.565676 (-2.228457) | 0.107118 / 0.424275 (-0.317157) | 0.015201 / 0.007607 (0.007594) | 0.785833 / 0.226044 (0.559788) | 7.732984 / 2.268929 (5.464055) | 3.236892 / 55.444624 (-52.207733) | 2.696402 / 6.876477 (-4.180074) | 2.805036 / 2.142072 (0.662964) | 1.108612 / 4.805227 (-3.696616) | 0.221067 / 6.500664 (-6.279597) | 0.085538 / 0.075469 (0.010068) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.600311 / 1.841788 (-0.241476) | 18.528118 / 8.074308 (10.453810) | 21.107199 / 10.191392 (10.915807) | 0.219489 / 0.680424 (-0.460934) | 0.028927 / 0.534201 (-0.505274) | 0.503446 / 0.579283 (-0.075837) | 0.619833 / 0.434364 (0.185469) | 0.582454 / 0.540337 (0.042117) | 0.709154 / 1.386936 (-0.677782) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008516 / 0.011353 (-0.002837) | 0.006090 / 0.011008 (-0.004918) | 0.104574 / 0.038508 (0.066066) | 0.042676 / 0.023109 (0.019566) | 0.458623 / 0.275898 (0.182725) | 0.568479 / 0.323480 (0.244999) | 0.008374 / 0.007986 (0.000389) | 0.004677 / 0.004328 (0.000349) | 0.105946 / 0.004250 (0.101695) | 0.055256 / 0.037052 (0.018204) | 0.511036 / 0.258489 (0.252547) | 0.598383 / 0.293841 (0.304542) | 0.043612 / 0.128546 (-0.084934) | 0.014707 / 0.075646 (-0.060940) | 0.116350 / 0.419271 (-0.302921) | 0.061413 / 0.043533 (0.017880) | 0.477785 / 0.255139 (0.222646) | 0.542643 / 0.283200 (0.259443) | 0.120431 / 0.141683 (-0.021252) | 1.994083 / 1.452155 (0.541928) | 2.100600 / 1.492716 (0.607883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298480 / 0.018006 (0.280474) | 0.601921 / 0.000490 (0.601432) | 0.000445 / 0.000200 (0.000245) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034784 / 0.037411 (-0.002627) | 0.133555 / 0.014526 (0.119029) | 0.138541 / 0.176557 (-0.038015) | 0.203114 / 0.737135 (-0.534021) | 0.153477 / 0.296338 (-0.142861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.780484 / 0.215209 (0.565275) | 7.150876 / 2.077655 (5.073222) | 3.168590 / 1.504120 (1.664470) | 2.698746 / 1.541195 (1.157552) | 2.695678 / 1.468490 (1.227188) | 1.037706 / 4.584777 (-3.547071) | 5.672631 / 3.745712 (1.926918) | 2.798137 / 5.269862 (-2.471725) | 1.738588 / 4.565676 (-2.827088) | 0.111160 / 0.424275 (-0.313115) | 0.013878 / 0.007607 (0.006271) | 0.800191 / 0.226044 (0.574146) | 8.546676 / 2.268929 (6.277748) | 4.116852 / 55.444624 (-51.327773) | 3.331271 / 6.876477 (-3.545206) | 3.307410 / 2.142072 (1.165337) | 1.191019 / 4.805227 (-3.614208) | 0.248953 / 6.500664 (-6.251711) | 0.086632 / 0.075469 (0.011162) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.795057 / 1.841788 (-0.046730) | 18.038785 / 8.074308 (9.964476) | 21.865566 / 10.191392 (11.674174) | 0.211058 / 0.680424 (-0.469366) | 0.026956 / 0.534201 (-0.507245) | 0.518855 / 0.579283 (-0.060428) | 0.618105 / 0.434364 (0.183741) | 0.569227 / 0.540337 (0.028889) | 0.705431 / 1.386936 (-0.681505) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008900 / 0.011353 (-0.002453) | 0.005726 / 0.011008 (-0.005283) | 0.131747 / 0.038508 (0.093239) | 0.040585 / 0.023109 (0.017476) | 0.420531 / 0.275898 (0.144633) | 0.459430 / 0.323480 (0.135950) | 0.007642 / 0.007986 (-0.000344) | 0.006750 / 0.004328 (0.002421) | 0.099147 / 0.004250 (0.094897) | 0.055852 / 0.037052 (0.018799) | 0.423653 / 0.258489 (0.165164) | 0.453304 / 0.293841 (0.159463) | 0.045247 / 0.128546 (-0.083300) | 0.016034 / 0.075646 (-0.059612) | 0.443115 / 0.419271 (0.023843) | 0.078853 / 0.043533 (0.035320) | 0.417508 / 0.255139 (0.162369) | 0.440936 / 0.283200 (0.157736) | 0.115603 / 0.141683 (-0.026080) | 1.844610 / 1.452155 (0.392456) | 1.998497 / 1.492716 (0.505781) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272622 / 0.018006 (0.254616) | 0.598045 / 0.000490 (0.597556) | 0.007088 / 0.000200 (0.006888) | 0.000159 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032976 / 0.037411 (-0.004436) | 0.143970 / 0.014526 (0.129444) | 0.142172 / 0.176557 (-0.034384) | 0.216747 / 0.737135 (-0.520389) | 0.146004 / 0.296338 (-0.150334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.687507 / 0.215209 (0.472298) | 6.549524 / 2.077655 (4.471870) | 2.924142 / 1.504120 (1.420022) | 2.504471 / 1.541195 (0.963277) | 2.496280 / 1.468490 
(1.027790) | 0.959054 / 4.584777 (-3.625723) | 5.851742 / 3.745712 (2.106030) | 4.983357 / 5.269862 (-0.286504) | 2.627403 / 4.565676 (-1.938274) | 0.112955 / 0.424275 (-0.311320) | 0.016206 / 0.007607 (0.008599) | 0.819158 / 0.226044 (0.593114) | 8.416949 / 2.268929 (6.148020) | 3.776765 / 55.444624 (-51.667859) | 3.002397 / 6.876477 (-3.874080) | 3.158852 / 2.142072 (1.016779) | 1.197099 / 4.805227 (-3.608129) | 0.280654 / 6.500664 (-6.220010) | 0.099471 / 0.075469 (0.024002) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687007 / 1.841788 (-0.154781) | 19.411976 / 8.074308 (11.337668) | 22.053482 / 10.191392 (11.862090) | 0.228038 / 0.680424 (-0.452386) | 0.028226 / 0.534201 (-0.505975) | 0.527695 / 0.579283 (-0.051588) | 0.635911 / 0.434364 (0.201547) | 0.618205 / 0.540337 (0.077868) | 0.735164 / 1.386936 (-0.651772) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009450 / 0.011353 (-0.001903) | 0.006566 / 0.011008 (-0.004442) | 0.108919 / 0.038508 (0.070411) | 0.050010 / 0.023109 (0.026900) | 0.505168 / 0.275898 (0.229270) | 0.552190 / 0.323480 (0.228710) | 0.007569 / 0.007986 (-0.000417) | 0.006807 / 0.004328 (0.002478) | 0.116621 / 0.004250 (0.112371) | 0.060374 / 0.037052 (0.023321) | 0.515165 / 0.258489 (0.256676) | 0.572125 / 0.293841 (0.278284) | 0.046561 / 0.128546 (-0.081986) | 0.016159 / 0.075646 (-0.059487) | 0.114568 / 0.419271 (-0.304704) | 0.064689 / 0.043533 (0.021157) | 0.497870 / 0.255139 (0.242731) | 0.567332 / 0.283200 (0.284132) | 0.126254 / 0.141683 (-0.015429) | 1.954074 / 1.452155 (0.501919) | 2.057682 / 1.492716 (0.564966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.013857 / 0.018006 (-0.004149) | 0.601561 / 0.000490 (0.601071) | 0.002897 / 0.000200 (0.002697) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038480 / 0.037411 (0.001069) | 0.142480 / 0.014526 (0.127954) | 0.160479 / 0.176557 (-0.016077) | 0.217942 / 0.737135 (-0.519194) | 0.159908 / 0.296338 (-0.136431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.697926 / 0.215209 (0.482717) | 6.869754 / 2.077655 (4.792100) | 3.125463 / 1.504120 (1.621343) | 2.729123 / 1.541195 (1.187928) | 2.855747 / 1.468490 (1.387257) | 1.015345 / 4.584777 (-3.569432) | 5.839176 / 3.745712 (2.093463) | 5.019678 / 5.269862 (-0.250184) | 2.080489 / 4.565676 (-2.485187) | 0.118884 / 0.424275 (-0.305391) | 0.021381 / 0.007607 (0.013774) | 0.877847 / 0.226044 (0.651803) | 8.714561 / 2.268929 (6.445633) | 3.933399 / 55.444624 (-51.511226) | 3.281809 / 6.876477 (-3.594668) | 3.330342 / 2.142072 (1.188269) | 1.235005 / 4.805227 (-3.570222) | 0.239686 / 6.500664 (-6.260978) | 0.093546 / 0.075469 (0.018077) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.787916 / 1.841788 (-0.053872) | 20.094828 / 8.074308 (12.020520) | 22.902101 / 10.191392 (12.710709) | 0.249315 / 0.680424 (-0.431109) | 0.028058 / 0.534201 (-0.506143) | 0.524960 / 0.579283 (-0.054323) | 0.643881 / 0.434364 (0.209517) | 0.621203 / 0.540337 (0.080866) | 0.723337 / 1.386936 (-0.663599) |\n\n</details>\n</details>\n\n\n"
] | 2023-05-26T08:48:22Z
| 2023-06-02T06:06:38Z
| 2023-06-01T13:39:40Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5901.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5901",
"merged_at": "2023-06-01T13:39:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5901.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5901"
}
|
This PR uses `split_generator.split_info` as the default value for `split_info` if an exception is raised while trying to get `split_generator.name` from `self.info.splits` (this may happen if there is an error in the metadata dataset_info splits).
Please note that `split_info` is only used by the logger.
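The fallback boils down to this pattern (a sketch with simplified names, not the exact diff):
```python
try:
    split_info = self.info.splits[split_generator.name]
except Exception:
    # The metadata splits may be missing or inconsistent; the generator's
    # own split_info is good enough, since it only feeds the logger.
    split_info = split_generator.split_info
```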
Fixes #5895 when passed `verification_mode="no_checks"`:
```python
ds = load_dataset(
"ArmelR/stack-exchange-instruction",
data_dir="data/finetune",
split="train",
verification_mode="no_checks",
revision="c609f1caade5cfbf3b9fe9cfa17d7cb000b457bd",
)
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5901/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5901/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4779
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4779/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4779/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4779/events
|
https://github.com/huggingface/datasets/issues/4779
| 1,325,997,225
|
I_kwDODunzps5PCRyp
| 4,779
|
Loading natural_questions requires apache_beam even with existing preprocessed data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-08-02T15:06:57Z
| 2022-08-02T16:03:18Z
| 2022-08-02T16:03:18Z
|
MEMBER
| null | null | null |
## Describe the bug
When loading "natural_questions", the package "apache_beam" is required:
```
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
This requirement is unnecessary once preprocessed data already exists and the script just needs to download it.
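A minimal sketch (hypothetical helper name) of the kind of guard this implies: require Beam only when the preprocessing actually has to run.
```python
import importlib.util

def ensure_beam_if_needed(preprocessed_available: bool) -> None:
    # Downloading already-preprocessed data needs no Beam at all.
    if preprocessed_available:
        return
    if importlib.util.find_spec("apache_beam") is None:
        raise ImportError("apache_beam is required to preprocess this dataset")
```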
## Steps to reproduce the bug
```python
load_dataset("natural_questions", "dev", split="validation", revision="main")
```
## Expected results
No ImportError raised.
## Actual results
```
ImportError Traceback (most recent call last)
[<ipython-input-3-c938e7c05d02>](https://localhost:8080/#) in <module>()
----> 1 from datasets import load_dataset; ds = load_dataset("natural_questions", "dev", split="validation", revision="main")
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1732 revision=revision,
1733 use_auth_token=use_auth_token,
-> 1734 **config_kwargs,
1735 )
1736
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1504 download_mode=download_mode,
1505 data_dir=data_dir,
-> 1506 data_files=data_files,
1507 )
1508
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1246 ) from None
-> 1247 raise e1 from None
1248 else:
1249 raise FileNotFoundError(
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1180 download_config=download_config,
1181 download_mode=download_mode,
-> 1182 dynamic_modules_path=dynamic_modules_path,
1183 ).get_module()
1184 elif path.count("/") == 1: # community dataset on the Hub
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self)
490 base_path=hf_github_url(path=self.name, name="", revision=revision),
491 imports=imports,
--> 492 download_config=self.download_config,
493 )
494 additional_files = [(config.DATASETDICT_INFOS_FILENAME, dataset_infos_path)] if dataset_infos_path else []
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in _download_additional_modules(name, base_path, imports, download_config)
214 _them_str = "them" if len(needs_to_be_installed) > 1 else "it"
215 raise ImportError(
--> 216 f"To be able to use {name}, you need to install the following {_depencencies_str}: "
217 f"{', '.join(needs_to_be_installed)}.\nPlease install {_them_str} using 'pip install "
218 f"{' '.join(needs_to_be_installed.values())}' for instance'"
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
## Environment info
Colab notebook.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4779/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4779/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6365
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6365/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6365/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6365/events
|
https://github.com/huggingface/datasets/issues/6365
| 1,970,140,392
|
I_kwDODunzps51bfTo
| 6,365
|
Parquet size grows exponentially for categorical data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseganti",
"id": 82567957,
"login": "aseganti",
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"repos_url": "https://api.github.com/users/aseganti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseganti"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Wrong repo."
] | 2023-10-31T10:29:02Z
| 2023-10-31T10:49:17Z
| 2023-10-31T10:49:17Z
|
NONE
| null | null | null |
### Describe the bug
It seems that when saving a data frame with a categorical column, the file size can grow dramatically.
This seems to happen because when we save categorical data to parquet, we store the data plus all the categories of the original data frame, even when those categories do not appear in the rows being saved.
### Steps to reproduce the bug
To reproduce the bug, it is enough to run this script:
```python
import pandas as pd
import os

if __name__ == "__main__":
    for n in [10, 1e2, 1e3, 1e4, 1e5]:
        for n_col in [1, 10, 100, 1000, 10000]:
            input = pd.DataFrame([{f"{col}": f"{i}_cat" for col in range(n_col)} for i in range(int(n))])
            input.iloc[0:100].to_parquet("a.parquet")
            for col in input.columns:
                input[col] = input[col].astype("category")
            input.iloc[0:100].to_parquet("b.parquet")
            a_size_mb = os.stat("a.parquet").st_size / (1024 * 1024)
            b_size_mb = os.stat("b.parquet").st_size / (1024 * 1024)
            print(f"{n} {n_col} {a_size_mb} {b_size_mb} {100 * b_size_mb / a_size_mb:.2f}")
```
That produces this output:
<img width="464" alt="Screenshot 2023-10-31 at 11 25 25" src="https://github.com/huggingface/datasets/assets/82567957/2b8a9284-7f9e-4c10-a006-0a27236ebd15">
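For reference, a hedged pandas-side workaround is to drop the unused categories before writing the slice:
```python
import pandas as pd

# Drop categories that do not occur in the slice, so the parquet file does
# not carry the full category dictionary of the original frame.
df = pd.DataFrame({"c": pd.Categorical([f"{i}_cat" for i in range(100_000)])})
small = df.iloc[:100].copy()
small["c"] = small["c"].cat.remove_unused_categories()
small.to_parquet("b_small.parquet")
```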
### Expected behavior
In my opinion either:
1. The two files should have (almost) the same size
2. There should be a warning telling the user that such a difference in size is possible
### Environment info
Python 3.8.18
pandas==2.0.3
numpy==1.24.4
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6365/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6365/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/huggingface/datasets/issues/580
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/580/comments
|
https://api.github.com/repos/huggingface/datasets/issues/580/events
|
https://github.com/huggingface/datasets/issues/580
| 694,954,551
|
MDU6SXNzdWU2OTQ5NTQ1NTE=
| 580
|
nlp re-creates already-existing caches when run from a script, but not within a shell
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] | 2020-09-07T10:23:50Z
| 2020-09-07T15:19:09Z
| 2020-09-07T14:26:41Z
|
CONTRIBUTOR
| null | null | null |
`nlp` keeps creating new caches for the same file when `filter` is launched from a script, but correctly reuses the cache from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```
twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.
As observed with @lhoestq.
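A hedged workaround in the meantime: pin the cache file name so the lambda's fingerprint no longer matters (the `cache_file_name` parameter follows the current `datasets` API and may differ in older `nlp` releases):
```python
import nlp

hans_easy_data = nlp.load_dataset('hans', split="validation").filter(
    lambda x: x['label'] == 0, cache_file_name="hans_easy.arrow"
)
```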
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/580/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/580/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/667
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/667/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/667/comments
|
https://api.github.com/repos/huggingface/datasets/issues/667/events
|
https://github.com/huggingface/datasets/issues/667
| 708,258,392
|
MDU6SXNzdWU3MDgyNTgzOTI=
| 667
|
Loss does not decrease with Datasets and Transformers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4",
"events_url": "https://api.github.com/users/wangcongcong123/events{/privacy}",
"followers_url": "https://api.github.com/users/wangcongcong123/followers",
"following_url": "https://api.github.com/users/wangcongcong123/following{/other_user}",
"gists_url": "https://api.github.com/users/wangcongcong123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangcongcong123",
"id": 23032865,
"login": "wangcongcong123",
"node_id": "MDQ6VXNlcjIzMDMyODY1",
"organizations_url": "https://api.github.com/users/wangcongcong123/orgs",
"received_events_url": "https://api.github.com/users/wangcongcong123/received_events",
"repos_url": "https://api.github.com/users/wangcongcong123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangcongcong123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangcongcong123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangcongcong123"
}
|
[] |
closed
| false
| null |
[] | null |
[
"And I tested it on T5ForConditionalGeneration, that works no problem.",
"Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread"
] | 2020-09-24T15:14:43Z
| 2021-01-01T20:01:25Z
| 2021-01-01T20:01:25Z
|
NONE
| null | null | null |
Hi,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering on the squad dataset. In that colab, the loss decreases as expected. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and would appreciate anyone pointing out what I'm missing.
```python
import torch
from datasets import load_dataset
from transformers import BertForSequenceClassification
from transformers import BertTokenizerFast

# Load our training dataset and tokenizer
dataset = load_dataset("glue", 'sst2')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
del dataset["test"]  # let's remove it in this demo

# Tokenize our training dataset
def convert_to_features(example_batch):
    encodings = tokenizer(example_batch["sentence"])
    encodings.update({"labels": example_batch["label"]})
    return encodings

encoded_dataset = dataset.map(convert_to_features, batched=True)

# Format our dataset to output torch.Tensor so we can train a pytorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
encoded_dataset.set_format(type='torch', columns=columns)

# Instantiate a PyTorch Dataloader around our dataset
# Let's do dynamic batching (pad on the fly with our own collate_fn)
def collate_fn(examples):
    return tokenizer.pad(examples, return_tensors='pt')

dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8)

# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Let's load a pretrained Bert model and a simple optimizer
model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train().to(device)

for i, batch in enumerate(dataloader):
    batch.to(device)
    outputs = model(**batch)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    model.zero_grad()
    print(f'Step {i} - loss: {loss:.3}')
```
In case it's needed:
- datasets == 1.0.2
- transformers == 3.2.0
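For what it's worth, a hedged variant of the training loop with two standard adjustments (shuffled batches and zeroing gradients on the optimizer); this is a common tweak, not a confirmed diagnosis for this issue:
```python
# Reuses encoded_dataset, collate_fn, model, optimizer and device from above.
dataloader = torch.utils.data.DataLoader(
    encoded_dataset['train'], collate_fn=collate_fn, batch_size=8, shuffle=True
)
for i, batch in enumerate(dataloader):
    optimizer.zero_grad()
    outputs = model(**batch.to(device))
    outputs.loss.backward()
    optimizer.step()
```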
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/667/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/667/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1753
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1753/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1753/events
|
https://github.com/huggingface/datasets/pull/1753
| 789,867,685
|
MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx
| 1,753
|
fix comet citations
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ricardorei",
"id": 17256847,
"login": "ricardorei",
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ricardorei"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-01-20T10:52:38Z
| 2021-01-20T14:39:30Z
| 2021-01-20T14:39:30Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"merged_at": "2021-01-20T14:39:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1753"
}
|
I realized COMET citations were not showing on the Hugging Face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1753/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2019
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2019/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2019/events
|
https://github.com/huggingface/datasets/pull/2019
| 826,625,706
|
MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy
| 2,019
|
Replace print with logging in dataset scripts
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq Maybe a script or even a test in `test_dataset_common.py` that verifies that a dataset script meets some set of quality standards (print calls and todos from the dataset script template are not present, etc.) could be added?",
"Yes definitely !"
] | 2021-03-09T20:59:34Z
| 2021-03-12T10:09:01Z
| 2021-03-11T16:14:19Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2019",
"merged_at": "2021-03-11T16:14:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2019"
}
|
Replaces `print(...)` in the dataset scripts with the library logger.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2019/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3914/events
|
https://github.com/huggingface/datasets/pull/3914
| 1,168,777,880
|
PR_kwDODunzps40aq2r
| 3,914
|
Use templates for doc-building jobs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3914). All of your documentation changes will be reflected on that endpoint.",
"You can ignore the CI failures btw, they're unrelated to this PR"
] | 2022-03-14T18:53:06Z
| 2022-03-17T15:02:59Z
| 2022-03-17T15:02:58Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3914",
"merged_at": "2022-03-17T15:02:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3914"
}
|
This PR updates the jobs for all doc-building related things by using the templates introduced in `doc-builder`. By putting those in one place, we make sure every repo gets the latest fixes to the doc-building GitHub Actions :-)
Note: all libraries must share the same Docker image for those doc-building jobs. For now, the image used (`huggingface/transformers-doc-builder`) contains all the extra steps of the datasets install needed for doc-building (mainly libsndfile), but if in the future some additional steps are necessary on top of `pip install -e .[dev]`, this Docker image will need to be updated with the extra deps.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3914/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3914/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/240
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/240/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/240/comments
|
https://api.github.com/repos/huggingface/datasets/issues/240/events
|
https://github.com/huggingface/datasets/issues/240
| 631,434,677
|
MDU6SXNzdWU2MzE0MzQ2Nzc=
| 240
|
Deterministic dataset loading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Yes good point !",
"I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok with it",
"I'm pretty sure it would solve the problem too.\r\n\r\nThe only other dataset that is not deterministic right now is `blog_authorship_corpus` (see #215) but this is a problem related to string encodings.",
"I think we should do the same also for `os.list_dir`"
] | 2020-06-05T09:03:26Z
| 2020-06-08T09:18:14Z
| 2020-06-08T09:18:14Z
|
MEMBER
| null | null | null |
When calling:
```python
import nlp
dataset = nlp.load_dataset("trivia_qa", split="validation[:1%]")
```
the resulting dataset is not deterministic over different google colabs.
After talking to @thomwolf, I suspect the reason is the use of `glob.glob` in this line:
https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/datasets/trivia_qa/trivia_qa.py#L180
which seems to return an ordering of files that depends on the filesystem:
https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered
I think we should go through all the dataset scripts and make sure to have deterministic behavior.
A simple solution for `glob.glob()` would be to just replace it with `sorted(glob.glob())` to have everything sorted by name.
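For illustration, a minimal sketch (the glob pattern here is a placeholder, not the one from the trivia_qa script):
```python
import glob

# sorted() pins the order by file name, independent of the filesystem
files = sorted(glob.glob("some_dir/*.json"))
```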
What do you think @lhoestq?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/240/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/240/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3897/events
|
https://github.com/huggingface/datasets/pull/3897
| 1,166,715,104
|
PR_kwDODunzps40UJH4
| 3,897
|
Align tqdm control/cache control with Transformers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3897). All of your documentation changes will be reflected on that endpoint."
] | 2022-03-11T18:12:22Z
| 2022-03-14T15:01:10Z
| 2022-03-14T15:01:08Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3897",
"merged_at": "2022-03-14T15:01:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3897"
}
|
This PR:
* aligns the `tqdm` logic with Transformers (follows https://github.com/huggingface/transformers/pull/15167) by moving the code to `utils/logging.py`, adding `enable_progress_bar`/`disable_progress_bar` and removing `set_progress_bar_enabled` (a note for @lhoestq: I'm not adding `logging.tqdm` to the public namespace in this PR to avoid the situation where `from datasets import *; tqdm` would overshadow the standard `tqdm`)
* aligns the cache control with the new `tqdm` logic by adding `enable_caching`/`disable_caching` to the public namespace and deprecating `set_caching_enabled` (not fully removing it because it's used more often than `set_progress_bar_enabled` and has a dedicated example in the old docs); see the usage sketch just below
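A minimal usage sketch of the toggles described above, assuming the locations stated in the bullets (caching toggles in the public namespace, progress-bar toggles in `utils/logging.py`):
```python
from datasets import enable_caching, disable_caching
from datasets.utils.logging import enable_progress_bar, disable_progress_bar

disable_progress_bar()  # hide tqdm bars during map/download/etc.
enable_progress_bar()   # show them again

disable_caching()       # don't write/reuse cache files for dataset transforms
enable_caching()        # restore the default caching behavior
```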
Fix #3586
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3897/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3897/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/555
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/555/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/555/comments
|
https://api.github.com/repos/huggingface/datasets/issues/555/events
|
https://github.com/huggingface/datasets/pull/555
| 690,197,725
|
MDExOlB1bGxSZXF1ZXN0NDc3MTI2OTIy
| 555
|
Upgrade pip in benchmark github action
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-09-01T14:37:26Z
| 2020-09-01T15:26:16Z
| 2020-09-01T15:26:15Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/555.diff",
"html_url": "https://github.com/huggingface/datasets/pull/555",
"merged_at": "2020-09-01T15:26:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/555.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/555"
}
|
It looks like it fixes the `import nlp` issue we have
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/555/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/555/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1096
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1096/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1096/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1096/events
|
https://github.com/huggingface/datasets/pull/1096
| 756,952,461
|
MDExOlB1bGxSZXF1ZXN0NTMyNDA5MDIx
| 1,096
|
FIX matinf link in ADD_NEW_DATASET.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/moussaKam",
"id": 28675016,
"login": "moussaKam",
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/moussaKam"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-04T09:33:25Z
| 2020-12-04T14:25:35Z
| 2020-12-04T14:25:35Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1096.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1096",
"merged_at": "2020-12-04T14:25:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1096.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1096"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1096/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1096/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/5114
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5114/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5114/events
|
https://github.com/huggingface/datasets/issues/5114
| 1,409,236,738
|
I_kwDODunzps5T_z8C
| 5,114
|
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hubert-Bonisseur",
"id": 48770768,
"login": "Hubert-Bonisseur",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hubert-Bonisseur"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.",
"What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```"
] | 2022-10-14T11:54:53Z
| 2022-11-19T07:13:10Z
| null |
CONTRIBUTOR
| null | null | null |
## Describe the bug
The function `load_from_disk` fails when using a remote filesystem because the `load_from_disk` method of `arrow_dataset.py` generates a wrong temporary local path:
```python
if is_remote_filesystem(fs):
src_dataset_path = extract_path_from_uri(dataset_path)
dataset_path = Dataset._build_local_temp_path(src_dataset_path)
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
```
If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`
Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice we have train twice)
Instead of downloading the remote folder itself, we should download the files inside it for the path to come out right:
```python
fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True)
```
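Alternatively, a sketch of the workaround mentioned in the comments above — downloading into the parent directory avoids the doubled folder name:
```python
fs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)
```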
## Steps to reproduce the bug
```python
fs = gcsfs.GCSFileSystem(**storage_options)
dataset = load_from_disk("common_voice_processed")  # loading a dataset previously saved locally works fine
dataset.save_to_disk(output_dir, fs=fs)  # works fine
dataset = load_from_disk(output_dir, fs=fs)  # crashes
```
## Expected results
The dataset is loaded
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.1.dev0
- Platform: macOS Monterey 12.5.1
- Python version: 3.8.13
- PyArrow version: 9.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5114/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3330
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3330/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3330/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3330/events
|
https://github.com/huggingface/datasets/pull/3330
| 1,065,176,619
|
PR_kwDODunzps4vFtF7
| 3,330
|
Change TriviaQA license (#3313)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avinashsai",
"id": 22453634,
"login": "avinashsai",
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avinashsai"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-11-28T03:26:45Z
| 2021-11-29T11:24:21Z
| 2021-11-29T11:24:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3330.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3330",
"merged_at": "2021-11-29T11:24:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3330.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3330"
}
|
Fixes (#3313)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3330/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3330/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5719
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5719/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5719/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5719/events
|
https://github.com/huggingface/datasets/issues/5719
| 1,659,203,222
|
I_kwDODunzps5i5W6W
| 5,719
|
Array2D feature creates a list of list instead of a numpy array
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/off99555",
"id": 15215732,
"login": "off99555",
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"repos_url": "https://api.github.com/users/off99555/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"type": "User",
"url": "https://api.github.com/users/off99555"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! \r\n\r\nYou need to set the format to `np` before indexing the dataset to get NumPy arrays:\r\n```python\r\nfeatures = Features(dict(seq=Array2D((2,2), 'float32'))) \r\nds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)\r\nds.set_format(\"np\")\r\na = ds[0]['seq']\r\n```\r\n\r\n> I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list?\r\n\r\nThe same dataset can have examples in different types (Numpy arrays, Torch tensors, Pandas series, etc.), so recovering them all would be slow and impractical. Instead, the design of our formatting API is similar to Arrow's (the lib we use internally to store data on disk/ in RAM), which allows converting a batch of data to Python/Numpy/Pandas in a single call (and uses C++ to do so to make it faster).\r\n\r\n> Also if I change the first dimension of the Array2D shape to None, it's returning array correctly.\r\n\r\nSetting the first dimension to `None` makes it variable-length (allows passing arrays with the first dimensions of differing lengths).\r\n",
"Current behavior when indexing the dataset:\r\n- Using `Array((2,2))` returns a list of lists.\r\n- Using `Array((None,2))` returns a numpy array.\r\n\r\nDon't you think this is kind of unexpected behavior from end-user perspective? \r\nAs a user, I expect that when I use `Array2D`, the behavior needs to be consistent even if I specify None or not. It should either return a list or an array. It needs to choose one. Let's say if it always return a list, then I will call `ds.set_format('np')` no problem.\r\n\r\nThe consistency can be in any of these aspects:\r\n1. preserves the type of the input data (in this case, a numpy array)\r\n2. ensure the output type is always the same (it can be either list or array, but it needs to be one of them)\r\n\r\nRight now the API doesn't conform to any of these aspects. But I think it needs to conform to one.",
"I thought we made this consistent by returning lists in both scenarios...",
"Fixed in #5751 "
] | 2023-04-07T21:04:08Z
| 2023-04-20T15:34:41Z
| 2023-04-20T15:34:41Z
|
NONE
| null | null | null |
### Describe the bug
I'm not sure whether this is expected behavior. When I create a 2D array using `Array2D`, the data comes back as a list instead of a numpy array. I don't think this should be the expected behavior, especially since I feed a numpy array to the dataset-creation function. Why is it converting my array into a list?
Also, if I change the first dimension of the `Array2D` shape to `None`, it returns an array correctly.
### Steps to reproduce the bug
Run this code:
```py
from datasets import Dataset, Features, Array2D
import numpy as np
# you have to change the first dimension of the shape to None to make it return an array
features = Features(dict(seq=Array2D((2,2), 'float32')))
ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)
a = ds[0]['seq']
print(a)
print(type(a))
```
The following will be printed in stdout:
```
[[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]]
<class 'list'>
```
### Expected behavior
Each indexed item should consistently be either a list or a numpy array. Currently, `Array2D((2,2), ...)` yields a list while `Array2D((None,2), ...)` yields an array.
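For reference, a sketch of the workaround from the first comment above — setting the format to NumPy before indexing:
```python
ds.set_format("np")  # format rows as NumPy on access
a = ds[0]['seq']     # now a numpy.ndarray instead of a list of lists
```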
### Environment info
- `datasets` version: 2.11.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5719/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5719/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5908
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5908/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5908/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5908/events
|
https://github.com/huggingface/datasets/issues/5908
| 1,728,653,935
|
I_kwDODunzps5nCSpv
| 5,908
|
Unbearably slow sorting on big mapped datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4",
"events_url": "https://api.github.com/users/maximxlss/events{/privacy}",
"followers_url": "https://api.github.com/users/maximxlss/followers",
"following_url": "https://api.github.com/users/maximxlss/following{/other_user}",
"gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maximxlss",
"id": 29152154,
"login": "maximxlss",
"node_id": "MDQ6VXNlcjI5MTUyMTU0",
"organizations_url": "https://api.github.com/users/maximxlss/orgs",
"received_events_url": "https://api.github.com/users/maximxlss/received_events",
"repos_url": "https://api.github.com/users/maximxlss/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maximxlss"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! `shard` currently returns a slow dataset by default, with examples evenly distributed in the dataset.\r\n\r\nYou can get a fast dataset using `contiguous=True` (which should be the default imo):\r\n\r\n```python\r\ndataset = dataset.shard(10, 0, contiguous=True)\r\n```\r\n\r\nThis way you don't need to flatten_indices() and sort should be fast as well",
"@lhoestq \r\n\r\n> contiguous=True (which should be the default imo)\r\n\r\nFor `IterableDataset`, it's not possible to implement contiguous sharding without knowing the number of examples in advance, so setting the default value to `contiguous=True` would result in an inconsistency between `Dataset` and `IterableDataset` (when we add `IterableDataset.shard`)",
"Actually sharded iterable datasets are made of sub iterables that generally yield contiguous data no ? So in a way it's possible to shard an iterable dataset contiguously.\r\n\r\nIf the dataset is made of one shard it's indeed not possible to shard it contiguously though",
"> Actually sharded iterable datasets are made of sub iterables that generally yield contiguous data no ? So in a way it's possible to shard an iterable dataset contiguously.\r\n\r\nBut sharding an iterable dataset by sharding its `gen_kwargs` would still yield approximate shards(not equal to `Dataset.shard`), no? ",
"Yes indeed !",
"I understand the issue doesn't exist with non-mapped datasets, but if flattening is so much more efficient than sorting the indices, that's an issue in itself.\n\nThere are plenty of issues people posted for which the root cause turns out to be the same. It seems like mapped datasets are terribly inefficient. I think I saw some issue like that somewhere (about the mapped datasets in general), but can't find it now.\n\nMaybe indices should be flattened before any additional processing, then."
] | 2023-05-27T11:08:32Z
| 2023-06-13T17:45:10Z
| null |
CONTRIBUTOR
| null | null | null |
### Describe the bug
For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is roughly a 6.5x slowdown. Moreover, it seems to slow down much faster than linearly with bigger datasets (I wasn't able to sort 700k lines at all, whereas flattening them takes about a minute).
### Steps to reproduce the bug
```Python
from datasets import load_dataset
import time
dataset = load_dataset("xnli", "en", split="train")
dataset = dataset.shard(10, 0)
print(len(dataset))
t = time.time()
# dataset = dataset.flatten_indices() # uncomment this line and it's fast
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
print(f"finished in {time.time() - t:.4f} seconds")
```
### Expected behavior
Expect sorting to take the same or less time than flattening and then sorting.
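As a side note, the workaround suggested in the first comment above is to request contiguous shards, which keeps sorting fast without an explicit flatten:
```python
dataset = dataset.shard(10, 0, contiguous=True)  # contiguous shard -> fast sort, no indices mapping needed
```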
### Environment info
- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5908/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5908/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/252
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/252/comments
|
https://api.github.com/repos/huggingface/datasets/issues/252/events
|
https://github.com/huggingface/datasets/issues/252
| 634,563,239
|
MDU6SXNzdWU2MzQ1NjMyMzk=
| 252
|
NonMatchingSplitsSizesError error when reading the IMDB dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antmarakis",
"id": 17463361,
"login": "antmarakis",
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antmarakis"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?",
"I updated it, that was it, thanks!",
"Hello, I am facing the same problem... how do you clear the huggingface cache?",
"Hi ! The cache is at ~/.cache/huggingface\r\nYou can just delete this folder if needed :)"
] | 2020-06-08T12:26:24Z
| 2021-08-27T15:20:58Z
| 2020-06-08T14:01:26Z
|
NONE
| null | null | null |
Hi!
I am trying to load the `imdb` dataset with this line:
`dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')`
but I am getting the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset
save_infos=save_infos,
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
Am I overlooking something? Thanks!
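For later readers: per the comments above, updating `nlp` fixed this, and the cache can also be cleared manually. A hypothetical sketch — note this removes everything cached under `~/.cache/huggingface`:
```python
import shutil
from pathlib import Path

# The comments above note the cache lives at ~/.cache/huggingface;
# deleting it forces a clean re-download (models included).
shutil.rmtree(Path.home() / ".cache" / "huggingface")
```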
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/252/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/963/events
|
https://github.com/huggingface/datasets/pull/963
| 754,451,234
|
MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4
| 963
|
add CODAH dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-01T14:37:05Z
| 2020-12-02T13:45:58Z
| 2020-12-02T13:21:25Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/963.diff",
"html_url": "https://github.com/huggingface/datasets/pull/963",
"merged_at": "2020-12-02T13:21:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/963.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/963"
}
|
Adding the CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/963/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/963/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6333
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6333/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6333/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6333/events
|
https://github.com/huggingface/datasets/issues/6333
| 1,956,714,423
|
I_kwDODunzps50oRe3
| 6,333
|
Support fsspec 2023.10.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2023-10-23T09:14:53Z
| 2023-10-23T09:15:10Z
| null |
MEMBER
| null | null | null |
Once the root issue is fixed, remove the temporary pin of `fsspec<2023.10.0` introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6333/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6333/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/662
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/662/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/662/comments
|
https://api.github.com/repos/huggingface/datasets/issues/662/events
|
https://github.com/huggingface/datasets/pull/662
| 706,689,866
|
MDExOlB1bGxSZXF1ZXN0NDkxMTkyNTM3
| 662
|
Created dataset card snli.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
}
|
[
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] |
closed
| false
| null |
[] | null |
[
"Resubmitting on a new fork"
] | 2020-09-22T21:00:17Z
| 2023-09-24T09:50:16Z
| 2020-09-22T21:26:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/662",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/662"
}
|
First draft of a dataset card using the SNLI corpus as an example
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/662/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/662/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3734
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3734/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3734/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3734/events
|
https://github.com/huggingface/datasets/pull/3734
| 1,140,050,336
|
PR_kwDODunzps4y7ZU2
| 3,734
|
Fix bugs in NewsQA dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-02-16T13:51:28Z
| 2022-02-17T07:54:26Z
| 2022-02-17T07:54:25Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3734.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3734",
"merged_at": "2022-02-17T07:54:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3734.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3734"
}
|
Fix #3733.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3734/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3734/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6154
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6154/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6154/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6154/events
|
https://github.com/huggingface/datasets/pull/6154
| 1,854,595,943
|
PR_kwDODunzps5YItlH
| 6,154
|
Use yaml instead of get data patterns when possible
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006829 / 0.011353 (-0.004524) | 0.004535 / 0.011008 (-0.006473) | 0.085255 / 0.038508 (0.046747) | 0.080861 / 0.023109 (0.057752) | 0.366023 / 0.275898 (0.090125) | 0.403095 / 0.323480 (0.079615) | 0.005615 / 0.007986 (-0.002370) | 0.003830 / 0.004328 (-0.000498) | 0.064502 / 0.004250 (0.060251) | 0.053916 / 0.037052 (0.016863) | 0.366010 / 0.258489 (0.107521) | 0.414565 / 0.293841 (0.120724) | 0.031500 / 0.128546 (-0.097046) | 0.009252 / 0.075646 (-0.066394) | 0.289584 / 0.419271 (-0.129688) | 0.052984 / 0.043533 (0.009451) | 0.352626 / 0.255139 (0.097487) | 0.390964 / 0.283200 (0.107764) | 0.025118 / 0.141683 (-0.116565) | 1.462316 / 1.452155 (0.010161) | 1.565682 / 1.492716 (0.072966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294432 / 0.018006 (0.276426) | 0.618366 / 0.000490 (0.617876) | 0.003270 / 0.000200 (0.003071) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031194 / 0.037411 (-0.006217) | 0.088892 / 0.014526 (0.074366) | 0.102580 / 0.176557 (-0.073977) | 0.159449 / 0.737135 (-0.577686) | 0.104434 / 0.296338 (-0.191905) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385690 / 0.215209 (0.170481) | 3.832782 / 2.077655 (1.755128) | 1.862521 / 1.504120 (0.358401) | 1.685674 / 1.541195 (0.144479) | 1.724984 / 1.468490 
(0.256494) | 0.483700 / 4.584777 (-4.101077) | 3.664154 / 3.745712 (-0.081558) | 3.323023 / 5.269862 (-1.946839) | 2.055958 / 4.565676 (-2.509718) | 0.056990 / 0.424275 (-0.367285) | 0.007674 / 0.007607 (0.000067) | 0.460642 / 0.226044 (0.234598) | 4.609964 / 2.268929 (2.341036) | 2.434868 / 55.444624 (-53.009756) | 2.003347 / 6.876477 (-4.873130) | 2.209520 / 2.142072 (0.067448) | 0.629363 / 4.805227 (-4.175864) | 0.135434 / 6.500664 (-6.365230) | 0.060498 / 0.075469 (-0.014971) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253917 / 1.841788 (-0.587870) | 19.988953 / 8.074308 (11.914645) | 14.353739 / 10.191392 (4.162347) | 0.165987 / 0.680424 (-0.514437) | 0.018299 / 0.534201 (-0.515902) | 0.395532 / 0.579283 (-0.183751) | 0.418708 / 0.434364 (-0.015656) | 0.460865 / 0.540337 (-0.079472) | 0.633925 / 1.386936 (-0.753011) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006631 / 0.011353 (-0.004722) | 0.004109 / 0.011008 (-0.006899) | 0.065003 / 0.038508 (0.026495) | 0.080407 / 0.023109 (0.057297) | 0.362966 / 0.275898 (0.087068) | 0.389727 / 0.323480 (0.066247) | 0.005588 / 0.007986 (-0.002397) | 0.003517 / 0.004328 (-0.000812) | 0.065821 / 0.004250 (0.061570) | 0.057614 / 0.037052 (0.020561) | 0.367422 / 0.258489 (0.108932) | 0.400706 / 0.293841 (0.106865) | 0.031560 / 0.128546 (-0.096986) | 0.008659 / 0.075646 (-0.066987) | 0.070756 / 0.419271 (-0.348516) | 0.049821 / 0.043533 (0.006288) | 0.360836 / 0.255139 (0.105697) | 0.383981 / 0.283200 (0.100781) | 0.023719 / 0.141683 (-0.117963) | 1.485197 / 1.452155 (0.033043) | 1.544899 / 1.492716 (0.052182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336480 / 0.018006 (0.318474) | 0.532839 / 0.000490 (0.532349) | 0.003767 / 0.000200 (0.003567) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034132 / 0.037411 (-0.003280) | 0.090131 / 0.014526 (0.075605) | 0.104086 / 0.176557 (-0.072471) | 0.158385 / 0.737135 (-0.578751) | 0.106417 / 0.296338 (-0.189922) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416462 / 0.215209 (0.201253) | 4.160409 / 2.077655 (2.082755) | 2.195355 / 1.504120 (0.691235) | 2.051234 / 1.541195 (0.510040) | 2.012116 / 1.468490 (0.543626) | 0.477414 / 4.584777 (-4.107363) | 3.590326 / 3.745712 (-0.155386) | 3.318490 / 5.269862 (-1.951371) | 2.064124 / 4.565676 (-2.501553) | 0.057040 / 0.424275 (-0.367235) | 0.007283 / 0.007607 (-0.000324) | 0.480490 / 0.226044 (0.254445) | 4.804013 / 2.268929 (2.535084) | 2.625940 / 55.444624 (-52.818685) | 2.231537 / 6.876477 (-4.644939) | 2.441649 / 2.142072 (0.299576) | 0.573207 / 4.805227 (-4.232020) | 0.131685 / 6.500664 (-6.368979) | 0.060112 / 0.075469 (-0.015357) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.358587 / 1.841788 (-0.483200) | 20.457562 / 8.074308 (12.383254) | 14.236304 / 10.191392 (4.044912) | 0.152860 / 0.680424 (-0.527563) | 0.018466 / 0.534201 (-0.515735) | 0.401391 / 0.579283 (-0.177893) | 0.410252 / 0.434364 (-0.024111) | 0.484335 / 0.540337 (-0.056002) | 0.663818 / 1.386936 (-0.723118) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007725 / 0.011353 (-0.003628) | 0.004448 / 0.011008 (-0.006560) | 0.098689 / 0.038508 (0.060180) | 0.082919 / 0.023109 (0.059809) | 0.380707 / 0.275898 (0.104809) | 0.452977 / 0.323480 (0.129497) | 0.004430 / 0.007986 (-0.003555) | 0.003712 / 0.004328 (-0.000616) | 0.076675 / 0.004250 (0.072425) | 0.062281 / 0.037052 (0.025228) | 0.403370 / 0.258489 (0.144881) | 0.464557 / 0.293841 (0.170716) | 0.035646 / 0.128546 (-0.092900) | 0.009776 / 0.075646 (-0.065870) | 0.341955 / 0.419271 (-0.077316) | 0.059515 / 0.043533 (0.015983) | 0.388421 / 0.255139 (0.133282) | 0.439496 / 0.283200 (0.156296) | 0.029090 / 0.141683 (-0.112593) | 1.727473 / 1.452155 (0.275319) | 1.810448 / 1.492716 (0.317732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221215 / 0.018006 (0.203208) | 0.486660 / 0.000490 (0.486171) | 0.005467 / 0.000200 (0.005267) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032491 / 0.037411 (-0.004920) | 0.094446 / 0.014526 (0.079920) | 0.110339 / 0.176557 (-0.066217) | 0.175004 / 0.737135 (-0.562131) | 0.109209 / 0.296338 (-0.187129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453966 / 0.215209 (0.238757) | 4.515842 / 2.077655 (2.438187) | 2.240512 / 1.504120 (0.736392) | 2.059911 / 1.541195 (0.518717) | 2.150635 / 1.468490 
(0.682145) | 0.564509 / 4.584777 (-4.020268) | 4.055208 / 3.745712 (0.309496) | 3.614084 / 5.269862 (-1.655778) | 2.295760 / 4.565676 (-2.269917) | 0.066507 / 0.424275 (-0.357768) | 0.008909 / 0.007607 (0.001302) | 0.542604 / 0.226044 (0.316560) | 5.412162 / 2.268929 (3.143233) | 2.758757 / 55.444624 (-52.685867) | 2.430693 / 6.876477 (-4.445784) | 2.669866 / 2.142072 (0.527793) | 0.681756 / 4.805227 (-4.123471) | 0.156524 / 6.500664 (-6.344140) | 0.069499 / 0.075469 (-0.005970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.571591 / 1.841788 (-0.270197) | 22.543437 / 8.074308 (14.469129) | 16.068426 / 10.191392 (5.877034) | 0.169860 / 0.680424 (-0.510564) | 0.021216 / 0.534201 (-0.512985) | 0.468745 / 0.579283 (-0.110538) | 0.475924 / 0.434364 (0.041560) | 0.535574 / 0.540337 (-0.004763) | 0.733823 / 1.386936 (-0.653113) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008038 / 0.011353 (-0.003315) | 0.004565 / 0.011008 (-0.006443) | 0.076892 / 0.038508 (0.038384) | 0.089559 / 0.023109 (0.066450) | 0.456752 / 0.275898 (0.180854) | 0.497282 / 0.323480 (0.173802) | 0.005991 / 0.007986 (-0.001995) | 0.003784 / 0.004328 (-0.000545) | 0.076339 / 0.004250 (0.072089) | 0.066050 / 0.037052 (0.028998) | 0.462708 / 0.258489 (0.204219) | 0.503711 / 0.293841 (0.209870) | 0.037098 / 0.128546 (-0.091448) | 0.009869 / 0.075646 (-0.065777) | 0.083678 / 0.419271 (-0.335594) | 0.058166 / 0.043533 (0.014633) | 0.461839 / 0.255139 (0.206700) | 0.481546 / 0.283200 (0.198347) | 0.027755 / 0.141683 (-0.113928) | 1.738490 / 1.452155 (0.286335) | 1.832276 / 1.492716 (0.339560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329935 / 0.018006 (0.311929) | 0.497438 / 0.000490 (0.496949) | 0.034644 / 0.000200 (0.034444) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035427 / 0.037411 (-0.001984) | 0.105689 / 0.014526 (0.091163) | 0.117706 / 0.176557 (-0.058850) | 0.177862 / 0.737135 (-0.559273) | 0.116791 / 0.296338 (-0.179547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.484851 / 0.215209 (0.269642) | 4.804346 / 2.077655 (2.726691) | 2.494801 / 1.504120 (0.990681) | 2.320185 / 1.541195 (0.778990) | 2.374090 / 1.468490 (0.905600) | 0.567397 / 4.584777 (-4.017380) | 4.087402 / 3.745712 (0.341690) | 3.794245 / 5.269862 (-1.475616) | 2.378481 / 4.565676 (-2.187195) | 0.068228 / 0.424275 (-0.356047) | 0.008740 / 0.007607 (0.001133) | 0.574876 / 0.226044 (0.348832) | 5.742644 / 2.268929 (3.473716) | 3.047661 / 55.444624 (-52.396963) | 2.729742 / 6.876477 (-4.146735) | 2.852510 / 2.142072 (0.710438) | 0.679450 / 4.805227 (-4.125777) | 0.156162 / 6.500664 (-6.344502) | 0.074051 / 0.075469 (-0.001418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.576182 / 1.841788 (-0.265605) | 23.298147 / 8.074308 (15.223839) | 16.344621 / 10.191392 (6.153229) | 0.167571 / 0.680424 (-0.512852) | 0.021423 / 0.534201 (-0.512778) | 0.464511 / 0.579283 (-0.114772) | 0.453257 / 0.434364 (0.018893) | 0.563439 / 0.540337 (0.023102) | 0.764759 / 1.386936 (-0.622177) |\n\n</details>\n</details>\n\n\n",
"This should also fix https://github.com/huggingface/datasets/issues/6140, so please link it with this PR before merging.",
"Done !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006719 / 0.011353 (-0.004634) | 0.004299 / 0.011008 (-0.006709) | 0.085296 / 0.038508 (0.046788) | 0.085144 / 0.023109 (0.062035) | 0.361703 / 0.275898 (0.085805) | 0.397721 / 0.323480 (0.074241) | 0.005920 / 0.007986 (-0.002065) | 0.003853 / 0.004328 (-0.000476) | 0.065633 / 0.004250 (0.061383) | 0.057000 / 0.037052 (0.019947) | 0.379981 / 0.258489 (0.121492) | 0.419041 / 0.293841 (0.125200) | 0.031225 / 0.128546 (-0.097322) | 0.008868 / 0.075646 (-0.066779) | 0.288808 / 0.419271 (-0.130463) | 0.052391 / 0.043533 (0.008859) | 0.362349 / 0.255139 (0.107210) | 0.399858 / 0.283200 (0.116658) | 0.025843 / 0.141683 (-0.115840) | 1.498988 / 1.452155 (0.046834) | 1.547290 / 1.492716 (0.054574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278091 / 0.018006 (0.260085) | 0.621794 / 0.000490 (0.621305) | 0.003770 / 0.000200 (0.003570) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029128 / 0.037411 (-0.008283) | 0.082061 / 0.014526 (0.067536) | 0.101758 / 0.176557 (-0.074799) | 0.155724 / 0.737135 (-0.581411) | 0.102173 / 0.296338 (-0.194165) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387145 / 0.215209 (0.171935) | 3.868262 / 2.077655 (1.790607) | 1.886440 / 1.504120 (0.382320) | 1.723305 / 1.541195 (0.182111) | 1.805411 / 1.468490 
(0.336921) | 0.485024 / 4.584777 (-4.099753) | 3.637859 / 3.745712 (-0.107853) | 3.319593 / 5.269862 (-1.950269) | 2.087860 / 4.565676 (-2.477817) | 0.056992 / 0.424275 (-0.367283) | 0.007623 / 0.007607 (0.000016) | 0.468182 / 0.226044 (0.242138) | 4.681112 / 2.268929 (2.412183) | 2.407010 / 55.444624 (-53.037614) | 2.026604 / 6.876477 (-4.849872) | 2.298158 / 2.142072 (0.156086) | 0.581839 / 4.805227 (-4.223388) | 0.132101 / 6.500664 (-6.368563) | 0.060472 / 0.075469 (-0.014997) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236422 / 1.841788 (-0.605365) | 20.505168 / 8.074308 (12.430860) | 14.356081 / 10.191392 (4.164689) | 0.148808 / 0.680424 (-0.531616) | 0.018433 / 0.534201 (-0.515768) | 0.391323 / 0.579283 (-0.187960) | 0.413142 / 0.434364 (-0.021222) | 0.453484 / 0.540337 (-0.086853) | 0.620771 / 1.386936 (-0.766165) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007030 / 0.011353 (-0.004323) | 0.004430 / 0.011008 (-0.006578) | 0.065578 / 0.038508 (0.027070) | 0.090751 / 0.023109 (0.067642) | 0.389121 / 0.275898 (0.113223) | 0.424657 / 0.323480 (0.101177) | 0.006575 / 0.007986 (-0.001410) | 0.003855 / 0.004328 (-0.000473) | 0.066175 / 0.004250 (0.061925) | 0.063255 / 0.037052 (0.026202) | 0.397161 / 0.258489 (0.138672) | 0.435291 / 0.293841 (0.141450) | 0.031622 / 0.128546 (-0.096925) | 0.008900 / 0.075646 (-0.066747) | 0.071694 / 0.419271 (-0.347577) | 0.049161 / 0.043533 (0.005628) | 0.386214 / 0.255139 (0.131075) | 0.404571 / 0.283200 (0.121372) | 0.024821 / 0.141683 (-0.116862) | 1.489514 / 1.452155 (0.037359) | 1.576139 / 1.492716 (0.083423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289884 / 0.018006 (0.271878) | 0.629342 / 0.000490 (0.628852) | 0.004799 / 0.000200 (0.004599) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032081 / 0.037411 (-0.005331) | 0.088152 / 0.014526 (0.073626) | 0.107289 / 0.176557 (-0.069267) | 0.164598 / 0.737135 (-0.572537) | 0.108395 / 0.296338 (-0.187944) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426723 / 0.215209 (0.211514) | 4.267719 / 2.077655 (2.190064) | 2.289657 / 1.504120 (0.785537) | 2.117435 / 1.541195 (0.576240) | 2.187292 / 1.468490 (0.718802) | 0.478387 / 4.584777 (-4.106390) | 3.625096 / 3.745712 (-0.120616) | 3.408036 / 5.269862 (-1.861826) | 2.124117 / 4.565676 (-2.441559) | 0.056537 / 0.424275 (-0.367738) | 0.007489 / 0.007607 (-0.000118) | 0.502434 / 0.226044 (0.276389) | 5.025357 / 2.268929 (2.756428) | 2.740554 / 55.444624 (-52.704070) | 2.418841 / 6.876477 (-4.457635) | 2.730764 / 2.142072 (0.588691) | 0.600013 / 4.805227 (-4.205214) | 0.133039 / 6.500664 (-6.367625) | 0.061466 / 0.075469 (-0.014003) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330211 / 1.841788 (-0.511577) | 21.092100 / 8.074308 (13.017792) | 14.463054 / 10.191392 (4.271662) | 0.154149 / 0.680424 (-0.526274) | 0.018891 / 0.534201 (-0.515310) | 0.393078 / 0.579283 (-0.186205) | 0.415279 / 0.434364 (-0.019085) | 0.479469 / 0.540337 (-0.060868) | 0.659953 / 1.386936 (-0.726983) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-17T09:17:05Z
| 2023-08-17T20:46:25Z
| 2023-08-17T20:37:19Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6154",
"merged_at": "2023-08-17T20:37:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6154"
}
|
This would make data files resolution faster: there is no need to list all the data files just to infer which dataset builder to use.
Fix https://github.com/huggingface/datasets/issues/6140
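As a rough sketch of the idea (the mapping and helper below are illustrative, not this PR's actual code): builder inference can stop at the first file whose extension maps to a packaged module, instead of enumerating every data file in the repository.

```python
from pathlib import Path

# Illustrative extension -> packaged module mapping (an assumption, not the real table).
EXTENSION_TO_MODULE = {
    ".csv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".parquet": "parquet",
    ".txt": "text",
}

def infer_module_from_sample(filenames):
    """Return the packaged module for the first file with a recognized extension."""
    for name in filenames:
        module = EXTENSION_TO_MODULE.get(Path(name).suffix.lower())
        if module is not None:
            return module
    return None

print(infer_module_from_sample(["data/train-00000-of-00042.parquet"]))  # -> "parquet"
```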
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6154/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6154/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2534
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2534/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2534/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2534/events
|
https://github.com/huggingface/datasets/pull/2534
| 927,201,435
|
MDExOlB1bGxSZXF1ZXN0Njc1MzkzODg0
| 2,534
|
Sync with transformers disabling NOTSET
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?",
"Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..."
] | 2021-06-22T12:54:21Z
| 2021-06-24T14:42:47Z
| 2021-06-24T14:42:47Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2534.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2534",
"merged_at": "2021-06-24T14:42:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2534.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2534"
}
|
Close #2528.
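For context, the pattern flagged in the review comments above compares `logger.getEffectiveLevel()` against `WARNING`, which is ambiguous when loggers are left at `NOTSET`. A minimal sketch of the general idea behind "disabling NOTSET" (the default level and the check shown here are illustrative, not necessarily this PR's exact code):

```python
import logging

library_root_logger = logging.getLogger("datasets")

# If the library root logger is left at NOTSET (0), its effective level is
# inherited from the Python root logger, so a check such as
#     not_verbose = bool(logger.getEffectiveLevel() > logging.WARNING)
# depends on whatever the host application configured globally.

# "Disable NOTSET": pin an explicit default on the library root logger so the
# effective level is always well defined for the library's own checks.
if library_root_logger.level == logging.NOTSET:
    library_root_logger.setLevel(logging.WARNING)

not_verbose = library_root_logger.getEffectiveLevel() > logging.WARNING
```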
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2534/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2534/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6064
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6064/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6064/events
|
https://github.com/huggingface/datasets/pull/6064
| 1,818,703,725
|
PR_kwDODunzps5WPzAv
| 6,064
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006704 / 0.011353 (-0.004649) | 0.004208 / 0.011008 (-0.006800) | 0.085895 / 0.038508 (0.047387) | 0.079303 / 0.023109 (0.056193) | 0.353430 / 0.275898 (0.077532) | 0.390814 / 0.323480 (0.067334) | 0.006565 / 0.007986 (-0.001420) | 0.003588 / 0.004328 (-0.000740) | 0.065249 / 0.004250 (0.060999) | 0.059772 / 0.037052 (0.022720) | 0.356315 / 0.258489 (0.097826) | 0.404812 / 0.293841 (0.110971) | 0.031127 / 0.128546 (-0.097419) | 0.008656 / 0.075646 (-0.066991) | 0.288734 / 0.419271 (-0.130537) | 0.053157 / 0.043533 (0.009625) | 0.354651 / 0.255139 (0.099512) | 0.370590 / 0.283200 (0.087391) | 0.024944 / 0.141683 (-0.116738) | 1.472393 / 1.452155 (0.020238) | 1.548946 / 1.492716 (0.056229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223430 / 0.018006 (0.205424) | 0.567359 / 0.000490 (0.566870) | 0.006744 / 0.000200 (0.006544) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030174 / 0.037411 (-0.007237) | 0.084865 / 0.014526 (0.070339) | 0.098986 / 0.176557 (-0.077571) | 0.161458 / 0.737135 (-0.575678) | 0.099198 / 0.296338 (-0.197141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404324 / 0.215209 (0.189115) | 4.043744 / 2.077655 (1.966090) | 1.972834 / 1.504120 (0.468714) | 1.801634 / 1.541195 (0.260439) | 1.891198 / 1.468490 
(0.422708) | 0.488511 / 4.584777 (-4.096266) | 3.566890 / 3.745712 (-0.178823) | 3.369415 / 5.269862 (-1.900447) | 2.054995 / 4.565676 (-2.510682) | 0.057225 / 0.424275 (-0.367050) | 0.007360 / 0.007607 (-0.000247) | 0.471813 / 0.226044 (0.245769) | 4.734397 / 2.268929 (2.465468) | 2.526585 / 55.444624 (-52.918039) | 2.230535 / 6.876477 (-4.645942) | 2.434403 / 2.142072 (0.292330) | 0.630090 / 4.805227 (-4.175137) | 0.138544 / 6.500664 (-6.362120) | 0.060099 / 0.075469 (-0.015370) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260951 / 1.841788 (-0.580837) | 20.051513 / 8.074308 (11.977204) | 14.675938 / 10.191392 (4.484546) | 0.169535 / 0.680424 (-0.510889) | 0.018574 / 0.534201 (-0.515627) | 0.394255 / 0.579283 (-0.185028) | 0.412713 / 0.434364 (-0.021651) | 0.475891 / 0.540337 (-0.064446) | 0.658223 / 1.386936 (-0.728713) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006969 / 0.011353 (-0.004384) | 0.004417 / 0.011008 (-0.006591) | 0.064399 / 0.038508 (0.025891) | 0.082928 / 0.023109 (0.059819) | 0.402285 / 0.275898 (0.126387) | 0.440032 / 0.323480 (0.116552) | 0.005896 / 0.007986 (-0.002090) | 0.003580 / 0.004328 (-0.000749) | 0.065340 / 0.004250 (0.061090) | 0.060363 / 0.037052 (0.023311) | 0.417413 / 0.258489 (0.158924) | 0.448527 / 0.293841 (0.154686) | 0.032238 / 0.128546 (-0.096308) | 0.008820 / 0.075646 (-0.066826) | 0.071516 / 0.419271 (-0.347755) | 0.050614 / 0.043533 (0.007081) | 0.406565 / 0.255139 (0.151426) | 0.422527 / 0.283200 (0.139328) | 0.025866 / 0.141683 (-0.115817) | 1.512256 / 1.452155 (0.060101) | 1.568433 / 1.492716 (0.075717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266521 / 0.018006 (0.248515) | 0.564524 / 0.000490 (0.564034) | 0.005236 / 0.000200 (0.005036) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031998 / 0.037411 (-0.005413) | 0.090754 / 0.014526 (0.076229) | 0.105954 / 0.176557 (-0.070602) | 0.164506 / 0.737135 (-0.572629) | 0.108792 / 0.296338 (-0.187546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422044 / 0.215209 (0.206835) | 4.204449 / 2.077655 (2.126795) | 2.232060 / 1.504120 (0.727940) | 2.060389 / 1.541195 (0.519194) | 2.152723 / 1.468490 (0.684233) | 0.488456 / 4.584777 (-4.096321) | 3.591102 / 3.745712 (-0.154611) | 5.250401 / 5.269862 (-0.019461) | 3.060259 / 4.565676 (-1.505417) | 0.057558 / 0.424275 (-0.366717) | 0.007881 / 0.007607 (0.000274) | 0.508631 / 0.226044 (0.282587) | 5.064857 / 2.268929 (2.795928) | 2.719068 / 55.444624 (-52.725556) | 2.389992 / 6.876477 (-4.486485) | 2.595073 / 2.142072 (0.453000) | 0.590179 / 4.805227 (-4.215048) | 0.136149 / 6.500664 (-6.364515) | 0.062546 / 0.075469 (-0.012923) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369252 / 1.841788 (-0.472535) | 20.637580 / 8.074308 (12.563272) | 14.217129 / 10.191392 (4.025737) | 0.195464 / 0.680424 (-0.484960) | 0.018452 / 0.534201 (-0.515749) | 0.397044 / 0.579283 (-0.182239) | 0.401127 / 0.434364 (-0.033237) | 0.465033 / 0.540337 (-0.075305) | 0.613484 / 1.386936 (-0.773452) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004559) | 0.004374 / 0.011008 (-0.006635) | 0.084958 / 0.038508 (0.046450) | 0.080440 / 0.023109 (0.057331) | 0.317951 / 0.275898 (0.042053) | 0.376133 / 0.323480 (0.052653) | 0.005775 / 0.007986 (-0.002211) | 0.003644 / 0.004328 (-0.000684) | 0.064823 / 0.004250 (0.060573) | 0.059442 / 0.037052 (0.022390) | 0.319636 / 0.258489 (0.061147) | 0.389668 / 0.293841 (0.095827) | 0.031181 / 0.128546 (-0.097365) | 0.008725 / 0.075646 (-0.066921) | 0.288514 / 0.419271 (-0.130757) | 0.053466 / 0.043533 (0.009933) | 0.323131 / 0.255139 (0.067992) | 0.345276 / 0.283200 (0.062076) | 0.025046 / 0.141683 (-0.116637) | 1.491659 / 1.452155 (0.039504) | 1.562105 / 1.492716 (0.069389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286325 / 0.018006 (0.268319) | 0.578021 / 0.000490 (0.577531) | 0.007240 / 0.000200 (0.007040) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030163 / 0.037411 (-0.007248) | 0.082100 / 0.014526 (0.067574) | 0.098331 / 0.176557 (-0.078225) | 0.160517 / 0.737135 (-0.576618) | 0.098479 / 0.296338 (-0.197859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401782 / 0.215209 (0.186573) | 4.006330 / 2.077655 (1.928675) | 2.033841 / 1.504120 (0.529721) | 1.853248 / 1.541195 (0.312053) | 1.980046 / 1.468490 
(0.511556) | 0.480636 / 4.584777 (-4.104141) | 3.684482 / 3.745712 (-0.061231) | 5.601940 / 5.269862 (0.332079) | 3.369683 / 4.565676 (-1.195993) | 0.057105 / 0.424275 (-0.367170) | 0.007462 / 0.007607 (-0.000145) | 0.474860 / 0.226044 (0.248815) | 4.749624 / 2.268929 (2.480695) | 2.492084 / 55.444624 (-52.952540) | 2.157985 / 6.876477 (-4.718491) | 2.420997 / 2.142072 (0.278925) | 0.574718 / 4.805227 (-4.230509) | 0.134672 / 6.500664 (-6.365992) | 0.061677 / 0.075469 (-0.013792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284151 / 1.841788 (-0.557637) | 20.186823 / 8.074308 (12.112515) | 14.247024 / 10.191392 (4.055632) | 0.171606 / 0.680424 (-0.508818) | 0.018619 / 0.534201 (-0.515582) | 0.394156 / 0.579283 (-0.185127) | 0.424684 / 0.434364 (-0.009679) | 0.476056 / 0.540337 (-0.064281) | 0.668751 / 1.386936 (-0.718185) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006807 / 0.011353 (-0.004546) | 0.004142 / 0.011008 (-0.006867) | 0.065503 / 0.038508 (0.026995) | 0.083232 / 0.023109 (0.060122) | 0.378278 / 0.275898 (0.102380) | 0.410191 / 0.323480 (0.086711) | 0.005660 / 0.007986 (-0.002326) | 0.003486 / 0.004328 (-0.000842) | 0.066109 / 0.004250 (0.061859) | 0.059654 / 0.037052 (0.022601) | 0.375965 / 0.258489 (0.117476) | 0.420046 / 0.293841 (0.126205) | 0.031587 / 0.128546 (-0.096959) | 0.008693 / 0.075646 (-0.066953) | 0.071121 / 0.419271 (-0.348151) | 0.049468 / 0.043533 (0.005935) | 0.373785 / 0.255139 (0.118646) | 0.395577 / 0.283200 (0.112377) | 0.024138 / 0.141683 (-0.117545) | 1.465451 / 1.452155 (0.013297) | 1.547565 / 1.492716 (0.054849) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325241 / 0.018006 (0.307234) | 0.532415 / 0.000490 (0.531925) | 0.004755 / 0.000200 (0.004555) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033472 / 0.037411 (-0.003939) | 0.090574 / 0.014526 (0.076048) | 0.106712 / 0.176557 (-0.069845) | 0.164353 / 0.737135 (-0.572783) | 0.109344 / 0.296338 (-0.186994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420161 / 0.215209 (0.204952) | 4.192334 / 2.077655 (2.114679) | 2.178181 / 1.504120 (0.674061) | 2.017405 / 1.541195 (0.476211) | 2.182783 / 1.468490 (0.714293) | 0.484037 / 4.584777 (-4.100740) | 3.641911 / 3.745712 (-0.103801) | 5.543874 / 5.269862 (0.274013) | 3.440084 / 4.565676 (-1.125593) | 0.056662 / 0.424275 (-0.367614) | 0.007773 / 0.007607 (0.000166) | 0.498357 / 0.226044 (0.272313) | 4.951315 / 2.268929 (2.682386) | 2.656732 / 55.444624 (-52.787892) | 2.370566 / 6.876477 (-4.505910) | 2.682289 / 2.142072 (0.540217) | 0.598479 / 4.805227 (-4.206749) | 0.151546 / 6.500664 (-6.349118) | 0.063278 / 0.075469 (-0.012191) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.385897 / 1.841788 (-0.455891) | 20.961851 / 8.074308 (12.887543) | 14.465688 / 10.191392 (4.274296) | 0.166156 / 0.680424 (-0.514268) | 0.018848 / 0.534201 (-0.515353) | 0.401712 / 0.579283 (-0.177571) | 0.416674 / 0.434364 (-0.017690) | 0.471834 / 0.540337 (-0.068503) | 0.622463 / 1.386936 (-0.764473) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-24T15:56:00Z
| 2023-07-24T16:05:19Z
| 2023-07-24T15:56:10Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6064",
"merged_at": "2023-07-24T15:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6064"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6064/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4823
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4823/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4823/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4823/events
|
https://github.com/huggingface/datasets/pull/4823
| 1,335,687,033
|
PR_kwDODunzps49A0O_
| 4,823
|
Update data URL in mkqa dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-11T09:16:13Z
| 2022-08-11T09:51:50Z
| 2022-08-11T09:37:52Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4823.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4823",
"merged_at": "2022-08-11T09:37:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4823.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4823"
}
|
Update data URL in mkqa dataset.
Fix #4817.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4823/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4823/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3479
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3479/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3479/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3479/events
|
https://github.com/huggingface/datasets/issues/3479
| 1,088,232,880
|
I_kwDODunzps5A3R2w
| 3,479
|
Dataset preview is not available (I think for all Hugging Face datasets)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/66887439?v=4",
"events_url": "https://api.github.com/users/Abirate/events{/privacy}",
"followers_url": "https://api.github.com/users/Abirate/followers",
"following_url": "https://api.github.com/users/Abirate/following{/other_user}",
"gists_url": "https://api.github.com/users/Abirate/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Abirate",
"id": 66887439,
"login": "Abirate",
"node_id": "MDQ6VXNlcjY2ODg3NDM5",
"organizations_url": "https://api.github.com/users/Abirate/orgs",
"received_events_url": "https://api.github.com/users/Abirate/received_events",
"repos_url": "https://api.github.com/users/Abirate/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Abirate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abirate/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Abirate"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null |
[
"You're right, we have an issue today with the datasets preview. We're investigating.",
"It should be fixed now. Thanks for reporting.",
"Down again. ",
"Fixed for good."
] | 2021-12-24T08:18:48Z
| 2021-12-24T14:27:46Z
| 2021-12-24T14:27:46Z
|
NONE
| null | null | null |
## Dataset viewer issue for '*french_book_reviews*'
**Link:** https://huggingface.co/datasets/Abirate/french_book_reviews
**short description of the issue**
For my dataset, the dataset preview is no longer functional (it used to work: the dataset had been added the day before and it was fine...)
And, after looking over other datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. CET).
**Am I the one who added this dataset?** Yes
**Note**: here is a screenshot showing the issue

**And here for the glue dataset:**

|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3479/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3479/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/612
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/612/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/612/comments
|
https://api.github.com/repos/huggingface/datasets/issues/612/events
|
https://github.com/huggingface/datasets/pull/612
| 699,008,644
|
MDExOlB1bGxSZXF1ZXN0NDg0Nzk2Mjg5
| 612
|
add multi-proc to dataset dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-09-11T08:18:13Z
| 2020-09-11T10:20:13Z
| 2020-09-11T10:20:11Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/612.diff",
"html_url": "https://github.com/huggingface/datasets/pull/612",
"merged_at": "2020-09-11T10:20:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/612.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/612"
}
|
Add multi-proc to `DatasetDict`
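
A minimal usage sketch (the dataset and function here are illustrative): passing `num_proc` to `DatasetDict.map` shards each split and applies the function in parallel worker processes, mirroring the existing `Dataset.map` behavior.

```python
from datasets import load_dataset

dset_dict = load_dataset("imdb")  # a DatasetDict of splits

def uppercase(example):
    example["text"] = example["text"].upper()
    return example

# Each split is processed with 4 worker processes.
dset_dict = dset_dict.map(uppercase, num_proc=4)
```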
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/612/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/612/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4973
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4973/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4973/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4973/events
|
https://github.com/huggingface/datasets/pull/4973
| 1,371,600,074
|
PR_kwDODunzps4-33JW
| 4,973
|
[GH->HF] Load datasets from the Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate of:\r\n- #4059"
] | 2022-09-13T15:01:41Z
| 2023-09-24T10:06:02Z
| 2022-09-15T15:24:26Z
|
MEMBER
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4973.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4973",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4973.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4973"
}
|
Currently, datasets with no namespace (e.g. squad, glue) are loaded from GitHub.
In this PR I changed this logic to use the Hugging Face Hub instead.
This is the first step in removing all the dataset scripts from this repository,
related to the discussions in https://github.com/huggingface/datasets/pull/4059 (I should have continued from that PR, actually)
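
The user-facing call stays the same; only where the script and data are resolved from changes. For illustration:

```python
from datasets import load_dataset

# A bare name now resolves against the Hugging Face Hub rather than the
# datasets/ folder of this GitHub repository.
squad = load_dataset("squad")
mrpc = load_dataset("glue", "mrpc")
```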
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4973/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4973/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/768
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/768/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/768/comments
|
https://api.github.com/repos/huggingface/datasets/issues/768/events
|
https://github.com/huggingface/datasets/issues/768
| 730,908,060
|
MDU6SXNzdWU3MzA5MDgwNjA=
| 768
|
Add a `lazy_map` method to `Dataset` and `DatasetDict`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"This is cool! I think some aspects to think about and decide in terms of API are:\r\n- do we allow several methods (chained i guess)\r\n- how do we inspect the currently set method(s)\r\n- how do we control/reset them"
] | 2020-10-27T22:33:03Z
| 2020-10-28T08:58:13Z
| null |
CONTRIBUTOR
| null | null | null |
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives).
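A rough sketch of the requested behavior. The `lazy_map` name is hypothetical (taken from this issue); note that later `datasets` releases expose `set_transform`, which already applies a function on the fly when items are accessed:
```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")

def augment(batch):
    # Runs on every access, so each epoch can see different outputs
    # (plug in real augmentation or random masking here).
    return {"text": [text.lower() for text in batch["text"]]}

# Existing on-the-fly alternative to the proposed `lazy_map`:
dataset.set_transform(augment)
print(dataset[0])  # `augment` is applied only when the item is requested
```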
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/768/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/768/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2203
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2203/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2203/events
|
https://github.com/huggingface/datasets/pull/2203
| 855,053,595
|
MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5
| 2,203
|
updated banking77 train and test data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4",
"events_url": "https://api.github.com/users/hsali/events{/privacy}",
"followers_url": "https://api.github.com/users/hsali/followers",
"following_url": "https://api.github.com/users/hsali/following{/other_user}",
"gists_url": "https://api.github.com/users/hsali/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hsali",
"id": 6765330,
"login": "hsali",
"node_id": "MDQ6VXNlcjY3NjUzMzA=",
"organizations_url": "https://api.github.com/users/hsali/orgs",
"received_events_url": "https://api.github.com/users/hsali/received_events",
"repos_url": "https://api.github.com/users/hsali/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsali/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hsali"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?",
"Closing for inactivity. Feel free to re-open if you want to push this change"
] | 2021-04-10T12:10:10Z
| 2021-04-23T14:33:39Z
| 2021-04-23T14:33:39Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2203",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2203"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2203/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/1034
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1034/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1034/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1034/events
|
https://github.com/huggingface/datasets/pull/1034
| 755,936,327
|
MDExOlB1bGxSZXF1ZXN0NTMxNTY0MjA0
| 1,034
|
add scb_mt_enth_2020
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cstorm125",
"id": 15519308,
"login": "cstorm125",
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cstorm125"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-03T07:13:49Z
| 2020-12-03T16:57:23Z
| 2020-12-03T16:57:23Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1034",
"merged_at": "2020-12-03T16:57:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1034"
}
|
## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents.
The methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner.
We train machine translation models based on this dataset. Our models' performance is comparable to that of the
Google Translation API (as of May 2020) for Thai-English, and our models outperform Google when the Open Parallel Corpus (OPUS) is
included in the training data for both Thai-English and English-Thai translation.
The dataset, pre-trained models, and source code to reproduce our work are available for public use.
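For reference, a minimal loading sketch (the `"enth"` config name is an assumption; check the dataset card for the exact configuration names):
```python
from datasets import load_dataset

# English-Thai direction; the "enth" config name is assumed here
dataset = load_dataset("scb_mt_enth_2020", "enth", split="train")
print(dataset[0])  # expected to contain an English/Thai segment pair
```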
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1034/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1034/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6066
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6066/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6066/events
|
https://github.com/huggingface/datasets/issues/6066
| 1,819,717,542
|
I_kwDODunzps5sdq-m
| 6,066
|
AttributeError: '_tqdm_cls' object has no attribute '_lock'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! I opened https://github.com/huggingface/datasets/pull/6067 to add the missing `_lock`\r\n\r\nWe'll do a patch release soon, but feel free to install `datasets` from source in the meantime",
"I have tested the latest main, it does not work.\r\n\r\nI add more logs to reproduce this issue, it looks like a multi threading bug:\r\n\r\n```python\r\n@contextmanager\r\ndef ensure_lock(tqdm_class, lock_name=\"\"):\r\n \"\"\"get (create if necessary) and then restore `tqdm_class`'s lock\"\"\"\r\n import os\r\n import threading\r\n print(os.getpid(), threading.get_ident(), \"ensure_lock\", tqdm_class, lock_name)\r\n old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock\r\n lock = old_lock or tqdm_class.get_lock() # maybe create a new lock\r\n lock = getattr(lock, lock_name, lock) # maybe subtype\r\n tqdm_class.set_lock(lock)\r\n print(os.getpid(), threading.get_ident(), \"set_lock\")\r\n yield lock\r\n if old_lock is None:\r\n print(os.getpid(), threading.get_ident(), \"del tqdm_class\")\r\n del tqdm_class._lock\r\n else:\r\n tqdm_class.set_lock(old_lock)\r\n```\r\noutput\r\n```\r\n64943 8424758784 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 8424758784 set_lock\r\n64943 8424758784 del tqdm_class\r\n64943 8424758784 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 8424758784 set_lock\r\n64943 8424758784 del tqdm_class\r\n64943 11638370304 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11638370304 set_lock\r\n64943 11568967680 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11568967680 set_lock\r\n64943 11638370304 del tqdm_class\r\n64943 11638370304 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11638370304 set_lock\r\n64943 11638370304 del tqdm_class\r\n64943 11568967680 del tqdm_class\r\n```\r\n\r\nThread `11638370304` del the _lock from tqdm_class first, then thread `11568967680` del _lock failed.",
"Maybe it is a bug of tqdm? I think simply use `try ... except AttributeError ...` wraps `del tqdm_class._lock` should work.",
"Yes it looks like a bug on their end indeed, do you want to open a PR on tqdm ?\r\n\r\nLet me see if I can find a workaround in the meantime",
"I opened https://github.com/huggingface/datasets/pull/6068 if you want to try it out",
"> I opened #6068 if you want to try it out\r\n\r\nThis fix works! Thanks.",
"Awesome ! closing this then :)\r\nWe'll do a patch release today or tomorrow"
] | 2023-07-25T07:24:36Z
| 2023-07-26T10:56:25Z
| 2023-07-26T10:56:24Z
|
NONE
| null | null | null |
### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
return thread_map(
^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
del tqdm_class._lock
^^^^^^^^^^^^^^^^
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```
### Steps to reproduce the bug
Happens occasionally.
### Expected behavior
I added a print in tqdm's `ensure_lock()` and got an `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>` print.
According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24
```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
"""get (create if necessary) and then restore `tqdm_class`'s lock"""
print("ensure_lock", tqdm_class, lock_name)
old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock
lock = old_lock or tqdm_class.get_lock() # maybe create a new lock
lock = getattr(lock, lock_name, lock) # maybe subtype
tqdm_class.set_lock(lock)
yield lock
if old_lock is None:
del tqdm_class._lock # <-- It tries to del the `_lock` attribute from tqdm_class.
else:
tqdm_class.set_lock(old_lock)
```
But Hugging Face Datasets' `datasets.utils.logging._tqdm_cls` does not have a `_lock` attribute: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205
```python
class _tqdm_cls:
def __call__(self, *args, disable=False, **kwargs):
if _tqdm_active and not disable:
return tqdm_lib.tqdm(*args, **kwargs)
else:
return EmptyTqdm(*args, **kwargs)
def set_lock(self, *args, **kwargs):
self._lock = None
if _tqdm_active:
return tqdm_lib.tqdm.set_lock(*args, **kwargs)
def get_lock(self):
if _tqdm_active:
return tqdm_lib.tqdm.get_lock()
```
### Environment info
Python 3.11.4
tqdm '4.65.0'
datasets master
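A minimal sketch of the `try`/`except` workaround suggested in the comments above, applied to tqdm's `ensure_lock` (this illustrates the discussed fix, not necessarily the patch that was merged):
```python
from contextlib import contextmanager

@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
    """get (create if necessary) and then restore `tqdm_class`'s lock"""
    old_lock = getattr(tqdm_class, "_lock", None)  # don't create a new lock
    lock = old_lock or tqdm_class.get_lock()  # maybe create a new lock
    lock = getattr(lock, lock_name, lock)  # maybe subtype
    tqdm_class.set_lock(lock)
    yield lock
    if old_lock is None:
        try:
            del tqdm_class._lock
        except AttributeError:
            pass  # another thread may have deleted it already
    else:
        tqdm_class.set_lock(old_lock)
```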
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6066/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6406
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6406/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6406/events
|
https://github.com/huggingface/datasets/issues/6406
| 1,990,469,045
|
I_kwDODunzps52pCW1
| 6,406
|
CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-11-13T11:36:10Z
| 2023-11-14T10:05:36Z
| 2023-11-14T10:05:36Z
|
MEMBER
| null | null | null |
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
```
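`TypeAliasType` was added in the `typing_extensions` 4.6 line, so this usually means the CI environment resolved an older release; a quick diagnostic sketch (the 4.6 threshold is an assumption based on the `typing_extensions` changelog):
```python
from importlib.metadata import version

print(version("typing_extensions"))  # a 4.6+ release is needed
from typing_extensions import TypeAliasType  # raises ImportError on older versions
```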
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6406/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6036
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6036/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6036/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6036/events
|
https://github.com/huggingface/datasets/pull/6036
| 1,805,138,898
|
PR_kwDODunzps5ViKc4
| 6,036
|
Deprecate search API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
open
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005746 / 0.011353 (-0.005607) | 0.003461 / 0.011008 (-0.007548) | 0.078672 / 0.038508 (0.040164) | 0.056800 / 0.023109 (0.033691) | 0.312853 / 0.275898 (0.036955) | 0.346715 / 0.323480 (0.023235) | 0.004516 / 0.007986 (-0.003469) | 0.002872 / 0.004328 (-0.001457) | 0.061264 / 0.004250 (0.057013) | 0.046606 / 0.037052 (0.009553) | 0.320080 / 0.258489 (0.061591) | 0.350390 / 0.293841 (0.056550) | 0.026445 / 0.128546 (-0.102101) | 0.007710 / 0.075646 (-0.067936) | 0.259519 / 0.419271 (-0.159752) | 0.043935 / 0.043533 (0.000402) | 0.320015 / 0.255139 (0.064876) | 0.339799 / 0.283200 (0.056599) | 0.018638 / 0.141683 (-0.123044) | 1.463393 / 1.452155 (0.011239) | 1.496977 / 1.492716 (0.004261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185175 / 0.018006 (0.167168) | 0.420734 / 0.000490 (0.420245) | 0.002569 / 0.000200 (0.002369) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022335 / 0.037411 (-0.015077) | 0.071686 / 0.014526 (0.057161) | 0.079906 / 0.176557 (-0.096650) | 0.140386 / 0.737135 (-0.596749) | 0.079712 / 0.296338 (-0.216627) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392643 / 0.215209 (0.177434) | 3.917934 / 2.077655 (1.840279) | 1.906808 / 1.504120 (0.402688) | 1.729564 / 1.541195 (0.188369) | 1.751533 / 1.468490 
(0.283043) | 0.496810 / 4.584777 (-4.087967) | 3.047405 / 3.745712 (-0.698307) | 4.361766 / 5.269862 (-0.908095) | 2.660845 / 4.565676 (-1.904832) | 0.056951 / 0.424275 (-0.367324) | 0.006277 / 0.007607 (-0.001330) | 0.466357 / 0.226044 (0.240312) | 4.660457 / 2.268929 (2.391529) | 2.328590 / 55.444624 (-53.116034) | 1.986140 / 6.876477 (-4.890337) | 2.096182 / 2.142072 (-0.045891) | 0.581685 / 4.805227 (-4.223542) | 0.123643 / 6.500664 (-6.377021) | 0.060286 / 0.075469 (-0.015183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237024 / 1.841788 (-0.604763) | 17.778533 / 8.074308 (9.704225) | 13.202205 / 10.191392 (3.010813) | 0.141301 / 0.680424 (-0.539123) | 0.016453 / 0.534201 (-0.517748) | 0.329173 / 0.579283 (-0.250110) | 0.349945 / 0.434364 (-0.084419) | 0.375319 / 0.540337 (-0.165018) | 0.530394 / 1.386936 (-0.856542) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005863 / 0.011353 (-0.005489) | 0.003578 / 0.011008 (-0.007430) | 0.062719 / 0.038508 (0.024211) | 0.056192 / 0.023109 (0.033082) | 0.422812 / 0.275898 (0.146914) | 0.454316 / 0.323480 (0.130836) | 0.004446 / 0.007986 (-0.003540) | 0.002808 / 0.004328 (-0.001521) | 0.062819 / 0.004250 (0.058569) | 0.046243 / 0.037052 (0.009190) | 0.445858 / 0.258489 (0.187369) | 0.463750 / 0.293841 (0.169909) | 0.027504 / 0.128546 (-0.101042) | 0.007897 / 0.075646 (-0.067749) | 0.068248 / 0.419271 (-0.351024) | 0.041921 / 0.043533 (-0.001612) | 0.413314 / 0.255139 (0.158175) | 0.441619 / 0.283200 (0.158419) | 0.019246 / 0.141683 (-0.122437) | 1.457069 / 1.452155 (0.004914) | 1.524168 / 1.492716 (0.031452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237785 / 0.018006 (0.219779) | 0.418455 / 0.000490 (0.417965) | 0.002301 / 0.000200 (0.002101) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025630 / 0.037411 (-0.011781) | 0.076673 / 0.014526 (0.062147) | 0.084877 / 0.176557 (-0.091680) | 0.137528 / 0.737135 (-0.599607) | 0.085261 / 0.296338 (-0.211077) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419040 / 0.215209 (0.203831) | 4.183022 / 2.077655 (2.105368) | 2.157852 / 1.504120 (0.653732) | 1.966177 / 1.541195 (0.424982) | 2.019612 / 1.468490 (0.551122) | 0.497415 / 4.584777 (-4.087362) | 3.102873 / 3.745712 (-0.642839) | 4.526336 / 5.269862 (-0.743525) | 2.991503 / 4.565676 (-1.574174) | 0.057235 / 0.424275 (-0.367040) | 0.006735 / 0.007607 (-0.000872) | 0.498255 / 0.226044 (0.272211) | 4.957364 / 2.268929 (2.688435) | 2.632643 / 55.444624 (-52.811981) | 2.249788 / 6.876477 (-4.626688) | 2.289134 / 2.142072 (0.147062) | 0.583581 / 4.805227 (-4.221646) | 0.126046 / 6.500664 (-6.374618) | 0.062966 / 0.075469 (-0.012504) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295215 / 1.841788 (-0.546573) | 18.554020 / 8.074308 (10.479711) | 13.683273 / 10.191392 (3.491881) | 0.132266 / 0.680424 (-0.548158) | 0.016376 / 0.534201 (-0.517825) | 0.334495 / 0.579283 (-0.244788) | 0.347106 / 0.434364 (-0.087258) | 0.387531 / 0.540337 (-0.152806) | 0.525745 / 1.386936 (-0.861191) |\n\n</details>\n</details>\n\n\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6036). All of your documentation changes will be reflected on that endpoint.",
"I don't think `transformers` should have any dataset indexing code. So before deprecating I'd be in favor of finding a suitable replacement. Not sure about the stats of the RAG model that uses `datasets` indexing though",
"The RAG downloads stats are decent (over 20k downloads last month).\r\n\r\nI think it's suboptimal to maintain an API that only a single model uses. One option is to put this code into a separate lib. However, `langchain` and `docarray` already provide a unified interface to vector stores, so I don't see this as an impactful project. Considering how specific this model is, I think we should go with the simplest solution and combine an index with a dataset in Transformers (this wouldn't require too much code).",
"What about migrating to the [datasets-server](https://github.com/huggingface/datasets-server) search feature instead? Would make more sense from a product perspective ",
"I don't think it's a good idea:\r\n- using datasets-server would require to upload the data and to not control the indexing, whereas the current feature is about using a local index that you control\r\n- faiss indexes are vector indexes that are not supported by datasets-server, and they are also very customised. For instance RAG uses DPR embeddings and cosine similarity\r\n- FTS is only done for the first 5GB of data for now in datasets-server\r\n\r\nI think a better option would be to integrate with open source search tools such as docarray.\r\nAnd if we want to make the datasets-server search available in python we can build an integration in docarray and/or in huggingface_hub.",
"`llama_index` is another popular tool in this space.\r\n\r\n@lhoestq \r\n> I think a better option would be to integrate with open source search tools such as docarray.\r\nAnd if we want to make the datasets-server search available in python we can build an integration in docarray and/or in huggingface_hub.\r\n\r\nI don't think these integrations would be popular unless we integrate them with the Hub \"UI-wise\" (e.g., through a widget), so they can wait IMO. Also, FAISS supports `fsspec` already with the callback reader/writer, so this doesn't require a specific integration. ",
"After discussing it a bit with @lhoestq, do we need to deprecate the search API? While I understand it's imperfect, it looks like this will result in significant work to update it everywhere, so I'd favor keeping it until there's an obviously better alternative; this way we can focus on different things in the meantime.",
"FAISS/ES are simple to use (probably the main reason why they are so popular), so creating \"better alternatives\" is not easy - they usually add more complexity (as is the case here, `langchain`, etc.)\r\n\r\nSo, instead of waiting for better alternatives, IMO it makes more sense to wait for the RAG model to be deprecated in Transformers (less than 1,000 cumulated downloads over all checkpoints in the past 30 days) before deprecating this API here.\r\n\r\nIn the meantime, we should make it clear that the vector search API is in maintenance mode (no new features, etc.).\r\n\r\nHow does that sound?"
] | 2023-07-14T16:22:09Z
| 2023-09-07T16:44:32Z
| null |
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6036",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6036"
}
|
The Search API only supports FAISS and Elasticsearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support Elasticsearch 8.0, and testing it is difficult), does not have the best design (it adds a bunch of methods to the `Dataset` class that are only useful after creating an index), its usage doesn't seem to be significant, and it is not integrated with the Hub. Since we have no plans/bandwidth to improve it and better alternatives such as `langchain` and `docarray` exist, I think it should be deprecated (and eventually removed).
If we decide to deprecate/remove it, the following usage instances need to be addressed:
* [Course](https://github.com/huggingface/course/blob/0018bb434204d9750a03592cb0d4e846093218d8/chapters/en/chapter5/6.mdx#L342 ) and [Blog](https://github.com/huggingface/blog/blob/4897c6f73d4492a0955ade503281711d01840e09/image-search-datasets.md?plain=1#L252) - calling the FAISS API directly should be OK in these instances as it's pretty simple to use for basic scenarios. Alternatively, we can use `langchain`, but this adds an extra dependency
* [Transformers](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/rag/retrieval_rag.py#L183) - we can use the FAISS API directly and store the index as a separate attribute (and instead of building the `wiki_dpr` index each time the dataset is generated, we can generate it once and push it to the Hub repo, and then read it from there)
cc @huggingface/datasets @LysandreJik for the opinion
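For scale, a minimal sketch of what "calling the FAISS API directly" looks like (toy random vectors, not the actual `wiki_dpr`/DPR pipeline):
```python
import numpy as np
import faiss

dim = 64
corpus = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatIP(dim)  # exact inner-product search
index.add(corpus)

query = np.random.rand(1, dim).astype("float32")
scores, ids = index.search(query, 5)  # top-5 nearest neighbors
print(ids[0], scores[0])

# The index can be kept next to the dataset rather than on it,
# e.g. serialized once and pushed to a Hub repo:
faiss.write_index(index, "corpus.faiss")
index = faiss.read_index("corpus.faiss")
```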
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6036/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6036/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2032
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2032/events
|
https://github.com/huggingface/datasets/issues/2032
| 829,250,912
|
MDU6SXNzdWU4MjkyNTA5MTI=
| 2,032
|
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
}
] | null |
[] | 2021-03-11T15:18:50Z
| 2021-03-11T17:20:57Z
| null |
MEMBER
| null | null | null |
Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.
Using a mask directly on the arrow table doesn't do any read or write operation, and is therefore significantly quicker.
I think there are two cases:
- if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)`
- if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)`
The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table.
The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask.
Feel free to discuss this idea in this thread :)
One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle.
cc @theo-m @gchhablani
related issues: #1796 #1949
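A small standalone sketch of the underlying Arrow operation (plain pyarrow, not the actual `Dataset.filter` implementation):
```python
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# Build a boolean mask and filter in memory; no new arrow file is written:
mask = pc.equal(table["label"], 0)
filtered = table.filter(mask)
print(filtered.to_pydict())  # {'text': ['a', 'c'], 'label': [0, 0]}
```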
|
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4237
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4237/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4237/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4237/events
|
https://github.com/huggingface/datasets/issues/4237
| 1,217,121,044
|
I_kwDODunzps5Ii8sU
| 4,237
|
Common Voice 8 doesn't show datasets viewer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 10.9k/10.9k [00:00<00:00, 10.9MB/s]\r\nDownloading extra modules: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.98k/2.98k [00:00<00:00, 3.36MB/s]\r\nDownloading extra modules: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 53.1k/53.1k [00:00<00:00, 650kB/s]\r\nNo config specified, defaulting to: common_voice/en\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 153, in _split_generators\r\n self._log_download(self.config.name, bundle_version, hf_auth_token)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 139, in _log_download\r\n email = HfApi().whoami(auth_token)[\"email\"]\r\nKeyError: 'email'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```",
"Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.\r\n\r\nUnfortunately I'm not able to reproduce the error.\r\n\r\nI think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/blob/main/common_voice_8_0.py#L137-L139\r\n```python\r\nfrom huggingface_hub import HfApi, HfFolder\r\n\r\nif isinstance(auth_token, bool):\r\n email = HfApi().whoami(auth_token)\r\nemail = HfApi().whoami(auth_token)[\"email\"]\r\n```\r\n\r\nCould you please verify the previous code with the `auth_token` you pass to `load_dataset(..., use_auth_token=auth_token,...`?",
"OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!\r\n\r\n```python\r\n>>> from huggingface_hub import HfApi, HfFolder\r\n>>> auth_token = \"hf_app_******\"\r\n>>> t = HfApi().whoami(auth_token)\r\n>>> t\r\n{'type': 'app', 'name': 'dataset-preview-backend'}\r\n>>> t[\"email\"]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nKeyError: 'email'\r\n```\r\n\r\nNote also that the doc (https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.whoami) does not state that `whoami` should return an `email` key.\r\n\r\n@SBrandeis @julien-c: do you think the app token should have an email associated, like the users?",
"We can workaround this with\r\n```python\r\nemail = HfApi().whoami(auth_token).get(\"email\", \"system@huggingface.co\")\r\n```\r\nin the common voice scripts",
"Hmmm, does this mean that any person who downloads the common voice dataset will be logged as \"system@huggingface.co\"? If so, it would defeat the purpose of sending the user's email to the commonvoice API, right?",
"I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to by-pass the Common Voice usage policy.\r\n\r\nAdditionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nCC: @patrickvonplaten @lhoestq @SBrandeis @julien-c ",
"Hmm I don't agree here. \r\n\r\nAnybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script it's trivial to tweak the code to not send the \"correct\" email but to just whatever and it would work.\r\n\r\nNote that someone only has visibility on the code after having \"signed\" the access-mechanism so I think we can expect the users to have agreed to not do anything malicious. \r\n\r\nI'm fine with both @lhoestq's solution or we find a way that forces the user to be logged in + being able to load the data for the datasets viewer. Wdyt @lhoestq @severo @albertvillanova ?",
"> Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nYes, I agree we can forget about this @patrickvonplaten. After having had a look at Common Voice website, I've seen they only require sending an email (no auth is inplace on their side, contrary to what I had previously thought). Therefore, currently we impose stronger requirements than them: we require the user having logged in and accepted the access mechanism.\r\n\r\nCurrently the script as it is already requires the user being logged in:\r\n```python\r\nHfApi().whoami(auth_token)\r\n```\r\nthrows an exception if None/invalid auth_token is passed.\r\n\r\nOn the other hand, we should agree on the way to allow the viewer to stream the data.",
"The preview is back now, thanks !"
] | 2022-04-27T10:05:20Z
| 2022-05-10T12:17:05Z
| 2022-05-10T12:17:04Z
|
MEMBER
| null | null | null |
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4237/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4237/timeline
| null |
completed
| false
|