| url (stringlengths 58–61) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 72–75) | comments_url (stringlengths 67–70) | events_url (stringlengths 65–68) | html_url (stringlengths 46–51) | id (int64, 600M–2.05B) | node_id (stringlengths 18–32) | number (int64, 2–6.51k) | title (stringlengths 1–290) | user (dict) | labels (listlengths 0–4) | state (stringclasses, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (listlengths 0–4) | milestone (dict) | comments (listlengths 0–30) | created_at (timestamp[ns, tz=UTC]) | updated_at (timestamp[ns, tz=UTC]) | closed_at (timestamp[ns, tz=UTC]) | author_association (stringclasses, 3 values) | active_lock_reason (float64) | draft (float64, 0–1, ⌀) | pull_request (dict) | body (stringlengths 0–228k, ⌀) | reactions (dict) | timeline_url (stringlengths 67–70) | performed_via_github_app (float64) | state_reason (stringclasses, 3 values) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3196
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3196/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3196/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3196/events
|
https://github.com/huggingface/datasets/pull/3196
| 1,042,223,913
|
PR_kwDODunzps4t-bxy
| 3,196
|
QOL improvements: auto-flatten_indices and desc in map calls
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-11-02T11:28:50Z
| 2021-11-02T15:41:09Z
| 2021-11-02T15:41:08Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3196",
"merged_at": "2021-11-02T15:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3196"
}
|
This PR:
* automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file
* adds descriptions to the map calls
Fix #3040
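For reference, a minimal sketch of the two behaviours this PR touches, using a toy dataset (the column names and values below are made up for illustration):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world", "foo bar"]})

# `desc` labels the progress bar displayed while mapping.
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, desc="Counting characters")

# After select/shuffle a dataset carries an indices mapping;
# flatten_indices() rewrites the underlying table so no separate
# indices file has to be saved.
ds = ds.select([1, 0]).flatten_indices()
```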
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3196/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3196/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4643
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4643/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4643/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4643/events
|
https://github.com/huggingface/datasets/pull/4643
| 1,295,852,650
|
PR_kwDODunzps468Cqk
| 4,643
|
Rename master to main
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"All the mentions I found on google were simple URLs that will be redirected, so it's fine. I also checked the spaces and we should be good:\r\n- dalle-mini used to install the master branch but [it's no longer the case](https://huggingface.co/spaces/flax-community/dalle-mini/commit/b78c972afd5c2d2bed087be6479fe5c9c6cfa741)\r\n- same for [logo generator](https://huggingface.co/spaces/tom-doerr/logo_generator/commit/a9ea330e518870d0ca8f65abb56f71d86750d8e4)\r\n- I opened a PR to fix [vision-datasets-viewer](https://huggingface.co/spaces/nateraw/vision-datasets-viewer/discussions/1)\r\n",
"Ok let's rename the branch, and then we can merge this PR"
] | 2022-07-06T13:34:30Z
| 2022-07-06T15:36:46Z
| 2022-07-06T15:25:08Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4643.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4643",
"merged_at": "2022-07-06T15:25:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4643.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4643"
}
|
This PR renames mentions of "master" by "main" in the code base for several cases:
- set the default dataset script version to "main" if the local installation of `datasets` is a dev installation
- update URLs to this github repository to use "main"
- update the DVC benchmark
- update the github workflows
- update docstrings
- update tests to compare the changes in dataset cards against "main"
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4643/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4643/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2536/events
|
https://github.com/huggingface/datasets/issues/2536
| 927,338,639
|
MDU6SXNzdWU5MjczMzg2Mzk=
| 2,536
|
Use `Audio` features for `AutomaticSpeechRecognition` task template
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[
"I'm just retaking and working on #2324. 😉 ",
"Resolved via https://github.com/huggingface/datasets/pull/4006."
] | 2021-06-22T15:07:21Z
| 2022-06-01T17:18:16Z
| 2022-06-01T17:18:16Z
|
MEMBER
| null | null | null |
In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis, this is brittle because it doesn't port easily across different operating systems.
The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are.
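For reference, this is roughly what the cast looks like with the `Audio` feature that later shipped in `datasets`; the dataset ID and column name below are placeholders:
```python
from datasets import Audio, load_dataset

ds = load_dataset("some_asr_dataset", split="train")  # hypothetical dataset ID

# Cast the audio column to the dedicated Audio feature so examples are
# decoded to arrays on access instead of exposing raw, OS-specific file paths.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```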
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2536/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2536/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4748
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4748/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4748/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4748/events
|
https://github.com/huggingface/datasets/pull/4748
| 1,318,874,913
|
PR_kwDODunzps48JTEb
| 4,748
|
Add image classification processing guide
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-27T00:11:11Z
| 2022-07-27T17:28:21Z
| 2022-07-27T17:16:12Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4748",
"merged_at": "2022-07-27T17:16:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4748"
}
|
This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4748/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4748/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6024
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6024/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6024/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6024/events
|
https://github.com/huggingface/datasets/pull/6024
| 1,801,708,808
|
PR_kwDODunzps5VWbGe
| 6,024
|
Don't reference self in Spark._validate_cache_dir
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maddiedawson",
"id": 106995444,
"login": "maddiedawson",
"node_id": "U_kgDOBmCe9A",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maddiedawson"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Ptal @lhoestq :) I tested this manually on a multi-node Databricks cluster",
"Hm looks like the check_code_quality failures are unrelated to me change... https://github.com/huggingface/datasets/actions/runs/5536162850/jobs/10103451883?pr=6024",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005952 / 0.011353 (-0.005400) | 0.003585 / 0.011008 (-0.007424) | 0.079163 / 0.038508 (0.040655) | 0.057926 / 0.023109 (0.034817) | 0.326647 / 0.275898 (0.050749) | 0.383485 / 0.323480 (0.060005) | 0.004530 / 0.007986 (-0.003456) | 0.002821 / 0.004328 (-0.001508) | 0.062071 / 0.004250 (0.057820) | 0.048023 / 0.037052 (0.010971) | 0.329368 / 0.258489 (0.070879) | 0.390877 / 0.293841 (0.097036) | 0.026959 / 0.128546 (-0.101588) | 0.007911 / 0.075646 (-0.067735) | 0.259956 / 0.419271 (-0.159315) | 0.044582 / 0.043533 (0.001049) | 0.320537 / 0.255139 (0.065398) | 0.373814 / 0.283200 (0.090614) | 0.020275 / 0.141683 (-0.121408) | 1.532128 / 1.452155 (0.079973) | 1.595031 / 1.492716 (0.102315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186127 / 0.018006 (0.168120) | 0.428586 / 0.000490 (0.428097) | 0.005180 / 0.000200 (0.004980) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024876 / 0.037411 (-0.012536) | 0.072169 / 0.014526 (0.057643) | 0.082015 / 0.176557 (-0.094542) | 0.147467 / 0.737135 (-0.589668) | 0.082769 / 0.296338 (-0.213570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410625 / 0.215209 (0.195416) | 4.116742 / 2.077655 (2.039088) | 2.172291 / 1.504120 (0.668171) | 2.022462 / 1.541195 (0.481268) | 2.048142 / 1.468490 
(0.579651) | 0.503152 / 4.584777 (-4.081625) | 3.019135 / 3.745712 (-0.726577) | 3.589451 / 5.269862 (-1.680410) | 2.206876 / 4.565676 (-2.358801) | 0.057687 / 0.424275 (-0.366588) | 0.006560 / 0.007607 (-0.001047) | 0.475585 / 0.226044 (0.249541) | 4.784344 / 2.268929 (2.515416) | 2.506322 / 55.444624 (-52.938302) | 2.168251 / 6.876477 (-4.708225) | 2.324453 / 2.142072 (0.182381) | 0.590609 / 4.805227 (-4.214618) | 0.124178 / 6.500664 (-6.376486) | 0.059197 / 0.075469 (-0.016272) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212359 / 1.841788 (-0.629429) | 17.915843 / 8.074308 (9.841535) | 13.128330 / 10.191392 (2.936938) | 0.144805 / 0.680424 (-0.535618) | 0.016889 / 0.534201 (-0.517312) | 0.344056 / 0.579283 (-0.235227) | 0.359370 / 0.434364 (-0.074994) | 0.404199 / 0.540337 (-0.136138) | 0.549117 / 1.386936 (-0.837819) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005439) | 0.003565 / 0.011008 (-0.007443) | 0.061575 / 0.038508 (0.023067) | 0.057677 / 0.023109 (0.034568) | 0.359753 / 0.275898 (0.083855) | 0.394135 / 0.323480 (0.070655) | 0.004648 / 0.007986 (-0.003338) | 0.002795 / 0.004328 (-0.001534) | 0.061877 / 0.004250 (0.057626) | 0.049673 / 0.037052 (0.012621) | 0.363120 / 0.258489 (0.104631) | 0.402685 / 0.293841 (0.108844) | 0.027021 / 0.128546 (-0.101525) | 0.008006 / 0.075646 (-0.067641) | 0.067398 / 0.419271 (-0.351874) | 0.044442 / 0.043533 (0.000909) | 0.364851 / 0.255139 (0.109712) | 0.387219 / 0.283200 (0.104019) | 0.027267 / 0.141683 (-0.114416) | 1.466675 / 1.452155 (0.014520) | 1.512607 / 1.492716 (0.019891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206156 / 0.018006 (0.188150) | 0.410877 / 0.000490 (0.410387) | 0.003061 / 0.000200 (0.002861) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024869 / 0.037411 (-0.012542) | 0.075736 / 0.014526 (0.061210) | 0.083922 / 0.176557 (-0.092634) | 0.139510 / 0.737135 (-0.597626) | 0.087685 / 0.296338 (-0.208654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414473 / 0.215209 (0.199264) | 4.150633 / 2.077655 (2.072979) | 2.132892 / 1.504120 (0.628773) | 1.964072 / 1.541195 (0.422878) | 2.003353 / 1.468490 (0.534863) | 0.498012 / 4.584777 (-4.086765) | 3.010135 / 3.745712 (-0.735577) | 2.841130 / 5.269862 (-2.428732) | 1.826013 / 4.565676 (-2.739664) | 0.057443 / 0.424275 (-0.366832) | 0.006374 / 0.007607 (-0.001234) | 0.490337 / 0.226044 (0.264292) | 4.889628 / 2.268929 (2.620700) | 2.575626 / 55.444624 (-52.868998) | 2.246522 / 6.876477 (-4.629955) | 2.276183 / 2.142072 (0.134110) | 0.581465 / 4.805227 (-4.223763) | 0.123877 / 6.500664 (-6.376787) | 0.060339 / 0.075469 (-0.015130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333202 / 1.841788 (-0.508585) | 18.363558 / 8.074308 (10.289250) | 14.109356 / 10.191392 (3.917964) | 0.147358 / 0.680424 (-0.533066) | 0.016813 / 0.534201 (-0.517388) | 0.334815 / 0.579283 (-0.244468) | 0.366576 / 0.434364 (-0.067788) | 0.397223 / 0.540337 (-0.143115) | 0.547893 / 1.386936 (-0.839043) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-12T20:31:16Z
| 2023-07-13T16:58:32Z
| 2023-07-13T12:37:09Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6024",
"merged_at": "2023-07-13T12:37:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6024"
}
|
Fix for https://github.com/huggingface/datasets/issues/5963
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6024/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6024/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3644
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3644/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3644/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3644/events
|
https://github.com/huggingface/datasets/issues/3644
| 1,116,519,670
|
I_kwDODunzps5CjLz2
| 3,644
|
Add a GROUP BY operator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4",
"events_url": "https://api.github.com/users/felix-schneider/events{/privacy}",
"followers_url": "https://api.github.com/users/felix-schneider/followers",
"following_url": "https://api.github.com/users/felix-schneider/following{/other_user}",
"gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/felix-schneider",
"id": 208336,
"login": "felix-schneider",
"node_id": "MDQ6VXNlcjIwODMzNg==",
"organizations_url": "https://api.github.com/users/felix-schneider/orgs",
"received_events_url": "https://api.github.com/users/felix-schneider/received_events",
"repos_url": "https://api.github.com/users/felix-schneider/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions",
"type": "User",
"url": "https://api.github.com/users/felix-schneider"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though)\r\n\r\nWe use Arrow as a back-end for `datasets` and it doesn't have native group by (see https://github.com/apache/arrow/issues/2189) unfortunately.\r\n\r\nI just drafted what it could look like to have `group_by` in `datasets`:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\ndef group_by(d, col, join): \r\n \"\"\"from: https://github.com/huggingface/datasets/issues/3644\"\"\"\r\n # Get the indices of each group\r\n groups = {key: [] for key in d.unique(col)} \r\n def create_groups_indices(key, i): \r\n groups[key].append(i) \r\n d.map(create_groups_indices, with_indices=True, input_columns=col) \r\n # Get one dataset object per group\r\n groups = {key: d.select(indices) for key, indices in groups.items()} \r\n # Apply join function\r\n groups = {\r\n key: dataset_group.map(join, batched=True, batch_size=len(dataset_group), remove_columns=d.column_names)\r\n for key, dataset_group in groups.items()\r\n } \r\n # Return concatenation of all the joined groups\r\n return concatenate_datasets(groups.values())\r\n```\r\n\r\nexample of usage:\r\n```python\r\n\r\ndef join(batch): \r\n # take the batch of all the examples of a group, and return a batch with one aggregated example\r\n # (we could aggregate examples into several rows instead of one, if you want)\r\n return {\"total\": [batch[\"i\"]]} \r\n\r\nd = Dataset.from_dict({\r\n \"i\": [i for i in range(50)],\r\n \"group_key\": [i % 4 for i in range(50)],\r\n})\r\nprint(group_by(d, \"group_key\", join))\r\n# total\r\n# 0 [0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48]\r\n# 1 [1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49]\r\n# 2 [2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46]\r\n# 3 [3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47]\r\n```\r\n\r\nLet me know if that helps !\r\n\r\ncc @albertvillanova @mariosasko for visibility",
"@lhoestq As of PyArrow 7.0.0, `pa.Table` has the [`group_by` method](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.group_by), so we should also consider using that function for grouping. ",
"Any update on this?",
"You can use https://github.com/mariosasko/datasets_sql by @mariosasko to go group by operations using SQL queries",
"Hi, I have a similar issue as OP but the suggested solutions do not work for my case. Basically, I process documents through a model to extract the last_hidden_state, using the \"map\" method on a Dataset object, but would like to average the result over a categorical column at the end (i.e. groupby this column).\r\n- A to_pandas() saturates the memory, although it gives me the desired result through a .groupby().apply(np.mean, axis=0) on a smaller use-case,\r\n- The solution posted on Feb 4 is much too slow,\r\n- datasets_sql seems to not like the fact that I'm averaging np.arrays.\r\nSo I'm kinda out of \"non brute force\" options... Any help appreciated",
"> Hi, I have a similar issue as OP but the suggested solutions do not work for my case. Basically, I process documents through a model to extract the last_hidden_state, using the \"map\" method on a Dataset object, but would like to average the result over a categorical column at the end (i.e. groupby this column).\r\n \r\nIf you haven't yet, you could explore using [Polars](https://www.pola.rs/) for this. It's a new DataFrame library written in Rust with Python bindings. It is Pandas like it in many ways ,but does have some biggish differences in syntax/approach so it's definitely not a drop-in replacement. \r\n\r\nPolar's also uses Arrow as a backend but also supports out-of-memory operations; in this case, it's probably easiest to write out your dataset to parquet and then use the polar's `scan_parquet` method (this will lazily read from the parquet file). The thing you get back from that is a `LazyDataFrame` i.e. nothing is loaded into memory until you specify a query and call a `collect` method. \r\n\r\nExample below of doing a groupby on a dataset which definitely wouldn't fit into memory on my machine:\r\n\r\n```\r\nfrom datasets import load_dataset\r\nimport polars as pl\r\n\r\nds = load_dataset(\"blbooks\")\r\nds['train'].to_parquet(\"test.parquet\")\r\ndf = pl.scan_parquet(\"test.parquet\")\r\ndf.groupby('date').agg([pl.count()]).collect()\r\n```\r\n\r\n>datasets_sql seems to not like the fact that I'm averaging np.arrays.\r\n\r\nI am not certain how Polars will handle this either. It does have NumPy support (https://pola-rs.github.io/polars-book/user-guide/howcani/interop/numpy.html) but I assume Polars will need to have at least enough memory in each group you want to average over so you may still end up needing more memory depending on the size of your dataset/groups. \r\n\r\n\r\n",
"Hi @davanstrien , thanks a lot, I didn't know about this library and the answer works! I need to try it on the full dataset now, but I'm hopeful. Here's what my code looks like:\r\n```\r\nlist_size = 768\r\ndf.groupby(\"date\").agg(\r\n pl.concat_list(\r\n [\r\n pl.col(\"hidden_state\")\r\n .arr.slice(n, 1)\r\n .arr.first()\r\n .mean()\r\n for n in range(0, list_size)\r\n ]\r\n ).collect()\r\n```\r\n\r\nFor some reasons, the following code was giving me a \"mean() got unexpected argument 'axis'\":\r\n```\r\ndf2 = df.groupby('date').agg(\r\n pl.col(\"hidden_state\").map(np.mean).alias(\"average_hidden_state\")\r\n).collect()\r\n\r\n```\r\n\r\nEDIT: The solution works on my large dataset, the memory does not crash and the time is reasonable, thanks a lot again!",
"@jeremylhour glad this worked for you :) ",
"I find this functionality missing in my workflow as well and the workarounds with SQL and Polars unsatisfying. Since PyArrow has exposed this functionality, I hope this soon makes it into a release. (:"
] | 2022-01-27T16:57:54Z
| 2023-03-14T14:45:59Z
| null |
NONE
| null | null | null |
**Is your feature request related to a problem? Please describe.**
Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example:
```python
# features:
# {
# "example_id": datasets.Value("int32"),
# "text": datasets.Value("string")
# }
ds = datasets.Dataset.from_dict({
    "example_id": [0, 1],  # toy values for illustration
    "text": ["A first sentence. A second one.", "Another text. With more sentences."],
})

def split(examples):
    sentences = [text.split(".") for text in examples["text"]]
    return {
        "example_id": [
            example_id
            for example_id, sents in zip(examples["example_id"], sentences)
            for _ in sents
        ],
        "sentence": [sent for sents in sentences for sent in sents],
        "sentence_id": [i for sents in sentences for i in range(len(sents))],
    }

# remove_columns is needed because the batch changes size when splitting
split_ds = ds.map(split, batched=True, remove_columns=ds.column_names)

def process(examples):
    outputs = some_neural_network_that_works_on_sentences(examples["sentence"])
    return {"outputs": outputs}

split_ds = split_ds.map(process, batched=True)
```
I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together.
**Describe the solution you'd like**
Ideally, it would look something like this:
```python
def join(examples):
    order = np.argsort(examples["sentence_id"])
    text = ".".join(examples["sentence"][i] for i in order)
    outputs = [examples["outputs"][i] for i in order]
    return {"text": text, "outputs": outputs}
ds = split_ds.group_by("example_id", join)
```
**Describe alternatives you've considered**
Right now, we can do this:
```python
def merge(example):
    example_id = example["example_id"]
    parts = split_ds.filter(lambda x: x["example_id"] == example_id).sort("sentence_id")
    return {"outputs": list(parts["outputs"])}
ds = ds.map(merge)
```
Of course, we could process the dataset like this:
```python
def process(example):
    outputs = some_neural_network_that_works_on_sentences(example["text"].split("."))
    return {"outputs": outputs}

ds = ds.map(process)
```
However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example.
I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
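Following up on the PyArrow note in the comments, here is a minimal sketch of the native `group_by` available since PyArrow 7.0, reusing the toy columns from the comment above (assuming a PyArrow version where the "list" aggregation is available):
```python
import pyarrow as pa

table = pa.table({
    "i": list(range(50)),
    "group_key": [i % 4 for i in range(50)],
})

# Group by "group_key" and gather each group's "i" values into a list.
grouped = table.group_by("group_key").aggregate([("i", "list")])
print(grouped.to_pydict())
```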
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3644/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3644/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/250
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/250/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/250/comments
|
https://api.github.com/repos/huggingface/datasets/issues/250/events
|
https://github.com/huggingface/datasets/pull/250
| 634,416,751
|
MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4
| 250
|
Remove checksum download in c4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Commenting again in case [previous thread](https://github.com/huggingface/nlp/pull/233) was inactive.\r\n\r\n@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, 
url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 )\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?"
] | 2020-06-08T09:13:00Z
| 2020-08-25T07:04:56Z
| 2020-06-08T09:16:59Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/250",
"merged_at": "2020-06-08T09:16:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/250"
}
|
A line left over from the original TFDS script was still there and caused issues when loading the c4 script. This should fix #233 and allow anyone to run the c4 script to generate the dataset
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/250/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/250/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5096
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5096/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5096/events
|
https://github.com/huggingface/datasets/issues/5096
| 1,403,379,816
|
I_kwDODunzps5TpeBo
| 5,096
|
Transfer some canonical datasets under an organization namespace
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 2.01MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default (download: 411 bytes, generated: 385 bytes, post-processed: Unknown size, total: 796 bytes) to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 293kB/s]\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 304.16it/s]\r\nOut[1]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"dummy-canonical-org/dummy_canonical_dataset\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 1.57MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 362.48it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n```",
"Cool ! 🚀 ",
"Maybe we should be a bit more proactive with these transfers. There are only ≈70 canonical models, so reaching that number with datasets would be great, too. It's not easy considering the current number of ≈750 canonical datasets, but doable.\r\n\r\nFor instance, it shouldn't be too hard to transfer these datasets (partial list; all of them have more than > 1k downloads):\r\n\r\n<details>\r\n\r\n<summary> Datasets to transfer </summary>\r\n\r\n```\r\nquickdraw -> google\r\nopenai_humaneval -> openai\r\nc4 -> allenai/c4 (the canonical version reads data from the org version)\r\nmbpp -> google (ask jaaustin (author) where to transfer the dataset)\r\ncompetition_math -> hendrycks (author)\r\ngsm8k -> openai\r\nai2_arc -> allenai\r\nimdb -> stanfordai\r\ngreek_legal_code -> chrispap (author)\r\nspider -> Yale-LILY\r\nsquad and squad_v2 -> rajpurkarlab (or rajpurkar, a member of the org and one of the authors)\r\ncppe-5 -> rishitdagli\r\nnews_commentary -> Helsinki-NLP\r\njfleg -> keisks (author)\r\npubmed_qa -> qiaojin (author)\r\nmedmcqa -> infinitylogesh (author)\r\ncifar10 and cifar100 -> UniversityofToronto\r\ncc100 -> gwenzek (author)\r\nasset -> facebook\r\nblbooks -> BritishLibraryLabs\r\ncapes -> FLSRDS (maybe the author?)\r\ncc_news -> fhamborg (author)\r\nclue -> CLUE benchmark\r\ncoqa -> stanfordnlp\r\nlambada -> germank (author)\r\nlibrispeech_asr -> openslr\r\ndrop -> allenai\r\nduorc -> salesforce (ask amritasaha87 (author) where to transfer)\r\nglue -> nyu-mll ?\r\ngo_emotions -> google\r\ncommonsense_qa -> tau\r\ndbpedia_14 -> JensLehmann (author?)\r\ndiscofuse -> google\r\nmc4 -> allenai/c4\r\nopenbookqa -> allenai\r\nropes -> allene\r\ntrivia_qa -> mandarjoshi (author)\r\nwikiann -> afshinrahimi (author)\r\nxtreme -> google\r\nxscr -> INK-USC\r\nyelp_review_full -> Yelp\r\ntruthful_qa -> jacobhilton22 (author)\r\nbigbench -> google\r\nxnli -> facebook\r\nsciq -> allenai\r\nsst2 -> stanfordnlp\r\nblimp -> alexwarstadt (author)\r\ntweet_eval -> cardiffnlp\r\nbeans -> AI-Lab-Makerere\r\nlex_glue -> coastalcph\r\namericas_nli -> abteen (author)\r\nopus_euconst -> tiedeman (author)\r\nmedical_questions_pairs -> curaihealth\r\nweb_questions -> joberant (author)\r\nanli -> facebook\r\nrace -> CarnegieMellonCS\r\nklue -> klue\r\nwino_bias -> uclanlp\r\nwiki_qa -> microsoft\r\nxcopa -> cambridgeltl\r\nindic_glue -> ai4bharat\r\nboolq -> google\r\nadversarial_qa -> mbartolo (author)\r\nnq_open -> google\r\nsnli -> stanfordnlp\r\nstsb_multi_mt -> PhilipMay (author)\r\nmulti_nli -> sleepinyourhat (author)\r\npaws -> google\r\npaws-x -> google\r\nms_marco - microsoft\r\nxquad -> deepmind\r\nnarrativeqa -> deepmind\r\nkilt_tasks -> facebook\r\nhate_speech_offensive -> tdavidson (author)\r\nwiki40b -> google\r\ncovost2 -> facebook\r\ncommon_gen -> INKLAB\r\nmulti_eurlex -> kiddothe2b (author)\r\nexams -> mhardalov (author)\r\ntiny_shakespeare -> karpathy (author)\r\nblbooksgenre -> BritishLibraryLabs ?\r\nfood101 -> ethz ?\r\nscitail -> allenai\r\nbillsum -> FiscalNote\r\nimppres -> facebook\r\nquartz -> allenai\r\nqasc -> allenai\r\nquail -> textmachinelab\r\nwiki_lingua -> esdurmus\r\ncos_e -> salesforce ?\r\ncivil_comments -> google ? 
(create a “jigsaw” org) \r\nxquad_r -> google\r\nwikitext-> metamind (or salesforce)\r\n\r\n// deprecate c4 and mc4 in favor of allenai/c4 (add a dataset script to the org version to make it easier to use?)\r\n```\r\n</details>\r\n\r\nAlso, a space that allows users to claim the existing canonical datasets (for themselves or their organizations) could be nice.\r\n\r\nWDYT?",
"Next week I can take care of some of them :) In most cases we just need to send an email to ask them if they're ok with it.\r\nLet's coordinate on slack ?",
"Yup, sounds good to me!",
"I can also continuing working on this if we agree this has become a priority now.",
"cool stuff! \r\n\r\nthis morning on my side i moved huggingface.co/ctrl (a not very used model) to its rightful entity"
] | 2022-10-10T15:44:31Z
| 2023-09-15T14:01:02Z
| null |
MEMBER
| null | null | null |
As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if the dataset does not already exist there).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and will eventually delete it).
First, we should test the process using a dummy dataset/organization.
TODO:
- [x] Test with a dummy dataset
- [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset
- [x] Create dummy organization: https://huggingface.co/dummy-canonical-org
- [x] Transfer dummy canonical dataset to dummy organization
- [ ] Transfer datasets
- [x] babi_qa => facebook
- [x] cord19 => allenai
- [x] emotion => dair-ai
- [ ] gem => GEM
- [x] hendrycks_test => cais/mmlu
- [x] indonlu => indonlp
- [ ] multilingual_librispeech => facebook
- It already exists "facebook/multilingual_librispeech"
- [ ] oscar => oscar-corpus
- [x] peer_read => allenai
- [x] qasper => allenai
- [x] reddit => webis/tldr-17
- [x] russian_super_glue => russiannlp
- [x] rvl_cdip => aharley
- [x] s2orc => allenai
- [x] scicite => allenai
- [x] scifact => allenai
- [x] scitldr => allenai
- [x] swiss_judgment_prediction => rcds
- [x] the_pile => EleutherAI
- [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt
- [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist
- [x] banking77 => PolyAI
- [x] common_voice => mozilla-foundation
- [x] german_legal_entity_recognition => elenanereiss
- ...
EDIT: the list above is continuously being updated
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5096/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/547
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/547/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/547/comments
|
https://api.github.com/repos/huggingface/datasets/issues/547/events
|
https://github.com/huggingface/datasets/pull/547
| 689,268,589
|
MDExOlB1bGxSZXF1ZXN0NDc2MzQ4OTk5
| 547
|
[Distributed] Making loading distributed datasets a bit safer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-08-31T14:51:34Z
| 2020-08-31T15:16:30Z
| 2020-08-31T15:16:29Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/547",
"merged_at": "2020-08-31T15:16:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/547"
}
|
Add some file-locks during dataset loading
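As a rough sketch of the pattern (the path and helper below are illustrative, not the PR's actual code):
```python
import os

from filelock import FileLock

CACHE_DIR = "/tmp/my_dataset_cache"  # illustrative path

def prepare_cache():
    # stand-in for the expensive download-and-prepare step
    os.makedirs(CACHE_DIR, exist_ok=True)

# Whichever process acquires the lock prepares the cache; the others
# block here and then reuse the result instead of racing to write it.
with FileLock(CACHE_DIR + ".lock"):
    if not os.path.isdir(CACHE_DIR):
        prepare_cache()
```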
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/547/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/547/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4844
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4844/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4844/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4844/events
|
https://github.com/huggingface/datasets/pull/4844
| 1,337,878,249
|
PR_kwDODunzps49IFLa
| 4,844
|
Add 'val' to VALIDATION_KEYWORDS.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akt42",
"id": 98386959,
"login": "akt42",
"node_id": "U_kgDOBd1EDw",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"repos_url": "https://api.github.com/users/akt42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akt42"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@mariosasko not sure about how the reviewing process works. Maybe you can have a look because we discussed this elsewhere?",
"Hi, thanks! \r\n\r\nLet's add one pattern with `val` to this test before merging: \r\nhttps://github.com/huggingface/datasets/blob/b88a656cf94c4ad972154371c83c1af759fde522/tests/test_data_files.py#L598",
"_The documentation is not available anymore as the PR was closed or merged._",
"@akt42 note that there is some info about splits keywords in the docs: https://huggingface.co/docs/datasets/main/en/repository_structure#split-names-keywords. I agree it's not clear that it applies not only to filenames, but to directories as well.\r\n\r\nI think \"val\" should be now added to the documentation source file here: https://github.com/huggingface/datasets/blob/main/docs/source/repository_structure.mdx?plain=1#L98",
"@polinaeterna Thanks for notifying us that there is a list of supported keywords\r\n\r\nI've added \"val\" to that list and a test."
] | 2022-08-13T06:49:41Z
| 2022-08-30T10:17:35Z
| 2022-08-30T10:14:54Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4844.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4844",
"merged_at": "2022-08-30T10:14:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4844.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4844"
}
|
This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS`, so that the `load_dataset()` method with `imagefolder` (and probably some other builders as well) also recognizes folders named `"val"`.
I think the supported keywords should be mentioned in the documentation as well, but I couldn't think of a proper place to add that.
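For illustration, a directory layout this change makes loadable (the `data` path is hypothetical; the directory names are what matter):
```python
from datasets import load_dataset

# data/
# ├── train/  <- matched by the "train" keyword
# └── val/    <- now matched as a validation keyword
ds = load_dataset("imagefolder", data_dir="data")
print(ds)  # expect 'train' and 'validation' splits
```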
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4844/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4844/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/944
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/944/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/944/comments
|
https://api.github.com/repos/huggingface/datasets/issues/944/events
|
https://github.com/huggingface/datasets/pull/944
| 754,228,947
|
MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5
| 944
|
Add German Legal Entity Recognition Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
}
|
[] |
closed
| false
| null |
[] | null |
[
"thanks ! merging this one"
] | 2020-12-01T09:38:22Z
| 2020-12-03T13:06:56Z
| 2020-12-03T13:06:55Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/944",
"merged_at": "2020-12-03T13:06:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/944"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/944/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/944/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/3239
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3239/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3239/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3239/events
|
https://github.com/huggingface/datasets/issues/3239
| 1,048,360,232
|
I_kwDODunzps4-fLUo
| 3,239
|
Inconsistent performance of the "arabic_billion_words" dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vitalyshalumov",
"id": 33824221,
"login": "vitalyshalumov",
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vitalyshalumov"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[] | 2021-11-09T09:11:00Z
| 2021-11-09T09:11:00Z
| null |
NONE
| null | null | null |
## Describe the bug
When downloaded on machine 1, the dataset is downloaded and parsed correctly.
When downloaded on machine 2 (which has a different cache directory), the following script:
```python
import datasets
from datasets import load_dataset

raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload')
```
gives the following error:
```
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 348M/348M [00:24<00:00, 14.0MB/s]
Traceback (most recent call last):
  File ".../why_mismatch.py", line 3, in <module>
  File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
    builder_instance.download_and_prepare(
  File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
    self._download_and_prepare(
  File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words')}]
```
Note that the package versions of datasets (1.15.1) and rarfile (4.0) are identical on both machines.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
from datasets import load_dataset

raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload')
```
## Expected results
```
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17...
Downloading: 100%|███████████████████████████| 348M/348M [00:22<00:00, 15.8MB/s]
Dataset arabic_billion_words downloaded and prepared to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17. Subsequent calls will reuse this data.
```
## Actual results
See the error traceback above.
## Environment info
Machine 1:
- `datasets` version: 1.15.1
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
Machine 2 (the failing one):
- `datasets` version: 1.15.1
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 6.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3239/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3239/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/6286
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6286/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6286/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6286/events
|
https://github.com/huggingface/datasets/pull/6286
| 1,932,640,128
|
PR_kwDODunzps5cPKNK
| 6,286
|
Create DefunctDatasetError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009157 / 0.011353 (-0.002195) | 0.004275 / 0.011008 (-0.006734) | 0.099341 / 0.038508 (0.060833) | 0.080634 / 0.023109 (0.057525) | 0.373598 / 0.275898 (0.097700) | 0.445048 / 0.323480 (0.121568) | 0.006541 / 0.007986 (-0.001444) | 0.003550 / 0.004328 (-0.000779) | 0.071034 / 0.004250 (0.066784) | 0.062637 / 0.037052 (0.025585) | 0.379110 / 0.258489 (0.120621) | 0.447896 / 0.293841 (0.154055) | 0.047739 / 0.128546 (-0.080807) | 0.012575 / 0.075646 (-0.063071) | 0.332314 / 0.419271 (-0.086957) | 0.065500 / 0.043533 (0.021967) | 0.365919 / 0.255139 (0.110780) | 0.438611 / 0.283200 (0.155412) | 0.034243 / 0.141683 (-0.107440) | 1.628034 / 1.452155 (0.175880) | 1.802970 / 1.492716 (0.310253) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224528 / 0.018006 (0.206522) | 0.482094 / 0.000490 (0.481604) | 0.012752 / 0.000200 (0.012552) | 0.000570 / 0.000054 (0.000515) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025456 / 0.037411 (-0.011956) | 0.082281 / 0.014526 (0.067756) | 0.100050 / 0.176557 (-0.076506) | 0.156931 / 0.737135 (-0.580204) | 0.108229 / 0.296338 (-0.188110) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.560688 / 0.215209 (0.345479) | 5.171711 / 2.077655 (3.094056) | 2.273178 / 1.504120 (0.769058) | 1.948158 / 1.541195 (0.406963) | 1.879744 / 1.468490 
(0.411254) | 0.789216 / 4.584777 (-3.795561) | 4.529370 / 3.745712 (0.783658) | 4.008743 / 5.269862 (-1.261118) | 2.633555 / 4.565676 (-1.932121) | 0.085411 / 0.424275 (-0.338864) | 0.007256 / 0.007607 (-0.000351) | 0.623254 / 0.226044 (0.397209) | 6.327256 / 2.268929 (4.058327) | 2.911787 / 55.444624 (-52.532837) | 2.240610 / 6.876477 (-4.635867) | 2.352811 / 2.142072 (0.210738) | 0.930114 / 4.805227 (-3.875114) | 0.185028 / 6.500664 (-6.315636) | 0.062115 / 0.075469 (-0.013354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.394261 / 1.841788 (-0.447527) | 19.689376 / 8.074308 (11.615067) | 17.242289 / 10.191392 (7.050897) | 0.209122 / 0.680424 (-0.471302) | 0.027205 / 0.534201 (-0.506996) | 0.408613 / 0.579283 (-0.170670) | 0.503836 / 0.434364 (0.069472) | 0.485179 / 0.540337 (-0.055158) | 0.674333 / 1.386936 (-0.712603) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007506 / 0.011353 (-0.003847) | 0.004683 / 0.011008 (-0.006325) | 0.067584 / 0.038508 (0.029076) | 0.065635 / 0.023109 (0.042525) | 0.458814 / 0.275898 (0.182916) | 0.477549 / 0.323480 (0.154069) | 0.005212 / 0.007986 (-0.002774) | 0.003393 / 0.004328 (-0.000936) | 0.075307 / 0.004250 (0.071057) | 0.051989 / 0.037052 (0.014937) | 0.484229 / 0.258489 (0.225740) | 0.470889 / 0.293841 (0.177048) | 0.043528 / 0.128546 (-0.085018) | 0.014685 / 0.075646 (-0.060962) | 0.084199 / 0.419271 (-0.335073) | 0.053970 / 0.043533 (0.010437) | 0.432362 / 0.255139 (0.177223) | 0.467472 / 0.283200 (0.184272) | 0.031109 / 0.141683 (-0.110574) | 1.525938 / 1.452155 (0.073784) | 1.631993 / 1.492716 (0.139276) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200196 / 0.018006 (0.182190) | 0.479316 / 0.000490 (0.478827) | 0.010146 / 0.000200 (0.009947) | 0.000118 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027911 / 0.037411 (-0.009500) | 0.089720 / 0.014526 (0.075194) | 0.097000 / 0.176557 (-0.079557) | 0.157549 / 0.737135 (-0.579587) | 0.098247 / 0.296338 (-0.198092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581401 / 0.215209 (0.366192) | 5.703829 / 2.077655 (3.626174) | 2.688272 / 1.504120 (1.184152) | 2.321691 / 1.541195 (0.780496) | 2.355987 / 1.468490 (0.887497) | 0.759109 / 4.584777 (-3.825668) | 4.711288 / 3.745712 (0.965576) | 4.093019 / 5.269862 (-1.176843) | 2.648240 / 4.565676 (-1.917437) | 0.087839 / 0.424275 (-0.336436) | 0.007060 / 0.007607 (-0.000547) | 0.702783 / 0.226044 (0.476739) | 6.986924 / 2.268929 (4.717996) | 3.365970 / 55.444624 (-52.078654) | 2.670876 / 6.876477 (-4.205600) | 2.776431 / 2.142072 (0.634358) | 0.920005 / 4.805227 (-3.885222) | 0.197521 / 6.500664 (-6.303143) | 0.069974 / 0.075469 (-0.005495) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.596947 / 1.841788 (-0.244841) | 20.606007 / 8.074308 (12.531699) | 18.437425 / 10.191392 (8.246033) | 0.222445 / 0.680424 (-0.457978) | 0.028610 / 0.534201 (-0.505591) | 0.419748 / 0.579283 (-0.159535) | 0.513409 / 0.434364 (0.079045) | 0.487517 / 0.540337 (-0.052820) | 0.706637 / 1.386936 (-0.680299) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007744 / 0.011353 (-0.003609) | 0.004678 / 0.011008 (-0.006330) | 0.101243 / 0.038508 (0.062735) | 0.085653 / 0.023109 (0.062543) | 0.383772 / 0.275898 (0.107874) | 0.422151 / 0.323480 (0.098671) | 0.004566 / 0.007986 (-0.003419) | 0.003900 / 0.004328 (-0.000429) | 0.077778 / 0.004250 (0.073528) | 0.063761 / 0.037052 (0.026709) | 0.385505 / 0.258489 (0.127016) | 0.436186 / 0.293841 (0.142345) | 0.036172 / 0.128546 (-0.092374) | 0.009935 / 0.075646 (-0.065711) | 0.341434 / 0.419271 (-0.077837) | 0.061866 / 0.043533 (0.018333) | 0.385020 / 0.255139 (0.129881) | 0.399455 / 0.283200 (0.116256) | 0.029324 / 0.141683 (-0.112358) | 1.784749 / 1.452155 (0.332594) | 1.845926 / 1.492716 (0.353209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266322 / 0.018006 (0.248316) | 0.508708 / 0.000490 (0.508218) | 0.013680 / 0.000200 (0.013480) | 0.000868 / 0.000054 (0.000814) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033887 / 0.037411 (-0.003525) | 0.096709 / 0.014526 (0.082183) | 0.109472 / 0.176557 (-0.067084) | 0.174422 / 0.737135 (-0.562713) | 0.110830 / 0.296338 (-0.185509) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457533 / 0.215209 (0.242324) | 4.615229 / 2.077655 (2.537575) | 2.418820 / 1.504120 (0.914700) | 2.181079 / 1.541195 (0.639884) | 2.229164 / 1.468490 
(0.760674) | 0.554861 / 4.584777 (-4.029916) | 4.323787 / 3.745712 (0.578075) | 3.769396 / 5.269862 (-1.500466) | 2.376850 / 4.565676 (-2.188826) | 0.065030 / 0.424275 (-0.359245) | 0.008397 / 0.007607 (0.000790) | 0.541109 / 0.226044 (0.315065) | 5.477540 / 2.268929 (3.208612) | 2.957049 / 55.444624 (-52.487576) | 2.511732 / 6.876477 (-4.364744) | 2.703953 / 2.142072 (0.561881) | 0.660822 / 4.805227 (-4.144405) | 0.147035 / 6.500664 (-6.353630) | 0.066045 / 0.075469 (-0.009424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.526481 / 1.841788 (-0.315307) | 22.020256 / 8.074308 (13.945948) | 16.854566 / 10.191392 (6.663174) | 0.192958 / 0.680424 (-0.487466) | 0.021505 / 0.534201 (-0.512696) | 0.462867 / 0.579283 (-0.116416) | 0.514813 / 0.434364 (0.080449) | 0.546147 / 0.540337 (0.005809) | 0.767853 / 1.386936 (-0.619083) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007770 / 0.011353 (-0.003583) | 0.004671 / 0.011008 (-0.006337) | 0.080862 / 0.038508 (0.042354) | 0.087049 / 0.023109 (0.063940) | 0.479497 / 0.275898 (0.203599) | 0.559787 / 0.323480 (0.236307) | 0.007168 / 0.007986 (-0.000818) | 0.003829 / 0.004328 (-0.000500) | 0.079018 / 0.004250 (0.074768) | 0.067359 / 0.037052 (0.030307) | 0.516140 / 0.258489 (0.257651) | 0.547000 / 0.293841 (0.253159) | 0.037955 / 0.128546 (-0.090591) | 0.010007 / 0.075646 (-0.065639) | 0.087673 / 0.419271 (-0.331598) | 0.059309 / 0.043533 (0.015777) | 0.473920 / 0.255139 (0.218781) | 0.529216 / 0.283200 (0.246017) | 0.028236 / 0.141683 (-0.113447) | 1.771127 / 1.452155 (0.318972) | 1.918878 / 1.492716 (0.426162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242010 / 0.018006 (0.224004) | 0.494944 / 0.000490 (0.494454) | 0.006319 / 0.000200 (0.006119) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039220 / 0.037411 (0.001809) | 0.113805 / 0.014526 (0.099279) | 0.125704 / 0.176557 (-0.050853) | 0.189198 / 0.737135 (-0.547937) | 0.126334 / 0.296338 (-0.170004) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502226 / 0.215209 (0.287017) | 5.039133 / 2.077655 (2.961478) | 2.782352 / 1.504120 (1.278232) | 2.587654 / 1.541195 (1.046460) | 2.692588 / 1.468490 (1.224098) | 0.585672 / 4.584777 (-3.999105) | 4.553078 / 3.745712 (0.807366) | 3.864739 / 5.269862 (-1.405123) | 2.536109 / 4.565676 (-2.029567) | 0.069567 / 0.424275 (-0.354708) | 0.008749 / 0.007607 (0.001142) | 0.620645 / 0.226044 (0.394601) | 6.247286 / 2.268929 (3.978357) | 3.345293 / 55.444624 (-52.099332) | 2.873970 / 6.876477 (-4.002507) | 3.123190 / 2.142072 (0.981118) | 0.687391 / 4.805227 (-4.117837) | 0.159046 / 6.500664 (-6.341618) | 0.071019 / 0.075469 (-0.004450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.728724 / 1.841788 (-0.113064) | 22.828390 / 8.074308 (14.754082) | 17.305225 / 10.191392 (7.113833) | 0.176571 / 0.680424 (-0.503853) | 0.023837 / 0.534201 (-0.510364) | 0.467935 / 0.579283 (-0.111348) | 0.503701 / 0.434364 (0.069337) | 0.558140 / 0.540337 (0.017803) | 0.789326 / 1.386936 (-0.597610) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-09T09:23:23Z
| 2023-10-10T07:13:22Z
| 2023-10-10T07:03:04Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6286",
"merged_at": "2023-10-10T07:03:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6286"
}
|
Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible.
See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b
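A sketch of what the new error could look like and how a loading script might raise it (the exact message and module layout are assumptions):
```python
class DefunctDatasetError(Exception):
    """Raised when a dataset is defunct and no longer accessible."""

# Example use in the loading script of a defunct dataset:
raise DefunctDatasetError(
    "This dataset is defunct and no longer accessible due to a takedown request."
)
```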
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6286/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6286/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2845
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2845/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2845/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2845/events
|
https://github.com/huggingface/datasets/issues/2845
| 981,487,861
|
MDU6SXNzdWU5ODE0ODc4NjE=
| 2,845
|
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2021-08-27T18:21:51Z
| 2021-08-27T18:24:05Z
| null |
CONTRIBUTOR
| null | null | null |
Often there is a need to prepare a dataset but not use it immediately, e.g. test suite setup, so it'd be really useful to be able to do:
```
if not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds)
```
This can already be done with:
```
builder = load_dataset_builder(ds)
if not os.path.isdir(builder.cache_dir):
builder.download_and_prepare()
```
but the current way is much less intuitive and harder to remember than the proposed API, IMHO.
One more way is to do:
```
_ = load_dataset(ds)
```
but it wastes resources loading the dataset when it's not needed.
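As a minimal sketch of the proposed helpers built on the existing builder API (the function names follow this request; they are not part of `datasets`):
```python
import os

from datasets import load_dataset_builder

def is_dataset_cached(name: str) -> bool:
    # mirrors the workaround above: the dataset counts as cached
    # once its builder's cache directory exists
    builder = load_dataset_builder(name)
    return os.path.isdir(builder.cache_dir)

def cache_dataset(name: str) -> None:
    # download and prepare the dataset only when the cache is missing
    if not is_dataset_cached(name):
        load_dataset_builder(name).download_and_prepare()
```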
This has been discussed at https://huggingface.slack.com/archives/C01229B19EX/p1630021912025800
Thank you!
@lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2845/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2845/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2422
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2422/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2422/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2422/events
|
https://github.com/huggingface/datasets/pull/2422
| 905,568,548
|
MDExOlB1bGxSZXF1ZXN0NjU2NjM3MzY1
| 2,422
|
Fix save_to_disk nested features order in dataset_info.json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-05-28T15:03:28Z
| 2021-05-28T15:26:57Z
| 2021-05-28T15:26:56Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2422.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2422",
"merged_at": "2021-05-28T15:26:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2422.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2422"
}
|
Fix issue https://github.com/huggingface/datasets/issues/2267
The order of nested features matters (a pyarrow limitation), but the `save_to_disk` method was serializing the feature types to JSON with `sort_keys=True`, which broke the order of the nested features.
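A minimal illustration of the failure mode with plain `json` (the field names are made up):
```python
import json

# nested feature definitions where key order is significant
features = {"answers": {"text": "string", "answer_start": "int32"}}

print(json.dumps(features, sort_keys=True))
# {"answers": {"answer_start": "int32", "text": "string"}}  <- reordered
print(json.dumps(features))
# {"answers": {"text": "string", "answer_start": "int32"}}  <- order preserved
```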
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2422/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2422/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1763
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1763/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1763/events
|
https://github.com/huggingface/datasets/pull/1763
| 791,389,763
|
MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1
| 1,763
|
PAWS-X: Fix csv Dictreader splitting data on quotes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4",
"events_url": "https://api.github.com/users/gowtham1997/events{/privacy}",
"followers_url": "https://api.github.com/users/gowtham1997/followers",
"following_url": "https://api.github.com/users/gowtham1997/following{/other_user}",
"gists_url": "https://api.github.com/users/gowtham1997/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gowtham1997",
"id": 9641196,
"login": "gowtham1997",
"node_id": "MDQ6VXNlcjk2NDExOTY=",
"organizations_url": "https://api.github.com/users/gowtham1997/orgs",
"received_events_url": "https://api.github.com/users/gowtham1997/received_events",
"repos_url": "https://api.github.com/users/gowtham1997/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gowtham1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gowtham1997/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gowtham1997"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-01-21T18:21:01Z
| 2021-01-22T10:14:33Z
| 2021-01-22T10:13:45Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"merged_at": "2021-01-22T10:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1763"
}
|
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
```
Changed `data = csv.DictReader(f, delimiter="\t")` to `data = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)` in the data loader so that the csv module treats quote characters as literal text instead of quoting markers.
The results are as expected for all languages after the change.
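A self-contained illustration of the underlying csv behavior (synthetic data, not the actual PAWS-X files):
```python
import csv
import io

# A field that begins with a double quote puts the default dialect into
# quoted mode, so it swallows tabs and even newlines until the closing
# quote, merging records.
data = 'id\tsentence\n1\t"hello\n2\tworld"\n'

print(list(csv.DictReader(io.StringIO(data), delimiter="\t")))
# one merged record: [{'id': '1', 'sentence': 'hello\n2\tworld'}]

print(list(csv.DictReader(io.StringIO(data), delimiter="\t", quoting=csv.QUOTE_NONE)))
# quotes kept literally, rows intact:
# [{'id': '1', 'sentence': '"hello'}, {'id': '2', 'sentence': 'world"'}]
```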
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1763/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1505
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1505/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1505/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1505/events
|
https://github.com/huggingface/datasets/pull/1505
| 763,750,773
|
MDExOlB1bGxSZXF1ZXN0NTM4MTEyMTk5
| 1,505
|
add ilist dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thevasudevgupta",
"id": 53136577,
"login": "thevasudevgupta",
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thevasudevgupta"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-12T12:44:12Z
| 2020-12-17T15:43:07Z
| 2020-12-17T15:43:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1505.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1505",
"merged_at": "2020-12-17T15:43:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1505.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1505"
}
|
This PR adds the Indo-Aryan Language Identification Shared Task (ilist) dataset.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1505/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1505/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1973
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1973/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1973/events
|
https://github.com/huggingface/datasets/issues/1973
| 820,077,312
|
MDU6SXNzdWU4MjAwNzczMTI=
| 1,973
|
Question: what gets stored in the datasets cache and why is it so huge?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.",
"Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.",
"Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ",
"And to clarify, it's not memory, it's disk space. Thank you!",
"Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```",
"Thanks for the tip, this is useful. ",
"Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.",
"Thank you!"
] | 2021-03-02T14:35:53Z
| 2021-03-30T14:03:59Z
| 2021-03-16T09:44:00Z
|
NONE
| null | null | null |
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178 GB, which seems really large. What is stored in there, and why is it so large? I don't think I noticed this problem before, and it seems related to the new version of the datasets library. Any insight? Thank you!
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1973/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1808
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1808/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1808/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1808/events
|
https://github.com/huggingface/datasets/issues/1808
| 798,879,180
|
MDU6SXNzdWU3OTg4NzkxODA=
| 1,808
|
writing Datasets in a human readable format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] | null |
[
"AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\r\nYou can convert the Arrow table to a pandas dataframe to save the data as csv as follows:\r\n```python\r\narrow_table = dataset.data\r\ndataframe = arrow_table.to_pandas()\r\ndataframe.to_csv(\"/path/to/file.csv\")\r\n```\r\n\r\nSimilarly, you can convert the dataset to a Python dict and save it as JSON:\r\n```python\r\nimport json\r\narrow_table = dataset.data\r\npy_dict = arrow_table.to_pydict()\r\nwith open(\"/path/to/file.json\", \"w+\") as f:\r\n json.dump(py_dict, f)\r\n```",
"Indeed this works as long as you have enough memory.\r\nIt would be amazing to have export options like csv, json etc. !\r\n\r\nIt should be doable to implement something that iterates through the dataset batch by batch to write to csv for example.\r\nThere is already an `export` method but currently the only export type that is supported is `tfrecords`.",
"Hi! `datasets` now supports `Dataset.to_csv` and `Dataset.to_json` for saving data in a human readable format."
] | 2021-02-02T02:55:40Z
| 2022-06-01T15:38:13Z
| 2022-06-01T15:38:13Z
|
NONE
| null | null | null |
Hi
I see there is a `save_to_disk` function to save data, but its output is not in a human-readable format. Is there a way I could save a Dataset object in a human-readable format to a file, like JSON? Thanks @lhoestq
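As noted in the comments above, `datasets` later added direct exporters; a short sketch (the file names are arbitrary):
```python
from datasets import load_dataset

ds = load_dataset("squad", split="train[:100]")
ds.to_json("squad_sample.jsonl")  # JSON Lines by default
ds.to_csv("squad_sample.csv")
```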
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1808/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1808/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/664
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/664/comments
|
https://api.github.com/repos/huggingface/datasets/issues/664/events
|
https://github.com/huggingface/datasets/issues/664
| 707,017,791
|
MDU6SXNzdWU3MDcwMTc3OTE=
| 664
|
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activity, closing",
"It happened when try to change the old project which use 'nlp' to new project which use 'datasets'. You should check you old 'my_squad.py' file, change the inherit class from `nlp.xxx` to `datasets.xxx`. Otherwise datasets - load.py - import_main_class() `if inspect.isclass(obj) and issubclass(obj, main_cls_type):` can not find the main_cls."
] | 2020-09-23T03:53:36Z
| 2023-04-17T09:31:20Z
| 2020-10-20T09:06:13Z
|
NONE
| null | null | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises an error:
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')

/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
    602             hash=hash,
    603             features=features,
--> 604             **config_kwargs,
    605         )
    606

TypeError: 'NoneType' object is not callable
```
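Per the comments, the error means `import_main_class()` found no class in the script inheriting from `datasets.DatasetBuilder`, typically because a copied script still inherits from the old `nlp.*` classes. A minimal skeleton of what the loader expects (the class name and feature schema are illustrative):
```python
import datasets

class MySquad(datasets.GeneratorBasedBuilder):
    """The loader picks up any subclass of datasets.DatasetBuilder."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"question": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        yield 0, {"question": "example"}
```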
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/664/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3530
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3530/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3530/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3530/events
|
https://github.com/huggingface/datasets/pull/3530
| 1,093,894,732
|
PR_kwDODunzps4wiZCw
| 3,530
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
}
] | null |
[] | 2022-01-05T01:32:07Z
| 2022-01-05T12:50:51Z
| 2022-01-05T12:50:50Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3530",
"merged_at": "2022-01-05T12:50:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3530"
}
|
Removing reference to "Common Voice" in the Personal and Sensitive Information section.
Adding link to license.
Correcting license type in metadata.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3530/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3530/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6447
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6447/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6447/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6447/events
|
https://github.com/huggingface/datasets/issues/6447
| 2,008,195,298
|
I_kwDODunzps53sqDi
| 6,447
|
Support one dataset loader per config when using YAML
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2023-11-23T13:03:07Z
| 2023-11-23T13:03:07Z
| null |
CONTRIBUTOR
| null | null | null |
### Feature request
See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1
I would like to use the CSV loader for the "csv" config, the JSONL loader for the "jsonl" config, etc.
### Motivation
It would be more flexible for users
### Your contribution
No specific contribution
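For illustration, the requested behavior would let each YAML config pick its own builder, so the calls below (repo and config names taken from the example above) would use different loaders. This is a sketch of the desired API, not current behavior:
```python
from datasets import load_dataset

# Desired: the "csv" config uses the CSV builder, "jsonl" the JSON builder.
csv_ds = load_dataset("datasets-examples/doc-unsupported-1", "csv")
jsonl_ds = load_dataset("datasets-examples/doc-unsupported-1", "jsonl")
```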
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6447/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6447/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/3813
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3813/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3813/events
|
https://github.com/huggingface/datasets/issues/3813
| 1,158,474,859
|
I_kwDODunzps5FDOxr
| 3,813
|
Add MetaShift dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
}
] | null |
[
"I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.",
"#self-assign",
"I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_metashift_dataset/datasets/metashift/metashift.py)\r\n1. The dataset does not have a typical - train/test/val split. What do we do for the _split_generators() function ? How do we go about this ?\r\n2. This dataset builds on the Visual Genome dataset, using a metadata file. The dataset is generated using generate_full_MetaShift.py script. By default, the authors choose to generate the dataset only for a SELECTED_CLASSES. The following script is used : \r\nCode : https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/generate_full_MetaShift.py \r\nInfo : https://metashift.readthedocs.io/en/latest/sub_pages/download_MetaShift.html#generate-the-full-metashift-dataset\r\nCan I just copy over the required functions into the metashift.py to generate the dataset ?\r\n3. How do we complete the _generate_examples for this dataset ?\r\n\r\nThe user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nInputs, suggestions would be helpful. Thank you.",
"I think @mariosasko and @lhoestq should be able to help here 😄 ",
"Hi ! Thanks for adding this dataset :) Let me answer your questions:\r\n\r\n1. in this case you can put everything in the \"train\" split\r\n2. Yes you can copy the script (provided you also include the MIT license of the code in the file header for example). Though we ideally try to not create new directories nor files when generating dataset, so if possible this script should be adapted to not create the file structure they mentioned, but instead yield the images one by one in `_generate_examples`. Let me know if you think this is feasible\r\n3. see point 2 haha\r\n\r\n> The user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nYup ! We can also define a `selected_classes` parameter such that users can do\r\n```python\r\nload_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...])\r\n```",
"Great. This is helpful. Thanks @lhoestq .\r\nRegarding Point 2, I'll try using yield instead of creating the directories and see if its feasible. selected_classes config sounds good.",
"Closed via #3900 "
] | 2022-03-03T14:26:45Z
| 2022-04-10T13:39:59Z
| 2022-04-10T13:39:59Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
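As discussed in the comments above, a `selected_classes` parameter was proposed so users can restrict which subsets get generated. A hedged usage sketch, with the parameter name taken from the comment thread and illustrative class names:
```python
from datasets import load_dataset

# Load only a subset of the 410 classes; everything lands in the "train" split.
ds = load_dataset("metashift", selected_classes=["cat", "dog", "bus"])
```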
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3813/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2592
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2592/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2592/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2592/events
|
https://github.com/huggingface/datasets/pull/2592
| 937,060,559
|
MDExOlB1bGxSZXF1ZXN0NjgzNjc2MjA4
| 2,592
|
Add c4.noclean infos
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-07-05T12:51:40Z
| 2021-07-05T13:15:53Z
| 2021-07-05T13:15:52Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2592",
"merged_at": "2021-07-05T13:15:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2592"
}
|
Adding the data file checksums and the dataset size of the c4.noclean configuration of the C4 dataset.
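The recorded checksums and sizes can be inspected without downloading the data; a small sketch, assuming the noclean config is exposed as "en.noclean":
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("c4", "en.noclean")
print(builder.info.dataset_size)        # total size in bytes recorded by this PR
print(builder.info.download_checksums)  # per-file checksums
```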
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2592/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2592/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/449
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/449/comments
|
https://api.github.com/repos/huggingface/datasets/issues/449/events
|
https://github.com/huggingface/datasets/pull/449
| 666,898,923
|
MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx
| 449
|
add reuters21578 dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> Awesome !\r\n> Good job on parsing these files :O\r\n> \r\n> Do you think it would be hard to get the two other split configurations ?\r\n\r\nIt shouldn't be that hard, I think I can consider different config names for each split ",
"> > Awesome !\r\n> > Good job on parsing these files :O\r\n> > Do you think it would be hard to get the two other split configurations ?\r\n> \r\n> It shouldn't be that hard, I think I can consider different config names for each split\r\n\r\nYes that would be perfect",
"closing this PR and opening a new one to fix the circle CI problems"
] | 2020-07-28T08:58:12Z
| 2023-09-24T09:49:28Z
| 2020-08-03T11:10:31Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/449.diff",
"html_url": "https://github.com/huggingface/datasets/pull/449",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/449.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/449"
}
|
This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do it, happy to get your opinion on it).
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split: train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the first 2 splits.
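A rough sketch of the line-by-line approach described above, assuming a simplified `.sgm` layout with one tag per line (real Reuters files are messier, so treat this as illustrative only):
```python
def parse_sgm_lines(path):
    """Yield (title, body) pairs by scanning an .sgm file as plain text."""
    title, body, in_body = None, [], False
    with open(path, encoding="latin-1") as f:
        for line in f:
            line = line.strip()
            if line.startswith("<TITLE>"):
                title = line.replace("<TITLE>", "").replace("</TITLE>", "")
            elif line.startswith("<BODY>"):
                in_body = True
                body.append(line.replace("<BODY>", ""))
            elif "</BODY>" in line:
                in_body = False
                yield title, " ".join(body)
                title, body = None, []
            elif in_body:
                body.append(line)
```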
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/449/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2389
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2389/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2389/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2389/events
|
https://github.com/huggingface/datasets/pull/2389
| 897,822,270
|
MDExOlB1bGxSZXF1ZXN0NjQ5Nzc3MDMz
| 2,389
|
Insert task templates for text classification
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Update: found a few datasets that slipped through the net. Adding them shortly!",
"You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?",
"> You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?\r\n\r\nhi @yjernite, these code insertions are auto-generated so could certainly be improved :) \r\n\r\njust so i understand, your idea is that instead of doing something like\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n task_templates=[\r\n TextClassification(\r\n labels=(\"Business\", \"Sci/Tech\", \"Sports\", \"World\"),\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ],\r\n )\r\n```\r\n\r\nwe could do the following:\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n info = datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n )\r\n\r\n info.task_templates = [\r\n TextClassification(\r\n labels=info.features.names,\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ]\r\n return info\r\n```\r\n\r\n",
"Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?",
"> Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?\r\n\r\nOh yes, that would be great! It does mean enforcing that people use the right feature type (sometimes people still use a `string` feature still because they don't want to enumerate the classes, but I guess you've been catching most of those in reviews @lhoestq )\r\n\r\nThere might be reasons where there should be a legitimate difference, but I can't really think of nay right now, and we can always duplicate the feature",
"Let's ignore the CI fails since they are unrelated to your changes. They're about dataset cards issues"
] | 2021-05-21T08:36:26Z
| 2021-05-28T15:28:58Z
| 2021-05-28T15:26:28Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2389",
"merged_at": "2021-05-28T15:26:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2389"
}
|
This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only have `Value`-type features; these will be handled in a separate PR
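For reference, the selection criterion above amounts to a small feature check. A rough sketch of the heuristic (the function name is illustrative, not the actual insertion script):
```python
from datasets import ClassLabel, Features, Value


def eligible_for_text_classification(features: Features) -> bool:
    """True if the schema is exactly one string column plus one ClassLabel column."""
    text_cols = [n for n, f in features.items() if isinstance(f, Value) and f.dtype == "string"]
    label_cols = [n for n, f in features.items() if isinstance(f, ClassLabel)]
    return len(features) <= 2 and len(text_cols) == 1 and len(label_cols) == 1
```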
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2389/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2389/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5785
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5785/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5785/events
|
https://github.com/huggingface/datasets/issues/5785
| 1,680,956,964
|
I_kwDODunzps5kMV4k
| 5,785
|
Unsupported data files raise TypeError: 'NoneType' object is not iterable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2023-04-24T10:38:03Z
| 2023-04-27T12:57:30Z
| 2023-04-27T12:57:30Z
|
MEMBER
| null | null | null |
Currently, we raise a TypeError for unsupported data files:
```
TypeError: 'NoneType' object is not iterable
```
See:
- https://github.com/huggingface/datasets-server/issues/1073
We should give a more informative error message.
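A minimal sketch of the kind of guard that could replace the bare `TypeError`; the function and variable names are illustrative, not the actual `datasets` internals:
```python
def resolve_data_files(data_files, supported_extensions=(".csv", ".json", ".parquet")):
    # Fail early with an actionable message instead of iterating over None.
    if data_files is None:
        raise ValueError(
            "No supported data files were found in the repository. "
            f"Supported extensions: {', '.join(supported_extensions)}"
        )
    return list(data_files)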
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5785/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6500
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6500/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6500/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6500/events
|
https://github.com/huggingface/datasets/pull/6500
| 2,043,258,633
|
PR_kwDODunzps5iFc6e
| 6,500
|
Enable setting config as default when push_to_hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6500). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This is ready for review @huggingface/datasets. ",
"Also what if the config is being overwritten and it was the default config and the user doesn't pass `set_default` ?\r\nI'd expect the config to keep being the default one but lmk what you think",
"How can you unset a config as the default one? In the case you mentioned, I would expect the config not being the default one.",
"Maybe by passing `set_default=False` ? (set_default can be None by default)",
"I think that way we are unnecessarily complicating the logic of `push_to_hub` and as I told you, I would expect the contrary: the result of calling `push_to_hub` with a determined set of arguments should always be the same, independently of previous calls and the current state of the config on the Hub. Push to hub should be somehow stateless in that sense, and IMO the user expects that the push overwrites previous config if already present on the Hub. I find very confusing making it to partially update the config on the Hub.",
"That makes sense, having it stateless is simpler and no need to do something too fancy indeed",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005329 / 0.011353 (-0.006024) | 0.002998 / 0.011008 (-0.008010) | 0.063756 / 0.038508 (0.025248) | 0.051713 / 0.023109 (0.028603) | 0.248135 / 0.275898 (-0.027763) | 0.269136 / 0.323480 (-0.054344) | 0.002970 / 0.007986 (-0.005015) | 0.002566 / 0.004328 (-0.001763) | 0.048110 / 0.004250 (0.043859) | 0.038415 / 0.037052 (0.001363) | 0.254012 / 0.258489 (-0.004477) | 0.281915 / 0.293841 (-0.011926) | 0.027503 / 0.128546 (-0.101043) | 0.010370 / 0.075646 (-0.065276) | 0.208965 / 0.419271 (-0.210306) | 0.035508 / 0.043533 (-0.008024) | 0.249116 / 0.255139 (-0.006023) | 0.266350 / 0.283200 (-0.016850) | 0.018440 / 0.141683 (-0.123243) | 1.101089 / 1.452155 (-0.351066) | 1.164870 / 1.492716 (-0.327847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090909 / 0.018006 (0.072903) | 0.298041 / 0.000490 (0.297551) | 0.000211 / 0.000200 (0.000012) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018137 / 0.037411 (-0.019275) | 0.059574 / 0.014526 (0.045048) | 0.071754 / 0.176557 (-0.104803) | 0.117980 / 0.737135 (-0.619155) | 0.072903 / 0.296338 (-0.223435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282844 / 0.215209 (0.067635) | 2.740916 / 2.077655 (0.663261) | 1.444546 / 1.504120 (-0.059574) | 1.321904 / 1.541195 (-0.219291) | 1.356957 / 
1.468490 (-0.111533) | 0.568389 / 4.584777 (-4.016388) | 2.354042 / 3.745712 (-1.391671) | 2.719427 / 5.269862 (-2.550435) | 1.719616 / 4.565676 (-2.846061) | 0.062537 / 0.424275 (-0.361738) | 0.004915 / 0.007607 (-0.002692) | 0.334716 / 0.226044 (0.108672) | 3.299499 / 2.268929 (1.030571) | 1.814629 / 55.444624 (-53.629996) | 1.515245 / 6.876477 (-5.361232) | 1.553085 / 2.142072 (-0.588987) | 0.643859 / 4.805227 (-4.161368) | 0.116650 / 6.500664 (-6.384014) | 0.041432 / 0.075469 (-0.034037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948227 / 1.841788 (-0.893561) | 11.331103 / 8.074308 (3.256795) | 10.209658 / 10.191392 (0.018266) | 0.126721 / 0.680424 (-0.553703) | 0.013638 / 0.534201 (-0.520563) | 0.282540 / 0.579283 (-0.296743) | 0.262635 / 0.434364 (-0.171729) | 0.335357 / 0.540337 (-0.204981) | 0.441798 / 1.386936 (-0.945138) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005200 / 0.011353 (-0.006153) | 0.003012 / 0.011008 (-0.007996) | 0.047571 / 0.038508 (0.009063) | 0.055069 / 0.023109 (0.031959) | 0.271150 / 0.275898 (-0.004748) | 0.294957 / 0.323480 (-0.028523) | 0.003922 / 0.007986 (-0.004064) | 0.002627 / 0.004328 (-0.001702) | 0.047777 / 0.004250 (0.043527) | 0.039507 / 0.037052 (0.002454) | 0.276314 / 0.258489 (0.017825) | 0.300436 / 0.293841 (0.006595) | 0.028951 / 0.128546 (-0.099595) | 0.010583 / 0.075646 (-0.065063) | 0.056535 / 0.419271 (-0.362737) | 0.032654 / 0.043533 (-0.010879) | 0.272945 / 0.255139 (0.017806) | 0.291909 / 0.283200 (0.008709) | 0.017545 / 0.141683 (-0.124138) | 1.195897 / 1.452155 (-0.256258) | 1.171855 / 1.492716 (-0.320861) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091919 / 0.018006 (0.073913) | 0.299297 / 0.000490 (0.298807) | 0.000225 / 0.000200 (0.000025) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022271 / 0.037411 (-0.015140) | 0.068903 / 0.014526 (0.054377) | 0.083767 / 0.176557 (-0.092790) | 0.120239 / 0.737135 (-0.616896) | 0.083448 / 0.296338 (-0.212891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295353 / 0.215209 (0.080144) | 2.911452 / 2.077655 (0.833798) | 1.577941 / 1.504120 (0.073821) | 1.454514 / 1.541195 (-0.086681) | 1.459575 / 1.468490 (-0.008915) | 0.572475 / 4.584777 (-4.012302) | 2.443634 / 3.745712 (-1.302078) | 2.801171 / 5.269862 (-2.468691) | 1.724214 / 4.565676 (-2.841462) | 0.063539 / 0.424275 (-0.360736) | 0.004939 / 0.007607 (-0.002668) | 0.347705 / 0.226044 (0.121660) | 3.489591 / 2.268929 (1.220663) | 1.944952 / 55.444624 (-53.499672) | 1.652810 / 6.876477 (-5.223667) | 1.656361 / 2.142072 (-0.485712) | 0.647052 / 4.805227 (-4.158176) | 0.117286 / 6.500664 (-6.383379) | 0.040979 / 0.075469 (-0.034490) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971761 / 1.841788 (-0.870027) | 11.770547 / 8.074308 (3.696239) | 10.402502 / 10.191392 (0.211110) | 0.128280 / 0.680424 (-0.552144) | 0.015160 / 0.534201 (-0.519041) | 0.286706 / 0.579283 (-0.292578) | 0.274539 / 0.434364 (-0.159825) | 0.324591 / 0.540337 (-0.215747) | 0.573846 / 1.386936 (-0.813090) |\n\n</details>\n</details>\n\n\n"
] | 2023-12-15T09:17:41Z
| 2023-12-18T11:56:11Z
| 2023-12-18T11:50:03Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6500",
"merged_at": "2023-12-18T11:50:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6500"
}
|
Fix #6497.
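A short usage sketch of the parameter this PR adds; following the stateless behavior agreed on in the comments, each push fully determines whether its config is the default (repo name and data file are placeholders):
```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="train.csv", split="train")
# Push a named config and mark it as the default one:
ds.push_to_hub("username/my-dataset", config_name="en", set_default=True)
# Consumers can then omit the config name:
default = load_dataset("username/my-dataset")
```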
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6500/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6500/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1899
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1899/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1899/events
|
https://github.com/huggingface/datasets/pull/1899
| 810,308,332
|
MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4
| 1,899
|
Fix: ALT - fix duplicated examples in alt-parallel
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-17T15:53:56Z
| 2021-02-17T17:20:49Z
| 2021-02-17T17:20:49Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"merged_at": "2021-02-17T17:20:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1899"
}
|
As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field.
This was due to a bad copy of a Python dict.
This PR fixes that.
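For illustration, this is the bug class behind the fix: appending the same mutable dict to every example means all of them end up holding the final values. The sketch uses made-up data, not the actual ALT script code:
```python
rows = [("en", "hello"), ("fr", "bonjour")]

# Buggy: one shared dict, mutated in place and referenced by every example.
shared, buggy = {}, []
for lang, text in rows:
    shared[lang] = text
    buggy.append({"translation": shared})
print(buggy)  # both entries show {'en': 'hello', 'fr': 'bonjour'}

# Fixed: copy the dict for each appended example.
shared, fixed = {}, []
for lang, text in rows:
    shared[lang] = text
    fixed.append({"translation": dict(shared)})
print(fixed)  # first entry keeps {'en': 'hello'} as it was at append time
```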
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1899/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2948
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2948/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2948/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2948/events
|
https://github.com/huggingface/datasets/pull/2948
| 1,000,844,077
|
PR_kwDODunzps4r9PdV
| 2,948
|
Fix minor URL format in scitldr dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-09-20T11:11:32Z
| 2021-09-20T13:18:28Z
| 2021-09-20T13:18:28Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2948",
"merged_at": "2021-09-20T13:18:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2948"
}
|
While investigating issue #2918, I found these minor format issues in the URLs (if run on a Windows machine).
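The usual cause of this kind of issue is building URLs with `os.path.join`, which uses backslashes on Windows. A small sketch with an illustrative base URL, not the real data location:
```python
import os
import posixpath

base = "https://example.com/scitldr"
bad = os.path.join(base, "SciTLDR-A", "train.jsonl")     # "\\" separators on Windows
good = posixpath.join(base, "SciTLDR-A", "train.jsonl")  # always "/"
```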
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2948/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2948/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4835
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4835/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4835/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4835/events
|
https://github.com/huggingface/datasets/pull/4835
| 1,336,994,835
|
PR_kwDODunzps49FJg9
| 4,835
|
Fix documentation card of ethos dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-12T09:51:06Z
| 2022-08-12T13:13:55Z
| 2022-08-12T12:59:39Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4835.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4835",
"merged_at": "2022-08-12T12:59:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4835.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4835"
}
|
Fix documentation card of ethos dataset.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4835/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4835/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/639
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/639/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/639/comments
|
https://api.github.com/repos/huggingface/datasets/issues/639/events
|
https://github.com/huggingface/datasets/pull/639
| 704,217,963
|
MDExOlB1bGxSZXF1ZXN0NDg5MTgxOTY3
| 639
|
Update glue QQP checksum
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-09-18T09:08:15Z
| 2020-09-18T11:37:08Z
| 2020-09-18T11:37:07Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/639.diff",
"html_url": "https://github.com/huggingface/datasets/pull/639",
"merged_at": "2020-09-18T11:37:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/639.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/639"
}
|
Fix #638
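For context, the checksum mismatch in #638 means the hosted QQP archive changed. One can verify a downloaded file against a recorded checksum with a few lines (the file path is illustrative):
```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    # Stream the file so large archives don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


print(sha256_of("QQP-clean.zip"))  # compare against the value in dataset_infos.json
```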
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/639/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/639/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6330
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6330/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6330/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6330/events
|
https://github.com/huggingface/datasets/issues/6330
| 1,956,053,294
|
I_kwDODunzps50lwEu
| 6,330
|
Latest fsspec==2023.10.0 issue with streaming datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"I also encountered a similar error below.\r\nAppreciate the team could shed some light on this issue.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n[/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb](https://vscode-remote+ssh-002dremote-002braspberry-002dg5-002e4x.vscode-resource.vscode-cdn.net/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb) Cell 1 line 4\r\n [1](vscode-notebook-cell://ssh-remote%2Braspberry-g5.4x/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb#W0sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0) from datasets import load_dataset, load_dataset\r\n [3](vscode-notebook-cell://ssh-remote%2Braspberry-g5.4x/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb#W0sdnNjb2RlLXJlbW90ZQ%3D%3D?line=2) # ds = load_dataset(\"parquet\", data_dir=\"/home/ubuntu/work/EveryDream2trainer/datasets/monse_v1/data\")\r\n----> [4](vscode-notebook-cell://ssh-remote%2Braspberry-g5.4x/home/ubuntu/work/EveryDream2trainer/prepare_dataset.ipynb#W0sdnNjb2RlLXJlbW90ZQ%3D%3D?line=3) ds = load_dataset(\"Raspberry-ai/monse-v1\")\r\n\r\nFile [/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/load.py:1804](https://vscode-remote+ssh-002dremote-002braspberry-002dg5-002e4x.vscode-resource.vscode-cdn.net/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/load.py:1804), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1800 # Build dataset for splits\r\n 1801 keep_in_memory = (\r\n 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1803 )\r\n-> 1804 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)\r\n 1805 # Rename and cast features to match task schema\r\n 1806 if task is not None:\r\n\r\nFile [/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/builder.py:1108](https://vscode-remote+ssh-002dremote-002braspberry-002dg5-002e4x.vscode-resource.vscode-cdn.net/opt/conda/envs/everydream/lib/python3.10/site-packages/datasets/builder.py:1108), in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)\r\n 1106 is_local = not is_remote_filesystem(self._fs)\r\n 1107 if not is_local:\r\n-> 1108 raise NotImplementedError(f\"Loading a dataset cached in a {type(self._fs).__name__} is not supported.\")\r\n 1109 if not os.path.exists(self._output_dir):\r\n 1110 raise FileNotFoundError(\r\n 1111 f\"Dataset {self.name}: could not find data in {self._output_dir}. 
Please make sure to call \"\r\n 1112 \"builder.download_and_prepare(), or use \"\r\n 1113 \"datasets.load_dataset() before trying to access the Dataset object.\"\r\n 1114 )\r\n\r\nNotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.\r\n```\r\n\r\nCode to reproduce the issue:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"Raspberry-ai/monse-v1\")\r\n```\r\n\r\n\r\nDependencies:\r\n```\r\nPackage Version\r\n------------------------- ------------\r\nabsl-py 2.0.0\r\naccelerate 0.23.0\r\naiohttp 3.8.4\r\naiosignal 1.3.1\r\nantlr4-python3-runtime 4.9.3\r\nanyio 4.0.0\r\nappdirs 1.4.4\r\nargon2-cffi 23.1.0\r\nargon2-cffi-bindings 21.2.0\r\narrow 1.3.0\r\nasttokens 2.4.0\r\nasync-lru 2.0.4\r\nasync-timeout 4.0.3\r\nattrs 23.1.0\r\nBabel 2.13.0\r\nbackcall 0.2.0\r\nbeautifulsoup4 4.12.2\r\nbitsandbytes 0.41.1\r\nbleach 6.1.0\r\nbraceexpand 0.1.7\r\ncachetools 5.3.1\r\ncertifi 2023.7.22\r\ncffi 1.16.0\r\ncharset-normalizer 3.3.1\r\nclick 8.1.7\r\ncmake 3.27.7\r\ncolorama 0.4.6\r\ncomm 0.1.4\r\ncompel 1.1.6\r\ndatasets 2.11.0\r\ndebugpy 1.8.0\r\ndecorator 5.1.1\r\ndefusedxml 0.7.1\r\ndiffusers 0.18.0\r\ndill 0.3.6\r\ndocker-pycreds 0.4.0\r\ndowg 0.3.1\r\neinops 0.7.0\r\neinops-exts 0.0.4\r\nexceptiongroup 1.1.3\r\nexecuting 2.0.0\r\nfastjsonschema 2.18.1\r\nfilelock 3.12.4\r\nfqdn 1.5.1\r\nfrozenlist 1.4.0\r\nfsspec 2023.10.0\r\nftfy 6.1.1\r\ngitdb 4.0.11\r\nGitPython 3.1.40\r\ngoogle-auth 2.23.3\r\ngoogle-auth-oauthlib 1.1.0\r\ngrpcio 1.59.0\r\nhuggingface-hub 0.18.0\r\nidna 3.4\r\nimportlib-metadata 6.8.0\r\ninflection 0.5.1\r\nipykernel 6.25.2\r\nipython 8.16.1\r\nisoduration 20.11.0\r\njedi 0.19.1\r\nJinja2 3.1.2\r\njoblib 1.3.2\r\njson5 0.9.14\r\njsonpointer 2.4\r\njsonschema 4.19.1\r\njsonschema-specifications 2023.7.1\r\njupyter_client 8.4.0\r\njupyter_core 5.4.0\r\njupyter-events 0.8.0\r\njupyter-lsp 2.2.0\r\njupyter_server 2.8.0\r\njupyter_server_terminals 0.4.4\r\njupyterlab 4.0.7\r\njupyterlab-pygments 0.2.2\r\njupyterlab_server 2.25.0\r\nlightning-utilities 0.9.0\r\nlion-pytorch 0.1.2\r\nlit 17.0.3\r\nMarkdown 3.5\r\nMarkupSafe 2.1.3\r\nmatplotlib-inline 0.1.6\r\nmistune 3.0.2\r\nmore-itertools 10.1.0\r\nmpmath 1.3.0\r\nmultidict 6.0.4\r\nmultiprocess 0.70.14\r\nmypy-extensions 1.0.0\r\nnbclient 0.8.0\r\nnbconvert 7.9.2\r\nnbformat 5.9.2\r\nnest-asyncio 1.5.8\r\nnetworkx 3.2\r\nnltk 3.8.1\r\nnotebook_shim 0.2.3\r\nnumpy 1.23.5\r\noauthlib 3.2.2\r\nomegaconf 2.2.3\r\nopen-clip-torch 2.22.0\r\nopen-flamingo 2.0.0\r\noverrides 7.4.0\r\npackaging 23.2\r\npandas 2.1.1\r\npandocfilters 1.5.0\r\nparso 0.8.3\r\npathtools 0.1.2\r\npexpect 4.8.0\r\npickleshare 0.7.5\r\nPillow 10.1.0\r\npip 23.3.1\r\nplatformdirs 3.11.0\r\nprometheus-client 0.17.1\r\nprompt-toolkit 3.0.39\r\nprotobuf 3.20.1\r\npsutil 5.9.6\r\nptyprocess 0.7.0\r\npure-eval 0.2.2\r\npyarrow 13.0.0\r\npyasn1 0.5.0\r\npyasn1-modules 0.3.0\r\npycparser 2.21\r\npyDeprecate 0.3.2\r\nPygments 2.16.1\r\npynvml 11.4.1\r\npyparsing 3.1.1\r\npyre-extensions 0.0.29\r\npython-dateutil 2.8.2\r\npython-json-logger 2.0.7\r\npytorch-lightning 1.6.5\r\npytz 2023.3.post1\r\nPyYAML 6.0.1\r\npyzmq 25.1.1\r\nreferencing 0.30.2\r\nregex 2023.10.3\r\nrequests 2.31.0\r\nrequests-oauthlib 1.3.1\r\nresponses 0.18.0\r\nrfc3339-validator 0.1.4\r\nrfc3986-validator 0.1.1\r\nrpds-py 0.10.6\r\nrsa 4.9\r\nsafetensors 0.4.0\r\nscipy 1.11.3\r\nSend2Trash 1.8.2\r\nsentencepiece 0.1.98\r\nsentry-sdk 1.32.0\r\nsetproctitle 1.3.3\r\nsetuptools 68.2.2\r\nsix 1.16.0\r\nsmmap 5.0.1\r\nsniffio 1.3.0\r\nsoupsieve 
2.5\r\nstack-data 0.6.3\r\nsympy 1.12\r\ntensorboard 2.15.0\r\ntensorboard-data-server 0.7.1\r\nterminado 0.17.1\r\ntimm 0.9.8\r\ntinycss2 1.2.1\r\ntokenizers 0.13.3\r\ntomli 2.0.1\r\ntorch 2.0.1+cu118\r\ntorchmetrics 1.2.0\r\ntorchvision 0.15.2+cu118\r\ntornado 6.3.3\r\ntqdm 4.66.1\r\ntraitlets 5.11.2\r\ntransformers 4.29.2\r\ntriton 2.0.0\r\ntypes-python-dateutil 2.8.19.14\r\ntyping_extensions 4.8.0\r\ntyping-inspect 0.9.0\r\ntzdata 2023.3\r\nuri-template 1.3.0\r\nurllib3 2.0.7\r\nwandb 0.15.12\r\nwcwidth 0.2.8\r\nwebcolors 1.13\r\nwebdataset 0.2.62\r\nwebencodings 0.5.1\r\nwebsocket-client 1.6.4\r\nWerkzeug 3.0.0\r\nwheel 0.41.2\r\nxformers 0.0.20\r\nxxhash 3.4.1\r\nyarl 1.9.2\r\nzipp 3.17.0\r\n```",
"@humpydonkey FWIW setting fsspec down to 2023.9.2 fixed the issue\r\n\r\n`pip install fsspec==2023.9.2`",
"got it, thanks @ZachNagengast ",
"Thanks for reporting and for the investigation, @ZachNagengast! :hugs: \r\n\r\nWe are investigating the root cause of the issue. In the meantime, we are going to pin fsspec < 2023.10.0. ",
"https://stackoverflow.com/questions/77433096/notimplementederror-loading-a-dataset-cached-in-a-localfilesystem-is-not-suppor/77433141#77433141",
"You can also update `datasets`:\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nIt will also update `fsspec` to use the right version"
] | 2023-10-22T20:57:10Z
| 2023-11-07T10:02:14Z
| 2023-10-23T09:17:56Z
|
CONTRIBUTOR
| null | null | null |
### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps to reproduce the bug
1. Upgrade fsspec to version `2023.10.0`
2. Attempt to load a streaming dataset e.g. `load_dataset("laion/gpt4v-emotion-dataset", split="train", streaming=True)`
3. Observe the following exception:
```
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/load.py", line 2146, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/builder.py", line 1318, in as_streaming_dataset
raise NotImplementedError(
NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.
```
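Until the root cause is fixed, a minimal guard sketch can fail fast with an actionable message. This assumes the regression is specific to `fsspec>=2023.10.0`, as the comments suggest (pinning to `fsspec==2023.9.2` is reported to work):
```python
# A minimal guard sketch, assuming the regression only affects fsspec>=2023.10.0.
import fsspec
from packaging import version

if version.parse(fsspec.__version__) >= version.parse("2023.10.0"):
    raise RuntimeError(
        "Streaming is broken with this fsspec release; "
        "install fsspec<2023.10.0, e.g. `pip install fsspec==2023.9.2`."
    )

from datasets import load_dataset

ds = load_dataset("laion/gpt4v-emotion-dataset", split="train", streaming=True)
```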
### Expected behavior
Should stream the dataset as normal.
### Environment info
datasets@main
fsspec==2023.10.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6330/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6330/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3418
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3418/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3418/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3418/events
|
https://github.com/huggingface/datasets/pull/3418
| 1,077,053,296
|
PR_kwDODunzps4vsHMK
| 3,418
|
Add Wikisource dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
| null |
[] | null |
[
"As we are removing the dataset scripts from GitHub and moving them to the Hugging Face Hub, I am going to transfer this script to the repo: https://huggingface.co/datasets/wikimedia/wikisource"
] | 2021-12-10T17:04:44Z
| 2022-10-04T09:35:56Z
| 2022-10-03T09:37:20Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3418.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3418",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3418.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3418"
}
|
Add loading script for Wikisource dataset.
Fix #3399.
CC: @geohci, @yjernite
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3418/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3418/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6196
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6196/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6196/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6196/events
|
https://github.com/huggingface/datasets/issues/6196
| 1,875,070,972
|
I_kwDODunzps5vw0_8
| 6,196
|
Split order is not preserved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2023-08-31T08:47:16Z
| 2023-08-31T13:48:43Z
| 2023-08-31T13:48:43Z
|
MEMBER
| null | null | null |
I have noticed that in some cases the split order is not preserved.
For example, consider a no-script dataset with configs:
```yaml
configs:
- config_name: default
data_files:
- split: train
path: train.csv
- split: test
path: test.csv
```
- Note the defined split order is [train, test]
Once the dataset is loaded, the split order is not preserved:
```python
In [16]: ds
Out[16]:
DatasetDict({
test: Dataset({
features: ['text', 'label'],
num_rows: 1
})
train: Dataset({
features: ['text', 'label'],
num_rows: 2
})
})
```
- Note the obtained split order is [test, train]
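Until this is fixed, a small workaround sketch: `DatasetDict` is an ordinary mapping, so it can be rebuilt in the order declared in the YAML config (here `ds` is the `DatasetDict` loaded in the snippet above):
```python
# A minimal workaround sketch: rebuild the DatasetDict in the desired split order.
from datasets import DatasetDict

desired_order = ["train", "test"]  # the order defined in the YAML config above
ds = DatasetDict({name: ds[name] for name in desired_order if name in ds})
print(list(ds.keys()))  # ['train', 'test']
```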
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6196/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6196/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3467
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3467/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3467/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3467/events
|
https://github.com/huggingface/datasets/pull/3467
| 1,085,870,665
|
PR_kwDODunzps4wIoqd
| 3,467
|
Push dataset infos.json to Hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The change from `___` to `--` was allowed by https://github.com/huggingface/moon-landing/pull/1657"
] | 2021-12-21T14:07:13Z
| 2021-12-21T17:00:10Z
| 2021-12-21T17:00:09Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3467",
"merged_at": "2021-12-21T17:00:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3467"
}
|
When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394).
This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, that stores the feature types.
Other minor changes:
- renamed the `___` separator to `--`, since `--` is now allowed in a name in the back-end.
I tested this feature with datasets like conll2003 that have feature types like `ClassLabel` that were previously lost.
Close https://github.com/huggingface/datasets/issues/3394
I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes
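For context, a hedged sketch of the round trip this PR is meant to preserve; `"my-user/demo"` below is a hypothetical repo id, not a real dataset:
```python
# A minimal sketch of the round trip; "my-user/demo" is a hypothetical repo id.
from datasets import ClassLabel, Dataset, Features, Value, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = Dataset.from_dict({"text": ["great", "awful"], "label": [1, 0]}, features=features)
ds.push_to_hub("my-user/demo")

reloaded = load_dataset("my-user/demo", split="train")
assert isinstance(reloaded.features["label"], ClassLabel)  # feature type no longer lost
```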
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3467/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3467/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6356
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6356/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6356/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6356/events
|
https://github.com/huggingface/datasets/pull/6356
| 1,964,015,802
|
PR_kwDODunzps5d5Jri
| 6,356
|
Add `fsspec` version to the `datasets-cli env` command output
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008775 / 0.011353 (-0.002578) | 0.005304 / 0.011008 (-0.005704) | 0.108912 / 0.038508 (0.070404) | 0.075589 / 0.023109 (0.052479) | 0.456612 / 0.275898 (0.180713) | 0.502303 / 0.323480 (0.178823) | 0.006695 / 0.007986 (-0.001291) | 0.004404 / 0.004328 (0.000076) | 0.084802 / 0.004250 (0.080552) | 0.062711 / 0.037052 (0.025659) | 0.465062 / 0.258489 (0.206573) | 0.505321 / 0.293841 (0.211480) | 0.049401 / 0.128546 (-0.079146) | 0.014784 / 0.075646 (-0.060862) | 0.378202 / 0.419271 (-0.041069) | 0.069826 / 0.043533 (0.026293) | 0.461161 / 0.255139 (0.206022) | 0.484616 / 0.283200 (0.201416) | 0.035998 / 0.141683 (-0.105685) | 1.846343 / 1.452155 (0.394189) | 1.999439 / 1.492716 (0.506723) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317779 / 0.018006 (0.299773) | 0.605967 / 0.000490 (0.605477) | 0.011412 / 0.000200 (0.011212) | 0.000410 / 0.000054 (0.000356) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031118 / 0.037411 (-0.006293) | 0.095425 / 0.014526 (0.080900) | 0.108002 / 0.176557 (-0.068554) | 0.184625 / 0.737135 (-0.552511) | 0.108180 / 0.296338 (-0.188159) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.587497 / 0.215209 (0.372288) | 5.818632 / 2.077655 (3.740977) | 2.629776 / 1.504120 (1.125656) | 2.266129 / 1.541195 (0.724934) | 2.324618 / 1.468490 
(0.856128) | 0.830049 / 4.584777 (-3.754728) | 5.380062 / 3.745712 (1.634350) | 4.808525 / 5.269862 (-0.461336) | 2.960368 / 4.565676 (-1.605309) | 0.093637 / 0.424275 (-0.330638) | 0.009187 / 0.007607 (0.001580) | 0.703468 / 0.226044 (0.477424) | 6.924509 / 2.268929 (4.655580) | 3.380582 / 55.444624 (-52.064043) | 2.689118 / 6.876477 (-4.187358) | 2.712418 / 2.142072 (0.570345) | 1.017144 / 4.805227 (-3.788084) | 0.212874 / 6.500664 (-6.287791) | 0.080053 / 0.075469 (0.004584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623663 / 1.841788 (-0.218125) | 23.668872 / 8.074308 (15.594564) | 20.245972 / 10.191392 (10.054580) | 0.236448 / 0.680424 (-0.443976) | 0.029730 / 0.534201 (-0.504470) | 0.491525 / 0.579283 (-0.087758) | 0.593780 / 0.434364 (0.159416) | 0.548776 / 0.540337 (0.008438) | 0.799370 / 1.386936 (-0.587566) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009714 / 0.011353 (-0.001639) | 0.005328 / 0.011008 (-0.005681) | 0.078460 / 0.038508 (0.039952) | 0.077791 / 0.023109 (0.054682) | 0.510124 / 0.275898 (0.234226) | 0.547769 / 0.323480 (0.224289) | 0.006868 / 0.007986 (-0.001118) | 0.004145 / 0.004328 (-0.000183) | 0.088696 / 0.004250 (0.084445) | 0.072387 / 0.037052 (0.035334) | 0.527373 / 0.258489 (0.268884) | 0.561948 / 0.293841 (0.268107) | 0.049769 / 0.128546 (-0.078777) | 0.014401 / 0.075646 (-0.061246) | 0.097541 / 0.419271 (-0.321731) | 0.062237 / 0.043533 (0.018705) | 0.531001 / 0.255139 (0.275862) | 0.561797 / 0.283200 (0.278597) | 0.038482 / 0.141683 (-0.103201) | 1.783558 / 1.452155 (0.331404) | 1.864339 / 1.492716 (0.371622) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289389 / 0.018006 (0.271383) | 0.595326 / 0.000490 (0.594836) | 0.004583 / 0.000200 (0.004383) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034492 / 0.037411 (-0.002919) | 0.102934 / 0.014526 (0.088409) | 0.121689 / 0.176557 (-0.054868) | 0.182121 / 0.737135 (-0.555015) | 0.127087 / 0.296338 (-0.169252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.645726 / 0.215209 (0.430517) | 6.462235 / 2.077655 (4.384580) | 3.044176 / 1.504120 (1.540056) | 2.731181 / 1.541195 (1.189986) | 2.805508 / 1.468490 (1.337018) | 0.846324 / 4.584777 (-3.738453) | 5.341074 / 3.745712 (1.595362) | 4.687111 / 5.269862 (-0.582751) | 3.035472 / 4.565676 (-1.530205) | 0.099193 / 0.424275 (-0.325082) | 0.008825 / 0.007607 (0.001218) | 0.795102 / 0.226044 (0.569058) | 7.895770 / 2.268929 (5.626842) | 3.826752 / 55.444624 (-51.617873) | 3.112217 / 6.876477 (-3.764259) | 3.526878 / 2.142072 (1.384806) | 1.011352 / 4.805227 (-3.793875) | 0.213424 / 6.500664 (-6.287240) | 0.076228 / 0.075469 (0.000759) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.805232 / 1.841788 (-0.036556) | 24.049100 / 8.074308 (15.974792) | 23.056011 / 10.191392 (12.864619) | 0.261656 / 0.680424 (-0.418767) | 0.032021 / 0.534201 (-0.502179) | 0.483829 / 0.579283 (-0.095454) | 0.602208 / 0.434364 (0.167844) | 0.565848 / 0.540337 (0.025511) | 0.818678 / 1.386936 (-0.568258) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008043 / 0.011353 (-0.003310) | 0.004642 / 0.011008 (-0.006366) | 0.102592 / 0.038508 (0.064084) | 0.099508 / 0.023109 (0.076399) | 0.377692 / 0.275898 (0.101794) | 0.409929 / 0.323480 (0.086450) | 0.006363 / 0.007986 (-0.001622) | 0.003881 / 0.004328 (-0.000447) | 0.076636 / 0.004250 (0.072386) | 0.067021 / 0.037052 (0.029969) | 0.371454 / 0.258489 (0.112964) | 0.423637 / 0.293841 (0.129796) | 0.038632 / 0.128546 (-0.089914) | 0.010055 / 0.075646 (-0.065591) | 0.352021 / 0.419271 (-0.067251) | 0.064988 / 0.043533 (0.021456) | 0.369614 / 0.255139 (0.114475) | 0.396972 / 0.283200 (0.113773) | 0.028866 / 0.141683 (-0.112817) | 1.757620 / 1.452155 (0.305465) | 1.886283 / 1.492716 (0.393567) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257579 / 0.018006 (0.239572) | 0.529859 / 0.000490 (0.529369) | 0.011720 / 0.000200 (0.011520) | 0.000455 / 0.000054 (0.000401) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034163 / 0.037411 (-0.003248) | 0.101422 / 0.014526 (0.086896) | 0.114858 / 0.176557 (-0.061698) | 0.180265 / 0.737135 (-0.556870) | 0.116034 / 0.296338 (-0.180305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477609 / 0.215209 (0.262400) | 4.830116 / 2.077655 (2.752461) | 2.323844 / 1.504120 (0.819724) | 2.174496 / 1.541195 (0.633301) | 2.268594 / 1.468490 
(0.800104) | 0.612429 / 4.584777 (-3.972348) | 4.265277 / 3.745712 (0.519565) | 4.095741 / 5.269862 (-1.174121) | 2.561532 / 4.565676 (-2.004144) | 0.068043 / 0.424275 (-0.356233) | 0.009139 / 0.007607 (0.001532) | 0.545512 / 0.226044 (0.319467) | 5.456403 / 2.268929 (3.187475) | 2.778937 / 55.444624 (-52.665688) | 2.428560 / 6.876477 (-4.447917) | 2.557483 / 2.142072 (0.415411) | 0.696721 / 4.805227 (-4.108506) | 0.157217 / 6.500664 (-6.343447) | 0.071334 / 0.075469 (-0.004135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.617755 / 1.841788 (-0.224032) | 23.368508 / 8.074308 (15.294200) | 17.028591 / 10.191392 (6.837199) | 0.195881 / 0.680424 (-0.484542) | 0.021788 / 0.534201 (-0.512413) | 0.468484 / 0.579283 (-0.110799) | 0.474604 / 0.434364 (0.040240) | 0.544738 / 0.540337 (0.004400) | 0.771722 / 1.386936 (-0.615214) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007939 / 0.011353 (-0.003414) | 0.004684 / 0.011008 (-0.006324) | 0.077273 / 0.038508 (0.038765) | 0.088763 / 0.023109 (0.065654) | 0.489178 / 0.275898 (0.213280) | 0.531547 / 0.323480 (0.208067) | 0.006214 / 0.007986 (-0.001772) | 0.003988 / 0.004328 (-0.000340) | 0.076685 / 0.004250 (0.072434) | 0.066628 / 0.037052 (0.029576) | 0.497153 / 0.258489 (0.238664) | 0.538301 / 0.293841 (0.244460) | 0.037939 / 0.128546 (-0.090607) | 0.010054 / 0.075646 (-0.065592) | 0.084642 / 0.419271 (-0.334629) | 0.057140 / 0.043533 (0.013608) | 0.487701 / 0.255139 (0.232562) | 0.519676 / 0.283200 (0.236477) | 0.026560 / 0.141683 (-0.115123) | 1.809676 / 1.452155 (0.357521) | 1.864884 / 1.492716 (0.372168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259005 / 0.018006 (0.240998) | 0.522900 / 0.000490 (0.522410) | 0.006885 / 0.000200 (0.006685) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039838 / 0.037411 (0.002426) | 0.117777 / 0.014526 (0.103251) | 0.129189 / 0.176557 (-0.047368) | 0.198584 / 0.737135 (-0.538552) | 0.129753 / 0.296338 (-0.166586) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.543366 / 0.215209 (0.328157) | 5.241502 / 2.077655 (3.163847) | 2.719079 / 1.504120 (1.214959) | 2.525337 / 1.541195 (0.984142) | 2.648908 / 1.468490 (1.180418) | 0.589239 / 4.584777 (-3.995538) | 4.379856 / 3.745712 (0.634144) | 4.139919 / 5.269862 (-1.129943) | 2.633412 / 4.565676 (-1.932264) | 0.074582 / 0.424275 (-0.349693) | 0.009106 / 0.007607 (0.001499) | 0.635540 / 0.226044 (0.409495) | 6.072965 / 2.268929 (3.804037) | 3.327233 / 55.444624 (-52.117391) | 3.012637 / 6.876477 (-3.863840) | 3.113226 / 2.142072 (0.971154) | 0.712705 / 4.805227 (-4.092523) | 0.159550 / 6.500664 (-6.341114) | 0.073446 / 0.075469 (-0.002023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.718732 / 1.841788 (-0.123055) | 23.249445 / 8.074308 (15.175137) | 17.630643 / 10.191392 (7.439251) | 0.201017 / 0.680424 (-0.479407) | 0.024162 / 0.534201 (-0.510039) | 0.475054 / 0.579283 (-0.104229) | 0.492348 / 0.434364 (0.057985) | 0.587118 / 0.540337 (0.046781) | 0.777462 / 1.386936 (-0.609474) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-26T17:19:25Z
| 2023-10-26T18:42:56Z
| 2023-10-26T18:32:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6356.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6356",
"merged_at": "2023-10-26T18:32:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6356.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6356"
}
|
... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes.
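A minimal sketch of what gathering the extra version amounts to (an assumption about the approach, not the actual CLI code):
```python
# A minimal sketch of the added report line (assumption: the real command reads
# the installed package's __version__ attribute, as it does for other libraries).
import fsspec

print(f"- `fsspec` version: {fsspec.__version__}")
```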
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6356/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6356/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4315
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4315/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4315/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4315/events
|
https://github.com/huggingface/datasets/pull/4315
| 1,232,549,330
|
PR_kwDODunzps43pZ6p
| 4,315
|
Fix CLI run_beam namespace
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T12:21:00Z
| 2022-05-11T13:13:00Z
| 2022-05-11T13:05:08Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4315.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4315",
"merged_at": "2022-05-11T13:05:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4315.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4315"
}
|
Currently, the CLI `run_beam` command raises a TypeError:
```
TypeError: __init__() got an unexpected keyword argument 'namespace'
```
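A minimal illustration of the failure mode with a hypothetical class (not the actual `datasets` CLI code): forwarding a keyword argument to a constructor that does not declare it reproduces the error.
```python
# Hypothetical class for illustration only; not the real CLI command.
class RunBeamCommand:
    def __init__(self, dataset):
        self.dataset = dataset

RunBeamCommand(dataset="wikipedia", namespace="user")
# TypeError: __init__() got an unexpected keyword argument 'namespace'
```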
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4315/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4315/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1268
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1268/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1268/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1268/events
|
https://github.com/huggingface/datasets/pull/1268
| 758,871,252
|
MDExOlB1bGxSZXF1ZXN0NTMzOTY0OTQ4
| 1,268
|
new pr for Turkish NER
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Can you run `make style` to fix the code format ?\r\n\r\nAlso it looks like the file `file_downloaded/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.DUMP` is missing inside the dummy_data.zip\r\n\r\n\r\n(note that `TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip` is a directory name, not an actual zip file)",
"Hi Quentin, thank you for your patience with me. I've fixed the preprocessing pipeline, got this very weird error that Yacine told me to push. I've pushed it and after I'll find out that it will work, I will have my final pr on styling.",
"looks like you removed the dataset script file in your latest commit, is it expected ?"
] | 2020-12-07T21:40:26Z
| 2020-12-09T13:45:05Z
| 2020-12-09T13:45:05Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1268",
"merged_at": "2020-12-09T13:45:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1268"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1268/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1268/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3380
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3380/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3380/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3380/events
|
https://github.com/huggingface/datasets/issues/3380
| 1,071,166,270
|
I_kwDODunzps4_2LM-
| 3,380
|
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-12-04T09:18:33Z
| 2022-01-11T12:29:53Z
| 2022-01-11T12:29:53Z
|
MEMBER
| null | null | null |
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week!
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://hf.co/oss-survey)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3380/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3380/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/270
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/270/comments
|
https://api.github.com/repos/huggingface/datasets/issues/270/events
|
https://github.com/huggingface/datasets/issues/270
| 638,121,617
|
MDU6SXNzdWU2MzgxMjE2MTc=
| 270
|
c4 dataset is not viewable in nlpviewer demo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4",
"events_url": "https://api.github.com/users/rajarsheem/events{/privacy}",
"followers_url": "https://api.github.com/users/rajarsheem/followers",
"following_url": "https://api.github.com/users/rajarsheem/following{/other_user}",
"gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajarsheem",
"id": 6441313,
"login": "rajarsheem",
"node_id": "MDQ6VXNlcjY0NDEzMTM=",
"organizations_url": "https://api.github.com/users/rajarsheem/orgs",
"received_events_url": "https://api.github.com/users/rajarsheem/received_events",
"repos_url": "https://api.github.com/users/rajarsheem/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajarsheem"
}
|
[
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] |
closed
| false
| null |
[] | null |
[
"C4 is too large to be shown in the viewer"
] | 2020-06-13T08:26:16Z
| 2020-10-27T15:35:29Z
| 2020-10-27T15:35:13Z
|
NONE
| null | null | null |
I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/)
```python
ModuleNotFoundError: No module named 'langdetect'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 54, in <module>
configs = get_confs(option.id)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs
builder_cls = nlp.load.import_main_class(module_path, dataset=True)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module>
from .c4_utils import (
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module>
import langdetect
```
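If running the script locally, a hedged workaround is simply installing the missing dependency (this assumes the viewer environment is just missing this optional dependency of the c4 script):
```python
# A minimal check after running `pip install langdetect` (workaround assumption:
# the environment is simply missing this optional dependency).
import langdetect

print(langdetect.detect("This is an English sentence."))  # 'en'
```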
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/270/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5118
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5118/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5118/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5118/events
|
https://github.com/huggingface/datasets/issues/5118
| 1,410,547,373
|
I_kwDODunzps5UEz6t
| 5,118
|
Installing `datasets` on M1 computers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david1542",
"id": 9879252,
"login": "david1542",
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"repos_url": "https://api.github.com/users/david1542/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david1542"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @david1542."
] | 2022-10-16T16:50:08Z
| 2022-10-19T09:10:08Z
| 2022-10-19T09:10:08Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement so that `tensorflow-macos` is installed on M1?
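A hedged sketch of what such a conditional requirement could look like, using PEP 508 environment markers (an assumption about how the project's setup could branch, not its actual `setup.py`):
```python
# A minimal sketch using PEP 508 environment markers; this is an assumption about
# how setup.py could branch, not the project's actual requirements list.
TENSORFLOW_REQUIRE = [
    "tensorflow>=2.3,!=2.6.0,!=2.6.1; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos>=2.5; sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```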
## Steps to reproduce the bug
Freshly clone this project (on an M1 machine), create a virtualenv, and run:
```bash
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5118/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5118/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3729/events
|
https://github.com/huggingface/datasets/issues/3729
| 1,139,398,442
|
I_kwDODunzps5D6dcq
| 3,729
|
Wrong number of examples when loading a text dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/58376804?v=4",
"events_url": "https://api.github.com/users/kg-nlp/events{/privacy}",
"followers_url": "https://api.github.com/users/kg-nlp/followers",
"following_url": "https://api.github.com/users/kg-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/kg-nlp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kg-nlp",
"id": 58376804,
"login": "kg-nlp",
"node_id": "MDQ6VXNlcjU4Mzc2ODA0",
"organizations_url": "https://api.github.com/users/kg-nlp/orgs",
"received_events_url": "https://api.github.com/users/kg-nlp/received_events",
"repos_url": "https://api.github.com/users/kg-nlp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kg-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kg-nlp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kg-nlp"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Hi @kg-nlp, thanks for reporting.\r\n\r\nThat is weird... I guess we would need some sample data file where this behavior appears to reproduce the bug for further investigation... ",
"ok, I found the reason why that two results are not same.\r\nthere is /u2029 in the text, the datasets will split sentence according to the /u2029,but when I use open function will not do that .\r\nso I want to know which function shell do that\r\nthanks"
] | 2022-02-16T01:13:31Z
| 2022-03-15T16:16:09Z
| 2022-03-15T16:16:09Z
|
NONE
| null | null | null |
## Describe the bug
When I use `load_dataset` to read a txt file, I find that the number of samples reported is incorrect.
## Steps to reproduce the bug
```
fr = open('train.txt','r',encoding='utf-8').readlines()
print(len(fr)) # 1199637
datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming=False)
print(len(datasets['train'])) # 1199649
```
I also use command line operation to verify it
```
$ wc -l train.txt
1199637 train.txt
```
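For context (per the author's follow-up in the comments), the mismatch comes from U+2029 paragraph separators in the file: `wc -l` and counting `\n` see one line, while Unicode-aware line splitting sees more. A minimal sketch of the difference:
```python
# U+2029 is the Unicode paragraph separator. str.splitlines() treats it
# as a line boundary, while counting "\n" (like `wc -l`) does not.
text = "first\u2029second\nthird"
print(text.count("\n") + 1)    # 2 -> what wc -l / readlines-style counting sees
print(len(text.splitlines()))  # 3 -> what Unicode-aware splitting sees
```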
## Expected results
The number of examples should match the number of lines in the file (1199637).
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.3
- Platform: Windows & Linux
- Python version: 3.7
- PyArrow version: 6.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3729/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2377
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2377/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2377/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2377/events
|
https://github.com/huggingface/datasets/issues/2377
| 894,918,927
|
MDU6SXNzdWU4OTQ5MTg5Mjc=
| 2,377
|
ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ark-kun",
"id": 1829149,
"login": "Ark-kun",
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ark-kun"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.\r\nMore info at #1933 ",
"Not sure if this was resolved, but I am getting a similar error when trying to load a dataset.arrow file directly: `ArrowInvalid: Not an Arrow file`",
"Since we're using the streaming format, you need to use `open_stream`:\r\n\r\n```python\r\nimport pyarrow as pa\r\n\r\ndef in_memory_arrow_table_from_file(filename: str) -> pa.Table:\r\n in_memory_stream = pa.input_stream(filename)\r\n opened_stream = pa.ipc.open_stream(in_memory_stream)\r\n pa_table = opened_stream.read_all()\r\n return pa_table\r\n\r\ndef memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n memory_mapped_stream = pa.memory_map(filename)\r\n opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n pa_table = opened_stream.read_all()\r\n return pa_table\r\n```"
] | 2021-05-19T02:04:37Z
| 2023-03-15T18:06:42Z
| null |
NONE
| null | null | null |
## Describe the bug
`Dataset.save_to_disk` writes Arrow files that cannot be read back with the official `pyarrow.feather` / Arrow IPC file readers.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arrow')
```
## Expected results
I expect that the saved dataset can be read by the official Apache Arrow methods.
## Actual results
```
File "/usr/local/lib/python3.7/site-packages/pyarrow/feather.py", line 236, in read_table
reader.open(source, use_memory_map=memory_map)
File "pyarrow/feather.pxi", line 67, in pyarrow.lib.FeatherReader.open
File "pyarrow/error.pxi", line 123, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not a Feather V1 or Arrow IPC file
```
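As noted in the comments, `datasets` writes the Arrow *streaming* format rather than the Feather/IPC *file* format, so the stream reader is needed. A minimal sketch of the workaround (assuming the `dataset_dir` layout from the repro above):
```python
import pyarrow as pa

# The file is in the Arrow streaming format, so use the stream reader
# rather than pyarrow.feather / the IPC file reader.
source = pa.memory_map("dataset_dir/dataset.arrow")
table = pa.ipc.open_stream(source).read_all()
print(table.num_rows)
```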
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-1.6.2
- Platform: Linux
- Python version: 3.7
- PyArrow version: 0.17.1, also 2.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2377/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2377/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1649
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1649/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1649/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1649/events
|
https://github.com/huggingface/datasets/pull/1649
| 775,544,487
|
MDExOlB1bGxSZXF1ZXN0NTQ2MjAzMjE1
| 1,649
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4",
"events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}",
"followers_url": "https://api.github.com/users/MisbahKhan789/followers",
"following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}",
"gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MisbahKhan789",
"id": 15351802,
"login": "MisbahKhan789",
"node_id": "MDQ6VXNlcjE1MzUxODAy",
"organizations_url": "https://api.github.com/users/MisbahKhan789/orgs",
"received_events_url": "https://api.github.com/users/MisbahKhan789/received_events",
"repos_url": "https://api.github.com/users/MisbahKhan789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MisbahKhan789"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-28T19:05:00Z
| 2020-12-29T10:50:58Z
| 2020-12-29T10:43:03Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1649.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1649",
"merged_at": "2020-12-29T10:43:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1649.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1649"
}
|
Added information in the dataset card
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1649/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1649/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/886
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/886/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/886/comments
|
https://api.github.com/repos/huggingface/datasets/issues/886/events
|
https://github.com/huggingface/datasets/pull/886
| 750,829,314
|
MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5
| 886
|
Fix wikipedia custom config
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https://github.com/huggingface/datasets/issues/577#issuecomment-868122769)"
] | 2020-11-25T13:44:12Z
| 2021-06-25T05:24:16Z
| 2020-11-25T15:42:13Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/886.diff",
"html_url": "https://github.com/huggingface/datasets/pull/886",
"merged_at": "2020-11-25T15:42:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/886.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/886"
}
|
It should be possible to use the wikipedia dataset with any `language` and `date`.
However, it was not working, as noticed in #784: the custom wikipedia configurations were not enabled for some reason.
I fixed that and was able to run
```python
from datasets import load_dataset
load_dataset("./datasets/wikipedia", language="zh", date="20201120", beam_runner='DirectRunner')
```
cc @stvhuang @SamuelCahyawijaya
Fix #784
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/886/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/886/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6009
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6009/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6009/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6009/events
|
https://github.com/huggingface/datasets/pull/6009
| 1,792,059,808
|
PR_kwDODunzps5U1mus
| 6,009
|
Fix cast for dictionaries with no keys
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006961 / 0.011353 (-0.004392) | 0.004390 / 0.011008 (-0.006618) | 0.103249 / 0.038508 (0.064741) | 0.048084 / 0.023109 (0.024975) | 0.351213 / 0.275898 (0.075315) | 0.416918 / 0.323480 (0.093439) | 0.005539 / 0.007986 (-0.002446) | 0.003555 / 0.004328 (-0.000774) | 0.079306 / 0.004250 (0.075055) | 0.066937 / 0.037052 (0.029884) | 0.382601 / 0.258489 (0.124112) | 0.406125 / 0.293841 (0.112284) | 0.032269 / 0.128546 (-0.096277) | 0.009133 / 0.075646 (-0.066514) | 0.354449 / 0.419271 (-0.064822) | 0.068978 / 0.043533 (0.025445) | 0.352314 / 0.255139 (0.097175) | 0.390398 / 0.283200 (0.107199) | 0.025640 / 0.141683 (-0.116043) | 1.553865 / 1.452155 (0.101710) | 1.601292 / 1.492716 (0.108576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208310 / 0.018006 (0.190303) | 0.440076 / 0.000490 (0.439586) | 0.000363 / 0.000200 (0.000163) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029173 / 0.037411 (-0.008238) | 0.111323 / 0.014526 (0.096797) | 0.123001 / 0.176557 (-0.053556) | 0.180180 / 0.737135 (-0.556955) | 0.125804 / 0.296338 (-0.170534) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419919 / 0.215209 (0.204710) | 4.194515 / 2.077655 (2.116860) | 1.881234 / 1.504120 (0.377114) | 1.672914 / 1.541195 (0.131720) | 1.723102 / 1.468490 
(0.254612) | 0.543584 / 4.584777 (-4.041193) | 3.822477 / 3.745712 (0.076765) | 1.837946 / 5.269862 (-3.431915) | 1.094975 / 4.565676 (-3.470701) | 0.066788 / 0.424275 (-0.357487) | 0.011689 / 0.007607 (0.004082) | 0.520983 / 0.226044 (0.294938) | 5.209245 / 2.268929 (2.940316) | 2.392916 / 55.444624 (-53.051708) | 2.060042 / 6.876477 (-4.816434) | 2.162291 / 2.142072 (0.020219) | 0.668472 / 4.805227 (-4.136755) | 0.144373 / 6.500664 (-6.356291) | 0.066152 / 0.075469 (-0.009318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251256 / 1.841788 (-0.590532) | 15.161338 / 8.074308 (7.087030) | 14.416133 / 10.191392 (4.224741) | 0.166145 / 0.680424 (-0.514279) | 0.018168 / 0.534201 (-0.516033) | 0.433364 / 0.579283 (-0.145919) | 0.417484 / 0.434364 (-0.016880) | 0.502543 / 0.540337 (-0.037794) | 0.602904 / 1.386936 (-0.784032) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006946 / 0.011353 (-0.004407) | 0.004248 / 0.011008 (-0.006761) | 0.079707 / 0.038508 (0.041199) | 0.046226 / 0.023109 (0.023117) | 0.375864 / 0.275898 (0.099966) | 0.430740 / 0.323480 (0.107260) | 0.006222 / 0.007986 (-0.001764) | 0.003474 / 0.004328 (-0.000854) | 0.079622 / 0.004250 (0.075372) | 0.066666 / 0.037052 (0.029613) | 0.379487 / 0.258489 (0.120998) | 0.423002 / 0.293841 (0.129161) | 0.032836 / 0.128546 (-0.095710) | 0.008976 / 0.075646 (-0.066670) | 0.086578 / 0.419271 (-0.332693) | 0.055651 / 0.043533 (0.012118) | 0.360787 / 0.255139 (0.105648) | 0.384265 / 0.283200 (0.101065) | 0.025350 / 0.141683 (-0.116333) | 1.547880 / 1.452155 (0.095725) | 1.605850 / 1.492716 (0.113134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184227 / 0.018006 (0.166220) | 0.442071 / 0.000490 (0.441582) | 0.002887 / 0.000200 (0.002687) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031923 / 0.037411 (-0.005488) | 0.119093 / 0.014526 (0.104568) | 0.128704 / 0.176557 (-0.047853) | 0.187065 / 0.737135 (-0.550070) | 0.134135 / 0.296338 (-0.162204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455731 / 0.215209 (0.240522) | 4.562911 / 2.077655 (2.485256) | 2.247431 / 1.504120 (0.743311) | 2.053346 / 1.541195 (0.512151) | 2.049611 / 1.468490 (0.581121) | 0.546069 / 4.584777 (-4.038708) | 3.821852 / 3.745712 (0.076140) | 3.358497 / 5.269862 (-1.911364) | 1.667697 / 4.565676 (-2.897979) | 0.067968 / 0.424275 (-0.356307) | 0.012344 / 0.007607 (0.004737) | 0.550864 / 0.226044 (0.324820) | 5.496867 / 2.268929 (3.227939) | 2.680031 / 55.444624 (-52.764594) | 2.328673 / 6.876477 (-4.547804) | 2.436754 / 2.142072 (0.294682) | 0.681195 / 4.805227 (-4.124033) | 0.148761 / 6.500664 (-6.351904) | 0.067716 / 0.075469 (-0.007753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353798 / 1.841788 (-0.487990) | 15.992965 / 8.074308 (7.918657) | 14.051539 / 10.191392 (3.860147) | 0.181087 / 0.680424 (-0.499337) | 0.018653 / 0.534201 (-0.515548) | 0.433499 / 0.579283 (-0.145784) | 0.428845 / 0.434364 (-0.005519) | 0.501100 / 0.540337 (-0.039238) | 0.603666 / 1.386936 (-0.783270) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010983 / 0.011353 (-0.000370) | 0.005630 / 0.011008 (-0.005378) | 0.109967 / 0.038508 (0.071458) | 0.101580 / 0.023109 (0.078471) | 0.490205 / 0.275898 (0.214307) | 0.534653 / 0.323480 (0.211173) | 0.008365 / 0.007986 (0.000379) | 0.004317 / 0.004328 (-0.000012) | 0.082429 / 0.004250 (0.078179) | 0.080556 / 0.037052 (0.043504) | 0.494627 / 0.258489 (0.236138) | 0.544189 / 0.293841 (0.250348) | 0.049419 / 0.128546 (-0.079127) | 0.014033 / 0.075646 (-0.061613) | 0.370406 / 0.419271 (-0.048866) | 0.083468 / 0.043533 (0.039935) | 0.463829 / 0.255139 (0.208690) | 0.507516 / 0.283200 (0.224316) | 0.053266 / 0.141683 (-0.088417) | 1.778680 / 1.452155 (0.326525) | 1.916616 / 1.492716 (0.423900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267646 / 0.018006 (0.249640) | 0.617824 / 0.000490 (0.617334) | 0.007720 / 0.000200 (0.007520) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034464 / 0.037411 (-0.002948) | 0.113626 / 0.014526 (0.099100) | 0.118911 / 0.176557 (-0.057646) | 0.194701 / 0.737135 (-0.542434) | 0.123431 / 0.296338 (-0.172907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.606073 / 0.215209 (0.390863) | 6.086393 / 2.077655 (4.008738) | 2.568712 / 1.504120 (1.064593) | 2.260801 / 1.541195 (0.719606) | 2.411798 / 1.468490 
(0.943307) | 0.876433 / 4.584777 (-3.708344) | 5.521280 / 3.745712 (1.775568) | 5.969722 / 5.269862 (0.699861) | 3.671028 / 4.565676 (-0.894649) | 0.097082 / 0.424275 (-0.327193) | 0.011354 / 0.007607 (0.003747) | 0.713842 / 0.226044 (0.487798) | 7.291172 / 2.268929 (5.022244) | 3.315272 / 55.444624 (-52.129352) | 2.777487 / 6.876477 (-4.098990) | 3.025449 / 2.142072 (0.883377) | 1.014115 / 4.805227 (-3.791112) | 0.217928 / 6.500664 (-6.282736) | 0.083097 / 0.075469 (0.007627) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640060 / 1.841788 (-0.201728) | 25.342172 / 8.074308 (17.267864) | 22.776510 / 10.191392 (12.585118) | 0.227300 / 0.680424 (-0.453124) | 0.032233 / 0.534201 (-0.501968) | 0.507547 / 0.579283 (-0.071736) | 0.647044 / 0.434364 (0.212680) | 0.607019 / 0.540337 (0.066682) | 0.823548 / 1.386936 (-0.563388) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009576 / 0.011353 (-0.001777) | 0.009322 / 0.011008 (-0.001687) | 0.087184 / 0.038508 (0.048676) | 0.100795 / 0.023109 (0.077685) | 0.492138 / 0.275898 (0.216240) | 0.528386 / 0.323480 (0.204906) | 0.006689 / 0.007986 (-0.001296) | 0.004735 / 0.004328 (0.000406) | 0.085519 / 0.004250 (0.081269) | 0.072648 / 0.037052 (0.035595) | 0.496068 / 0.258489 (0.237579) | 0.549634 / 0.293841 (0.255793) | 0.049709 / 0.128546 (-0.078837) | 0.015077 / 0.075646 (-0.060569) | 0.099445 / 0.419271 (-0.319826) | 0.068080 / 0.043533 (0.024547) | 0.500426 / 0.255139 (0.245287) | 0.531437 / 0.283200 (0.248238) | 0.053176 / 0.141683 (-0.088507) | 1.827942 / 1.452155 (0.375787) | 1.914286 / 1.492716 (0.421570) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247658 / 0.018006 (0.229652) | 0.590805 / 0.000490 (0.590315) | 0.005319 / 0.000200 (0.005119) | 0.000165 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036993 / 0.037411 (-0.000418) | 0.112944 / 0.014526 (0.098419) | 0.118964 / 0.176557 (-0.057593) | 0.194867 / 0.737135 (-0.542269) | 0.120816 / 0.296338 (-0.175523) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638062 / 0.215209 (0.422853) | 6.246785 / 2.077655 (4.169130) | 2.957779 / 1.504120 (1.453659) | 2.739118 / 1.541195 (1.197924) | 2.795362 / 1.468490 (1.326872) | 0.890532 / 4.584777 (-3.694245) | 5.508198 / 3.745712 (1.762486) | 5.222315 / 5.269862 (-0.047547) | 3.152731 / 4.565676 (-1.412946) | 0.098344 / 0.424275 (-0.325931) | 0.008800 / 0.007607 (0.001193) | 0.757889 / 0.226044 (0.531845) | 7.545715 / 2.268929 (5.276787) | 3.694536 / 55.444624 (-51.750088) | 3.112872 / 6.876477 (-3.763605) | 3.182358 / 2.142072 (1.040285) | 1.028171 / 4.805227 (-3.777056) | 0.215223 / 6.500664 (-6.285441) | 0.085856 / 0.075469 (0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853138 / 1.841788 (0.011350) | 25.939672 / 8.074308 (17.865364) | 23.118029 / 10.191392 (12.926637) | 0.250599 / 0.680424 (-0.429825) | 0.029942 / 0.534201 (-0.504259) | 0.508748 / 0.579283 (-0.070535) | 0.593966 / 0.434364 (0.159602) | 0.605499 / 0.540337 (0.065162) | 0.863827 / 1.386936 (-0.523109) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-06T18:48:14Z
| 2023-07-07T14:13:00Z
| 2023-07-07T14:01:13Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6009.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6009",
"merged_at": "2023-07-07T14:01:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6009.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6009"
}
|
Fix #5677
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6009/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6009/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/753
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/753/comments
|
https://api.github.com/repos/huggingface/datasets/issues/753/events
|
https://github.com/huggingface/datasets/pull/753
| 727,434,935
|
MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0
| 753
|
Fix doc links to viewer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4",
"events_url": "https://api.github.com/users/Pierrci/events{/privacy}",
"followers_url": "https://api.github.com/users/Pierrci/followers",
"following_url": "https://api.github.com/users/Pierrci/following{/other_user}",
"gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Pierrci",
"id": 5020707,
"login": "Pierrci",
"node_id": "MDQ6VXNlcjUwMjA3MDc=",
"organizations_url": "https://api.github.com/users/Pierrci/orgs",
"received_events_url": "https://api.github.com/users/Pierrci/received_events",
"repos_url": "https://api.github.com/users/Pierrci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Pierrci"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-10-22T14:20:16Z
| 2020-10-23T08:42:11Z
| 2020-10-23T08:42:11Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/753",
"merged_at": "2020-10-23T08:42:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/753"
}
|
It seems #733 forgot some links in the doc :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/753/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4435
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4435/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4435/events
|
https://github.com/huggingface/datasets/issues/4435
| 1,257,496,552
|
I_kwDODunzps5K89_o
| 4,435
|
Load a local cached dataset that has been modified
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4",
"events_url": "https://api.github.com/users/mihail911/events{/privacy}",
"followers_url": "https://api.github.com/users/mihail911/followers",
"following_url": "https://api.github.com/users/mihail911/following{/other_user}",
"gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mihail911",
"id": 2789441,
"login": "mihail911",
"node_id": "MDQ6VXNlcjI3ODk0NDE=",
"organizations_url": "https://api.github.com/users/mihail911/orgs",
"received_events_url": "https://api.github.com/users/mihail911/received_events",
"repos_url": "https://api.github.com/users/mihail911/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihail911/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mihail911"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.",
"Awesome, hvala Mario! This works. "
] | 2022-06-02T01:51:49Z
| 2022-06-02T23:59:26Z
| 2022-06-02T23:59:18Z
|
NONE
| null | null | null |
## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name=modified_dataset)
```
This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns:
```
modified_dataset
dataset_info.json
emotion-test.arrow
emotion-train.arrow
emotion-validation.arrow
```
as expected. However, when I try to load up the modified cached dataset via a call to
```
modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset")
```
it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset:
```
Using custom data configuration validation-cdbf51685638421b
Downloading and preparing dataset emotion/validation to ...
```
How am I supposed to load the modified local cached copy of the dataset?
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
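Following the suggestion in the comments, a minimal sketch of the fix (the path is the illustrative one from above):
```python
from datasets import Dataset

# Load the modified split directly from the cache file written by `map`,
# instead of going through load_dataset (which re-downloads).
modified = Dataset.from_file("/path/to/cache/modified_dataset")
```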
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4435/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5661
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5661/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5661/events
|
https://github.com/huggingface/datasets/issues/5661
| 1,637,129,445
|
I_kwDODunzps5hlJzl
| 5,661
|
CI is broken: Unnecessary `dict` comprehension
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[] | 2023-03-23T09:13:01Z
| 2023-03-23T09:37:51Z
| 2023-03-23T09:37:51Z
|
MEMBER
| null | null | null |
The CI `check_code_quality` job is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
```
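For context, C416 flags comprehensions that merely repackage key/value pairs; a sketch of the pattern and the suggested rewrite (names are illustrative, not the actual `arrow_dataset.py` code):
```python
pairs = [("a", 1), ("b", 2)]

# Flagged by C416: the comprehension adds nothing over dict().
mapping = {k: v for k, v in pairs}

# Equivalent rewrite the linter suggests.
mapping = dict(pairs)
```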
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5661/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3915
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3915/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3915/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3915/events
|
https://github.com/huggingface/datasets/pull/3915
| 1,168,848,101
|
PR_kwDODunzps40a54e
| 3,915
|
Metric card template
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances inputs `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference in the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n",
"Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference to the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n",
"Thanks for your feedback, @mcmillanmajora ! I totally agree that we should write a post -- we were going to write one up when we are done with a good chunk of the metric cards, but we can also do that earlier :smile: \r\n\r\nWith regards to your more specific comments:\r\n\r\n- It is our intention to put what the metric was developed for (whether it is a specific task or dataset, for example). You can see the [WER](https://github.com/huggingface/datasets/tree/master/metrics/wer) metric card for that.\r\n- `input_field` works for me!\r\n- the values aren't always scores, it's more like the values the metric can take. And it does include the range of possible values, including the max and min, that are outputted.\r\n- I like the suggestion to add: 'Provide a range of examples that show both typical and atypical results' :hugs: \r\n- I have been putting specific use cases in 'Further references', just because there isn't always something to put there, especially for less popular metrics",
"Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! ",
"Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! "
] | 2022-03-14T20:07:08Z
| 2022-05-04T10:44:09Z
| 2022-05-04T10:37:06Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3915.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3915",
"merged_at": "2022-05-04T10:37:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3915.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3915"
}
|
Adding a metric card template, based on ideas and edits from @sashavor and me, as well as on comments from @lhoestq and others (thank you!).
All feedback is welcome, but I am especially curious about:
- things that should be included but aren't
- things that are included but should be changed or removed
- the instructions I included, and whether they should be added to, clarified, or deleted altogether
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3915/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3915/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6228
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6228/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6228/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6228/events
|
https://github.com/huggingface/datasets/pull/6228
| 1,887,959,311
|
PR_kwDODunzps5Z5HZi
| 6,228
|
Remove RGB -> BGR image conversion in Object Detection tutorial
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009443 / 0.011353 (-0.001910) | 0.005274 / 0.011008 (-0.005734) | 0.105950 / 0.038508 (0.067441) | 0.079947 / 0.023109 (0.056837) | 0.414248 / 0.275898 (0.138350) | 0.440611 / 0.323480 (0.117131) | 0.006779 / 0.007986 (-0.001206) | 0.004301 / 0.004328 (-0.000028) | 0.080616 / 0.004250 (0.076366) | 0.061425 / 0.037052 (0.024372) | 0.418460 / 0.258489 (0.159971) | 0.468108 / 0.293841 (0.174267) | 0.051090 / 0.128546 (-0.077456) | 0.014133 / 0.075646 (-0.061513) | 0.376121 / 0.419271 (-0.043151) | 0.070715 / 0.043533 (0.027182) | 0.415435 / 0.255139 (0.160296) | 0.457925 / 0.283200 (0.174725) | 0.053653 / 0.141683 (-0.088030) | 1.872681 / 1.452155 (0.420527) | 1.961187 / 1.492716 (0.468470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255829 / 0.018006 (0.237823) | 0.574224 / 0.000490 (0.573735) | 0.007597 / 0.000200 (0.007397) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032562 / 0.037411 (-0.004849) | 0.097528 / 0.014526 (0.083003) | 0.113487 / 0.176557 (-0.063070) | 0.185670 / 0.737135 (-0.551465) | 0.118909 / 0.296338 (-0.177430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.611441 / 0.215209 (0.396232) | 5.908576 / 2.077655 (3.830921) | 2.586758 / 1.504120 (1.082638) | 2.310199 / 1.541195 (0.769004) | 2.333396 / 1.468490 
(0.864906) | 0.900884 / 4.584777 (-3.683893) | 5.438304 / 3.745712 (1.692591) | 4.806611 / 5.269862 (-0.463250) | 2.970631 / 4.565676 (-1.595046) | 0.097861 / 0.424275 (-0.326414) | 0.009873 / 0.007607 (0.002266) | 0.739553 / 0.226044 (0.513509) | 7.104953 / 2.268929 (4.836024) | 3.150128 / 55.444624 (-52.294497) | 2.469552 / 6.876477 (-4.406924) | 2.709206 / 2.142072 (0.567133) | 0.983081 / 4.805227 (-3.822147) | 0.205150 / 6.500664 (-6.295514) | 0.075947 / 0.075469 (0.000478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631255 / 1.841788 (-0.210532) | 24.213679 / 8.074308 (16.139370) | 21.514481 / 10.191392 (11.323089) | 0.220360 / 0.680424 (-0.460063) | 0.031663 / 0.534201 (-0.502538) | 0.516029 / 0.579283 (-0.063254) | 0.591461 / 0.434364 (0.157097) | 0.612398 / 0.540337 (0.072061) | 0.807609 / 1.386936 (-0.579328) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009443 / 0.011353 (-0.001910) | 0.005510 / 0.011008 (-0.005498) | 0.085722 / 0.038508 (0.047214) | 0.076256 / 0.023109 (0.053146) | 0.604248 / 0.275898 (0.328349) | 0.596222 / 0.323480 (0.272742) | 0.006786 / 0.007986 (-0.001200) | 0.004135 / 0.004328 (-0.000193) | 0.085934 / 0.004250 (0.081683) | 0.065890 / 0.037052 (0.028838) | 0.592080 / 0.258489 (0.333591) | 0.624560 / 0.293841 (0.330719) | 0.048200 / 0.128546 (-0.080346) | 0.015477 / 0.075646 (-0.060169) | 0.097042 / 0.419271 (-0.322230) | 0.060513 / 0.043533 (0.016981) | 0.557171 / 0.255139 (0.302032) | 0.582057 / 0.283200 (0.298858) | 0.035678 / 0.141683 (-0.106005) | 1.894947 / 1.452155 (0.442792) | 1.956652 / 1.492716 (0.463936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268927 / 0.018006 (0.250921) | 0.566086 / 0.000490 (0.565597) | 0.007190 / 0.000200 (0.006990) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.042090 / 0.037411 (0.004679) | 0.109618 / 0.014526 (0.095092) | 0.126588 / 0.176557 (-0.049968) | 0.200426 / 0.737135 (-0.536709) | 0.127032 / 0.296338 (-0.169306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669773 / 0.215209 (0.454564) | 6.453417 / 2.077655 (4.375763) | 3.119147 / 1.504120 (1.615027) | 2.818632 / 1.541195 (1.277437) | 2.930880 / 1.468490 (1.462390) | 0.922164 / 4.584777 (-3.662612) | 5.769564 / 3.745712 (2.023852) | 4.885108 / 5.269862 (-0.384754) | 3.041640 / 4.565676 (-1.524037) | 0.100186 / 0.424275 (-0.324090) | 0.009417 / 0.007607 (0.001810) | 0.783138 / 0.226044 (0.557094) | 8.113361 / 2.268929 (5.844432) | 4.018630 / 55.444624 (-51.425995) | 3.246772 / 6.876477 (-3.629704) | 3.520690 / 2.142072 (1.378618) | 1.063686 / 4.805227 (-3.741541) | 0.218667 / 6.500664 (-6.281997) | 0.084169 / 0.075469 (0.008700) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.791949 / 1.841788 (-0.049839) | 23.148341 / 8.074308 (15.074033) | 23.321125 / 10.191392 (13.129733) | 0.245391 / 0.680424 (-0.435032) | 0.031911 / 0.534201 (-0.502290) | 0.470707 / 0.579283 (-0.108576) | 0.608195 / 0.434364 (0.173832) | 0.559590 / 0.540337 (0.019253) | 0.786007 / 1.386936 (-0.600929) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008428 / 0.011353 (-0.002925) | 0.004064 / 0.011008 (-0.006944) | 0.088421 / 0.038508 (0.049913) | 0.078042 / 0.023109 (0.054933) | 0.306356 / 0.275898 (0.030458) | 0.349766 / 0.323480 (0.026286) | 0.004086 / 0.007986 (-0.003900) | 0.003900 / 0.004328 (-0.000428) | 0.068379 / 0.004250 (0.064129) | 0.056214 / 0.037052 (0.019161) | 0.310211 / 0.258489 (0.051722) | 0.363692 / 0.293841 (0.069851) | 0.050421 / 0.128546 (-0.078125) | 0.011661 / 0.075646 (-0.063985) | 0.298400 / 0.419271 (-0.120871) | 0.063503 / 0.043533 (0.019970) | 0.339799 / 0.255139 (0.084660) | 0.359479 / 0.283200 (0.076279) | 0.039265 / 0.141683 (-0.102418) | 1.390578 / 1.452155 (-0.061576) | 1.573333 / 1.492716 (0.080617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260442 / 0.018006 (0.242436) | 0.560390 / 0.000490 (0.559900) | 0.003926 / 0.000200 (0.003726) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025809 / 0.037411 (-0.011602) | 0.081902 / 0.014526 (0.067376) | 0.093655 / 0.176557 (-0.082901) | 0.149432 / 0.737135 (-0.587703) | 0.099059 / 0.296338 (-0.197279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.505644 / 0.215209 (0.290435) | 5.108292 / 2.077655 (3.030638) | 2.121689 / 1.504120 (0.617569) | 1.846576 / 1.541195 (0.305381) | 1.836587 / 1.468490 
(0.368097) | 0.708088 / 4.584777 (-3.876689) | 4.562630 / 3.745712 (0.816918) | 3.934747 / 5.269862 (-1.335115) | 2.453409 / 4.565676 (-2.112267) | 0.081908 / 0.424275 (-0.342367) | 0.012996 / 0.007607 (0.005389) | 0.636588 / 0.226044 (0.410544) | 6.361086 / 2.268929 (4.092157) | 2.911681 / 55.444624 (-52.532943) | 2.271809 / 6.876477 (-4.604667) | 2.670327 / 2.142072 (0.528254) | 0.943688 / 4.805227 (-3.861539) | 0.191677 / 6.500664 (-6.308988) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.400139 / 1.841788 (-0.441648) | 21.896198 / 8.074308 (13.821890) | 17.853604 / 10.191392 (7.662212) | 0.226603 / 0.680424 (-0.453821) | 0.026682 / 0.534201 (-0.507518) | 0.460131 / 0.579283 (-0.119152) | 0.536790 / 0.434364 (0.102427) | 0.492913 / 0.540337 (-0.047424) | 0.724290 / 1.386936 (-0.662646) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007795 / 0.011353 (-0.003557) | 0.009045 / 0.011008 (-0.001963) | 0.085480 / 0.038508 (0.046972) | 0.071881 / 0.023109 (0.048772) | 0.514520 / 0.275898 (0.238622) | 0.569762 / 0.323480 (0.246282) | 0.006126 / 0.007986 (-0.001859) | 0.004153 / 0.004328 (-0.000175) | 0.072150 / 0.004250 (0.067900) | 0.056511 / 0.037052 (0.019458) | 0.484097 / 0.258489 (0.225607) | 0.532673 / 0.293841 (0.238832) | 0.040974 / 0.128546 (-0.087572) | 0.012071 / 0.075646 (-0.063575) | 0.102608 / 0.419271 (-0.316663) | 0.052893 / 0.043533 (0.009360) | 0.485832 / 0.255139 (0.230693) | 0.530479 / 0.283200 (0.247280) | 0.031556 / 0.141683 (-0.110127) | 1.737508 / 1.452155 (0.285354) | 1.834637 / 1.492716 (0.341921) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.423314 / 0.018006 (0.405308) | 0.614163 / 0.000490 (0.613673) | 0.052784 / 0.000200 (0.052584) | 0.000206 / 0.000054 (0.000151) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031728 / 0.037411 (-0.005684) | 0.088048 / 0.014526 (0.073522) | 0.105759 / 0.176557 (-0.070798) | 0.181433 / 0.737135 (-0.555703) | 0.103133 / 0.296338 (-0.193205) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659710 / 0.215209 (0.444501) | 5.876378 / 2.077655 (3.798723) | 2.899444 / 1.504120 (1.395324) | 2.871592 / 1.541195 (1.330397) | 2.861205 / 1.468490 (1.392715) | 0.879452 / 4.584777 (-3.705325) | 5.395988 / 3.745712 (1.650275) | 4.548359 / 5.269862 (-0.721502) | 2.946601 / 4.565676 (-1.619076) | 0.099832 / 0.424275 (-0.324443) | 0.008958 / 0.007607 (0.001351) | 0.778480 / 0.226044 (0.552435) | 7.672282 / 2.268929 (5.403354) | 3.963701 / 55.444624 (-51.480923) | 3.154950 / 6.876477 (-3.721527) | 3.351070 / 2.142072 (1.208997) | 1.059459 / 4.805227 (-3.745768) | 0.212035 / 6.500664 (-6.288629) | 0.076941 / 0.075469 (0.001472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.639813 / 1.841788 (-0.201975) | 24.807517 / 8.074308 (16.733208) | 20.662500 / 10.191392 (10.471108) | 0.244486 / 0.680424 (-0.435937) | 0.032335 / 0.534201 (-0.501866) | 0.470896 / 0.579283 (-0.108387) | 0.581561 / 0.434364 (0.147197) | 0.495158 / 0.540337 (-0.045179) | 0.788350 / 1.386936 (-0.598586) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-08T16:09:13Z
| 2023-09-08T18:02:49Z
| 2023-09-08T17:52:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6228",
"merged_at": "2023-09-08T17:52:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6228"
}
|
Fix #6225
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6228/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6228/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2371
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2371/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2371/events
|
https://github.com/huggingface/datasets/issues/2371
| 894,193,403
|
MDU6SXNzdWU4OTQxOTM0MDM=
| 2,371
|
Align question answering tasks with sub-domains
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[
"Closing this issue as the `task_templates` API has been deprecated."
] | 2021-05-18T09:47:59Z
| 2023-07-25T16:52:05Z
| 2023-07-25T16:52:04Z
|
MEMBER
| null | null | null |
As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` but then it will probably mean different schemas of input/output for both (abstractive will have text for both while extractive can use span indications as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.
> Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if we somehow want to use a prefix for completion or search in the future (detail).
> Actually I see that people are more organizing in terms of general and sub-tasks, for instance on paperwithcode: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.
> Maybe you want to check with a few QA datasets that this schema makes sense. Typically, NaturalQuestions and TriviaQA can be good second datasets to compare to, to be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare schemas across is, for instance, in the UnitedQA paper: https://arxiv.org/abs/2101.00178
Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly.
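For concreteness, a minimal sketch of what the two schemas could look like as `datasets.Features`; the field names follow the SQuAD convention and are illustrative assumptions, not a proposed final design:
```python
from datasets import Features, Sequence, Value

# Extractive QA: answers are spans of the context (text plus start offset)
extractive_qa_features = Features({
    "question": Value("string"),
    "context": Value("string"),
    "answers": Sequence({
        "text": Value("string"),
        "answer_start": Value("int32"),
    }),
})

# Abstractive QA: the answer is free-form text, with no span indices
abstractive_qa_features = Features({
    "question": Value("string"),
    "context": Value("string"),
    "answer": Value("string"),
})
```
Checking these two sketches against NaturalQuestions and TriviaQA would quickly show whether a single generic schema can cover both sub-domains.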
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2371/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2906
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2906/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2906/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2906/events
|
https://github.com/huggingface/datasets/pull/2906
| 995,962,905
|
PR_kwDODunzps4rulH-
| 2,906
|
feat: 🎸 add a function to get a dataset config's split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is"
] | 2021-09-14T12:31:22Z
| 2021-10-04T09:55:38Z
| 2021-10-04T09:55:37Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2906.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2906",
"merged_at": "2021-10-04T09:55:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2906.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2906"
}
|
Also: pass additional arguments (`use_auth_token`) to get the configs and info of private datasets on the Hub
Questions:
- [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?
-> no: reverted
- [x] Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
-> yes: added
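For reference, a minimal usage sketch of the two functions (the dataset and config names are placeholders; `use_auth_token` is the argument mentioned above):
```python
from datasets import get_dataset_infos, get_dataset_split_names

# Split names of one config, without downloading the full dataset
print(get_dataset_split_names("glue", "sst2"))  # e.g. ['train', 'validation', 'test']

# Infos of all configs; pass a token to access private datasets
infos = get_dataset_infos("glue")
# infos = get_dataset_infos("me/private-dataset", use_auth_token=True)
```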
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2906/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2906/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1871/events
|
https://github.com/huggingface/datasets/pull/1871
| 807,697,671
|
MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz
| 1,871
|
Add newspop dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for the changes :)\r\nmerging"
] | 2021-02-13T07:31:23Z
| 2021-03-08T10:12:45Z
| 2021-03-08T10:12:45Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1871",
"merged_at": "2021-03-08T10:12:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1871"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1871/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/3052
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3052/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3052/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3052/events
|
https://github.com/huggingface/datasets/issues/3052
| 1,021,944,435
|
I_kwDODunzps486aJz
| 3,052
|
load_dataset cannot download the data and hangs forever if cache_dir is specified
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BenoitDalFerro",
"id": 69694610,
"login": "BenoitDalFerro",
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BenoitDalFerro"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The following packages are causing the inconsistency:\r\n> \r\n> - conda-forge/noarch::datasets==1.12.1=pyhd8ed1ab_1\r\n> - conda-forge/win-64::multiprocess==0.70.12.2=py38h294d835_0\r\n> done\r\n> \r\n> Package Plan\r\n> \r\n> environment location: C:\\xxx\\anaconda3\\envs\\UnBias-94-1\r\n> \r\n> added / updated specs:\r\n> - datasets\r\n> \r\n> \r\n> The following NEW packages will be INSTALLED:\r\n> \r\n> dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0\r\n> \r\n> The following packages will be UPDATED:\r\n> \r\n> ca-certificates pkgs/main::ca-certificates-2021.9.30-~ --> conda-forge::ca-certificates-2021.10.8-h5b45459_0\r\n> certifi pkgs/main::certifi-2021.5.30-py38haa9~ --> conda-forge::certifi-2021.10.8-py38haa244fe_0\r\n> \r\n> The following packages will be SUPERSEDED by a higher-priority channel:\r\n> "
] | 2021-10-10T10:31:36Z
| 2021-10-11T10:57:09Z
| 2021-10-11T10:56:36Z
|
NONE
| null | null | null |
## Describe the bug
After updating datasets, code that had run just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine causes the data download to hang forever. The same call without cache_dir works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux Docker instance running in the cloud.
Unfortunately, I updated Windows at the same time, and I can't remember which version of datasets was running in my conda environment prior to the update; otherwise, I would have tried both to check this out. :(
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir)
```
Note that the exact same code without the _cache_dir_ argument works perfectly fine.
```python
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train')
```
## Expected results
The dataset is downloaded and the cache is handled in the _cache_dir_ directory.
## Actual results
The data download keeps hanging forever, with **NO TRACEBACK**!
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3052/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3052/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/289
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/289/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/289/comments
|
https://api.github.com/repos/huggingface/datasets/issues/289/events
|
https://github.com/huggingface/datasets/pull/289
| 641,934,194
|
MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3
| 289
|
update xsum
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Looks cool!\r\n@mariamabarham can you add a detailed description here what exactly is changed and how the user can load xsum now?",
"And a rebase should solve the conflicts",
"This is a super useful PR :-) @sshleifer - maybe you can take a look at the updated version of xsum if you can use it for your use case. Now, one should be able to just load it with:\r\n\r\n```python \r\nnlp.load_datasets(\"xsum\", ....) # no manual dir required anymore\r\n```\r\n"
] | 2020-06-19T12:28:32Z
| 2020-06-22T13:27:26Z
| 2020-06-22T07:20:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/289",
"merged_at": "2020-06-22T07:20:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/289"
}
|
This PR makes the following updates to the xsum dataset:
- Manual download is not required anymore
- dataset can be loaded as follows: `nlp.load_dataset('xsum')`
**Important**
Instead of using an outdated URL to download the data ("https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json"),
a more up-to-date URL stored at https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz is used, so that the user does not need to download the data manually anymore.
There might be slight breaking changes here for xsum.
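For reference, a quick loading sketch (the library was still called `nlp` at the time; the `document`/`summary` field names are assumed from the XSum schema):
```python
import nlp

xsum = nlp.load_dataset("xsum", split="train")
example = xsum[0]
print(example["document"][:200])  # the article
print(example["summary"])         # the one-sentence summary
```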
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/289/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/289/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2278
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2278/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2278/events
|
https://github.com/huggingface/datasets/issues/2278
| 870,088,059
|
MDU6SXNzdWU4NzAwODgwNTk=
| 2,278
|
Loss result in GPTNeoForCausalLM
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4",
"events_url": "https://api.github.com/users/Yossillamm/events{/privacy}",
"followers_url": "https://api.github.com/users/Yossillamm/followers",
"following_url": "https://api.github.com/users/Yossillamm/following{/other_user}",
"gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Yossillamm",
"id": 51174606,
"login": "Yossillamm",
"node_id": "MDQ6VXNlcjUxMTc0NjA2",
"organizations_url": "https://api.github.com/users/Yossillamm/orgs",
"received_events_url": "https://api.github.com/users/Yossillamm/received_events",
"repos_url": "https://api.github.com/users/Yossillamm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Yossillamm"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library"
] | 2021-04-28T15:39:52Z
| 2021-05-06T16:14:23Z
| 2021-05-06T16:14:23Z
|
NONE
| null | null | null |
Is there any way to get the "loss" and "logits" results in the GPT-Neo API?
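For reference, a minimal sketch of how loss and logits are usually obtained with `transformers` (this is a `transformers` question rather than a `datasets` one; the model name and API below follow the standard causal-LM interface and are assumptions here):
```python
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# Passing labels makes the forward pass return a language-modeling loss
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)          # scalar LM loss
print(outputs.logits.shape)  # (batch, seq_len, vocab_size)
```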
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2278/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4741/events
|
https://github.com/huggingface/datasets/pull/4741
| 1,316,621,272
|
PR_kwDODunzps48B2fl
| 4,741
|
Fix to dict conversion of `DatasetInfo`/`Features`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-25T10:41:27Z
| 2022-07-25T12:50:36Z
| 2022-07-25T12:37:53Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4741",
"merged_at": "2022-07-25T12:37:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4741"
}
|
Fix #4681
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4741/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4741/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4273
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4273/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4273/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4273/events
|
https://github.com/huggingface/datasets/pull/4273
| 1,224,681,036
|
PR_kwDODunzps43QaA6
| 4,273
|
leaderboard info added for TNE
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanaiela",
"id": 8031035,
"login": "yanaiela",
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanaiela"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-03T21:35:41Z
| 2022-05-05T13:25:24Z
| 2022-05-05T13:18:13Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4273.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4273",
"merged_at": "2022-05-05T13:18:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4273.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4273"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4273/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4273/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2348
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2348/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2348/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2348/events
|
https://github.com/huggingface/datasets/pull/2348
| 887,927,737
|
MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4
| 2,348
|
Add tests for dataset cards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq\r\n\r\nShould I remove the scripts? or atleast remove running them from the CircleCI config?\r\n\r\nAlso, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata.",
"Also feel free to remove the scripts from the CI and also remove the scripts files :)"
] | 2021-05-11T17:14:27Z
| 2021-05-21T12:10:47Z
| 2021-05-21T12:10:47Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2348.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2348",
"merged_at": "2021-05-21T12:10:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2348.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2348"
}
|
Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2348/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2348/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2330
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2330/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2330/events
|
https://github.com/huggingface/datasets/issues/2330
| 878,490,927
|
MDU6SXNzdWU4Nzg0OTA5Mjc=
| 2,330
|
Allow passing `desc` to `tqdm` in `Dataset.map()`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cccntu",
"id": 31893406,
"login": "cccntu",
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"repos_url": "https://api.github.com/users/cccntu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cccntu"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?",
"I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset."
] | 2021-05-07T05:52:54Z
| 2021-05-26T14:59:21Z
| 2021-05-26T14:59:21Z
|
CONTRIBUTOR
| null | null | null |
It's normal to have many `map()` calls, and some of them can take a few minutes;
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call.
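A sketch of what the requested API could look like from the user side; the `desc` keyword is the proposed addition, not an existing parameter at the time of this issue:
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# The description would be shown next to the tqdm progress bar for this call
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, desc="Counting characters")
```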
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2330/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1577/events
|
https://github.com/huggingface/datasets/pull/1577
| 767,342,432
|
MDExOlB1bGxSZXF1ZXN0NTQwMDg2MzY5
| 1,577
|
Add comet metric
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ricardorei",
"id": 17256847,
"login": "ricardorei",
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ricardorei"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I also thought a bit about the fact that \"sources\" can't be added to the batch.. but changing that would require a lot more changes. And I agree that the idea of adding them as part of the references is not ideal. Conceptually they are not references.\r\n\r\nI would keep it like this for now.. And in the future, work on a more consistent batch interface."
] | 2020-12-15T08:56:00Z
| 2021-01-14T13:33:10Z
| 2021-01-14T13:33:10Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1577.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1577",
"merged_at": "2021-01-14T13:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1577.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1577"
}
|
Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of available metrics.
COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is, so far, the highest-performing metric on the WMT19 benchmark.
We also participated in the [WMT20 Metrics shared task](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf), where once again COMET was validated as a top-performing metric.
I hope that this metric will help researchers and industry practitioners better validate their MT systems in the future 🤗 !
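A minimal usage sketch, assuming the metric is loaded via `datasets.load_metric` and takes `sources` in addition to the usual `predictions`/`references`, since COMET scores hypotheses against both source and reference:
```python
from datasets import load_metric

comet = load_metric("comet")  # may download a pretrained COMET model on first use
results = comet.compute(
    sources=["Dem Feuer konnte Einhalt geboten werden"],
    predictions=["The fire could be stopped"],
    references=["They were able to control the fire."],
)
print(results)
```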
Cheers,
Ricardo
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1577/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1837/events
|
https://github.com/huggingface/datasets/issues/1837
| 803,555,650
|
MDU6SXNzdWU4MDM1NTU2NTA=
| 1,837
|
Add VCTK
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] |
closed
| false
| null |
[] | null |
[
"@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me know! Otherwise, I'll try to write up a PR in the coming days.",
"That sounds great @jaketae - let me know if you need any help i.e. feel free to ping me on a first PR :-)"
] | 2021-02-08T13:15:28Z
| 2021-12-28T15:05:08Z
| 2021-12-28T15:05:08Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.*
- **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443
- **Data:** https://datashare.ed.ac.uk/handle/10283/3443
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1837/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3173
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3173/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3173/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3173/events
|
https://github.com/huggingface/datasets/pull/3173
| 1,038,404,300
|
PR_kwDODunzps4typcA
| 3,173
|
Fix issue with filelock filename being too long on encrypted filesystems
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-10-28T11:28:57Z
| 2021-10-29T09:42:24Z
| 2021-10-29T09:42:24Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3173",
"merged_at": "2021-10-29T09:42:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3173"
}
|
Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs.
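A rough sketch of the idea (not the actual patch): ask the filesystem for its maximum filename length and shorten overlong lock names with a hash, so that distinct locks stay distinct.
```python
import hashlib
import os

def shorten_lock_name(name: str, directory: str = ".") -> str:
    # Unix-only: encrypted filesystems such as eCryptfs report a limit
    # well below the usual 255 bytes
    max_len = os.statvfs(directory).f_namemax
    if len(name) <= max_len:
        return name
    # Keep a truncated prefix plus a short hash of the full name
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()[:8]
    return name[: max_len - len(digest) - 6] + "-" + digest + ".lock"
```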
Fix #2924
cc: @lmmx
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3173/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3173/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6116
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6116/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6116/events
|
https://github.com/huggingface/datasets/issues/6116
| 1,835,098,484
|
I_kwDODunzps5tYWF0
| 6,116
|
[Docs] The "Process" how-to guide lacks description of `select_columns` function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unifyh",
"id": 18213435,
"login": "unifyh",
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"repos_url": "https://api.github.com/users/unifyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unifyh"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Great idea, feel free to open a PR! :)"
] | 2023-08-03T13:45:10Z
| 2023-08-16T10:02:53Z
| 2023-08-16T10:02:53Z
|
CONTRIBUTOR
| null | null | null |
### Feature request
The ["Process" how-to guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide; see the sketch below.
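For context, a short sketch of what the guide could show (a minimal example, assuming the current `Dataset.select_columns` API):
```python
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="train")
# Keep only the listed columns; every other column is dropped
ds = ds.select_columns(["sentence", "label"])
print(ds.column_names)  # ['sentence', 'label']
```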
### Motivation
This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468, #5474). However, it has not been added to the guide since its implementation in PR #5480.
Mentioning it in the guide would help future users discover this added feature.
### Your contribution
I could submit a PR to add a brief description of the function to said guide.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6116/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6033
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6033/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6033/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6033/events
|
https://github.com/huggingface/datasets/issues/6033
| 1,804,482,051
|
I_kwDODunzps5rjjYD
| 6,033
|
`map` function doesn't fully utilize `input_columns`.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-07-14T08:49:28Z
| 2023-07-14T09:16:04Z
| 2023-07-14T09:16:04Z
|
NONE
| null | null | null |
### Describe the bug
I wanted to select only some columns of the data, and I thought that's why the argument `input_columns` exists.
What I expected: if a dataset has the columns ["a", "b", "c", "d"] and I set `input_columns=["a", "d"]`, the resulting data will have only the ["a", "d"] columns.
But it doesn't select columns; it preserves the existing ones.
The main cause is the `update` call on the `dict`-typed `transformed_batch`:
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns via `transformed_batch = dict(batch)`.
Even though `function_args` selects only `input_columns`, `update` preserves the columns other than `input_columns`.
I think it should take a new dictionary with only the columns in `input_columns`, like this:
```python
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs))
# This is what I think is correct:
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Let me know how to use `input_columns`.
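In the meantime, a sketch of the workaround I ended up with, assuming the goal is to keep only the `a` and `d` columns (`input_columns` only restricts what the mapped function receives, so columns are dropped explicitly):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1], "b": [2], "c": [3], "d": [4]})
ds = ds.select_columns(["a", "d"])  # actually drops "b" and "c"
print(ds.column_names)  # ['a', 'd']

# or, when mapping, pair `input_columns` with `remove_columns` to drop the rest:
ds2 = Dataset.from_dict({"a": [1], "b": [2], "c": [3], "d": [4]})
ds2 = ds2.map(lambda a, d: {"a": a, "d": d},
              input_columns=["a", "d"], remove_columns=["b", "c"])
print(ds2.column_names)  # ['a', 'd']
```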
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6033/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6033/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3433
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3433/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3433/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3433/events
|
https://github.com/huggingface/datasets/issues/3433
| 1,080,910,724
|
I_kwDODunzps5AbWOE
| 3,433
|
Add Multilingual Spoken Words dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] |
closed
| false
| null |
[] | null |
[] | 2021-12-15T11:14:44Z
| 2022-02-22T10:03:53Z
| 2022-02-22T10:03:53Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** Multilingual Spoken Words
- **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours).
Read more: https://mlcommons.org/en/news/spoken-words-blog/
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Data:** https://mlcommons.org/en/multilingual-spoken-words/
- **Motivation:**
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3433/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3433/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5341
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5341/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5341/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5341/events
|
https://github.com/huggingface/datasets/pull/5341
| 1,484,376,644
|
PR_kwDODunzps5Exohx
| 5,341
|
Remove tasks.json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-08T11:04:35Z
| 2022-12-09T12:26:21Z
| 2022-12-09T12:23:20Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5341.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5341",
"merged_at": "2022-12-09T12:23:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5341.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5341"
}
|
After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5341/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5341/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1536/events
|
https://github.com/huggingface/datasets/pull/1536
| 765,043,121
|
MDExOlB1bGxSZXF1ZXN0NTM4ODM2MDM3
| 1,536
|
Add Hippocorpus Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manandey",
"id": 6687858,
"login": "manandey",
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"repos_url": "https://api.github.com/users/manandey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manandey"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> Before we merge can you try to reduce the size of the dummy_data.zip file ?\r\n> \r\n> To do so feel free to only keep a few lines of the csv files ans also remove unnecessary chunks of texts (for example keep only the first sentences of a story).\r\n\r\nHi @lhoestq, I have reduced the size of the dummy_data.zip file by making the necessary changes you had suggested. ",
"merging since the CI is fixed on master"
] | 2020-12-13T06:13:02Z
| 2020-12-15T13:41:17Z
| 2020-12-15T13:40:11Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1536.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1536",
"merged_at": "2020-12-15T13:40:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1536.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1536"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1536/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1536/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/562
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/562/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/562/comments
|
https://api.github.com/repos/huggingface/datasets/issues/562/events
|
https://github.com/huggingface/datasets/pull/562
| 690,907,604
|
MDExOlB1bGxSZXF1ZXN0NDc3NzI1MjMx
| 562
|
[Reproducibility] Allow pinning versions of datasets/metrics
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Closing this one in favor of #584 "
] | 2020-09-02T10:30:13Z
| 2023-09-24T09:49:42Z
| 2020-09-09T13:04:54Z
|
MEMBER
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/562.diff",
"html_url": "https://github.com/huggingface/datasets/pull/562",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/562.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/562"
}
|
Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of the dataset and metric scripts:
```
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```
Notes:
- version numbers are the release versions of the library
- currently only possible for canonical datasets/metrics, i.e. those integrated in the GitHub repo of the library
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/562/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/562/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/966
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/966/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/966/comments
|
https://api.github.com/repos/huggingface/datasets/issues/966/events
|
https://github.com/huggingface/datasets/pull/966
| 754,558,686
|
MDExOlB1bGxSZXF1ZXN0NTMwNDM4NDE4
| 966
|
Add CLINC150 Dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sumanthd17",
"id": 28291870,
"login": "sumanthd17",
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sumanthd17"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Looks like your PR now shows changes in many other files than the ones for CLINC150.\r\nFeel free to create another branch and another PR",
"created new [PR](https://github.com/huggingface/datasets/pull/1016)\r\n\r\nclosing this!"
] | 2020-12-01T16:50:13Z
| 2020-12-02T18:45:43Z
| 2020-12-02T18:45:30Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/966",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/966"
}
|
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/966/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/966/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6421
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6421/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6421/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6421/events
|
https://github.com/huggingface/datasets/pull/6421
| 1,994,451,553
|
PR_kwDODunzps5fgG1h
| 6,421
|
Add pyarrow-hotfix to release docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004755 / 0.011353 (-0.006598) | 0.002683 / 0.011008 (-0.008325) | 0.061701 / 0.038508 (0.023193) | 0.030123 / 0.023109 (0.007013) | 0.238186 / 0.275898 (-0.037712) | 0.266570 / 0.323480 (-0.056910) | 0.002898 / 0.007986 (-0.005088) | 0.002381 / 0.004328 (-0.001948) | 0.048033 / 0.004250 (0.043782) | 0.044529 / 0.037052 (0.007477) | 0.246728 / 0.258489 (-0.011761) | 0.302066 / 0.293841 (0.008225) | 0.024008 / 0.128546 (-0.104539) | 0.006626 / 0.075646 (-0.069020) | 0.202000 / 0.419271 (-0.217272) | 0.056492 / 0.043533 (0.012959) | 0.243417 / 0.255139 (-0.011722) | 0.263947 / 0.283200 (-0.019253) | 0.020481 / 0.141683 (-0.121202) | 1.130635 / 1.452155 (-0.321520) | 1.180570 / 1.492716 (-0.312146) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095541 / 0.018006 (0.077535) | 0.306152 / 0.000490 (0.305662) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018593 / 0.037411 (-0.018818) | 0.063029 / 0.014526 (0.048503) | 0.074312 / 0.176557 (-0.102245) | 0.119882 / 0.737135 (-0.617254) | 0.074066 / 0.296338 (-0.222273) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275409 / 0.215209 (0.060200) | 2.727061 / 2.077655 (0.649407) | 1.415632 / 1.504120 (-0.088488) | 1.294922 / 1.541195 (-0.246273) | 1.341636 / 
1.468490 (-0.126854) | 0.403250 / 4.584777 (-4.181527) | 2.384657 / 3.745712 (-1.361055) | 2.604131 / 5.269862 (-2.665731) | 1.558888 / 4.565676 (-3.006789) | 0.046008 / 0.424275 (-0.378267) | 0.004819 / 0.007607 (-0.002789) | 0.331046 / 0.226044 (0.105002) | 3.340950 / 2.268929 (1.072021) | 1.801077 / 55.444624 (-53.643548) | 1.479162 / 6.876477 (-5.397315) | 1.503713 / 2.142072 (-0.638359) | 0.474931 / 4.805227 (-4.330296) | 0.101869 / 6.500664 (-6.398795) | 0.041946 / 0.075469 (-0.033523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955641 / 1.841788 (-0.886147) | 11.441032 / 8.074308 (3.366724) | 10.267731 / 10.191392 (0.076339) | 0.128735 / 0.680424 (-0.551689) | 0.013942 / 0.534201 (-0.520259) | 0.266620 / 0.579283 (-0.312663) | 0.262334 / 0.434364 (-0.172029) | 0.302713 / 0.540337 (-0.237624) | 0.430323 / 1.386936 (-0.956613) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004670 / 0.011353 (-0.006683) | 0.002671 / 0.011008 (-0.008338) | 0.048949 / 0.038508 (0.010441) | 0.052520 / 0.023109 (0.029411) | 0.272614 / 0.275898 (-0.003284) | 0.292618 / 0.323480 (-0.030862) | 0.004016 / 0.007986 (-0.003969) | 0.002430 / 0.004328 (-0.001899) | 0.048313 / 0.004250 (0.044063) | 0.038647 / 0.037052 (0.001595) | 0.279893 / 0.258489 (0.021404) | 0.305371 / 0.293841 (0.011530) | 0.023710 / 0.128546 (-0.104836) | 0.006999 / 0.075646 (-0.068648) | 0.053315 / 0.419271 (-0.365956) | 0.032417 / 0.043533 (-0.011115) | 0.272066 / 0.255139 (0.016927) | 0.291717 / 0.283200 (0.008518) | 0.018127 / 0.141683 (-0.123556) | 1.173611 / 1.452155 (-0.278544) | 1.183659 / 1.492716 (-0.309057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094831 / 0.018006 (0.076824) | 0.304911 / 0.000490 (0.304421) | 0.000225 / 0.000200 (0.000025) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020948 / 0.037411 (-0.016463) | 0.070255 / 0.014526 (0.055729) | 0.081371 / 0.176557 (-0.095186) | 0.118932 / 0.737135 (-0.618203) | 0.082207 / 0.296338 (-0.214132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294067 / 0.215209 (0.078858) | 2.856981 / 2.077655 (0.779326) | 1.598392 / 1.504120 (0.094273) | 1.479093 / 1.541195 (-0.062102) | 1.509495 / 1.468490 (0.041005) | 0.396303 / 4.584777 (-4.188473) | 2.429077 / 3.745712 (-1.316635) | 2.525037 / 5.269862 (-2.744824) | 1.503332 / 4.565676 (-3.062345) | 0.046191 / 0.424275 (-0.378084) | 0.004858 / 0.007607 (-0.002750) | 0.349528 / 0.226044 (0.123484) | 3.401451 / 2.268929 (1.132522) | 1.989613 / 55.444624 (-53.455012) | 1.664528 / 6.876477 (-5.211949) | 1.669076 / 2.142072 (-0.472997) | 0.467090 / 4.805227 (-4.338137) | 0.098137 / 6.500664 (-6.402527) | 0.040448 / 0.075469 (-0.035021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969578 / 1.841788 (-0.872210) | 12.064705 / 8.074308 (3.990396) | 10.991438 / 10.191392 (0.800046) | 0.130149 / 0.680424 (-0.550275) | 0.015357 / 0.534201 (-0.518844) | 0.266567 / 0.579283 (-0.312717) | 0.270619 / 0.434364 (-0.163744) | 0.305978 / 0.540337 (-0.234359) | 0.411164 / 1.386936 (-0.975772) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009810 / 0.011353 (-0.001543) | 0.005411 / 0.011008 (-0.005598) | 0.111670 / 0.038508 (0.073162) | 0.050288 / 0.023109 (0.027179) | 0.415625 / 0.275898 (0.139727) | 0.479382 / 0.323480 (0.155902) | 0.005104 / 0.007986 (-0.002882) | 0.007122 / 0.004328 (0.002793) | 0.079626 / 0.004250 (0.075375) | 0.079421 / 0.037052 (0.042369) | 0.406722 / 0.258489 (0.148233) | 0.461511 / 0.293841 (0.167670) | 0.053812 / 0.128546 (-0.074734) | 0.014315 / 0.075646 (-0.061331) | 0.389636 / 0.419271 (-0.029636) | 0.111859 / 0.043533 (0.068326) | 0.411703 / 0.255139 (0.156564) | 0.457072 / 0.283200 (0.173872) | 0.039807 / 0.141683 (-0.101876) | 1.744064 / 1.452155 (0.291909) | 1.968321 / 1.492716 (0.475604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.341839 / 0.018006 (0.323833) | 0.628083 / 0.000490 (0.627593) | 0.023787 / 0.000200 (0.023587) | 0.000601 / 0.000054 (0.000547) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034170 / 0.037411 (-0.003241) | 0.091159 / 0.014526 (0.076633) | 0.108993 / 0.176557 (-0.067563) | 0.186906 / 0.737135 (-0.550229) | 0.109753 / 0.296338 (-0.186586) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684138 / 0.215209 (0.468929) | 6.634852 / 2.077655 (4.557198) | 3.102870 / 1.504120 (1.598750) | 2.831023 / 1.541195 (1.289828) | 2.831597 / 1.468490 
(1.363107) | 0.903584 / 4.584777 (-3.681193) | 5.503341 / 3.745712 (1.757629) | 4.970283 / 5.269862 (-0.299579) | 3.139413 / 4.565676 (-1.426264) | 0.109848 / 0.424275 (-0.314427) | 0.008501 / 0.007607 (0.000894) | 0.823815 / 0.226044 (0.597770) | 7.963355 / 2.268929 (5.694426) | 4.002010 / 55.444624 (-51.442614) | 3.229390 / 6.876477 (-3.647087) | 3.166413 / 2.142072 (1.024341) | 1.030313 / 4.805227 (-3.774914) | 0.219394 / 6.500664 (-6.281270) | 0.077760 / 0.075469 (0.002291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580309 / 1.841788 (-0.261479) | 24.279185 / 8.074308 (16.204877) | 22.305293 / 10.191392 (12.113901) | 0.235711 / 0.680424 (-0.444713) | 0.030342 / 0.534201 (-0.503859) | 0.498137 / 0.579283 (-0.081146) | 0.619173 / 0.434364 (0.184809) | 0.529904 / 0.540337 (-0.010434) | 0.822547 / 1.386936 (-0.564389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009375 / 0.011353 (-0.001978) | 0.006009 / 0.011008 (-0.004999) | 0.074080 / 0.038508 (0.035572) | 0.089454 / 0.023109 (0.066345) | 0.473458 / 0.275898 (0.197560) | 0.462558 / 0.323480 (0.139078) | 0.006415 / 0.007986 (-0.001571) | 0.004777 / 0.004328 (0.000448) | 0.076563 / 0.004250 (0.072313) | 0.062793 / 0.037052 (0.025741) | 0.455860 / 0.258489 (0.197371) | 0.485281 / 0.293841 (0.191440) | 0.052966 / 0.128546 (-0.075580) | 0.021600 / 0.075646 (-0.054046) | 0.090407 / 0.419271 (-0.328864) | 0.063951 / 0.043533 (0.020418) | 0.487561 / 0.255139 (0.232422) | 0.479958 / 0.283200 (0.196758) | 0.039263 / 0.141683 (-0.102420) | 1.727215 / 1.452155 (0.275061) | 1.962039 / 1.492716 (0.469323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296267 / 0.018006 (0.278261) | 0.604982 / 0.000490 (0.604493) | 0.007842 / 0.000200 (0.007642) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.097796 / 0.014526 (0.083270) | 0.126034 / 0.176557 (-0.050522) | 0.180873 / 0.737135 (-0.556262) | 0.125410 / 0.296338 (-0.170928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.608278 / 0.215209 (0.393069) | 6.154006 / 2.077655 (4.076351) | 2.822342 / 1.504120 (1.318222) | 2.568263 / 1.541195 (1.027068) | 2.518545 / 1.468490 (1.050055) | 0.863186 / 4.584777 (-3.721591) | 5.367969 / 3.745712 (1.622257) | 4.737691 / 5.269862 (-0.532170) | 2.917620 / 4.565676 (-1.648056) | 0.100731 / 0.424275 (-0.323544) | 0.008611 / 0.007607 (0.001004) | 0.735523 / 0.226044 (0.509479) | 7.552790 / 2.268929 (5.283862) | 3.821835 / 55.444624 (-51.622789) | 2.878259 / 6.876477 (-3.998217) | 2.957686 / 2.142072 (0.815613) | 0.964630 / 4.805227 (-3.840598) | 0.207098 / 6.500664 (-6.293566) | 0.084215 / 0.075469 (0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.711020 / 1.841788 (-0.130768) | 24.034122 / 8.074308 (15.959814) | 21.378504 / 10.191392 (11.187112) | 0.233433 / 0.680424 (-0.446990) | 0.037214 / 0.534201 (-0.496987) | 0.511952 / 0.579283 (-0.067332) | 0.591486 / 0.434364 (0.157123) | 0.606549 / 0.540337 (0.066211) | 0.833773 / 1.386936 (-0.553163) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-15T10:06:44Z
| 2023-11-15T13:49:55Z
| 2023-11-15T13:38:22Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6421",
"merged_at": "2023-11-15T13:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6421"
}
|
Add `pyarrow-hotfix` to release docs.
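For context, a minimal sketch of how the hotfix is applied, per the `pyarrow-hotfix` package usage:
```python
# pip install pyarrow-hotfix
import pyarrow_hotfix  # noqa: F401 -- importing it patches PyArrow IPC deserialization (CVE-2023-47248)
```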
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6421/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6421/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4988
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4988/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4988/events
|
https://github.com/huggingface/datasets/issues/4988
| 1,376,096,584
|
I_kwDODunzps5SBZFI
| 4,988
|
Add `IterableDataset.from_generator` to the API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
}
] | null |
[
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] | 2022-09-16T15:19:41Z
| 2022-10-05T12:10:49Z
| 2022-10-05T12:10:49Z
|
CONTRIBUTOR
| null | null | null |
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
cc @lhoestq
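A sketch of the proposed API, mirroring `Dataset.from_generator` (the iterable variant is the proposal here; the name assumes it lands with the same signature):
```python
from datasets import Dataset, IterableDataset

def gen():
    for i in range(3):
        yield {"text": f"example {i}"}

ds = Dataset.from_generator(gen)           # existing: materialized as an Arrow table
ids = IterableDataset.from_generator(gen)  # proposed: lazy, regenerated on each iteration
for example in ids:
    print(example)
```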
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4988/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/213
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/213/comments
|
https://api.github.com/repos/huggingface/datasets/issues/213/events
|
https://github.com/huggingface/datasets/pull/213
| 626,587,995
|
MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3
| 213
|
better message if missing beam options
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-28T15:06:57Z
| 2020-05-29T09:51:17Z
| 2020-05-29T09:51:16Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/213",
"merged_at": "2020-05-29T09:51:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/213"
}
|
WDYT @yjernite ?
For example:
```python
dataset = nlp.load_dataset('wikipedia', '20200501.aa')
```
Raises:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')`
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/213/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/213/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4000
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4000/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4000/events
|
https://github.com/huggingface/datasets/issues/4000
| 1,178,844,616
|
I_kwDODunzps5GQ73I
| 4,000
|
load_dataset error: sndfile library not found
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4",
"events_url": "https://api.github.com/users/i-am-neo/events{/privacy}",
"followers_url": "https://api.github.com/users/i-am-neo/followers",
"following_url": "https://api.github.com/users/i-am-neo/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/i-am-neo",
"id": 102043285,
"login": "i-am-neo",
"node_id": "U_kgDOBhUOlQ",
"organizations_url": "https://api.github.com/users/i-am-neo/orgs",
"received_events_url": "https://api.github.com/users/i-am-neo/received_events",
"repos_url": "https://api.github.com/users/i-am-neo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/i-am-neo"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third-party dependency on `torchaudio`.\r\n\r\nYou have all the information in our docs: https://huggingface.co/docs/datasets/audio_process#installation",
"Thanks @albertvillanova . Unfortunately the error persists after installing ```datasets[audio]```. Can you direct towards a solution?\r\n\r\n```\r\npip3 install datasets[audio]\r\n```\r\n### log\r\nRequirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)\r\nRequirement already satisfied: numpy>=1.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.21.5)\r\nRequirement already satisfied: xxhash in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.0.0)\r\nRequirement already satisfied: fsspec[http]>=2021.05.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2022.2.0)\r\nRequirement already satisfied: dill in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.3.4)\r\nRequirement already satisfied: pandas in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.3.5)\r\nRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.4.0)\r\nRequirement already satisfied: packaging in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (21.3)\r\nRequirement already satisfied: multiprocess in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.70.12.2)\r\nRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (7.0.0)\r\nRequirement already satisfied: tqdm>=4.62.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.63.1)\r\nRequirement already satisfied: aiohttp in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.8.1)\r\nRequirement already satisfied: importlib-metadata in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.11.3)\r\nRequirement already satisfied: requests>=2.19.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2.27.1)\r\nRequirement already satisfied: librosa in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.9.1)\r\nRequirement already satisfied: pyyaml in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (4.1.1)\r\nRequirement already satisfied: filelock in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (3.6.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from packaging->datasets[audio]) (3.0.7)\r\nRequirement already satisfied: idna<4,>=2.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (3.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (1.26.9)\r\nRequirement already satisfied: attrs>=17.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
aiohttp->datasets[audio]) (21.4.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.3.0)\r\nRequirement already satisfied: aiosignal>=1.1.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.2.0)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.7.2)\r\nRequirement already satisfied: asynctest==0.13.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (0.13.0)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (6.0.2)\r\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (4.0.2)\r\nRequirement already satisfied: zipp>=0.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from importlib-metadata->datasets[audio]) (3.7.0)\r\nRequirement already satisfied: decorator>=4.0.10 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (5.1.1)\r\nRequirement already satisfied: soundfile>=0.10.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.55.1)\r\nRequirement already satisfied: pooch>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.6.0)\r\nRequirement already satisfied: resampy>=0.2.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.2.2)\r\nRequirement already satisfied: audioread>=2.1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.1.0)\r\nRequirement already satisfied: scipy>=1.2.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.7.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.0.2)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2022.1)\r\nRequirement already satisfied: setuptools in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (0.38.0)\r\nRequirement already satisfied: appdirs>=1.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa->datasets[audio]) (1.4.4)\r\nRequirement already satisfied: six>=1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets[audio]) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa->datasets[audio]) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
soundfile>=0.10.2->librosa->datasets[audio]) (1.15.0)\r\nRequirement already satisfied: pycparser in ./.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa->datasets[audio]) (2.21)\r\n\r\n### reload\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### log\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\nsndfile library not found\r\n\r\n### just to double-check as per your docs\r\n```\r\npip3 install librosa torchaudio\r\n```\r\n\r\n### logs\r\nRequirement already satisfied: librosa in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.9.1)\r\nRequirement already satisfied: torchaudio in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.11.0+cu113)\r\nRequirement already satisfied: audioread>=2.1.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.1.0)\r\nRequirement already satisfied: packaging>=20.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (21.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.0.2)\r\nRequirement already satisfied: scipy>=1.2.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.7.3)\r\nRequirement already satisfied: decorator>=4.0.10 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (5.1.1)\r\nRequirement already satisfied: resampy>=0.2.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.2.2)\r\nRequirement already satisfied: pooch>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.6.0)\r\nRequirement already satisfied: numpy>=1.17.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.21.5)\r\nRequirement already satisfied: soundfile>=0.10.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.55.1)\r\nRequirement already satisfied: torch==1.11.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torchaudio) (1.11.0+cu113)\r\nRequirement already satisfied: typing-extensions in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torch==1.11.0->torchaudio) (4.1.1)\r\nRequirement already satisfied: setuptools in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (0.38.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from packaging>=20.0->librosa) (3.0.7)\r\nRequirement already satisfied: requests>=2.19.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (2.27.1)\r\nRequirement already satisfied: appdirs>=1.3.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (1.4.4)\r\nRequirement already satisfied: six>=1.3 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from resampy>=0.2.2->librosa) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa) (1.15.0)\r\nRequirement already satisfied: pycparser in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa) (2.21)\r\nRequirement 
already satisfied: charset-normalizer~=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2.0.12)\r\nRequirement already satisfied: certifi>=2017.4.17 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2021.10.8)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (1.26.9)\r\nRequirement already satisfied: idna<4,>=2.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (3.3)\r\n\r\n### try loading again\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### same error\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nsndfile library not found\r\n",
"Hi @i-am-neo, thanks again for your detailed report.\r\n\r\nOur `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:\r\n```shell\r\npip install datasets[audio]\r\n```\r\n\r\nHowever, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends on a non-Python package called `sndfile`. \r\n\r\nOn Linux (which is your case), this must be installed manually using your operating system package manager, for example:\r\n```shell\r\nsudo apt-get install libsndfile1\r\n```\r\n\r\nPlease, let me know if this works and if so, I will update our docs with all this information.",
"@albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously."
] | 2022-03-24T01:52:32Z
| 2022-03-25T17:53:33Z
| 2022-03-25T17:53:33Z
|
NONE
| null | null | null |
## Describe the bug
Can't load the AMI dataset.
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
The dataset loads and the first example of the `validation` split is printed.
## Actual results
```
Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...
AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1.
100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 36004.88it/s]
100%|█████████████████████████████████████████████████████████| 136/136 [00:01<00:00, 79.10it/s]
100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 25343.23it/s]
100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2874.78it/s]
100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 27950.38it/s]
100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2892.25it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
use_auth_token=use_auth_token,
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
```
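For reference, the resolution reached later in this thread: the `datasets[audio]` extra relies on `soundfile`, which in turn needs the non-Python `sndfile` library, so on Debian/Ubuntu it has to be installed with the system package manager:
```shell
sudo apt-get install libsndfile1
```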
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4000/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/767/events
|
https://github.com/huggingface/datasets/issues/767
| 730,771,610
|
MDU6SXNzdWU3MzA3NzE2MTA=
| 767
|
Add option for named splits when using ds.train_test_split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090/5\r\n\r\nAnd in particular that it should advantageously be able to split in 3 splits as well instead of just 2 like we copied from sklearn."
] | 2020-10-27T19:59:44Z
| 2020-11-10T14:05:21Z
| null |
CONTRIBUTOR
| null | null | null |
### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
This is my hack for dealing with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
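A fuller variant of the same workaround (a sketch; it rebuilds the `DatasetDict` explicitly so the original `test` split is preserved):
```python
from datasets import load_dataset, DatasetDict

ds = load_dataset('imdb')
split = ds['train'].train_test_split(test_size=0.1)  # returns a DatasetDict with 'train' and 'test' keys
ds = DatasetDict({
    'train': split['train'],
    'validation': split['test'],  # rename the new 'test' split to 'validation'
    'test': ds['test'],           # keep the original test split intact
})
```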
|
{
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/767/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5468
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5468/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5468/events
|
https://github.com/huggingface/datasets/issues/5468
| 1,558,066,625
|
I_kwDODunzps5c3jXB
| 5,468
|
Allow opposite of remove_columns on Dataset and DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hollance",
"id": 346853,
"login": "hollance",
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"repos_url": "https://api.github.com/users/hollance/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hollance"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also want to work on this issue I am a newbie to open source. ",
"This sounds related to https://github.com/huggingface/datasets/issues/5474\r\n\r\nI'm fine with `select_columns`, or we could also override `select` to also accept a list of columns maybe ?",
"@lhoestq, I am planning to add a member function to the dataset class to perform the selection operation. Do you think its the right way to proceed? or there is a better option ?",
"Unless @mariosasko thinks otherwise, I think it can go in `Dataset.select()` :)\r\nThough some parameters like keep_in_memory, indices_cache_file_name or writer_batch_size wouldn't when selecting columns, so we would need to update the docstring as well",
"If someone wants to give it a shot, feel free to comment `#self-assign` and it will assign the issue to you.\r\n\r\nFeel free to ping us here if you have questions or if we can help :)",
"I would rather have this functionality as a separate method. IMO it's always better to be explicit than to have an API where a single method can do different/uncorrelated things (somewhat reminds me of Pandas, and there is probably a good reason why PyArrow is more rigid in this aspect).",
"In the end I also think it would be nice to have it as a separate method, this way we can also have it for `IterableDataset` (which can't have `select` for indices)"
] | 2023-01-26T12:28:09Z
| 2023-02-13T09:59:38Z
| 2023-02-13T09:59:38Z
|
NONE
| null | null | null |
### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(columns_to_remove)
```
This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error-prone) if you could just write:
```python
gigaspeech = gigaspeech.keep_columns(["text", "audio"])
```
Internally, `keep_columns` could still call `remove_columns`, but it would express the user's intent more clearly.
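A minimal sketch of what such a helper could look like on top of the existing API (the feature eventually landed under the name `select_columns`, per the discussion above):
```python
def keep_columns(dataset, columns_to_keep):
    # Compute the complement of the requested columns and delegate to remove_columns
    columns_to_remove = [c for c in dataset.column_names if c not in set(columns_to_keep)]
    return dataset.remove_columns(columns_to_remove)

# Usage, reusing the gigaspeech example from above:
# gigaspeech["train"] = keep_columns(gigaspeech["train"], ["text", "audio"])
```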
### Motivation
Less code to write for the user of the dataset.
### Your contribution
-
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5468/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2279
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2279/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2279/events
|
https://github.com/huggingface/datasets/issues/2279
| 870,431,662
|
MDU6SXNzdWU4NzA0MzE2NjI=
| 2,279
|
Compatibility with Ubuntu 18 and GLIBC 2.27?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tginart",
"id": 11379648,
"login": "tginart",
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"repos_url": "https://api.github.com/users/tginart/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tginart"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?",
"Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685"
] | 2021-04-28T22:08:07Z
| 2021-04-29T07:42:42Z
| 2021-04-29T07:42:42Z
|
NONE
| null | null | null |
## Describe the bug
For use on Ubuntu systems, it seems that `datasets` requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC.
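For anyone reproducing this, the GLIBC version shipped with a system can be checked with the standard glibc tooling:
```shell
ldd --version   # prints the glibc version, e.g. 2.27 on Ubuntu 18
```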
## Steps to reproduce the bug
1. clone the transformers repo
2. move to examples/pytorch/language-modeling
3. run example command:
```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm```
## Expected results
As described in the transformers repo.
## Actual results
```Traceback (most recent call last):
File "run_clm.py", line 34, in <module>
from transformers import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__
return super().__getattr__(name)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module>
from .tokenization_layoutlm import LayoutLMTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module>
from ..bert.tokenization_bert import BertTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module>
from tokenizers import AddedToken
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so)
```
## Versions
```
- Datasets: 1.6.1
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2279/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/836
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/836/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/836/comments
|
https://api.github.com/repos/huggingface/datasets/issues/836/events
|
https://github.com/huggingface/datasets/issues/836
| 740,187,613
|
MDU6SXNzdWU3NDAxODc2MTM=
| 836
|
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8919490?v=4",
"events_url": "https://api.github.com/users/randubin/events{/privacy}",
"followers_url": "https://api.github.com/users/randubin/followers",
"following_url": "https://api.github.com/users/randubin/following{/other_user}",
"gists_url": "https://api.github.com/users/randubin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/randubin",
"id": 8919490,
"login": "randubin",
"node_id": "MDQ6VXNlcjg5MTk0OTA=",
"organizations_url": "https://api.github.com/users/randubin/orgs",
"received_events_url": "https://api.github.com/users/randubin/received_events",
"repos_url": "https://api.github.com/users/randubin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/randubin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/randubin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/randubin"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
| null |
[] | null |
[
"Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?",
"Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5",
"I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nThe problem is in arrow when the column data contains long strings.\r\nAny ideas on how to bypass this?",
"We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n\r\n\r\nIn the meantime you can specify yourself the `ReadOptions` config like this:\r\n```python\r\nimport pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n\r\nread_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\ndataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n```\r\n",
"This did help to load the data. But the problem now is that I get:\r\nArrowInvalid: CSV parse error: Expected 5 columns, got 187\r\n\r\nIt seems that this change the parsing so I changed the table to tab-separated and tried to load it directly from pyarrow\r\nBut I got a similar error, again it loaded fine in pandas so I am not sure what to do.\r\n\r\n\r\n\r\n",
"Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error.",
"> We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n> \r\n> In the meantime you can specify yourself the `ReadOptions` config like this:\r\n> \r\n> ```python\r\n> import pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n> \r\n> read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\n> dataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n> ```\r\n\r\nThis did not work for me, I got\r\n`TypeError: __init__() got an unexpected keyword argument 'read_options'`",
"Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html))"
] | 2020-11-10T19:35:40Z
| 2021-11-24T16:59:19Z
| 2020-11-19T17:35:38Z
|
NONE
| null | null | null |
Hi All
I am trying to load a custom dataset, and I am loading a single file first to make sure the file loads correctly:
```python
dataset = load_dataset('csv', data_files=files)
```
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to cache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4...
followed by this error:
```
6a4ac4/csv.py in _generate_tables(self, files)
     78 def _generate_tables(self, files):
     79     for i, file in enumerate(files):
---> 80         pa_table = pac.read_csv(
     81             file,
     82             read_options=self.config.pa_read_options,
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
```
**ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**
The size of the file is 3.5 GB. With smaller files I do not have this issue. When I load it with the 'text' parser I can see all the data, but it is not what I need.
There is no issue reading the file with pandas. Any idea what could be the issue?
When I am running a different CSV I do not get this line:
(download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size)
Any ideas?
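For reference, a sketch of the workaround suggested in the comments below, which raises PyArrow's CSV `block_size` so that long rows fit within a single block (this applies to the `datasets` versions that used PyArrow's CSV reader; newer versions use pandas' `read_csv` and accept its keyword arguments instead; `files` is a hypothetical list of CSV paths):
```python
import pyarrow.csv as pac  # PyArrow is installed together with `datasets`
from datasets import load_dataset

files = ["my_large_file.csv"]  # hypothetical path to the 3.5 GB CSV
read_options = pac.ReadOptions(block_size=10**9)  # tune this value for your use case
dataset = load_dataset('csv', data_files=files, read_options=read_options)
```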
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/836/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/836/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/206
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/206/comments
|
https://api.github.com/repos/huggingface/datasets/issues/206/events
|
https://github.com/huggingface/datasets/issues/206
| 625,842,989
|
MDU6SXNzdWU2MjU4NDI5ODk=
| 206
|
[Question] Combine 2 datasets which have the same columns
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"events_url": "https://api.github.com/users/airKlizz/events{/privacy}",
"followers_url": "https://api.github.com/users/airKlizz/followers",
"following_url": "https://api.github.com/users/airKlizz/following{/other_user}",
"gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/airKlizz",
"id": 25703835,
"login": "airKlizz",
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"organizations_url": "https://api.github.com/users/airKlizz/orgs",
"received_events_url": "https://api.github.com/users/airKlizz/received_events",
"repos_url": "https://api.github.com/users/airKlizz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/airKlizz"
}
|
[] |
closed
| false
| null |
[] | null |
[
"We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.",
"Ok great! I will look at it. Thanks"
] | 2020-05-27T16:25:52Z
| 2020-06-10T09:11:14Z
| 2020-06-10T09:11:14Z
|
CONTRIBUTOR
| null | null | null |
Hi,
I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on Wikinews. I have one dataset for English and one for German (French is getting ready as well). I want to keep these datasets independent because they need different pre-processing (adding different task-specific prefixes for T5: *summarize:* for English and *zusammenfassen:* for German).
My issue is that I want to train T5 on the combined English and German datasets to see if it improves results. So I would like to combine the 2 datasets (which have the same columns) into one and train T5 on it. I was wondering if there is a proper way to do it? I assume it can be done by combining all examples of each dataset, but maybe you have a better solution.
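For reference, later versions of the library expose `concatenate_datasets`, which covers exactly this case (a sketch, assuming both datasets share the same columns/features; `english_ds` and `german_ds` stand for the two pre-processed datasets):
```python
from datasets import concatenate_datasets

# Concatenate the two same-schema datasets and shuffle so the languages are mixed
combined = concatenate_datasets([english_ds, german_ds]).shuffle(seed=42)
```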
Hoping this is clear enough,
Thanks a lot 😊
Best
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/206/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/239
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/239/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/239/comments
|
https://api.github.com/repos/huggingface/datasets/issues/239/events
|
https://github.com/huggingface/datasets/issues/239
| 631,340,440
|
MDU6SXNzdWU2MzEzNDA0NDA=
| 239
|
[Creating new dataset] Not found dataset_info.json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"I think you can just `rm` this directory and it should be good :)",
"@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?",
"Yes I have an idea of what's going on. I'm sure I can fix that",
"Hi, I rebase my local copy to `fix-empty-cache-dir`, and try to run again `python nlp-cli test datasets/bookcorpus --save_infos --all_configs`.\r\n\r\nI got this, \r\n```\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 10, in <module>\r\n from nlp.commands.run_beam import RunBeamCommand\r\n File \"/home/yisiang/nlp/src/nlp/commands/run_beam.py\", line 6, in <module>\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\nAnd after I installed it. I got this\r\n```\r\nFile \"/home/yisiang/nlp/src/nlp/datasets/bookcorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookcorpus.py\", line 88, in _split_generators\r\n downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 128, in download_custom\r\n downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)\r\n File \"/home/yisiang/nlp/src/nlp/utils/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 126, in url_to_downloaded_path\r\n return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))\r\n File \"/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py\", line 80, in join\r\n a = os.fspath(a)\r\n```\r\nThe problem is when I print `self._download_config.cache_dir` using pdb, it is `None`.\r\n\r\nDid I miss something ? Or can you provide a workaround first so I can keep testing my script ?",
"I'll close this issue because I brings more reports in another issue #249 ."
] | 2020-06-05T06:15:04Z
| 2020-06-07T13:01:04Z
| 2020-06-07T13:01:04Z
|
CONTRIBUTOR
| null | null | null |
Hi, I am trying to create Toronto Book Corpus. #131
I ran
`~/nlp % python nlp-cli test datasets/bookcorpus --save_infos --all_configs`
but this doesn't create `dataset_info.json`, and it then tries to use it anyway:
```
INFO:nlp.load:Checking datasets/bookcorpus/bookcorpus.py for additional imports.
INFO:filelock:Lock 139795325778640 acquired on datasets/bookcorpus/bookcorpus.py.lock
INFO:nlp.load:Found main folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus
INFO:nlp.load:Found specific version folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9
INFO:nlp.load:Found script file from datasets/bookcorpus/bookcorpus.py to /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/bookcorpus/dataset_infos.json
INFO:nlp.load:Found metadata file for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.json
INFO:filelock:Lock 139795325778640 released on datasets/bookcorpus/bookcorpus.py.lock
INFO:nlp.builder:Overwrite dataset info from restored data version.
INFO:nlp.info:Loading Dataset info from /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/commands/test.py", line 78, in run
builders.append(builder_cls(name=config.name, data_dir=self._data_dir))
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory
with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/dataset_info.json'
```
btw, `ls /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/` shows me that nothing is in the directory.
I have also pushed the script to my fork [bookcorpus.py](https://github.com/richardyy1188/nlp/blob/bookcorpusdev/datasets/bookcorpus/bookcorpus.py).
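Per the suggestion in the comments above, a workaround while the underlying bug was being fixed was simply to remove the stale (empty) cache directory before re-running the command:
```shell
rm -r ~/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/
```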
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/239/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/239/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6423
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6423/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6423/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6423/events
|
https://github.com/huggingface/datasets/pull/6423
| 1,994,946,847
|
PR_kwDODunzps5fhzD6
| 6,423
|
Fix conda release by adding pyarrow-hotfix dependency
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004476 / 0.011353 (-0.006877) | 0.002691 / 0.011008 (-0.008317) | 0.061400 / 0.038508 (0.022892) | 0.030096 / 0.023109 (0.006986) | 0.279868 / 0.275898 (0.003970) | 0.310320 / 0.323480 (-0.013159) | 0.003873 / 0.007986 (-0.004112) | 0.002394 / 0.004328 (-0.001935) | 0.048307 / 0.004250 (0.044056) | 0.043326 / 0.037052 (0.006273) | 0.288256 / 0.258489 (0.029767) | 0.311449 / 0.293841 (0.017609) | 0.022970 / 0.128546 (-0.105576) | 0.006714 / 0.075646 (-0.068932) | 0.201656 / 0.419271 (-0.217615) | 0.052811 / 0.043533 (0.009278) | 0.285123 / 0.255139 (0.029984) | 0.301495 / 0.283200 (0.018295) | 0.017531 / 0.141683 (-0.124152) | 1.097660 / 1.452155 (-0.354494) | 1.161986 / 1.492716 (-0.330731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089223 / 0.018006 (0.071217) | 0.297815 / 0.000490 (0.297326) | 0.000205 / 0.000200 (0.000005) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018679 / 0.037411 (-0.018732) | 0.062742 / 0.014526 (0.048216) | 0.072869 / 0.176557 (-0.103687) | 0.120730 / 0.737135 (-0.616406) | 0.074526 / 0.296338 (-0.221813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299977 / 0.215209 (0.084768) | 2.921029 / 2.077655 (0.843375) | 1.632283 / 1.504120 (0.128163) | 1.508008 / 1.541195 (-0.033187) | 1.513967 / 
1.468490 (0.045477) | 0.403056 / 4.584777 (-4.181721) | 2.340011 / 3.745712 (-1.405701) | 2.552319 / 5.269862 (-2.717543) | 1.549741 / 4.565676 (-3.015935) | 0.046303 / 0.424275 (-0.377972) | 0.004768 / 0.007607 (-0.002839) | 0.356921 / 0.226044 (0.130877) | 3.506410 / 2.268929 (1.237482) | 1.975394 / 55.444624 (-53.469230) | 1.688683 / 6.876477 (-5.187794) | 1.715502 / 2.142072 (-0.426571) | 0.471016 / 4.805227 (-4.334212) | 0.099552 / 6.500664 (-6.401112) | 0.042095 / 0.075469 (-0.033374) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955784 / 1.841788 (-0.886004) | 11.191802 / 8.074308 (3.117494) | 10.127818 / 10.191392 (-0.063574) | 0.141225 / 0.680424 (-0.539199) | 0.014486 / 0.534201 (-0.519715) | 0.267204 / 0.579283 (-0.312079) | 0.289108 / 0.434364 (-0.145256) | 0.309458 / 0.540337 (-0.230880) | 0.422802 / 1.386936 (-0.964134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004797 / 0.011353 (-0.006556) | 0.002907 / 0.011008 (-0.008101) | 0.047666 / 0.038508 (0.009158) | 0.051183 / 0.023109 (0.028074) | 0.266315 / 0.275898 (-0.009583) | 0.286429 / 0.323480 (-0.037051) | 0.003954 / 0.007986 (-0.004031) | 0.002041 / 0.004328 (-0.002288) | 0.047652 / 0.004250 (0.043401) | 0.038211 / 0.037052 (0.001158) | 0.272210 / 0.258489 (0.013721) | 0.299425 / 0.293841 (0.005584) | 0.024266 / 0.128546 (-0.104280) | 0.006747 / 0.075646 (-0.068900) | 0.052959 / 0.419271 (-0.366312) | 0.032094 / 0.043533 (-0.011439) | 0.265677 / 0.255139 (0.010538) | 0.285373 / 0.283200 (0.002174) | 0.017577 / 0.141683 (-0.124106) | 1.114514 / 1.452155 (-0.337640) | 1.212970 / 1.492716 (-0.279746) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088347 / 0.018006 (0.070341) | 0.296678 / 0.000490 (0.296188) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021159 / 0.037411 (-0.016253) | 0.069886 / 0.014526 (0.055360) | 0.079832 / 0.176557 (-0.096725) | 0.115512 / 0.737135 (-0.621623) | 0.081600 / 0.296338 (-0.214739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292659 / 0.215209 (0.077450) | 2.872556 / 2.077655 (0.794901) | 1.573017 / 1.504120 (0.068897) | 1.445122 / 1.541195 (-0.096072) | 1.485584 / 1.468490 (0.017094) | 0.388638 / 4.584777 (-4.196139) | 2.434847 / 3.745712 (-1.310865) | 2.518167 / 5.269862 (-2.751695) | 1.503000 / 4.565676 (-3.062676) | 0.045123 / 0.424275 (-0.379153) | 0.004778 / 0.007607 (-0.002829) | 0.347955 / 0.226044 (0.121910) | 3.384819 / 2.268929 (1.115891) | 1.920185 / 55.444624 (-53.524439) | 1.646910 / 6.876477 (-5.229567) | 1.638092 / 2.142072 (-0.503980) | 0.450535 / 4.805227 (-4.354692) | 0.095301 / 6.500664 (-6.405363) | 0.040275 / 0.075469 (-0.035194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956088 / 1.841788 (-0.885700) | 11.776642 / 8.074308 (3.702334) | 10.651063 / 10.191392 (0.459671) | 0.127079 / 0.680424 (-0.553345) | 0.015080 / 0.534201 (-0.519121) | 0.273737 / 0.579283 (-0.305546) | 0.271434 / 0.434364 (-0.162929) | 0.308448 / 0.540337 (-0.231889) | 0.412467 / 1.386936 (-0.974469) |\n\n</details>\n</details>\n\n\n",
"Once this PR is merged, we should upload the missing version to conda.\r\n\r\n@lhoestq you did this in the past. If you tell me your approach (I see a tag called `VERSION`...), I could do it myself.",
"Maybe open a PR against the 2.14 branch and update `release-conda.yml` like this ?\r\n\r\n```diff\r\n- on:\r\n- push:\r\n- tags:\r\n- - \"[0-9]+.[0-9]+.[0-9]+*\"\r\n+ on: push\r\n```\r\n\r\nand then set it back to normal after the release is done",
"After having cherry-picked the commit in this PR, I have released the conda package. See: \r\n- https://github.com/huggingface/datasets/actions/runs/6880182419/job/18713812449\r\n- https://anaconda.org/HuggingFace/datasets/files?version=2.14.7\r\n\r\nI am merging this PR.\r\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004993 / 0.011353 (-0.006360) | 0.002964 / 0.011008 (-0.008044) | 0.062588 / 0.038508 (0.024080) | 0.030794 / 0.023109 (0.007685) | 0.234856 / 0.275898 (-0.041042) | 0.264807 / 0.323480 (-0.058673) | 0.003139 / 0.007986 (-0.004847) | 0.002498 / 0.004328 (-0.001831) | 0.048058 / 0.004250 (0.043807) | 0.048349 / 0.037052 (0.011296) | 0.238210 / 0.258489 (-0.020279) | 0.278144 / 0.293841 (-0.015697) | 0.023219 / 0.128546 (-0.105327) | 0.007296 / 0.075646 (-0.068351) | 0.203263 / 0.419271 (-0.216008) | 0.058844 / 0.043533 (0.015311) | 0.246330 / 0.255139 (-0.008809) | 0.264550 / 0.283200 (-0.018649) | 0.018580 / 0.141683 (-0.123103) | 1.084163 / 1.452155 (-0.367992) | 1.154891 / 1.492716 (-0.337825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092393 / 0.018006 (0.074387) | 0.300545 / 0.000490 (0.300055) | 0.000203 / 0.000200 (0.000003) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018648 / 0.037411 (-0.018763) | 0.063151 / 0.014526 (0.048625) | 0.074206 / 0.176557 (-0.102350) | 0.120929 / 0.737135 (-0.616207) | 0.075970 / 0.296338 (-0.220368) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278489 / 0.215209 (0.063279) | 2.664804 / 2.077655 (0.587150) | 1.433040 / 1.504120 (-0.071080) | 1.321416 / 1.541195 (-0.219779) | 1.320964 / 
1.468490 (-0.147526) | 0.401289 / 4.584777 (-4.183488) | 2.365310 / 3.745712 (-1.380402) | 2.635798 / 5.269862 (-2.634063) | 1.584384 / 4.565676 (-2.981293) | 0.045675 / 0.424275 (-0.378600) | 0.004854 / 0.007607 (-0.002753) | 0.337592 / 0.226044 (0.111548) | 3.330462 / 2.268929 (1.061534) | 1.794507 / 55.444624 (-53.650117) | 1.531284 / 6.876477 (-5.345193) | 1.507165 / 2.142072 (-0.634908) | 0.478622 / 4.805227 (-4.326606) | 0.099105 / 6.500664 (-6.401560) | 0.041575 / 0.075469 (-0.033894) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941790 / 1.841788 (-0.899997) | 11.609871 / 8.074308 (3.535563) | 10.770869 / 10.191392 (0.579477) | 0.138931 / 0.680424 (-0.541493) | 0.014406 / 0.534201 (-0.519795) | 0.269681 / 0.579283 (-0.309602) | 0.260556 / 0.434364 (-0.173808) | 0.308244 / 0.540337 (-0.232093) | 0.428867 / 1.386936 (-0.958069) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004803 / 0.011353 (-0.006550) | 0.003263 / 0.011008 (-0.007745) | 0.049143 / 0.038508 (0.010635) | 0.052033 / 0.023109 (0.028924) | 0.267815 / 0.275898 (-0.008083) | 0.288733 / 0.323480 (-0.034747) | 0.004159 / 0.007986 (-0.003826) | 0.002407 / 0.004328 (-0.001921) | 0.048978 / 0.004250 (0.044728) | 0.038994 / 0.037052 (0.001942) | 0.264028 / 0.258489 (0.005539) | 0.303930 / 0.293841 (0.010090) | 0.024283 / 0.128546 (-0.104263) | 0.007201 / 0.075646 (-0.068446) | 0.053810 / 0.419271 (-0.365461) | 0.032611 / 0.043533 (-0.010922) | 0.266730 / 0.255139 (0.011591) | 0.281564 / 0.283200 (-0.001635) | 0.018720 / 0.141683 (-0.122963) | 1.140676 / 1.452155 (-0.311479) | 1.206604 / 1.492716 (-0.286113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.109390 / 0.018006 (0.091384) | 0.313783 / 0.000490 (0.313294) | 0.000228 / 0.000200 (0.000028) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021228 / 0.037411 (-0.016183) | 0.070505 / 0.014526 (0.055979) | 0.081961 / 0.176557 (-0.094595) | 0.119943 / 0.737135 (-0.617193) | 0.083582 / 0.296338 (-0.212757) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295702 / 0.215209 (0.080493) | 2.886865 / 2.077655 (0.809210) | 1.583206 / 1.504120 (0.079086) | 1.451129 / 1.541195 (-0.090065) | 1.486253 / 1.468490 (0.017763) | 0.403207 / 4.584777 (-4.181570) | 2.408889 / 3.745712 (-1.336824) | 2.578480 / 5.269862 (-2.691381) | 1.533066 / 4.565676 (-3.032610) | 0.046075 / 0.424275 (-0.378200) | 0.004877 / 0.007607 (-0.002730) | 0.345995 / 0.226044 (0.119950) | 3.377039 / 2.268929 (1.108110) | 1.944614 / 55.444624 (-53.500010) | 1.677691 / 6.876477 (-5.198786) | 1.672828 / 2.142072 (-0.469244) | 0.468426 / 4.805227 (-4.336802) | 0.097290 / 6.500664 (-6.403374) | 0.040695 / 0.075469 (-0.034774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965778 / 1.841788 (-0.876010) | 12.092639 / 8.074308 (4.018331) | 11.210968 / 10.191392 (1.019576) | 0.131212 / 0.680424 (-0.549212) | 0.015865 / 0.534201 (-0.518336) | 0.285702 / 0.579283 (-0.293581) | 0.278319 / 0.434364 (-0.156045) | 0.336063 / 0.540337 (-0.204275) | 0.426265 / 1.386936 (-0.960671) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-15T14:57:12Z
| 2023-11-15T17:15:33Z
| 2023-11-15T17:09:24Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6423.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6423",
"merged_at": "2023-11-15T17:09:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6423.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6423"
}
|
Fix the conda release by adding the pyarrow-hotfix dependency.
Note that the conda release failed for the latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/test_tmp/run_test.py", line 2, in <module>
import datasets
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 67, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import Features, Image, Value
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/features/__init__.py", line 18, in <module>
from .features import Array2D, Array3D, Array4D, Array5D, ClassLabel, Features, Sequence, Value
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/features/features.py", line 34, in <module>
import pyarrow_hotfix # noqa: F401 # to fix vulnerability on pyarrow<14.0.1
^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'pyarrow_hotfix'
```
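For context, below is a minimal sketch of the guarded import pattern the traceback points at. This is an illustration only, not the actual `datasets` source; the explicit version check is an assumption added for clarity.

```python
# Illustrative sketch (not the actual `datasets` source): apply the pyarrow
# hotfix only on pyarrow versions older than 14.0.1, where the vulnerability
# was fixed upstream. It needs the `pyarrow-hotfix` package installed, which
# is exactly the dependency the conda recipe was missing.
from packaging import version

import pyarrow

if version.parse(pyarrow.__version__) < version.parse("14.0.1"):
    import pyarrow_hotfix  # noqa: F401  # patches unsafe deserialization
```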
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6423/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6423/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5171
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5171/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5171/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5171/events
|
https://github.com/huggingface/datasets/pull/5171
| 1,425,355,111
|
PR_kwDODunzps5BpsXf
| 5,171
|
Add PB and TB in convert_file_size_to_int
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-27T09:50:31Z
| 2022-10-27T12:14:27Z
| 2022-10-27T12:12:30Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5171",
"merged_at": "2022-10-27T12:12:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5171"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5171/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5171/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4509
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4509/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4509/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4509/events
|
https://github.com/huggingface/datasets/pull/4509
| 1,273,227,760
|
PR_kwDODunzps45wkDl
| 4,509
|
Support skipping Parquet to Arrow conversion when using Beam
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4509). All of your documentation changes will be reflected on that endpoint.",
"When #4724 is merged, we can just pass `file_format=\"parquet\"` to `download_and_prepare` and it will output parquet fiels without converting to arrow",
"I think we can close this one"
] | 2022-06-16T08:25:38Z
| 2022-11-07T16:22:41Z
| 2022-11-07T16:22:41Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4509.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4509",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4509.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4509"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4509/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4509/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4238
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4238/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4238/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4238/events
|
https://github.com/huggingface/datasets/issues/4238
| 1,217,168,123
|
I_kwDODunzps5IjIL7
| 4,238
|
Dataset caching policy
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loretoparisi",
"id": 163333,
"login": "loretoparisi",
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loretoparisi"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that your dataset loads without any problem for me:\r\n```python\r\nIn [2]: ds = load_dataset(\"loretoparisi/tatoeba-sentences\", data_files={\"train\": \"train.csv\", \"test\": \"test.csv\"}, delimiter=\"\\t\", column_names=['label', 'text'])\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 8256449\r\n })\r\n test: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 2061204\r\n })\r\n})\r\n``` ",
"@albertvillanova thank you, it seems it still does not work using:\r\n\r\n```python\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\"\r\n)\r\n```\r\n[This](https://colab.research.google.com/drive/1EA6FWo5pHxU8rPHHRn24NlHqRPiOlPTr?usp=sharing) is my notebook!\r\n\r\nThe problem is that the download file's revision for `test.csv` is not correctly parsed\r\n\r\n\r\n\r\nIf you download that file `test.csv` from the repo, the line `\\\\N` is not there anymore (it was there at the first file upload).\r\n\r\nMy impression is that the Apache Arrow file is still cached - so server side, despite of enabling a forced download. For what I can see I get those two arrow files, but I cannot grep the bad line (`\\\\N`) since are binary files:\r\n\r\n```\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/csv-test.arrow\r\n!head /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/dataset_info.json\r\n```\r\n",
"SOLVED! The problem was the with the file itself, using caching parameter helped indeed.\r\nThanks for helping!"
] | 2022-04-27T10:42:11Z
| 2022-04-27T16:29:25Z
| 2022-04-27T16:28:50Z
|
NONE
| null | null | null |
## Describe the bug
I cannot clear the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error:
```
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
The file is now cleaned up, but I still get the error. This happens even if I inspect the locally cached contents and clean up the files locally:
```python
from datasets import load_dataset_builder
dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences")
print(dataset_builder.cache_dir)
print(dataset_builder.info.features)
print(dataset_builder.info.splits)
```
```
Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd
/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519
None
None
```
and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`.
Is there any remote file caching policy in place? If so, is it possible to programmatically disable it?
Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, when I download the file locally from the raw link, the file is up to date; but if I use it within `datasets` as shown above, it always gives me the first revision of the file, not the latest.
Thank you.
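For reference, the cache-bypass knob that `datasets` exposes is the `download_mode` argument; a minimal sketch, assuming the same data files as in the snippet below:

```python
from datasets import load_dataset

# Re-fetch the raw files instead of reusing the local cache.
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",
)
```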
## Steps to reproduce the bug
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
)
# You can make this part faster with num_proc=<some int>
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
sentences = sentences.shuffle()
```
## Expected results
The dataset file `test.csv` should load and map without issues.
## Actual results
```
Downloading data files: 100%
2/2 [00:16<00:00, 7.34s/it]
Downloading data: 100%
391M/391M [00:12<00:00, 36.6MB/s]
Downloading data: 100%
92.4M/92.4M [00:02<00:00, 40.0MB/s]
Extracting data files: 100%
2/2 [00:00<00:00, 47.66it/s]
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%
2/2 [00:00<00:00, 25.94it/s]
11%
942339/8256449 [01:55<13:11, 9245.85ex/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>()
12 )
13 # You can make this part faster with num_proc=<some int>
---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
15 sentences = sentences.shuffle()
10 frames
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
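For illustration, a hypothetical defensive variant of the mapping step (not part of the original report) that sidesteps the `KeyError` by dropping rows whose label is not a known class name:

```python
# Hypothetical workaround sketch: filter out rows with unknown labels (such
# as the stray '\N' line) before converting label strings to class ids.
known_labels = set(class_names)
sentences = sentences.filter(lambda ex: ex["label"] in known_labels)
sentences = sentences.map(
    lambda ex: {"label": features["label"].str2int(ex["label"])},
    features=features,
)
```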
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
```
```
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4238/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4238/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6312
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6312/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6312/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6312/events
|
https://github.com/huggingface/datasets/pull/6312
| 1,950,128,416
|
PR_kwDODunzps5dKWDF
| 6,312
|
docs: resolving namespace conflict, refactored variable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smty2018",
"id": 74114936,
"login": "smty2018",
"node_id": "MDQ6VXNlcjc0MTE0OTM2",
"organizations_url": "https://api.github.com/users/smty2018/orgs",
"received_events_url": "https://api.github.com/users/smty2018/received_events",
"repos_url": "https://api.github.com/users/smty2018/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smty2018/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smty2018"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006209 / 0.011353 (-0.005144) | 0.003708 / 0.011008 (-0.007300) | 0.080435 / 0.038508 (0.041926) | 0.060105 / 0.023109 (0.036995) | 0.392962 / 0.275898 (0.117064) | 0.429381 / 0.323480 (0.105902) | 0.003596 / 0.007986 (-0.004390) | 0.003849 / 0.004328 (-0.000480) | 0.062377 / 0.004250 (0.058127) | 0.048718 / 0.037052 (0.011666) | 0.400906 / 0.258489 (0.142417) | 0.440335 / 0.293841 (0.146494) | 0.027807 / 0.128546 (-0.100739) | 0.008066 / 0.075646 (-0.067580) | 0.262542 / 0.419271 (-0.156730) | 0.045513 / 0.043533 (0.001980) | 0.399608 / 0.255139 (0.144469) | 0.418007 / 0.283200 (0.134807) | 0.023475 / 0.141683 (-0.118208) | 1.476563 / 1.452155 (0.024409) | 1.528898 / 1.492716 (0.036182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223798 / 0.018006 (0.205792) | 0.430526 / 0.000490 (0.430036) | 0.009232 / 0.000200 (0.009032) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024921 / 0.037411 (-0.012490) | 0.077692 / 0.014526 (0.063166) | 0.085382 / 0.176557 (-0.091174) | 0.146220 / 0.737135 (-0.590915) | 0.086396 / 0.296338 (-0.209943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439986 / 0.215209 (0.224777) | 4.384552 / 2.077655 (2.306897) | 2.373697 / 1.504120 (0.869577) | 2.176138 / 1.541195 (0.634943) | 2.225914 / 1.468490 
(0.757424) | 0.505776 / 4.584777 (-4.079001) | 3.053744 / 3.745712 (-0.691968) | 3.080443 / 5.269862 (-2.189419) | 1.904392 / 4.565676 (-2.661285) | 0.058112 / 0.424275 (-0.366163) | 0.006631 / 0.007607 (-0.000976) | 0.503409 / 0.226044 (0.277365) | 5.053375 / 2.268929 (2.784447) | 2.789963 / 55.444624 (-52.654661) | 2.452659 / 6.876477 (-4.423818) | 2.512353 / 2.142072 (0.370280) | 0.590095 / 4.805227 (-4.215132) | 0.126267 / 6.500664 (-6.374397) | 0.061246 / 0.075469 (-0.014223) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249884 / 1.841788 (-0.591903) | 17.684730 / 8.074308 (9.610422) | 13.967467 / 10.191392 (3.776075) | 0.144202 / 0.680424 (-0.536222) | 0.017004 / 0.534201 (-0.517197) | 0.333634 / 0.579283 (-0.245649) | 0.387251 / 0.434364 (-0.047113) | 0.390189 / 0.540337 (-0.150148) | 0.535662 / 1.386936 (-0.851274) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006379 / 0.011353 (-0.004974) | 0.003681 / 0.011008 (-0.007327) | 0.063005 / 0.038508 (0.024497) | 0.064221 / 0.023109 (0.041112) | 0.446074 / 0.275898 (0.170176) | 0.471997 / 0.323480 (0.148517) | 0.005074 / 0.007986 (-0.002911) | 0.002945 / 0.004328 (-0.001383) | 0.063305 / 0.004250 (0.059054) | 0.050608 / 0.037052 (0.013556) | 0.443260 / 0.258489 (0.184771) | 0.478497 / 0.293841 (0.184656) | 0.028980 / 0.128546 (-0.099566) | 0.008145 / 0.075646 (-0.067502) | 0.068412 / 0.419271 (-0.350859) | 0.041552 / 0.043533 (-0.001980) | 0.436649 / 0.255139 (0.181510) | 0.462397 / 0.283200 (0.179198) | 0.019929 / 0.141683 (-0.121753) | 1.530248 / 1.452155 (0.078093) | 1.611117 / 1.492716 (0.118401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232894 / 0.018006 (0.214888) | 0.421451 / 0.000490 (0.420961) | 0.003984 / 0.000200 (0.003784) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027776 / 0.037411 (-0.009635) | 0.081632 / 0.014526 (0.067106) | 0.094031 / 0.176557 (-0.082526) | 0.147930 / 0.737135 (-0.589206) | 0.094226 / 0.296338 (-0.202112) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471722 / 0.215209 (0.256513) | 4.713241 / 2.077655 (2.635587) | 2.662660 / 1.504120 (1.158540) | 2.490778 / 1.541195 (0.949583) | 2.555786 / 1.468490 (1.087296) | 0.512209 / 4.584777 (-4.072568) | 3.210612 / 3.745712 (-0.535100) | 2.863346 / 5.269862 (-2.406516) | 1.884664 / 4.565676 (-2.681012) | 0.058514 / 0.424275 (-0.365761) | 0.006473 / 0.007607 (-0.001134) | 0.543279 / 0.226044 (0.317235) | 5.441485 / 2.268929 (3.172556) | 3.145398 / 55.444624 (-52.299226) | 2.749603 / 6.876477 (-4.126874) | 2.925738 / 2.142072 (0.783666) | 0.598725 / 4.805227 (-4.206502) | 0.125616 / 6.500664 (-6.375048) | 0.061314 / 0.075469 (-0.014155) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.384270 / 1.841788 (-0.457518) | 18.307618 / 8.074308 (10.233310) | 14.635768 / 10.191392 (4.444376) | 0.148787 / 0.680424 (-0.531637) | 0.018191 / 0.534201 (-0.516010) | 0.333166 / 0.579283 (-0.246117) | 0.405116 / 0.434364 (-0.029247) | 0.392798 / 0.540337 (-0.147540) | 0.582299 / 1.386936 (-0.804637) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-18T16:10:59Z
| 2023-10-19T16:31:59Z
| 2023-10-19T16:23:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6312",
"merged_at": "2023-10-19T16:23:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6312"
}
|
In the docs of about_arrow.md, in the example code shown below

The variable name 'time' was used in a way that could lead to a namespace conflict with Python's built-in 'time' module. This is not a good convention and can cause unintended variable shadowing for any user re-using the example code.
To ensure code clarity and prevent potential naming conflicts, the variable 'time' was renamed to 'elapsed_time' in the example code, as sketched below.
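A minimal sketch of the conflict (a hypothetical example, not the exact snippet from about_arrow.md):

```python
import time

start = time.time()
sum(range(1_000_000))             # stand-in for the timed work
time = time.time() - start        # rebinds `time`, shadowing the module
# time.sleep(0.1)                 # would now raise AttributeError

import time                       # restore the module binding

start = time.time()
sum(range(1_000_000))
elapsed_time = time.time() - start  # the rename keeps the module usable
time.sleep(0.01)                    # still works
```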
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6312/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6312/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2473
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2473/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2473/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2473/events
|
https://github.com/huggingface/datasets/pull/2473
| 917,538,629
|
MDExOlB1bGxSZXF1ZXN0NjY3MDU5MjI5
| 2,473
|
Add Disfl-QA
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then.",
"I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)\r\n"
] | 2021-06-10T16:18:00Z
| 2021-07-29T11:56:19Z
| 2021-07-29T11:56:18Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2473",
"merged_at": "2021-07-29T11:56:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2473"
}
|
Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2473/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2473/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5556
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5556/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5556/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5556/events
|
https://github.com/huggingface/datasets/pull/5556
| 1,593,246,936
|
PR_kwDODunzps5KauVL
| 5,556
|
Use default audio resampling type
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008730 / 0.011353 (-0.002623) | 0.004551 / 0.011008 (-0.006457) | 0.100206 / 0.038508 (0.061698) | 0.030264 / 0.023109 (0.007154) | 0.303310 / 0.275898 (0.027412) | 0.339040 / 0.323480 (0.015560) | 0.006923 / 0.007986 (-0.001063) | 0.004707 / 0.004328 (0.000379) | 0.077822 / 0.004250 (0.073571) | 0.034368 / 0.037052 (-0.002684) | 0.303125 / 0.258489 (0.044636) | 0.348322 / 0.293841 (0.054481) | 0.033831 / 0.128546 (-0.094715) | 0.011459 / 0.075646 (-0.064187) | 0.322092 / 0.419271 (-0.097180) | 0.047720 / 0.043533 (0.004187) | 0.304849 / 0.255139 (0.049710) | 0.330767 / 0.283200 (0.047567) | 0.087362 / 0.141683 (-0.054321) | 1.536095 / 1.452155 (0.083941) | 1.599979 / 1.492716 (0.107263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188985 / 0.018006 (0.170979) | 0.410775 / 0.000490 (0.410286) | 0.004215 / 0.000200 (0.004015) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023124 / 0.037411 (-0.014287) | 0.096962 / 0.014526 (0.082436) | 0.104070 / 0.176557 (-0.072486) | 0.141248 / 0.737135 (-0.595887) | 0.108534 / 0.296338 (-0.187804) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417118 / 0.215209 (0.201909) | 4.167808 / 2.077655 (2.090154) | 2.016540 / 1.504120 (0.512420) | 1.847812 / 1.541195 (0.306617) | 1.967023 / 1.468490 
(0.498532) | 0.689262 / 4.584777 (-3.895515) | 3.378747 / 3.745712 (-0.366965) | 1.854126 / 5.269862 (-3.415735) | 1.152102 / 4.565676 (-3.413575) | 0.081839 / 0.424275 (-0.342437) | 0.012426 / 0.007607 (0.004819) | 0.521334 / 0.226044 (0.295289) | 5.230593 / 2.268929 (2.961664) | 2.269386 / 55.444624 (-53.175238) | 1.965631 / 6.876477 (-4.910846) | 2.028994 / 2.142072 (-0.113079) | 0.802142 / 4.805227 (-4.003085) | 0.147954 / 6.500664 (-6.352710) | 0.065031 / 0.075469 (-0.010438) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235289 / 1.841788 (-0.606499) | 13.723507 / 8.074308 (5.649199) | 14.197923 / 10.191392 (4.006531) | 0.147950 / 0.680424 (-0.532473) | 0.028332 / 0.534201 (-0.505869) | 0.400180 / 0.579283 (-0.179103) | 0.418970 / 0.434364 (-0.015393) | 0.478381 / 0.540337 (-0.061957) | 0.576138 / 1.386936 (-0.810798) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006548 / 0.011353 (-0.004805) | 0.004567 / 0.011008 (-0.006441) | 0.075658 / 0.038508 (0.037150) | 0.027190 / 0.023109 (0.004080) | 0.363417 / 0.275898 (0.087518) | 0.399575 / 0.323480 (0.076095) | 0.004982 / 0.007986 (-0.003004) | 0.003364 / 0.004328 (-0.000964) | 0.074392 / 0.004250 (0.070142) | 0.038839 / 0.037052 (0.001787) | 0.361133 / 0.258489 (0.102644) | 0.408557 / 0.293841 (0.114717) | 0.031468 / 0.128546 (-0.097078) | 0.011645 / 0.075646 (-0.064001) | 0.085145 / 0.419271 (-0.334126) | 0.041775 / 0.043533 (-0.001758) | 0.348624 / 0.255139 (0.093485) | 0.389610 / 0.283200 (0.106410) | 0.088576 / 0.141683 (-0.053107) | 1.511208 / 1.452155 (0.059054) | 1.560568 / 1.492716 (0.067852) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185017 / 0.018006 (0.167011) | 0.407543 / 0.000490 (0.407053) | 0.002486 / 0.000200 (0.002286) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025181 / 0.037411 (-0.012231) | 0.099056 / 0.014526 (0.084530) | 0.108597 / 0.176557 (-0.067959) | 0.144664 / 0.737135 (-0.592471) | 0.110417 / 0.296338 (-0.185922) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434302 / 0.215209 (0.219093) | 4.327840 / 2.077655 (2.250185) | 2.059939 / 1.504120 (0.555819) | 1.853267 / 1.541195 (0.312072) | 1.906616 / 1.468490 (0.438126) | 0.700165 / 4.584777 (-3.884611) | 3.439216 / 3.745712 (-0.306496) | 2.792034 / 5.269862 (-2.477827) | 1.424852 / 4.565676 (-3.140824) | 0.083926 / 0.424275 (-0.340349) | 0.013943 / 0.007607 (0.006336) | 0.535964 / 0.226044 (0.309920) | 5.368671 / 2.268929 (3.099743) | 2.497027 / 55.444624 (-52.947597) | 2.166222 / 6.876477 (-4.710254) | 2.183766 / 2.142072 (0.041693) | 0.805886 / 4.805227 (-3.999341) | 0.152474 / 6.500664 (-6.348190) | 0.067354 / 0.075469 (-0.008115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284052 / 1.841788 (-0.557736) | 13.714066 / 8.074308 (5.639758) | 14.195212 / 10.191392 (4.003820) | 0.151815 / 0.680424 (-0.528609) | 0.016847 / 0.534201 (-0.517354) | 0.391174 / 0.579283 (-0.188109) | 0.409784 / 0.434364 (-0.024580) | 0.473880 / 0.540337 (-0.066458) | 0.561016 / 1.386936 (-0.825920) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010284 / 0.011353 (-0.001068) | 0.005654 / 0.011008 (-0.005355) | 0.100522 / 0.038508 (0.062014) | 0.039201 / 0.023109 (0.016092) | 0.320831 / 0.275898 (0.044933) | 0.365351 / 0.323480 (0.041871) | 0.009066 / 0.007986 (0.001080) | 0.005805 / 0.004328 (0.001476) | 0.076969 / 0.004250 (0.072719) | 0.045813 / 0.037052 (0.008760) | 0.327115 / 0.258489 (0.068626) | 0.362823 / 0.293841 (0.068982) | 0.040521 / 0.128546 (-0.088025) | 0.013166 / 0.075646 (-0.062481) | 0.358579 / 0.419271 (-0.060692) | 0.051753 / 0.043533 (0.008220) | 0.323741 / 0.255139 (0.068602) | 0.360211 / 0.283200 (0.077011) | 0.111534 / 0.141683 (-0.030149) | 1.594887 / 1.452155 (0.142732) | 1.651516 / 1.492716 (0.158799) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012051 / 0.018006 (-0.005956) | 0.475316 / 0.000490 (0.474826) | 0.004804 / 0.000200 (0.004604) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027480 / 0.037411 (-0.009931) | 0.112022 / 0.014526 (0.097496) | 0.121539 / 0.176557 (-0.055017) | 0.166327 / 0.737135 (-0.570809) | 0.132575 / 0.296338 (-0.163763) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418322 / 0.215209 (0.203113) | 4.149463 / 2.077655 (2.071808) | 1.890901 / 1.504120 (0.386781) | 1.682521 / 1.541195 (0.141327) | 1.716331 / 1.468490 
(0.247841) | 0.729176 / 4.584777 (-3.855601) | 4.173303 / 3.745712 (0.427591) | 2.166249 / 5.269862 (-3.103612) | 1.384623 / 4.565676 (-3.181053) | 0.095486 / 0.424275 (-0.328789) | 0.013800 / 0.007607 (0.006193) | 0.573917 / 0.226044 (0.347872) | 5.348843 / 2.268929 (3.079914) | 2.421716 / 55.444624 (-53.022909) | 2.002048 / 6.876477 (-4.874428) | 2.079493 / 2.142072 (-0.062579) | 0.882818 / 4.805227 (-3.922409) | 0.172936 / 6.500664 (-6.327728) | 0.068384 / 0.075469 (-0.007085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285704 / 1.841788 (-0.556084) | 16.036346 / 8.074308 (7.962038) | 15.181557 / 10.191392 (4.990165) | 0.194044 / 0.680424 (-0.486380) | 0.033128 / 0.534201 (-0.501073) | 0.480290 / 0.579283 (-0.098993) | 0.497525 / 0.434364 (0.063161) | 0.602304 / 0.540337 (0.061966) | 0.754273 / 1.386936 (-0.632663) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007263 / 0.011353 (-0.004090) | 0.005164 / 0.011008 (-0.005845) | 0.079833 / 0.038508 (0.041325) | 0.033974 / 0.023109 (0.010865) | 0.382057 / 0.275898 (0.106159) | 0.402924 / 0.323480 (0.079444) | 0.007273 / 0.007986 (-0.000712) | 0.004378 / 0.004328 (0.000049) | 0.080556 / 0.004250 (0.076305) | 0.047376 / 0.037052 (0.010324) | 0.379044 / 0.258489 (0.120555) | 0.422135 / 0.293841 (0.128294) | 0.038294 / 0.128546 (-0.090252) | 0.013974 / 0.075646 (-0.061672) | 0.094936 / 0.419271 (-0.324335) | 0.051033 / 0.043533 (0.007501) | 0.368197 / 0.255139 (0.113058) | 0.409627 / 0.283200 (0.126427) | 0.107365 / 0.141683 (-0.034318) | 1.537501 / 1.452155 (0.085346) | 1.618021 / 1.492716 (0.125305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227187 / 0.018006 (0.209181) | 0.473226 / 0.000490 (0.472736) | 0.006532 / 0.000200 (0.006332) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029814 / 0.037411 (-0.007597) | 0.121113 / 0.014526 (0.106587) | 0.125107 / 0.176557 (-0.051450) | 0.167008 / 0.737135 (-0.570127) | 0.128720 / 0.296338 (-0.167619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452158 / 0.215209 (0.236949) | 4.507087 / 2.077655 (2.429433) | 2.193910 / 1.504120 (0.689790) | 1.991234 / 1.541195 (0.450039) | 2.120256 / 1.468490 (0.651766) | 0.726664 / 4.584777 (-3.858113) | 4.213148 / 3.745712 (0.467436) | 4.082857 / 5.269862 (-1.187005) | 1.741018 / 4.565676 (-2.824658) | 0.090176 / 0.424275 (-0.334099) | 0.013221 / 0.007607 (0.005614) | 0.558868 / 0.226044 (0.332824) | 5.617242 / 2.268929 (3.348313) | 2.985430 / 55.444624 (-52.459194) | 2.623136 / 6.876477 (-4.253341) | 2.383177 / 2.142072 (0.241105) | 0.917237 / 4.805227 (-3.887990) | 0.178774 / 6.500664 (-6.321890) | 0.064707 / 0.075469 (-0.010762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365402 / 1.841788 (-0.476385) | 16.035773 / 8.074308 (7.961465) | 13.917612 / 10.191392 (3.726220) | 0.152191 / 0.680424 (-0.528233) | 0.020734 / 0.534201 (-0.513467) | 0.442055 / 0.579283 (-0.137228) | 0.470588 / 0.434364 (0.036224) | 0.563433 / 0.540337 (0.023096) | 0.651161 / 1.386936 (-0.735775) |\n\n</details>\n</details>\n\n\n",
"If it's good for you @polinaeterna I'd like to merge it and then run the `transformers` CI to see if it changes anything",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008829 / 0.011353 (-0.002524) | 0.004652 / 0.011008 (-0.006356) | 0.102505 / 0.038508 (0.063997) | 0.030164 / 0.023109 (0.007054) | 0.306551 / 0.275898 (0.030653) | 0.368920 / 0.323480 (0.045440) | 0.007084 / 0.007986 (-0.000902) | 0.003545 / 0.004328 (-0.000783) | 0.079402 / 0.004250 (0.075152) | 0.035296 / 0.037052 (-0.001756) | 0.312010 / 0.258489 (0.053520) | 0.348773 / 0.293841 (0.054932) | 0.034622 / 0.128546 (-0.093924) | 0.011727 / 0.075646 (-0.063920) | 0.326911 / 0.419271 (-0.092361) | 0.043832 / 0.043533 (0.000300) | 0.306357 / 0.255139 (0.051218) | 0.328744 / 0.283200 (0.045544) | 0.091954 / 0.141683 (-0.049729) | 1.563989 / 1.452155 (0.111834) | 1.591901 / 1.492716 (0.099185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194955 / 0.018006 (0.176948) | 0.412864 / 0.000490 (0.412374) | 0.003710 / 0.000200 (0.003510) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023132 / 0.037411 (-0.014279) | 0.099586 / 0.014526 (0.085060) | 0.105031 / 0.176557 (-0.071525) | 0.141206 / 0.737135 (-0.595929) | 0.111978 / 0.296338 (-0.184360) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413729 / 0.215209 (0.198520) | 4.161713 / 2.077655 (2.084058) | 1.887442 / 1.504120 (0.383322) | 1.711847 / 1.541195 (0.170653) | 1.756833 / 1.468490 
(0.288343) | 0.699239 / 4.584777 (-3.885538) | 3.346213 / 3.745712 (-0.399499) | 2.822289 / 5.269862 (-2.447573) | 1.475650 / 4.565676 (-3.090027) | 0.082800 / 0.424275 (-0.341475) | 0.012302 / 0.007607 (0.004695) | 0.523068 / 0.226044 (0.297024) | 5.242833 / 2.268929 (2.973904) | 2.310768 / 55.444624 (-53.133856) | 1.954629 / 6.876477 (-4.921847) | 2.015563 / 2.142072 (-0.126510) | 0.812613 / 4.805227 (-3.992614) | 0.149512 / 6.500664 (-6.351152) | 0.065162 / 0.075469 (-0.010307) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270177 / 1.841788 (-0.571610) | 13.664765 / 8.074308 (5.590457) | 14.317968 / 10.191392 (4.126576) | 0.138135 / 0.680424 (-0.542289) | 0.028503 / 0.534201 (-0.505698) | 0.402921 / 0.579283 (-0.176362) | 0.400999 / 0.434364 (-0.033365) | 0.470983 / 0.540337 (-0.069355) | 0.544319 / 1.386936 (-0.842617) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006841 / 0.011353 (-0.004512) | 0.004570 / 0.011008 (-0.006439) | 0.076398 / 0.038508 (0.037890) | 0.028136 / 0.023109 (0.005027) | 0.339864 / 0.275898 (0.063966) | 0.375496 / 0.323480 (0.052016) | 0.004967 / 0.007986 (-0.003019) | 0.003411 / 0.004328 (-0.000917) | 0.075727 / 0.004250 (0.071476) | 0.040025 / 0.037052 (0.002973) | 0.340473 / 0.258489 (0.081984) | 0.384396 / 0.293841 (0.090555) | 0.031683 / 0.128546 (-0.096863) | 0.011752 / 0.075646 (-0.063894) | 0.085635 / 0.419271 (-0.333636) | 0.042764 / 0.043533 (-0.000769) | 0.339417 / 0.255139 (0.084278) | 0.364190 / 0.283200 (0.080991) | 0.093842 / 0.141683 (-0.047841) | 1.480999 / 1.452155 (0.028844) | 1.549752 / 1.492716 (0.057036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174146 / 0.018006 (0.156140) | 0.415459 / 0.000490 (0.414970) | 0.002854 / 0.000200 (0.002654) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024671 / 0.037411 (-0.012740) | 0.101229 / 0.014526 (0.086703) | 0.108841 / 0.176557 (-0.067716) | 0.144530 / 0.737135 (-0.592606) | 0.112509 / 0.296338 (-0.183829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460561 / 0.215209 (0.245352) | 4.591139 / 2.077655 (2.513484) | 2.275535 / 1.504120 (0.771415) | 2.070976 / 1.541195 (0.529781) | 2.028766 / 1.468490 (0.560276) | 0.706166 / 4.584777 (-3.878611) | 3.408498 / 3.745712 (-0.337215) | 3.034665 / 5.269862 (-2.235197) | 1.586805 / 4.565676 (-2.978872) | 0.083355 / 0.424275 (-0.340920) | 0.012460 / 0.007607 (0.004853) | 0.565256 / 0.226044 (0.339212) | 5.662643 / 2.268929 (3.393715) | 2.697019 / 55.444624 (-52.747605) | 2.302044 / 6.876477 (-4.574433) | 2.373081 / 2.142072 (0.231009) | 0.809804 / 4.805227 (-3.995423) | 0.151481 / 6.500664 (-6.349183) | 0.066870 / 0.075469 (-0.008599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257293 / 1.841788 (-0.584495) | 14.059454 / 8.074308 (5.985146) | 13.783251 / 10.191392 (3.591859) | 0.140007 / 0.680424 (-0.540417) | 0.016624 / 0.534201 (-0.517577) | 0.381703 / 0.579283 (-0.197580) | 0.389032 / 0.434364 (-0.045332) | 0.466127 / 0.540337 (-0.074211) | 0.551052 / 1.386936 (-0.835884) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-21T10:45:50Z
| 2023-02-21T12:49:50Z
| 2023-02-21T12:42:52Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5556.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5556",
"merged_at": "2023-02-21T12:42:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5556.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5556"
}
|
...instead of relying on the optional librosa dependency `resampy`.
It was only used in `_decode_non_mp3_file_like` anyway, and not in the other decoding methods; removing it makes the decoding methods consistent (except torchaudio decoding).
Therefore I think this is a better solution than adding `resampy` as a dependency in https://github.com/huggingface/datasets/pull/5554
cc @polinaeterna
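For context, a minimal sketch of resampling without `resampy` (an illustration under assumptions, not the PR's actual diff; the helper name is hypothetical). Recent `librosa` versions default to the `soxr` backend for `res_type`, whereas `res_type="kaiser_best"` would pull in `resampy`:
```python
import numpy as np
import librosa

def resample(array: np.ndarray, orig_sr: int, target_sr: int) -> np.ndarray:
    # Omitting res_type uses librosa's default resampler ("soxr_hq" in
    # recent releases), which does not require the optional resampy package.
    return librosa.resample(array, orig_sr=orig_sr, target_sr=target_sr)
```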
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5556/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5556/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3231
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3231/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3231/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3231/events
|
https://github.com/huggingface/datasets/pull/3231
| 1,047,170,906
|
PR_kwDODunzps4uNmWT
| 3,231
|
Group tests in multiprocessing workers by test file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-11-08T08:46:03Z
| 2021-11-08T13:19:18Z
| 2021-11-08T08:59:44Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3231",
"merged_at": "2021-11-08T08:59:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3231"
}
|
By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker.
Therefore, the fixture `hf_token` will be called only once (and from the same worker).
Related to: #3200.
Fix #3219.
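For reference, one way to get this grouping with `pytest-xdist` is its `loadfile` distribution mode (a hedged illustration of the mechanism; the PR's actual change may differ):
```
pytest -n 2 --dist loadfile tests/
```
With `--dist loadfile`, all tests from a given file go to the same worker, so per-file setup is not split across workers.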
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3231/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3231/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4275
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4275/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4275/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4275/events
|
https://github.com/huggingface/datasets/issues/4275
| 1,224,943,414
|
I_kwDODunzps5JAyc2
| 4,275
|
CommonSenseQA has missing and inconsistent field names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vblagoje",
"id": 458335,
"login": "vblagoje",
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vblagoje"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. "
] | 2022-05-04T05:38:59Z
| 2022-05-04T11:41:18Z
| null |
CONTRIBUTOR
| null | null | null |
## Describe the bug
In short, the CommonSenseQA implementation is inconsistent with the original dataset.
More precisely, we need to:
1. Add the original dataset's "id" field; the current implementation instead regenerates a monotonically increasing id.
2. The ["question"]["stem"] field is flattened into "question"; we should match the original dataset and unflatten it.
3. Add the missing "question_concept" field to the question tree node.
4. Anything else? Go over the data structure of the repaired CommonSenseQA and make sure it matches the original (a sketch of an original record is shown below).
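For reference, a sketch of a single record in the original CommonSenseQA JSONL (field values are illustrative):
```python
original_example = {
    "id": "075e483d21c29a511267ef62bedc0461",
    "question": {
        "question_concept": "punishing",
        "choices": [
            {"label": "A", "text": "ignore"},
            {"label": "B", "text": "enforce"},
            {"label": "C", "text": "authoritarian"},
            {"label": "D", "text": "yell at"},
            {"label": "E", "text": "avoid"},
        ],
        "stem": "The sanctions against the school were a punishing blow, and they seemed to what the efforts the school had made to change?",
    },
    "answerKey": "A",
}
```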
## Expected results
Every data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4275/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4275/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4862
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4862/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4862/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4862/events
|
https://github.com/huggingface/datasets/issues/4862
| 1,343,464,699
|
I_kwDODunzps5QE6T7
| 4,862
|
Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yana-xuyan",
"id": 38536635,
"login": "yana-xuyan",
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yana-xuyan"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"What's more, the downloaded data is actually a folder instead of an excel file.",
"Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets==2.4.0`. ",
"Hi @yana-xuyan, thanks for reporting.\r\n\r\nIndeed you already found the answer: an Excel file should be just downloaded and not downloaded-and-extracted.\r\n\r\nThe reason why is that if you call also extract, our library will try to infer the compression format (and extract it). And Excel files are viewed as ZIP files and extracted as so (into a directory). This is because the Office Open XML is indeed a zipped file under the hood): https://en.wikipedia.org/wiki/Office_Open_XML\r\n> Office Open XML (also informally known as OOXML) is a **zipped**, XML-based file format\r\n```python\r\nimport zipfile\r\n\r\nzipfile.is_zipfile(\"filename.xlsx\")\r\n```\r\nreturns `True`.",
"Hi @albertvillanova, thank you for your reply! Do you have any clue on why the same error still exists with `datasets==2.4.0` even after I don't extract the downloaded file? FYI, if I downgrade to `datasets==2.2.2`, the code works fine.",
"I guess this has to do with the cache: you should remove the previously-wrongly generated directory from the cache; otherwise `datasets` tries to re-use it."
] | 2022-08-18T18:36:14Z
| 2022-08-31T09:25:08Z
| 2022-08-31T09:25:08Z
|
NONE
| null | null | null |
## Describe the bug
When loading an Excel dataset with a custom loading script, `pd.read_excel` fails in `_generate_examples` with `AttributeError: 'xPath' object has no attribute 'read'`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
# The dataset function is as follows:
from pathlib import Path
from typing import Dict, List, Tuple
import datasets
import pandas as pd
_CITATION = """\
"""
_DATASETNAME = "jadi_ide"
_DESCRIPTION = """\
"""
_HOMEPAGE = ""
_LICENSE = "Unknown"
_URLS = {
    _DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx",
}
_SOURCE_VERSION = "1.0.0"
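# NOTE: NusantaraConfig below is a project-specific subclass of
# datasets.BuilderConfig; it is assumed to be imported from elsewhere
# in the project (it is not defined in this snippet).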
class JaDi_Ide(datasets.GeneratorBasedBuilder):
    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)

    BUILDER_CONFIGS = [
        NusantaraConfig(
            name="jadi_ide_source",
            version=SOURCE_VERSION,
            description="JaDi-Ide source schema",
            schema="source",
            subset_id="jadi_ide",
        ),
    ]

    DEFAULT_CONFIG_NAME = "source"

    def _info(self) -> datasets.DatasetInfo:
        if self.config.schema == "source":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "label": datasets.Value("string"),
                }
            )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""
        # Dataset does not have a predetermined split, putting all as TRAIN
        urls = _URLS[_DATASETNAME]
        base_dir = Path(dl_manager.download_and_extract(urls))
        data_files = {"train": base_dir}
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_files["train"],
                    "split": "train",
                },
            ),
        ]

    def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]:
        """Yields examples as (key, example) tuples."""
        df = pd.read_excel(filepath, engine="openpyxl")
        df.columns = ["id", "text", "label"]
        if self.config.schema == "source":
            for row in df.itertuples():
                ex = {
                    "id": str(row.id),
                    "text": row.text,
                    "label": row.label,
                }
                yield row.id, ex
```
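Per the maintainer's explanation in the comments above, `.xlsx` files are zip archives under the hood, so `download_and_extract` unpacks them into a directory. A sketch of the relevant one-line change in `_split_generators`:
```python
# Download without extracting: pd.read_excel needs the .xlsx file itself,
# not the directory that extraction would produce.
base_dir = Path(dl_manager.download(urls))
```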
## Expected results
Expecting to load the dataset smoothly.
## Actual results
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples
df = pd.read_excel(filepath, engine='openpyxl')
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel
return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs)
AttributeError: 'xPath' object has no attribute 'read'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.4
- PyArrow version: 9.0.0
- Pandas version: 0.25.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4862/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4862/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2388
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2388/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2388/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2388/events
|
https://github.com/huggingface/datasets/issues/2388
| 897,767,470
|
MDU6SXNzdWU4OTc3Njc0NzA=
| 2,388
|
Incorrect URLs for some datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[] | 2021-05-21T07:22:35Z
| 2021-06-04T17:39:45Z
| 2021-06-04T17:39:45Z
|
MEMBER
| null | null | null |
## Describe the bug
It seems that the URLs for the following datasets are invalid:
- [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a
- [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/
As a result, we can no longer load these datasets using `load_dataset`. The simple fix is to update the URL in each dataset script - I will do this ASAP.
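A sketch of the kind of one-line change each script needs (the filename below is a placeholder, not the actual renamed file):
```python
# Hypothetical: point _URL at the file's new location upstream.
_URL = "https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/<renamed_file>.csv"
```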
## Steps to reproduce the bug
```python
from datasets import load_dataset
# pick one of the datasets from the list above
ds = load_dataset("bn_hate_speech")
```
## Expected results
Dataset loads without error.
## Actual results
```
Downloading: 3.36kB [00:00, 1.07MB/s]
Downloading: 2.03kB [00:00, 678kB/s]
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators
train_path = dl_manager.download_and_extract(_URL)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path
output_path = get_from_cache(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2388/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2388/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6462
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6462/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6462/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6462/events
|
https://github.com/huggingface/datasets/pull/6462
| 2,019,238,388
|
PR_kwDODunzps5gz68T
| 6,462
|
Missing DatasetNotFoundError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005594 / 0.011353 (-0.005759) | 0.003672 / 0.011008 (-0.007337) | 0.062796 / 0.038508 (0.024288) | 0.059432 / 0.023109 (0.036323) | 0.253976 / 0.275898 (-0.021922) | 0.281155 / 0.323480 (-0.042325) | 0.003023 / 0.007986 (-0.004962) | 0.003320 / 0.004328 (-0.001008) | 0.049059 / 0.004250 (0.044809) | 0.040252 / 0.037052 (0.003200) | 0.259526 / 0.258489 (0.001037) | 0.318798 / 0.293841 (0.024957) | 0.027883 / 0.128546 (-0.100663) | 0.010883 / 0.075646 (-0.064763) | 0.206948 / 0.419271 (-0.212323) | 0.036335 / 0.043533 (-0.007198) | 0.253209 / 0.255139 (-0.001930) | 0.275173 / 0.283200 (-0.008026) | 0.020365 / 0.141683 (-0.121318) | 1.121630 / 1.452155 (-0.330524) | 1.174680 / 1.492716 (-0.318036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098372 / 0.018006 (0.080366) | 0.309949 / 0.000490 (0.309460) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019495 / 0.037411 (-0.017916) | 0.062321 / 0.014526 (0.047795) | 0.074525 / 0.176557 (-0.102031) | 0.121832 / 0.737135 (-0.615303) | 0.077612 / 0.296338 (-0.218727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288156 / 0.215209 (0.072947) | 2.816411 / 2.077655 (0.738756) | 1.497926 / 1.504120 (-0.006193) | 1.378137 / 1.541195 (-0.163058) | 1.446466 / 
1.468490 (-0.022024) | 0.566195 / 4.584777 (-4.018582) | 2.391933 / 3.745712 (-1.353780) | 2.929290 / 5.269862 (-2.340572) | 1.828215 / 4.565676 (-2.737462) | 0.063312 / 0.424275 (-0.360963) | 0.005199 / 0.007607 (-0.002408) | 0.342883 / 0.226044 (0.116838) | 3.378388 / 2.268929 (1.109459) | 1.865710 / 55.444624 (-53.578915) | 1.573442 / 6.876477 (-5.303035) | 1.631228 / 2.142072 (-0.510845) | 0.651614 / 4.805227 (-4.153613) | 0.118177 / 6.500664 (-6.382487) | 0.043303 / 0.075469 (-0.032166) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950694 / 1.841788 (-0.891094) | 12.559851 / 8.074308 (4.485543) | 10.751123 / 10.191392 (0.559731) | 0.143107 / 0.680424 (-0.537317) | 0.014469 / 0.534201 (-0.519732) | 0.289531 / 0.579283 (-0.289752) | 0.267316 / 0.434364 (-0.167047) | 0.327748 / 0.540337 (-0.212590) | 0.437758 / 1.386936 (-0.949178) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005684) | 0.003831 / 0.011008 (-0.007177) | 0.049096 / 0.038508 (0.010588) | 0.061408 / 0.023109 (0.038299) | 0.274571 / 0.275898 (-0.001327) | 0.299978 / 0.323480 (-0.023501) | 0.004216 / 0.007986 (-0.003769) | 0.002848 / 0.004328 (-0.001480) | 0.048755 / 0.004250 (0.044504) | 0.042576 / 0.037052 (0.005524) | 0.276781 / 0.258489 (0.018292) | 0.300903 / 0.293841 (0.007062) | 0.030243 / 0.128546 (-0.098303) | 0.010967 / 0.075646 (-0.064679) | 0.057879 / 0.419271 (-0.361392) | 0.033206 / 0.043533 (-0.010327) | 0.277620 / 0.255139 (0.022481) | 0.296263 / 0.283200 (0.013064) | 0.019022 / 0.141683 (-0.122660) | 1.125615 / 1.452155 (-0.326539) | 1.278016 / 1.492716 (-0.214700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096836 / 0.018006 (0.078830) | 0.307491 / 0.000490 (0.307001) | 0.000230 / 0.000200 (0.000030) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021552 / 0.037411 (-0.015859) | 0.071099 / 0.014526 (0.056573) | 0.082432 / 0.176557 (-0.094124) | 0.121826 / 0.737135 (-0.615310) | 0.084902 / 0.296338 (-0.211437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.328113 / 0.215209 (0.112904) | 2.989613 / 2.077655 (0.911959) | 1.604904 / 1.504120 (0.100784) | 1.485459 / 1.541195 (-0.055735) | 1.524829 / 1.468490 (0.056339) | 0.580589 / 4.584777 (-4.004188) | 2.440087 / 3.745712 (-1.305625) | 2.944697 / 5.269862 (-2.325164) | 1.832728 / 4.565676 (-2.732949) | 0.064423 / 0.424275 (-0.359852) | 0.004991 / 0.007607 (-0.002616) | 0.357878 / 0.226044 (0.131834) | 3.515415 / 2.268929 (1.246487) | 1.964492 / 55.444624 (-53.480132) | 1.684058 / 6.876477 (-5.192418) | 1.730294 / 2.142072 (-0.411778) | 0.661228 / 4.805227 (-4.143999) | 0.122894 / 6.500664 (-6.377770) | 0.041776 / 0.075469 (-0.033693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969849 / 1.841788 (-0.871939) | 12.897067 / 8.074308 (4.822758) | 10.908200 / 10.191392 (0.716808) | 0.141139 / 0.680424 (-0.539285) | 0.015377 / 0.534201 (-0.518824) | 0.288625 / 0.579283 (-0.290658) | 0.279020 / 0.434364 (-0.155344) | 0.328386 / 0.540337 (-0.211951) | 0.590833 / 1.386936 (-0.796103) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004986 / 0.011353 (-0.006367) | 0.003070 / 0.011008 (-0.007938) | 0.062433 / 0.038508 (0.023925) | 0.050639 / 0.023109 (0.027530) | 0.241807 / 0.275898 (-0.034091) | 0.262517 / 0.323480 (-0.060963) | 0.003826 / 0.007986 (-0.004160) | 0.002602 / 0.004328 (-0.001727) | 0.048508 / 0.004250 (0.044257) | 0.037276 / 0.037052 (0.000224) | 0.245757 / 0.258489 (-0.012732) | 0.272969 / 0.293841 (-0.020871) | 0.027139 / 0.128546 (-0.101407) | 0.010265 / 0.075646 (-0.065381) | 0.207279 / 0.419271 (-0.211992) | 0.035312 / 0.043533 (-0.008221) | 0.247535 / 0.255139 (-0.007604) | 0.260668 / 0.283200 (-0.022532) | 0.016496 / 0.141683 (-0.125187) | 1.137510 / 1.452155 (-0.314645) | 1.167870 / 1.492716 (-0.324847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091743 / 0.018006 (0.073736) | 0.298649 / 0.000490 (0.298159) | 0.000208 / 0.000200 (0.000009) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019053 / 0.037411 (-0.018359) | 0.060300 / 0.014526 (0.045774) | 0.072154 / 0.176557 (-0.104402) | 0.120293 / 0.737135 (-0.616842) | 0.073923 / 0.296338 (-0.222415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283058 / 0.215209 (0.067849) | 2.769503 / 2.077655 (0.691849) | 1.457016 / 1.504120 (-0.047104) | 1.335753 / 1.541195 (-0.205441) | 1.325986 / 
1.468490 (-0.142504) | 0.562553 / 4.584777 (-4.022224) | 2.406144 / 3.745712 (-1.339568) | 2.778063 / 5.269862 (-2.491799) | 1.782199 / 4.565676 (-2.783477) | 0.062490 / 0.424275 (-0.361785) | 0.004912 / 0.007607 (-0.002695) | 0.338500 / 0.226044 (0.112456) | 3.309746 / 2.268929 (1.040818) | 1.819693 / 55.444624 (-53.624931) | 1.510295 / 6.876477 (-5.366182) | 1.578402 / 2.142072 (-0.563671) | 0.637517 / 4.805227 (-4.167710) | 0.117018 / 6.500664 (-6.383647) | 0.048149 / 0.075469 (-0.027320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939424 / 1.841788 (-0.902364) | 11.494891 / 8.074308 (3.420583) | 10.115194 / 10.191392 (-0.076198) | 0.126751 / 0.680424 (-0.553673) | 0.013567 / 0.534201 (-0.520634) | 0.282501 / 0.579283 (-0.296782) | 0.260594 / 0.434364 (-0.173770) | 0.325940 / 0.540337 (-0.214397) | 0.426186 / 1.386936 (-0.960750) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005405 / 0.011353 (-0.005948) | 0.003557 / 0.011008 (-0.007451) | 0.051139 / 0.038508 (0.012631) | 0.053446 / 0.023109 (0.030337) | 0.268051 / 0.275898 (-0.007847) | 0.292343 / 0.323480 (-0.031136) | 0.004716 / 0.007986 (-0.003269) | 0.002677 / 0.004328 (-0.001651) | 0.047634 / 0.004250 (0.043384) | 0.041062 / 0.037052 (0.004009) | 0.269225 / 0.258489 (0.010736) | 0.297462 / 0.293841 (0.003621) | 0.029292 / 0.128546 (-0.099254) | 0.010947 / 0.075646 (-0.064699) | 0.057845 / 0.419271 (-0.361426) | 0.032793 / 0.043533 (-0.010740) | 0.265308 / 0.255139 (0.010169) | 0.288242 / 0.283200 (0.005043) | 0.018311 / 0.141683 (-0.123372) | 1.140957 / 1.452155 (-0.311197) | 1.204883 / 1.492716 (-0.287833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091375 / 0.018006 (0.073368) | 0.285922 / 0.000490 (0.285432) | 0.000238 / 0.000200 (0.000038) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021277 / 0.037411 (-0.016134) | 0.068853 / 0.014526 (0.054328) | 0.081002 / 0.176557 (-0.095555) | 0.120998 / 0.737135 (-0.616138) | 0.082741 / 0.296338 (-0.213598) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299398 / 0.215209 (0.084189) | 2.909622 / 2.077655 (0.831967) | 1.624381 / 1.504120 (0.120261) | 1.501683 / 1.541195 (-0.039512) | 1.523045 / 1.468490 (0.054555) | 0.548960 / 4.584777 (-4.035817) | 2.413297 / 3.745712 (-1.332415) | 2.817852 / 5.269862 (-2.452010) | 1.754407 / 4.565676 (-2.811270) | 0.061912 / 0.424275 (-0.362363) | 0.004880 / 0.007607 (-0.002727) | 0.353989 / 0.226044 (0.127944) | 3.496147 / 2.268929 (1.227219) | 2.003026 / 55.444624 (-53.441598) | 1.702013 / 6.876477 (-5.174463) | 1.680935 / 2.142072 (-0.461137) | 0.630183 / 4.805227 (-4.175044) | 0.113786 / 6.500664 (-6.386878) | 0.040061 / 0.075469 (-0.035408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957218 / 1.841788 (-0.884569) | 11.914469 / 8.074308 (3.840160) | 10.488896 / 10.191392 (0.297504) | 0.129292 / 0.680424 (-0.551132) | 0.016603 / 0.534201 (-0.517598) | 0.287367 / 0.579283 (-0.291916) | 0.271332 / 0.434364 (-0.163032) | 0.325577 / 0.540337 (-0.214761) | 0.560553 / 1.386936 (-0.826383) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-30T18:09:43Z
| 2023-11-30T18:36:40Z
| 2023-11-30T18:30:30Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6462",
"merged_at": "2023-11-30T18:30:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6462"
}
|
Continuation of https://github.com/huggingface/datasets/pull/6431.
This should fix the CI in https://github.com/huggingface/datasets/pull/6458 too.
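For reference, a minimal sketch of how this surfaces to users (assuming the exception is exposed as `datasets.exceptions.DatasetNotFoundError`, as in recent releases):
```python
from datasets import load_dataset
from datasets.exceptions import DatasetNotFoundError

try:
    ds = load_dataset("nonexistent-user/nonexistent-dataset")
except DatasetNotFoundError as err:
    # Raised when the repo is missing on the Hub, or is gated/private
    # without valid authentication.
    print(f"Dataset not found: {err}")
```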
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6462/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6462/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3490
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3490/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3490/events
|
https://github.com/huggingface/datasets/issues/3490
| 1,089,730,181
|
I_kwDODunzps5A8_aF
| 3,490
|
Does datasets support load text from HDFS?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4",
"events_url": "https://api.github.com/users/dancingpipi/events{/privacy}",
"followers_url": "https://api.github.com/users/dancingpipi/followers",
"following_url": "https://api.github.com/users/dancingpipi/following{/other_user}",
"gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dancingpipi",
"id": 20511825,
"login": "dancingpipi",
"node_id": "MDQ6VXNlcjIwNTExODI1",
"organizations_url": "https://api.github.com/users/dancingpipi/orgs",
"received_events_url": "https://api.github.com/users/dancingpipi/received_events",
"repos_url": "https://api.github.com/users/dancingpipi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dancingpipi"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)"
] | 2021-12-28T08:56:02Z
| 2022-02-14T14:00:51Z
| null |
NONE
| null | null | null |
The raw text data is stored on HDFS because the dataset is too large to keep on my development machine,
so I wonder: does `datasets` support reading data from HDFS?
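For anyone landing here later, a hedged sketch of what this could look like, assuming a `datasets` release that resolves `data_files` through fsspec and an installed HDFS fsspec backend (the host/port values are placeholders):
```python
from datasets import load_dataset

# Assumption: fsspec-compatible URLs are accepted for data_files and
# storage_options are forwarded to the hdfs filesystem.
ds = load_dataset(
    "text",
    data_files="hdfs://namenode:8020/path/to/corpus/*.txt",
    storage_options={"host": "namenode", "port": 8020},
)
```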
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3490/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1881
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1881/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1881/events
|
https://github.com/huggingface/datasets/pull/1881
| 808,578,200
|
MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw
| 1,881
|
`list_datasets()` returns a list of strings, not objects
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pminervini",
"id": 227357,
"login": "pminervini",
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"repos_url": "https://api.github.com/users/pminervini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pminervini"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-15T14:20:15Z
| 2021-02-15T15:09:49Z
| 2021-02-15T15:09:48Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1881.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1881",
"merged_at": "2021-02-15T15:09:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1881.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1881"
}
|
Here and there in the docs there are still snippets like this:
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset.id for dataset in datasets_list))
```
However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
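For illustration, a corrected snippet consistent with this PR — each element is already a string id, so no attribute access is needed:
```python
# Sketch of the corrected usage: list_datasets() yields plain string ids.
from datasets import list_datasets

datasets_list = list_datasets()
print(', '.join(datasets_list))
```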
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1881/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/877/events
|
https://github.com/huggingface/datasets/issues/877
| 748,234,438
|
MDU6SXNzdWU3NDgyMzQ0Mzg=
| 877
|
DataLoader(datasets) becomes slower and slower over iterations
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shexuan",
"id": 25664170,
"login": "shexuan",
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"repos_url": "https://api.github.com/users/shexuan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shexuan"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not",
"> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\n> It would be nice to know whether it comes from the dataloader or not\r\n\r\nI did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s."
] | 2020-11-22T12:41:10Z
| 2020-11-29T15:45:12Z
| 2020-11-29T15:45:12Z
|
NONE
| null | null | null |
Hello, when I loop over my dataloader, the loading speed gets slower and slower!
```python
from datasets import load_from_disk
from torch.utils.data import DataLoader
from tqdm import tqdm

dataset = load_from_disk(dataset_path)  # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
    pass  # do some thing for each line
```
In the beginning, the loading speed is around 2000it/s, but one minute later it is much slower, just around 800it/s.
And when I set `num_workers=4` in the DataLoader, the loading speed is much lower, just 130it/s.
Could you please help me with this problem?
Thanks a lot!
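
Editor's note: following the maintainer's suggestion in the comments, a minimal way to check whether the slowdown comes from the `DataLoader` or from the dataset itself is to time plain iteration over the raw dataset (a sketch assuming the same `dataset_path` as above):
```python
# Sketch: iterate the raw dataset with no DataLoader and watch the it/s.
from datasets import load_from_disk
from tqdm import tqdm

dataset = load_from_disk(dataset_path)  # same path as in the report above
for line in tqdm(dataset):
    pass  # no per-line work, so any degradation comes from reading itself
```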
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/877/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6241
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6241/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6241/events
|
https://github.com/huggingface/datasets/pull/6241
| 1,896,429,694
|
PR_kwDODunzps5aVfl-
| 6,241
|
Remove unused global variables in `audio.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004027 / 0.011008 (-0.006982) | 0.084200 / 0.038508 (0.045692) | 0.072233 / 0.023109 (0.049124) | 0.361535 / 0.275898 (0.085637) | 0.386196 / 0.323480 (0.062716) | 0.004047 / 0.007986 (-0.003939) | 0.003416 / 0.004328 (-0.000912) | 0.064724 / 0.004250 (0.060474) | 0.055740 / 0.037052 (0.018688) | 0.360422 / 0.258489 (0.101933) | 0.399230 / 0.293841 (0.105389) | 0.031537 / 0.128546 (-0.097009) | 0.008630 / 0.075646 (-0.067016) | 0.289652 / 0.419271 (-0.129620) | 0.052881 / 0.043533 (0.009348) | 0.359538 / 0.255139 (0.104399) | 0.379410 / 0.283200 (0.096211) | 0.024539 / 0.141683 (-0.117144) | 1.470891 / 1.452155 (0.018736) | 1.578879 / 1.492716 (0.086163) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239200 / 0.018006 (0.221194) | 0.462100 / 0.000490 (0.461610) | 0.009055 / 0.000200 (0.008856) | 0.000406 / 0.000054 (0.000352) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028736 / 0.037411 (-0.008675) | 0.088051 / 0.014526 (0.073525) | 0.098101 / 0.176557 (-0.078456) | 0.152399 / 0.737135 (-0.584737) | 0.098776 / 0.296338 (-0.197563) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401761 / 0.215209 (0.186552) | 4.014143 / 2.077655 (1.936488) | 2.033255 / 1.504120 (0.529135) | 1.855347 / 1.541195 (0.314152) | 1.996144 / 1.468490 
(0.527654) | 0.488545 / 4.584777 (-4.096232) | 3.712030 / 3.745712 (-0.033682) | 3.439725 / 5.269862 (-1.830137) | 2.119289 / 4.565676 (-2.446388) | 0.057523 / 0.424275 (-0.366752) | 0.007780 / 0.007607 (0.000173) | 0.479522 / 0.226044 (0.253477) | 4.798218 / 2.268929 (2.529290) | 2.543816 / 55.444624 (-52.900809) | 2.180392 / 6.876477 (-4.696085) | 2.427195 / 2.142072 (0.285122) | 0.602071 / 4.805227 (-4.203156) | 0.133450 / 6.500664 (-6.367214) | 0.061975 / 0.075469 (-0.013494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250040 / 1.841788 (-0.591748) | 19.532327 / 8.074308 (11.458019) | 14.200298 / 10.191392 (4.008906) | 0.165165 / 0.680424 (-0.515259) | 0.018326 / 0.534201 (-0.515875) | 0.389788 / 0.579283 (-0.189495) | 0.419301 / 0.434364 (-0.015063) | 0.452645 / 0.540337 (-0.087693) | 0.643409 / 1.386936 (-0.743527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007040 / 0.011353 (-0.004313) | 0.004157 / 0.011008 (-0.006851) | 0.065439 / 0.038508 (0.026931) | 0.083210 / 0.023109 (0.060101) | 0.406707 / 0.275898 (0.130809) | 0.442759 / 0.323480 (0.119279) | 0.006321 / 0.007986 (-0.001665) | 0.003684 / 0.004328 (-0.000645) | 0.064517 / 0.004250 (0.060266) | 0.060676 / 0.037052 (0.023624) | 0.413395 / 0.258489 (0.154906) | 0.446776 / 0.293841 (0.152935) | 0.032542 / 0.128546 (-0.096004) | 0.008614 / 0.075646 (-0.067033) | 0.071760 / 0.419271 (-0.347511) | 0.049646 / 0.043533 (0.006113) | 0.402409 / 0.255139 (0.147270) | 0.422775 / 0.283200 (0.139575) | 0.024846 / 0.141683 (-0.116836) | 1.522915 / 1.452155 (0.070761) | 1.566518 / 1.492716 (0.073802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234478 / 0.018006 (0.216472) | 0.461318 / 0.000490 (0.460828) | 0.006304 / 0.000200 (0.006105) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036904 / 0.037411 (-0.000508) | 0.102144 / 0.014526 (0.087619) | 0.108985 / 0.176557 (-0.067572) | 0.162609 / 0.737135 (-0.574526) | 0.110295 / 0.296338 (-0.186044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438735 / 0.215209 (0.223526) | 4.377602 / 2.077655 (2.299948) | 2.375305 / 1.504120 (0.871185) | 2.215877 / 1.541195 (0.674682) | 2.317468 / 1.468490 (0.848978) | 0.495137 / 4.584777 (-4.089640) | 3.726323 / 3.745712 (-0.019389) | 3.493785 / 5.269862 (-1.776077) | 2.177891 / 4.565676 (-2.387785) | 0.058975 / 0.424275 (-0.365300) | 0.007897 / 0.007607 (0.000290) | 0.514063 / 0.226044 (0.288019) | 5.132714 / 2.268929 (2.863786) | 2.914125 / 55.444624 (-52.530499) | 2.532912 / 6.876477 (-4.343564) | 2.776438 / 2.142072 (0.634365) | 0.624831 / 4.805227 (-4.180396) | 0.135023 / 6.500664 (-6.365641) | 0.062040 / 0.075469 (-0.013429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359970 / 1.841788 (-0.481818) | 20.816464 / 8.074308 (12.742156) | 16.103544 / 10.191392 (5.912152) | 0.149120 / 0.680424 (-0.531304) | 0.020279 / 0.534201 (-0.513922) | 0.408727 / 0.579283 (-0.170556) | 0.436191 / 0.434364 (0.001827) | 0.485056 / 0.540337 (-0.055281) | 0.737727 / 1.386936 (-0.649209) |\n\n</details>\n</details>\n\n\n",
"CI failures are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008102 / 0.011353 (-0.003251) | 0.004886 / 0.011008 (-0.006123) | 0.090482 / 0.038508 (0.051974) | 0.071594 / 0.023109 (0.048485) | 0.428678 / 0.275898 (0.152780) | 0.442179 / 0.323480 (0.118699) | 0.004329 / 0.007986 (-0.003657) | 0.003756 / 0.004328 (-0.000573) | 0.087125 / 0.004250 (0.082874) | 0.055159 / 0.037052 (0.018107) | 0.437646 / 0.258489 (0.179157) | 0.446665 / 0.293841 (0.152824) | 0.046402 / 0.128546 (-0.082145) | 0.014248 / 0.075646 (-0.061398) | 0.331401 / 0.419271 (-0.087871) | 0.062010 / 0.043533 (0.018478) | 0.434774 / 0.255139 (0.179635) | 0.441063 / 0.283200 (0.157863) | 0.037424 / 0.141683 (-0.104258) | 1.720276 / 1.452155 (0.268121) | 1.731491 / 1.492716 (0.238775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302935 / 0.018006 (0.284929) | 0.590556 / 0.000490 (0.590067) | 0.014473 / 0.000200 (0.014274) | 0.000712 / 0.000054 (0.000658) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031289 / 0.037411 (-0.006122) | 0.091175 / 0.014526 (0.076649) | 0.112895 / 0.176557 (-0.063661) | 0.199558 / 0.737135 (-0.537577) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571586 / 0.215209 (0.356377) | 5.706894 / 2.077655 (3.629240) | 2.512701 / 1.504120 (1.008581) | 2.151705 / 1.541195 (0.610510) | 2.252738 / 1.468490 
(0.784248) | 0.857524 / 4.584777 (-3.727253) | 5.189027 / 3.745712 (1.443315) | 4.464979 / 5.269862 (-0.804882) | 2.787486 / 4.565676 (-1.778190) | 0.090161 / 0.424275 (-0.334115) | 0.008649 / 0.007607 (0.001042) | 0.703367 / 0.226044 (0.477322) | 7.128971 / 2.268929 (4.860043) | 3.437475 / 55.444624 (-52.007149) | 2.562291 / 6.876477 (-4.314186) | 2.753419 / 2.142072 (0.611346) | 0.981964 / 4.805227 (-3.823263) | 0.194533 / 6.500664 (-6.306131) | 0.069659 / 0.075469 (-0.005810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510356 / 1.841788 (-0.331431) | 22.414117 / 8.074308 (14.339809) | 20.325418 / 10.191392 (10.134025) | 0.226823 / 0.680424 (-0.453601) | 0.029123 / 0.534201 (-0.505078) | 0.454656 / 0.579283 (-0.124627) | 0.559588 / 0.434364 (0.125224) | 0.547386 / 0.540337 (0.007048) | 0.770169 / 1.386936 (-0.616767) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010167 / 0.011353 (-0.001186) | 0.005164 / 0.011008 (-0.005844) | 0.094897 / 0.038508 (0.056388) | 0.078027 / 0.023109 (0.054918) | 0.474442 / 0.275898 (0.198544) | 0.503362 / 0.323480 (0.179882) | 0.006988 / 0.007986 (-0.000998) | 0.005369 / 0.004328 (0.001041) | 0.079547 / 0.004250 (0.075297) | 0.059382 / 0.037052 (0.022329) | 0.468759 / 0.258489 (0.210270) | 0.566780 / 0.293841 (0.272939) | 0.050791 / 0.128546 (-0.077755) | 0.013191 / 0.075646 (-0.062455) | 0.086086 / 0.419271 (-0.333186) | 0.060399 / 0.043533 (0.016866) | 0.492985 / 0.255139 (0.237846) | 0.509139 / 0.283200 (0.225940) | 0.034537 / 0.141683 (-0.107146) | 1.699166 / 1.452155 (0.247011) | 1.789781 / 1.492716 (0.297065) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278776 / 0.018006 (0.260769) | 0.615877 / 0.000490 (0.615387) | 0.009062 / 0.000200 (0.008862) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032931 / 0.037411 (-0.004481) | 0.094796 / 0.014526 (0.080270) | 0.126697 / 0.176557 (-0.049859) | 0.168172 / 0.737135 (-0.568963) | 0.113906 / 0.296338 (-0.182433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602378 / 0.215209 (0.387169) | 5.987708 / 2.077655 (3.910054) | 2.800339 / 1.504120 (1.296219) | 2.474127 / 1.541195 (0.932932) | 2.502387 / 1.468490 (1.033897) | 0.808147 / 4.584777 (-3.776630) | 5.212691 / 3.745712 (1.466979) | 4.479452 / 5.269862 (-0.790409) | 2.831960 / 4.565676 (-1.733717) | 0.086777 / 0.424275 (-0.337498) | 0.009492 / 0.007607 (0.001885) | 0.716848 / 0.226044 (0.490803) | 7.099904 / 2.268929 (4.830975) | 3.794708 / 55.444624 (-51.649916) | 2.859826 / 6.876477 (-4.016650) | 3.109673 / 2.142072 (0.967600) | 0.936776 / 4.805227 (-3.868451) | 0.195152 / 6.500664 (-6.305512) | 0.074184 / 0.075469 (-0.001285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585419 / 1.841788 (-0.256369) | 22.420377 / 8.074308 (14.346068) | 20.761533 / 10.191392 (10.570141) | 0.228480 / 0.680424 (-0.451943) | 0.030944 / 0.534201 (-0.503257) | 0.444717 / 0.579283 (-0.134566) | 0.579632 / 0.434364 (0.145268) | 0.521669 / 0.540337 (-0.018669) | 0.748274 / 1.386936 (-0.638662) |\n\n</details>\n</details>\n\n\n"
] | 2023-09-14T12:06:32Z
| 2023-09-15T15:57:10Z
| 2023-09-15T15:46:07Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6241",
"merged_at": "2023-09-15T15:46:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6241"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6241/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4729/events
|
https://github.com/huggingface/datasets/pull/4729
| 1,313,374,015
|
PR_kwDODunzps473GmR
| 4,729
|
Refactor Hub tests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-21T14:43:13Z
| 2022-07-22T15:09:49Z
| 2022-07-22T14:56:29Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4729.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4729",
"merged_at": "2022-07-22T14:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4729.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4729"
}
|
This PR refactors `test_upstream_hub` by removing unittest-style tests and using the following pytest Hub fixtures:
- `ci_hub_config`
- `set_ci_hub_access_token`: to replace setUp/tearDown
- `temporary_repo` context manager: to replace `try... finally` (see the sketch after this list)
- `cleanup_repo`: to delete any repo accidentally created if one of the tests fails
This is preliminary work toward managing unit and integration tests separately.
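
For illustration, a minimal sketch of what such a `temporary_repo` context manager might look like; the names mirror the PR description, and the actual fixture in the repository may differ:
```python
# Hedged sketch of a repo-cleanup context manager for Hub tests.
from contextlib import contextmanager
from huggingface_hub import delete_repo

@contextmanager
def temporary_repo(repo_id: str):
    try:
        yield repo_id  # the test body runs here
    finally:
        delete_repo(repo_id, repo_type="dataset")  # always clean up
```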
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4729/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5819
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5819/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5819/events
|
https://github.com/huggingface/datasets/issues/5819
| 1,695,536,738
|
I_kwDODunzps5lD9Zi
| 5,819
|
Cannot pickle error in Dataset.from_generator()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xinghaow99",
"id": 50691954,
"login": "xinghaow99",
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xinghaow99"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ",
"> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions).\r\n\r\nHi! Thank you for your reply! Everything works perfectly with your suggestion!\r\n\r\nClosing the issue.\r\n"
] | 2023-05-04T08:39:09Z
| 2023-05-05T19:20:59Z
| 2023-05-05T19:20:58Z
|
NONE
| null | null | null |
### Describe the bug
I'm trying to use Dataset.from_generator() to generate a large dataset, but the call fails with `TypeError: cannot pickle 'ConfigModuleInstance' object` (full traceback below).
### Steps to reproduce the bug
Code to reproduce:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig
import torch
from tqdm import tqdm
from datasets import load_dataset
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
model = torch.compile(model)
def generate_data(data_loader):
model.eval()
for batch in tqdm(data_loader):
input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
with torch.no_grad():
outputs = model.generate(input_ids, generation_config=generation_config)
decoder_hidden_states = outputs.decoder_hidden_states
for i, h in zip(batch['instruction'], decoder_hidden_states):
yield {"instruction": i, "decoder_hidden_states": h}
generation_config = GenerationConfig(
temperature=1,
max_new_tokens=1024,
do_sample=False,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
)
from datasets import Dataset
from torch.utils.data import DataLoader
dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k")
train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True)
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
dataset.save_to_disk("data/flant5_small_generation")
```
### Expected behavior
The dataset should be generated and saved.
But the following error occurred:
```
Traceback (most recent call last):
File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module>
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator
return GeneratorDatasetInputStream(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__
self.builder = Generator(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__
self.config, self.config_id = self._create_builder_config(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config
config_id = builder_config.create_config_id(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash
return cls.hash_default(value)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default
return cls.hash_bytes(dumps(value))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps
dump(obj, file)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump
Pickler(file, recurse=True).dump(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump
self.save(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ConfigModuleInstance' object
```
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
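
Editor's note: a minimal sketch of the fix suggested in the comments above, i.e. compiling the model *inside* the generator so the unpicklable compiled object is never part of what `from_generator` fingerprints. It requires torch >= 2.0, and the tiny linear model is a hypothetical stand-in for the real T5:
```python
# Hedged sketch: torch.compile stays local to the generator function.
import torch
from datasets import Dataset

base_model = torch.nn.Linear(4, 4)  # picklable stand-in for the real model

def generate_data():
    model = torch.compile(base_model)  # compiled object never leaves this scope
    with torch.no_grad():
        for _ in range(8):
            x = torch.randn(1, 4)
            yield {"hidden": model(x).squeeze(0).tolist()}

ds = Dataset.from_generator(generate_data)  # fingerprinting now succeeds
```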
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5819/timeline
| null |
completed
| false
|