| column | type | value stats |
| --- | --- | --- |
| url | string | lengths 58 to 61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72 to 75 |
| comments_url | string | lengths 67 to 70 |
| events_url | string | lengths 65 to 68 |
| html_url | string | lengths 46 to 51 |
| id | int64 | 599M to 1.1B |
| node_id | string | lengths 18 to 32 |
| number | int64 | 1 to 3.54k |
| title | string | lengths 1 to 276 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | list | |
| created_at | int64 | 1,587B to 1,642B |
| updated_at | int64 | 1,587B to 1,642B |
| closed_at | int64 | 1,587B to 1,641B |
| author_association | string | 3 values |
| active_lock_reason | null | |
| body | string | lengths 0 to 228k |
| reactions | dict | |
| timeline_url | string | lengths 67 to 70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/1917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1917/comments
https://api.github.com/repos/huggingface/datasets/issues/1917/events
https://github.com/huggingface/datasets/issues/1917
812,390,178
MDU6SXNzdWU4MTIzOTAxNzg=
1,917
UnicodeDecodeError: windows 10 machine
{ "login": "yosiasz", "id": 900951, "node_id": "MDQ6VXNlcjkwMDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yosiasz", "html_url": "https://github.com/yosiasz", "followers_url": "https://api.github.com/users/yosiasz/followers", "following_url": "https://api.github.com/users/yosiasz/following{/other_user}", "gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}", "starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions", "organizations_url": "https://api.github.com/users/yosiasz/orgs", "repos_url": "https://api.github.com/users/yosiasz/repos", "events_url": "https://api.github.com/users/yosiasz/events{/privacy}", "received_events_url": "https://api.github.com/users/yosiasz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "upgraded to php 3.9.2 and it works!" ]
1,613,772,785,000
1,613,774,471,000
1,613,774,428,000
NONE
null
Windows 10, Python 3.6.8. When running

```python
import datasets

oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```

I get the following error:

```
File "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined>
```
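For context (an editorial note, not part of the original report): the traceback shows Windows falling back to its cp1252 locale codec, and the reporter's own fix, per the comment above, was upgrading to Python 3.9.2. On Python 3.7+, a common workaround for this class of decode error is to run in UTF-8 mode; a minimal sketch, assuming the failure comes from the locale default encoding:

```python
# Hedged sketch of a workaround, assuming the decode error comes from the
# cp1252 locale default on Windows: enable Python's UTF-8 mode (3.7+) so
# open() defaults to UTF-8. The flag must be set before the interpreter
# starts, e.g.:
#   set PYTHONUTF8=1          (cmd)   or   python -X utf8 your_script.py
import datasets

oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```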
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1917/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1916/comments
https://api.github.com/repos/huggingface/datasets/issues/1916/events
https://github.com/huggingface/datasets/pull/1916
812,291,984
MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5
1,916
Remove unused py_utils objects
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?", "Sorry @lhoestq, I forgot to update the imports... :/", "It's fine, the CI should have caught this tbh. Not sure why it did't fail" ]
1,613,764,285,000
1,614,005,816,000
1,614,000,769,000
MEMBER
null
Remove unused/unnecessary py_utils functions/classes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1916/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1916", "html_url": "https://github.com/huggingface/datasets/pull/1916", "diff_url": "https://github.com/huggingface/datasets/pull/1916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1916.patch", "merged_at": 1614000769000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1915/comments
https://api.github.com/repos/huggingface/datasets/issues/1915/events
https://github.com/huggingface/datasets/issues/1915
812,229,654
MDU6SXNzdWU4MTIyMjk2NTQ=
1,915
Unable to download `wiki_dpr`
{ "login": "nitarakad", "id": 18504534, "node_id": "MDQ6VXNlcjE4NTA0NTM0", "avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nitarakad", "html_url": "https://github.com/nitarakad", "followers_url": "https://api.github.com/users/nitarakad/followers", "following_url": "https://api.github.com/users/nitarakad/following{/other_user}", "gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}", "starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions", "organizations_url": "https://api.github.com/users/nitarakad/orgs", "repos_url": "https://api.github.com/users/nitarakad/repos", "events_url": "https://api.github.com/users/nitarakad/events{/privacy}", "received_events_url": "https://api.github.com/users/nitarakad/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix", "I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !", "Closing since this...
1,613,758,292,000
1,614,793,248,000
1,614,793,248,000
NONE
null
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:

```python
curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")
```

However, I got the following error:

```
datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}
```

I tried adding in the flags `with_embeddings=False` and `with_index=False`:

```python
curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")
```

But I got the following error (the set lists all 50 shard URLs, `wiki_passages_0` through `wiki_passages_49`; abridged here):

```
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0', ..., 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49'}
```

Is there anything else I need to set to download the dataset?

**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
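For reference (editorial note): the maintainer's reply above suggests disabling verification until the fix is released. A sketch, assuming this maps to the `ignore_verifications` flag of `load_dataset` in this version of `datasets`:

```python
# Hedged sketch of the suggested workaround; assumes ignore_verifications=True
# is what skips the checksum / expected-files check here (the comment above
# writes ignore_verifications=False, but the intent is to bypass verification).
from datasets import load_dataset

curr_dataset = load_dataset(
    "wiki_dpr",
    with_embeddings=False,
    with_index=False,
    ignore_verifications=True,
)
```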
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1915/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1914/comments
https://api.github.com/repos/huggingface/datasets/issues/1914/events
https://github.com/huggingface/datasets/pull/1914
812,149,201
MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz
1,914
Fix logging imports and make all datasets use library logger
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,751,154,000
1,613,936,883,000
1,613,936,883,000
MEMBER
null
Fix library relative logging imports and make all datasets use library logger.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1914/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1914", "html_url": "https://github.com/huggingface/datasets/pull/1914", "diff_url": "https://github.com/huggingface/datasets/pull/1914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1914.patch", "merged_at": 1613936883000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1913/comments
https://api.github.com/repos/huggingface/datasets/issues/1913/events
https://github.com/huggingface/datasets/pull/1913
812,127,307
MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw
1,913
Add keep_linebreaks parameter to text loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?", "Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the docume...
1,613,749,425,000
1,613,759,772,000
1,613,759,771,000
MEMBER
null
As asked in #870 and https://github.com/huggingface/transformers/issues/10269, there should be a parameter to keep the linebreaks when loading a text dataset. cc @sgugger @jncasey
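A usage sketch mirroring the example quoted in the review comments above (`my_texts.txt` is a hypothetical file, not from the PR):

```python
# Minimal sketch of the new parameter, following the example in the PR thread;
# "my_texts.txt" is a placeholder. With keep_linebreaks=True each line keeps
# its trailing "\n" in the "text" column instead of having it stripped.
from datasets import load_dataset

dataset = load_dataset("text", data_files="my_texts.txt", keep_linebreaks=True)
print(repr(dataset["train"][0]["text"]))  # newline preserved at the end
```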
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1913/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1913/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1913", "html_url": "https://github.com/huggingface/datasets/pull/1913", "diff_url": "https://github.com/huggingface/datasets/pull/1913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1913.patch", "merged_at": 1613759771000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1912/comments
https://api.github.com/repos/huggingface/datasets/issues/1912/events
https://github.com/huggingface/datasets/pull/1912
812,034,140
MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx
1,912
Update: WMT - use mirror links
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "So much better - thank you for doing that, @lhoestq!", "Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893", "Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well." ]
1,613,742,154,000
1,614,174,293,000
1,614,174,293,000
MEMBER
null
As asked in #1892, I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1912/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1912", "html_url": "https://github.com/huggingface/datasets/pull/1912", "diff_url": "https://github.com/huggingface/datasets/pull/1912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1912.patch", "merged_at": 1614174293000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1911/comments
https://api.github.com/repos/huggingface/datasets/issues/1911/events
https://github.com/huggingface/datasets/issues/1911
812,009,956
MDU6SXNzdWU4MTIwMDk5NTY=
1,911
Saving processed dataset running infinitely
{ "login": "ayubSubhaniya", "id": 20911334, "node_id": "MDQ6VXNlcjIwOTExMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayubSubhaniya", "html_url": "https://github.com/ayubSubhaniya", "followers_url": "https://api.github.com/users/ayubSubhaniya/followers", "following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}", "gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions", "organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs", "repos_url": "https://api.github.com/users/ayubSubhaniya/repos", "events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}", "received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@thomwolf @lhoestq can you guys please take a look and recommend some solution.", "am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Save...
1,613,740,159,000
1,614,065,684,000
null
NONE
null
I have a text dataset of size 220M. For pre-processing, I need to tokenize it and filter out rows with long sequences. My tokenization took roughly 3 hrs: I used `map()` with batch size 1024 and multi-processing with 96 processes. The `filter()` function was way too slow, so I used a hack to call the pyarrow table filter function directly, which is damn fast, as mentioned [here](https://github.com/huggingface/datasets/issues/1796):

```python
dataset._data = dataset._data.filter(...)
```

The filter took 1 hr. Then I used `save_to_disk()` on the processed dataset and it has been running forever. I have been waiting for 8 hrs and it has not written a single byte. In fact it has actually read more than 100 GB from disk; the screenshot below shows the stats from `iotop` (the second process is the one).

<img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png">

I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the `filter()` function.
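For context (an editorial sketch, not the reporter's exact code): the hack bypasses `Dataset.filter()` by filtering the backing Arrow table directly. A minimal sketch, assuming the private `_data` attribute holds the `pyarrow.Table` in this version, with a hypothetical `num_tokens` column:

```python
# Hedged sketch of the pyarrow-level filter hack from #1796; `_data` is a
# private attribute assumed to hold the backing pyarrow.Table, and
# "num_tokens" is a hypothetical column used to build the boolean mask.
import pyarrow.compute as pc

mask = pc.less_equal(dataset._data["num_tokens"], 512)  # keep short rows only
dataset._data = dataset._data.filter(mask)
```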
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1911/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1910/comments
https://api.github.com/repos/huggingface/datasets/issues/1910/events
https://github.com/huggingface/datasets/pull/1910
811,697,108
MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3
1,910
Adding CoNLLpp dataset.
{ "login": "ZihanWangKi", "id": 21319243, "node_id": "MDQ6VXNlcjIxMzE5MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZihanWangKi", "html_url": "https://github.com/ZihanWangKi", "followers_url": "https://api.github.com/users/ZihanWangKi/followers", "following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}", "gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions", "organizations_url": "https://api.github.com/users/ZihanWangKi/orgs", "repos_url": "https://api.github.com/users/ZihanWangKi/repos", "events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}", "received_events_url": "https://api.github.com/users/ZihanWangKi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch." ]
1,613,711,550,000
1,614,895,367,000
1,614,895,367,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1910/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1910", "html_url": "https://github.com/huggingface/datasets/pull/1910", "diff_url": "https://github.com/huggingface/datasets/pull/1910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1910.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1907/comments
https://api.github.com/repos/huggingface/datasets/issues/1907/events
https://github.com/huggingface/datasets/issues/1907
811,520,569
MDU6SXNzdWU4MTE1MjA1Njk=
1,907
DBPedia14 Dataset Checksum bug?
{ "login": "francisco-perez-sorrosal", "id": 918006, "node_id": "MDQ6VXNlcjkxODAwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francisco-perez-sorrosal", "html_url": "https://github.com/francisco-perez-sorrosal", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe er...
1,613,687,148,000
1,614,036,125,000
1,614,036,124,000
CONTRIBUTOR
null
Hi there! I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I get this error:

```
Traceback (most recent call last):
  File "./conditional_classification/basic_pipeline.py", line 178, in <module>
    main()
  File "./conditional_classification/basic_pipeline.py", line 128, in main
    corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,
  File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data
    datasets = load_dataset(self.name, split=dataset_split)
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare
    self._download_and_prepare(
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare
    verify_checksums(
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']
```

I've seen this happen before for other datasets, as reported in #537. I've tried clearing my cache and calling `load_dataset` again, but it still isn't working. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check if there's a problem with the checksums? Or is this related to something else? I've also seen that the cache path for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this maybe a bug introduced recently? Thanks!
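For reference (editorial note): the reply above attributes this to Google Drive download quotas, as in #1856. Once the quota resets, a fresh download usually clears the bad cached file; a sketch, assuming `load_dataset` accepts `download_mode="force_redownload"` in this version:

```python
# Hedged sketch of a retry after the Drive quota resets; assumes
# download_mode="force_redownload" is accepted here and discards the cached,
# quota-corrupted download before fetching the data again.
from datasets import load_dataset

dataset = load_dataset("dbpedia_14", download_mode="force_redownload")
```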
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1907/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1906/comments
https://api.github.com/repos/huggingface/datasets/issues/1906/events
https://github.com/huggingface/datasets/issues/1906
811,405,274
MDU6SXNzdWU4MTE0MDUyNzQ=
1,906
Feature Request: Support for Pandas `Categorical`
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6...
open
false
null
[]
null
[ "We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corre...
1,613,677,565,000
1,614,091,130,000
null
CONTRIBUTOR
null
```python
from datasets import Dataset
import pandas as pd
import pyarrow

df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
```

I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`, e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept:

```python
index_type = generate_from_arrow_type(pa_type.index_type)
value_type = generate_from_arrow_type(pa_type.value_type)
```

and then additional code points to modify:

- FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694
- A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719
- I don't quite understand what `encode_nested_example` does, but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755
- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775

I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints.
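An illustrative sketch (editorial, not from the issue) of what the Arrow side already exposes: pandas `Categorical` round-trips to an Arrow dictionary type, whose index/value types are exactly what the proposed feature class would need to unpack:

```python
# Illustrative sketch: pandas Categorical maps to an Arrow dictionary type;
# its index_type/value_type attributes are the pieces a Map-style feature
# would have to expose.
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"col": pd.Series(["a", "b", "c", "a"], dtype="category")})
table = pa.Table.from_pandas(df)
dict_type = table.schema.field("col").type  # dictionary<values=string, indices=int8>
print(dict_type.index_type, dict_type.value_type)  # int8 string
```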
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1906/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1905/comments
https://api.github.com/repos/huggingface/datasets/issues/1905/events
https://github.com/huggingface/datasets/pull/1905
811,384,174
MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1
1,905
Standardizing datasets.dtypes
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly." ]
1,613,675,731,000
1,613,858,490,000
1,613,858,490,000
CONTRIBUTOR
null
This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here.
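For illustration (an editorial sketch; the names and the set of supported dtypes are placeholders, not the PR's actual code), the explicit-mapping idea looks like:

```python
# Illustrative registry, not the PR's real table: each supported Value dtype
# names the pyarrow factory that builds it, so parsing no longer depends on
# how str(DataType) happens to print (e.g. "double" for float64).
import pyarrow as pa

_DTYPE_TO_ARROW = {
    "bool": pa.bool_,
    "int32": pa.int32,
    "int64": pa.int64,
    "float32": pa.float32,  # str(pa.float32()) prints "float"
    "float64": pa.float64,  # str(pa.float64()) prints "double"
    "string": pa.string,
}

def string_to_arrow(dtype: str) -> pa.DataType:
    if dtype not in _DTYPE_TO_ARROW:
        raise ValueError(f"Unsupported dtype: {dtype!r}")
    return _DTYPE_TO_ARROW[dtype]()
```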
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1905/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1905", "html_url": "https://github.com/huggingface/datasets/pull/1905", "diff_url": "https://github.com/huggingface/datasets/pull/1905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1905.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1904/comments
https://api.github.com/repos/huggingface/datasets/issues/1904/events
https://github.com/huggingface/datasets/pull/1904
811,260,904
MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0
1,904
Fix to_pandas for boolean ArrayXD
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks!" ]
1,613,665,846,000
1,613,668,203,000
1,613,668,201,000
MEMBER
null
As noticed in #1887, the conversion of a dataset with a boolean ArrayXD feature type fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`: zero copy is available for all primitive types except booleans. See https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 cc @SBrandeis
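An illustrative repro of the Arrow behavior in question (editorial, not from the PR): boolean arrays are bit-packed, so `to_numpy()` cannot hand out a zero-copy view of them.

```python
# Boolean Arrow arrays are bit-packed, so converting them to numpy needs an
# explicit copy; primitive numeric arrays without nulls can be zero-copy.
import pyarrow as pa

ints = pa.array([1, 2, 3])
bools = pa.array([True, False, True])

print(ints.to_numpy())                       # zero-copy view works for int64
print(bools.to_numpy(zero_copy_only=False))  # booleans need an explicit copy
# bools.to_numpy() would raise pyarrow.lib.ArrowInvalid
```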
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1904/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1904", "html_url": "https://github.com/huggingface/datasets/pull/1904", "diff_url": "https://github.com/huggingface/datasets/pull/1904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1904.patch", "merged_at": 1613668200000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1903/comments
https://api.github.com/repos/huggingface/datasets/issues/1903/events
https://github.com/huggingface/datasets/pull/1903
811,145,531
MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2
1,903
Initial commit for the addition of TIMIT dataset
{ "login": "vrindaprabhu", "id": 16264631, "node_id": "MDQ6VXNlcjE2MjY0NjMx", "avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrindaprabhu", "html_url": "https://github.com/vrindaprabhu", "followers_url": "https://api.github.com/users/vrindaprabhu/followers", "following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions", "organizations_url": "https://api.github.com/users/vrindaprabhu/orgs", "repos_url": "https://api.github.com/users/vrindaprabhu/repos", "events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/vrindaprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@patrickvonplaten could you please review and help me close this PR?", "@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my sid...
1,613,658,192,000
1,614,591,552,000
1,614,591,552,000
CONTRIBUTOR
null
The points below need to be addressed:

- Creation of the dummy dataset is failing
- Need to check on the data representation
- License is not Creative Commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania

Also, the links (_except the download_) point to the AMI corpus! ;-) @patrickvonplaten Requesting your comments; will be happy to address them!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1903", "html_url": "https://github.com/huggingface/datasets/pull/1903", "diff_url": "https://github.com/huggingface/datasets/pull/1903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1903.patch", "merged_at": 1614591552000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1902/comments
https://api.github.com/repos/huggingface/datasets/issues/1902/events
https://github.com/huggingface/datasets/pull/1902
810,931,171
MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1
1,902
Fix setimes_2 wmt urls
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,641,346,000
1,613,642,141,000
1,613,642,141,000
MEMBER
null
Continuation of #1901. Some other URLs were missing https.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1902/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1902", "html_url": "https://github.com/huggingface/datasets/pull/1902", "diff_url": "https://github.com/huggingface/datasets/pull/1902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1902.patch", "merged_at": 1613642141000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1901/comments
https://api.github.com/repos/huggingface/datasets/issues/1901/events
https://github.com/huggingface/datasets/pull/1901
810,845,605
MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy
1,901
Fix OPUS dataset download errors
{ "login": "YangWang92", "id": 3883941, "node_id": "MDQ6VXNlcjM4ODM5NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YangWang92", "html_url": "https://github.com/YangWang92", "followers_url": "https://api.github.com/users/YangWang92/followers", "following_url": "https://api.github.com/users/YangWang92/following{/other_user}", "gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}", "starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions", "organizations_url": "https://api.github.com/users/YangWang92/orgs", "repos_url": "https://api.github.com/users/YangWang92/repos", "events_url": "https://api.github.com/users/YangWang92/events{/privacy}", "received_events_url": "https://api.github.com/users/YangWang92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,633,981,000
1,613,660,840,000
1,613,641,161,000
CONTRIBUTOR
null
Replace http with https. See https://github.com/huggingface/datasets/issues/854 and https://discuss.huggingface.co/t/cannot-download-wmt16/2081.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1901/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1901", "html_url": "https://github.com/huggingface/datasets/pull/1901", "diff_url": "https://github.com/huggingface/datasets/pull/1901.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1901.patch", "merged_at": 1613641161000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1900/comments
https://api.github.com/repos/huggingface/datasets/issues/1900/events
https://github.com/huggingface/datasets/pull/1900
810,512,488
MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3
1,900
Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!" ]
1,613,593,564,000
1,613,759,231,000
1,613,759,231,000
CONTRIBUTOR
null
Should resolve https://github.com/huggingface/datasets/issues/1895

The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.

While adding unit tests, I noticed that support for the double/float types also doesn't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant:

```python
def __post_init__(self):
    if self.dtype == "double":  # fix inferred type
        self.dtype = "float64"
    if self.dtype == "float":  # fix inferred type
        self.dtype = "float32"
```

However, since I think `Value.dtype` is part of the public interface, removing that would be a backward-incompatible change, so I didn't muck with that.

The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other. I thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!
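An illustrative round-trip check (editorial, not the PR's tests) of why the extra parsing is needed:

```python
# str() of these Arrow types does not match a bare factory name, so a naive
# name-based lookup cannot invert it; timestamp strings also embed a unit
# (and optionally a timezone).
import pyarrow as pa

print(str(pa.timestamp("ns")))  # "timestamp[ns]"
print(str(pa.float64()))        # "double", not "float64"
print(str(pa.float32()))        # "float",  not "float32"
```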
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1900", "html_url": "https://github.com/huggingface/datasets/pull/1900", "diff_url": "https://github.com/huggingface/datasets/pull/1900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1900.patch", "merged_at": 1613759231000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1899/comments
https://api.github.com/repos/huggingface/datasets/issues/1899/events
https://github.com/huggingface/datasets/pull/1899
810,308,332
MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4
1,899
Fix: ALT - fix duplicated examples in alt-parallel
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,577,236,000
1,613,582,449,000
1,613,582,449,000
MEMBER
null
As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field. This was due to a bad copy of a Python dict. This PR fixes that.
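An illustrative reduction of the bug class (editorial; not the actual ALT script): yielding the same mutable dict for every example makes all examples end up with the last values written.

```python
# Reusing one mutable dict across examples: every stored example aliases the
# same object, so all of them show the final values. Copying per example
# (dict(translation)) is the fix.
translation = {}
examples = []
for text in ["a", "b", "c"]:
    translation["en"] = text
    examples.append({"translation": translation})  # buggy: shared dict
    # fixed: examples.append({"translation": dict(translation)})

print([ex["translation"]["en"] for ex in examples])  # ['c', 'c', 'c'] with the bug
```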
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1899/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1899", "html_url": "https://github.com/huggingface/datasets/pull/1899", "diff_url": "https://github.com/huggingface/datasets/pull/1899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1899.patch", "merged_at": 1613582449000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1898/comments
https://api.github.com/repos/huggingface/datasets/issues/1898/events
https://github.com/huggingface/datasets/issues/1898
810,157,251
MDU6SXNzdWU4MTAxNTcyNTE=
1,898
ALT dataset has repeating instances in all splits
{ "login": "10-zin", "id": 33179372, "node_id": "MDQ6VXNlcjMzMTc5Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/10-zin", "html_url": "https://github.com/10-zin", "followers_url": "https://api.github.com/users/10-zin/followers", "following_url": "https://api.github.com/users/10-zin/following{/other_user}", "gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}", "starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/10-zin/subscriptions", "organizations_url": "https://api.github.com/users/10-zin/orgs", "repos_url": "https://api.github.com/users/10-zin/repos", "events_url": "https://api.github.com/users/10-zin/events{/privacy}", "received_events_url": "https://api.github.com/users/10-zin/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Thanks for reporting. This looks like a very bad issue. I'm looking into it", "I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch", "Thanks!!! works perfectly in the blead...
1,613,566,302,000
1,613,715,526,000
1,613,715,526,000
NONE
null
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from the `explore-dataset` feature, for quick reference.

![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1898/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1897/comments
https://api.github.com/repos/huggingface/datasets/issues/1897/events
https://github.com/huggingface/datasets/pull/1897
810,113,263
MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy
1,897
Fix PandasArrayExtensionArray conversion to native type
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,562,504,000
1,613,567,716,000
1,613,567,715,000
MEMBER
null
To make the conversion to csv work in #1887 , we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because 1. the PandasExtensionArray.isna method was wrong 2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array, while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)) I fixed these two issues and now the conversion to native types works, and so does the export to csv (a minimal numpy sketch of both failure modes follows this record). cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1897/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1897", "html_url": "https://github.com/huggingface/datasets/pull/1897", "diff_url": "https://github.com/huggingface/datasets/pull/1897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1897.patch", "merged_at": 1613567715000 }
true
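To make the two failure modes described in #1897 concrete, here is a minimal numpy-only sketch, not the actual `PandasArrayExtensionArray` code; the `rows` data and variable names are illustrative. It shows why a naive object-dtype conversion produces a multidimensional array where pandas expects a 1D array, and what a well-formed 1D `isna` mask looks like:

```python
import numpy as np

# Two rows of multidimensional data, like the ArrayXD features the PR targets.
rows = [np.arange(6).reshape(2, 3), np.arange(6, 12).reshape(2, 3)]

# Naive conversion: numpy merges the equal-shaped rows into one big array,
# which is NOT the 1D array pandas expects from an ExtensionArray.
naive = np.array(rows, dtype=object)
print(naive.shape)  # (2, 2, 3)

# The fix sketched in the PR description: build a 1D object array and fill it,
# so each element stays a full sub-array.
fixed = np.empty(len(rows), dtype=object)
for i, row in enumerate(rows):
    fixed[i] = row
print(fixed.shape, fixed[0].shape)  # (2,) (2, 3)

# The other fix: isna must return one boolean flag per row, i.e. a 1D mask.
isna_mask = np.array([row is None for row in rows], dtype=bool)
print(isna_mask)  # [False False]
```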
https://api.github.com/repos/huggingface/datasets/issues/1895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1895/comments
https://api.github.com/repos/huggingface/datasets/issues/1895/events
https://github.com/huggingface/datasets/issues/1895
809,630,271
MDU6SXNzdWU4MDk2MzAyNzE=
1,895
Bug Report: timestamp[ns] not recognized
{ "login": "justin-yan", "id": 7731709, "node_id": "MDQ6VXNlcjc3MzE3MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/justin-yan", "html_url": "https://github.com/justin-yan", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "repos_url": "https://api.github.com/users/justin-yan/repos", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more cont...
1,613,507,884,000
1,613,759,231,000
1,613,759,231,000
CONTRIBUTOR
null
Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method. Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well! ``` $ pip list # only the relevant libraries/versions datasets 1.2.1 pandas 1.0.3 pyarrow 3.0.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1895/timeline
null
null
null
false
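As context for the fix the reporter offers to contribute in #1895, here is a hedged sketch of how a type string like `timestamp[ns]` could be parsed down to the `pyarrow.timestamp` factory linked above. `parse_timestamp_dtype` is a hypothetical helper, not the actual `string_to_arrow` implementation:

```python
import re

import pyarrow as pa


def parse_timestamp_dtype(type_string: str) -> pa.DataType:
    """Illustrative parser turning e.g. 'timestamp[ns]' or
    'timestamp[us, tz=UTC]' into the corresponding pyarrow type."""
    match = re.fullmatch(r"timestamp\[(\w+)(?:,\s*tz=([^\]]+))?\]", type_string)
    if match is None:
        raise ValueError(f"{type_string} is not a timestamp type string")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)  # the factory from the pyarrow docs


print(parse_timestamp_dtype("timestamp[ns]"))          # timestamp[ns]
print(parse_timestamp_dtype("timestamp[us, tz=UTC]"))  # timestamp[us, tz=UTC]
```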
https://api.github.com/repos/huggingface/datasets/issues/1894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1894/comments
https://api.github.com/repos/huggingface/datasets/issues/1894/events
https://github.com/huggingface/datasets/issues/1894
809,609,654
MDU6SXNzdWU4MDk2MDk2NTQ=
1,894
benchmarking against MMapIndexedDataset
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for read...
1,613,505,898,000
1,613,587,948,000
null
CONTRIBUTOR
null
I am trying to benchmark my `datasets`-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens). Questions: 1) Is this (basically identical) performance expected? 2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?) 3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks? (A rough timing sketch follows this record.) Thanks in advance! Sam
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1894/timeline
null
null
null
false
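For anyone wanting to reproduce a comparison like the one in #1894, a rough sketch of the `datasets` side of the benchmark follows. It assumes the `wikitext`/`wikitext-103-raw-v1` config name on the Hub; `psrecord` can be attached to the running process separately to capture CPU/memory, as the author did:

```python
import time

from datasets import load_dataset

# Loads (or reuses from cache) the memory-mapped arrow file for wikitext-103.
wiki = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")

start = time.perf_counter()
n_chars = 0
batch_size = 1000
for i in range(0, len(wiki), batch_size):
    batch = wiki[i : i + batch_size]  # sequential reads from the memory map
    n_chars += sum(len(t) for t in batch["text"])
elapsed = time.perf_counter() - start

print(f"read {n_chars / 1e6:.1f}M chars in {elapsed:.1f}s "
      f"({n_chars / 1e6 / elapsed:.1f} Mchars/s)")
```

Since both implementations are memory-mapped, the throughput printed here is dominated by disk speed, which is consistent with the near-identical numbers reported above.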
https://api.github.com/repos/huggingface/datasets/issues/1893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1893/comments
https://api.github.com/repos/huggingface/datasets/issues/1893/events
https://github.com/huggingface/datasets/issues/1893
809,556,503
MDU6SXNzdWU4MDk1NTY1MDM=
1,893
wmt19 is broken
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?", "Closing since this has been fixed by #1912" ]
1,613,500,798,000
1,614,793,322,000
1,614,793,322,000
CONTRIBUTOR
null
1. Check which lang pairs we have: `--dataset_name wmt19`: Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] 2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"` no cookies: ``` Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract return self.extract(self.download(url_or_urls)) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested mapped = [ File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested return function(data_struct) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download return cached_path(url_or_filename, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1893/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1892/comments
https://api.github.com/repos/huggingface/datasets/issues/1892/events
https://github.com/huggingface/datasets/issues/1892
809,554,174
MDU6SXNzdWU4MDk1NTQxNzQ=
1,892
request to mirror wmt datasets, as they are really slow to download
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check)...
1,613,500,571,000
1,635,231,342,000
1,616,673,203,000
CONTRIBUTOR
null
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download, and not because of the local connection speed. They are all quite small datasets, just extremely slow to download. Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1892/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1891/comments
https://api.github.com/repos/huggingface/datasets/issues/1891/events
https://github.com/huggingface/datasets/issues/1891
809,550,001
MDU6SXNzdWU4MDk1NTAwMDE=
1,891
suggestion to improve a missing dataset error
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,613,500,153,000
1,613,500,214,000
null
CONTRIBUTOR
null
I was using `--dataset_name wmt19` and all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in `datasets`: ``` True, predict_with_generate=True) Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 706, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module raise FileNotFoundError( FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py. The file is also not present on the master branch on github. ``` Suggestion: if it is not in a local path, check that there is an actual `https://github.com/huggingface/datasets/tree/master/datasets/wmt20` first and assert "dataset `wmt20` doesn't exist in datasets", rather than trying to find a load script - since the whole repo is not there (a sketch of such an existence check follows this record). The error occurred when running: ``` cd examples/seq2seq export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " ``` Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1891/timeline
null
null
null
false
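A hedged sketch of the pre-check suggested in #1891: before hunting for a loading script, probe whether the canonical script exists on the master branch at all. `dataset_script_exists_on_master` is a hypothetical helper name, and the URL layout simply mirrors the one in the traceback above:

```python
import requests


def dataset_script_exists_on_master(dataset_name: str) -> bool:
    """Hypothetical helper: check whether a canonical loading script for
    `dataset_name` exists on the master branch before trying to fetch it."""
    url = (
        "https://raw.githubusercontent.com/huggingface/datasets/"
        f"master/datasets/{dataset_name}/{dataset_name}.py"
    )
    return requests.head(url, allow_redirects=True, timeout=10).status_code == 200


for name in ("wmt19", "wmt20"):
    if not dataset_script_exists_on_master(name):
        print(f"dataset `{name}` doesn't exist in datasets")
    else:
        print(f"`{name}` found, proceeding to fetch the loading script")
```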
https://api.github.com/repos/huggingface/datasets/issues/1890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1890/comments
https://api.github.com/repos/huggingface/datasets/issues/1890/events
https://github.com/huggingface/datasets/pull/1890
809,395,586
MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx
1,890
Reformat dataset cards section titles
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,488,307,000
1,613,488,354,000
1,613,488,353,000
MEMBER
null
Titles are formatted like [Foo](#foo) instead of just Foo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1890", "html_url": "https://github.com/huggingface/datasets/pull/1890", "diff_url": "https://github.com/huggingface/datasets/pull/1890.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1890.patch", "merged_at": 1613488353000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1889/comments
https://api.github.com/repos/huggingface/datasets/issues/1889/events
https://github.com/huggingface/datasets/pull/1889
809,276,015
MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz
1,889
Implement to_dict and to_pandas for Dataset
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Next step is going to add these two in the documentation ^^" ]
1,613,479,099,000
1,613,673,757,000
1,613,673,754,000
CONTRIBUTOR
null
With options to return a generator or the full dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1889", "html_url": "https://github.com/huggingface/datasets/pull/1889", "diff_url": "https://github.com/huggingface/datasets/pull/1889.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1889.patch", "merged_at": 1613673754000 }
true
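A small usage sketch of the two methods added in #1889. The generator option is assumed to be exposed via `batched=True` with a `batch_size`, as the one-line description hints; the exact keyword names may differ:

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [0, 1, 2, 3], "text": ["a", "b", "c", "d"]})

# Full materialization.
df = ds.to_pandas()
d = ds.to_dict()
print(df.shape, list(d))  # (4, 2) ['id', 'text']

# Generator variant: yields one pandas DataFrame per batch (assumed keywords).
for chunk in ds.to_pandas(batch_size=2, batched=True):
    print(len(chunk))  # 2, then 2
```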
https://api.github.com/repos/huggingface/datasets/issues/1888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1888/comments
https://api.github.com/repos/huggingface/datasets/issues/1888/events
https://github.com/huggingface/datasets/pull/1888
809,241,123
MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4
1,888
Docs for adding new column on formatted dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Close #1872" ]
1,613,475,900,000
1,617,112,863,000
1,613,476,737,000
MEMBER
null
As mentioned in #1872, we should add to the documentation how the format gets updated when new columns are added (a small sketch follows this record). Close #1872
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1888/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1888", "html_url": "https://github.com/huggingface/datasets/pull/1888", "diff_url": "https://github.com/huggingface/datasets/pull/1888.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1888.patch", "merged_at": 1613476737000 }
true
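The situation covered by the docs in #1888 can be reproduced in a few lines. This sketch deliberately makes no claim about which columns end up in the formatted output; it only shows how to inspect the format before and after `map` adds a column:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.set_format("numpy", columns=["a"])

# Add a new column on the formatted dataset.
ds2 = ds.map(lambda ex: {"b": ex["a"] + 1})

# Compare how the format carried over -- the behavior the new docs explain.
print(ds.format)
print(ds2.format)
print(ds2[0])
```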
https://api.github.com/repos/huggingface/datasets/issues/1887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1887/comments
https://api.github.com/repos/huggingface/datasets/issues/1887/events
https://github.com/huggingface/datasets/pull/1887
809,229,809
MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy
1,887
Implement to_csv for Dataset
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.ht...
1,613,474,849,000
1,613,727,719,000
1,613,727,719,000
CONTRIBUTOR
null
cc @thomwolf `to_csv` supports passing either a file path or a *binary* file object. The writing is batched to avoid loading the whole table in memory (a usage sketch follows this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1887/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1887", "html_url": "https://github.com/huggingface/datasets/pull/1887", "diff_url": "https://github.com/huggingface/datasets/pull/1887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1887.patch", "merged_at": 1613727719000 }
true
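A short usage sketch of the two call styles mentioned in #1887 (a file path, or a binary file object):

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [1, 2], "text": ["hello", "world"]})

# 1) Pass a file path.
ds.to_csv("out.csv")

# 2) Pass a *binary* file object, as the PR description specifies.
with open("out2.csv", "wb") as f:
    ds.to_csv(f)
```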
https://api.github.com/repos/huggingface/datasets/issues/1886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1886/comments
https://api.github.com/repos/huggingface/datasets/issues/1886/events
https://github.com/huggingface/datasets/pull/1886
809,221,885
MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz
1,886
Common voice
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have ...
1,613,474,170,000
1,615,315,891,000
1,615,315,891,000
CONTRIBUTOR
null
Started filling out information about the dataset and a dataset card. To do: create the tagging file; update the common_voice.py file with more information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1886/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1886", "html_url": "https://github.com/huggingface/datasets/pull/1886", "diff_url": "https://github.com/huggingface/datasets/pull/1886.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1886.patch", "merged_at": 1615315891000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1885/comments
https://api.github.com/repos/huggingface/datasets/issues/1885/events
https://github.com/huggingface/datasets/pull/1885
808,881,501
MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz
1,885
add missing info on how to add large files
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,432,799,000
1,613,492,539,000
1,613,475,852,000
CONTRIBUTOR
null
Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1885/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1885", "html_url": "https://github.com/huggingface/datasets/pull/1885", "diff_url": "https://github.com/huggingface/datasets/pull/1885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1885.patch", "merged_at": 1613475852000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1884/comments
https://api.github.com/repos/huggingface/datasets/issues/1884/events
https://github.com/huggingface/datasets/pull/1884
808,755,894
MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5
1,884
dtype fix when using numpy arrays
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,415,325,000
1,627,642,878,000
1,627,642,878,000
CONTRIBUTOR
null
As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; the dtype was getting lost due to the numpy array -> list -> pyarrow array conversion (see the sketch after this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1884/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1884", "html_url": "https://github.com/huggingface/datasets/pull/1884", "diff_url": "https://github.com/huggingface/datasets/pull/1884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1884.patch", "merged_at": null }
true
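The dtype loss addressed by #1884 is easy to demonstrate with pyarrow directly; a minimal sketch:

```python
import numpy as np
import pyarrow as pa

arr = np.array([1, 2, 3], dtype=np.uint8)

# numpy array -> list -> pyarrow: the uint8 dtype is lost, arrow infers int64.
via_list = pa.array(arr.tolist())
print(via_list.type)  # int64

# numpy array -> pyarrow directly: the original dtype is preserved.
direct = pa.array(arr)
print(direct.type)  # uint8
```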
https://api.github.com/repos/huggingface/datasets/issues/1883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1883/comments
https://api.github.com/repos/huggingface/datasets/issues/1883/events
https://github.com/huggingface/datasets/pull/1883
808,750,623
MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz
1,883
Add not-in-place implementations for several dataset transforms
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)", "I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.", "Now let's update the ...
1,613,414,666,000
1,614,178,489,000
1,614,178,406,000
CONTRIBUTOR
null
Should we deprecate in-place versions of such methods?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1883", "html_url": "https://github.com/huggingface/datasets/pull/1883", "diff_url": "https://github.com/huggingface/datasets/pull/1883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1883.patch", "merged_at": 1614178406000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1882/comments
https://api.github.com/repos/huggingface/datasets/issues/1882/events
https://github.com/huggingface/datasets/pull/1882
808,716,576
MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw
1,882
Create Remote Manager
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_fil...
1,613,410,584,000
1,615,220,110,000
null
MEMBER
null
Refactoring to separate the concern of remote (HTTP/FTP) request management (an illustrative sketch of the flattened copy pattern follows this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1882/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1882", "html_url": "https://github.com/huggingface/datasets/pull/1882", "diff_url": "https://github.com/huggingface/datasets/pull/1882.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1882.patch", "merged_at": null }
true
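To illustrate the flattened copy pattern quoted from the discussion in #1882, here is a self-contained sketch with hypothetical `RemoteSource`/`LocalDestination` wrappers; this is not the actual Remote Manager API:

```python
import shutil
from contextlib import contextmanager
from urllib.request import urlopen


class RemoteSource:
    """Hypothetical wrapper exposing `open()` as a context manager."""

    def __init__(self, url):
        self.url = url

    @contextmanager
    def open(self):
        response = urlopen(self.url)
        try:
            yield response
        finally:
            response.close()


class LocalDestination:
    """Hypothetical local-file counterpart."""

    def __init__(self, path):
        self.path = path

    @contextmanager
    def open(self):
        with open(self.path, "wb") as f:
            yield f


src = RemoteSource("https://example.com/")
dst = LocalDestination("downloaded.html")
# Flattened logic: one scope, no temp-file handling nested inside the remote call.
with src.open() as src_file, dst.open() as dst_file:
    shutil.copyfileobj(src_file, dst_file)
```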
https://api.github.com/repos/huggingface/datasets/issues/1881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1881/comments
https://api.github.com/repos/huggingface/datasets/issues/1881/events
https://github.com/huggingface/datasets/pull/1881
808,578,200
MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw
1,881
`list_datasets()` returns a list of strings, not objects
{ "login": "pminervini", "id": 227357, "node_id": "MDQ6VXNlcjIyNzM1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pminervini", "html_url": "https://github.com/pminervini", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "organizations_url": "https://api.github.com/users/pminervini/orgs", "repos_url": "https://api.github.com/users/pminervini/repos", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "received_events_url": "https://api.github.com/users/pminervini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,398,815,000
1,613,401,789,000
1,613,401,788,000
CONTRIBUTOR
null
Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1881", "html_url": "https://github.com/huggingface/datasets/pull/1881", "diff_url": "https://github.com/huggingface/datasets/pull/1881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1881.patch", "merged_at": 1613401788000 }
true
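A corrected version of the doc snippet fixed by #1881, assuming `list_datasets()` indeed returns plain id strings:

```python
from datasets import list_datasets

datasets_list = list_datasets()
# Items are already strings, so no `.id` attribute access is needed:
print(", ".join(datasets_list[:5]))
```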
https://api.github.com/repos/huggingface/datasets/issues/1880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1880/comments
https://api.github.com/repos/huggingface/datasets/issues/1880/events
https://github.com/huggingface/datasets/pull/1880
808,563,439
MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0
1,880
Update multi_woz_v22 checksums
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,397,618,000
1,613,398,699,000
1,613,398,698,000
MEMBER
null
As noticed in #1876 the checksums of this dataset are outdated. I updated them in this PR
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1880", "html_url": "https://github.com/huggingface/datasets/pull/1880", "diff_url": "https://github.com/huggingface/datasets/pull/1880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1880.patch", "merged_at": 1613398698000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1879/comments
https://api.github.com/repos/huggingface/datasets/issues/1879/events
https://github.com/huggingface/datasets/pull/1879
808,541,442
MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx
1,879
Replace flatten_nested
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)" ]
1,613,395,780,000
1,613,759,714,000
1,613,759,714,000
MEMBER
null
Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is a list, dict, etc.) will live only inside this class. I have also generalized the flattening, so it now handles multiple levels of nesting (see the sketch after this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1879", "html_url": "https://github.com/huggingface/datasets/pull/1879", "diff_url": "https://github.com/huggingface/datasets/pull/1879.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1879.patch", "merged_at": 1613759714000 }
true
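An illustrative recursive flatten handling arbitrary nesting depth, in the spirit of the generalized `NestedDataStructure.flatten` from #1879; this is not the library's actual implementation:

```python
def flatten(data):
    """Recursively flatten nested dicts/lists/tuples into a flat list of leaves."""
    if isinstance(data, dict):
        out = []
        for value in data.values():
            out.extend(flatten(value))
        return out
    if isinstance(data, (list, tuple)):
        out = []
        for item in data:
            out.extend(flatten(item))
        return out
    return [data]  # leaf value


print(flatten({"a": [1, [2, 3]], "b": {"c": (4, 5)}}))  # [1, 2, 3, 4, 5]
```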
https://api.github.com/repos/huggingface/datasets/issues/1878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1878/comments
https://api.github.com/repos/huggingface/datasets/issues/1878/events
https://github.com/huggingface/datasets/pull/1878
808,526,883
MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3
1,878
Add LJ Speech dataset
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n...
1,613,394,642,000
1,613,417,981,000
1,613,398,689,000
CONTRIBUTOR
null
This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/) As requested by #1841 The ASR format is based on #1767 There are a couple of quirks that should be addressed: - I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list? - Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo? - The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well? Pinging @patrickvonplaten to review
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1878", "html_url": "https://github.com/huggingface/datasets/pull/1878", "diff_url": "https://github.com/huggingface/datasets/pull/1878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1878.patch", "merged_at": 1613398689000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1877/comments
https://api.github.com/repos/huggingface/datasets/issues/1877/events
https://github.com/huggingface/datasets/issues/1877
808,462,272
MDU6SXNzdWU4MDg0NjIyNzI=
1,877
Allow concatenation of both in-memory and on-disk datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that conca...
1,613,389,186,000
1,616,777,518,000
1,616,777,518,000
MEMBER
null
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickling, for example: - an in-memory dataset can just be pickled/unpickled in-memory - an on-disk dataset can be unloaded to keep only the filepaths when pickling, and then reloaded from disk when unpickling Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future. One idea would be to define a list of sources, where each source implements a way to reload its corresponding pyarrow Table. Then the dataset would be the concatenation of all these tables. Depending on the source type, the serialization using pickle would differ: in-memory data would be copied, while on-disk data would simply be replaced by the path to that data. If you have some ideas you would like to share about the design/API, feel free to do so :) cc @albertvillanova
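As a starting point for the discussion, a minimal sketch of the source-based design (all names here are hypothetical, not the final API): in-memory sources pickle their data by value, on-disk sources pickle only their path, and the dataset table is the concatenation of whatever the sources reload.

```python
import pyarrow as pa


class InMemorySource:
    """A source whose table is serialized by value when pickled."""

    def __init__(self, table: pa.Table):
        self.table = table

    def __reduce__(self):
        # The data itself travels through the pickle payload.
        return InMemorySource, (self.table,)


class OnDiskSource:
    """A source that pickles only its path and reloads the table from disk."""

    def __init__(self, path: str):
        self.path = path

    @property
    def table(self) -> pa.Table:
        # Memory-map the Arrow stream file instead of loading it into memory.
        return pa.ipc.open_stream(pa.memory_map(self.path)).read_all()

    def __reduce__(self):
        # Only the filepath travels through the pickle.
        return OnDiskSource, (self.path,)


def concat_sources(sources) -> pa.Table:
    """Concatenate the tables of heterogeneous sources into one table."""
    return pa.concat_tables(s.table for s in sources)
```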
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1877/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1876/comments
https://api.github.com/repos/huggingface/datasets/issues/1876/events
https://github.com/huggingface/datasets/issues/1876
808,025,859
MDU6SXNzdWU4MDgwMjU4NTk=
1,876
load_dataset("multi_woz_v22") NonMatchingChecksumError
{ "login": "Vincent950129", "id": 5945326, "node_id": "MDQ6VXNlcjU5NDUzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vincent950129", "html_url": "https://github.com/Vincent950129", "followers_url": "https://api.github.com/users/Vincent950129/followers", "following_url": "https://api.github.com/users/Vincent950129/following{/other_user}", "gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}", "starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions", "organizations_url": "https://api.github.com/users/Vincent950129/orgs", "repos_url": "https://api.github.com/users/Vincent950129/repos", "events_url": "https://api.github.com/users/Vincent950129/events{/privacy}", "received_events_url": "https://api.github.com/users/Vincent950129/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.", "I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll ...
1,613,330,088,000
1,628,100,480,000
1,628,100,480,000
NONE
null
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json'] ```
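Until a release with updated checksums lands, one workaround sketch (assuming a `datasets` version of this era, where these flags exist under these names) is to force a fresh download and skip the now-outdated checksum verification:

```python
from datasets import load_dataset

# Re-download the data and skip the stale checksum check.
# `ignore_verifications` was the flag name at the time; newer releases renamed it.
dataset = load_dataset(
    "multi_woz_v22",
    "v2.2_active_only",
    split="train",
    download_mode="force_redownload",
    ignore_verifications=True,
)
```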
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1876/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1875/comments
https://api.github.com/repos/huggingface/datasets/issues/1875/events
https://github.com/huggingface/datasets/pull/1875
807,887,267
MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0
1,875
Adding sari metric
{ "login": "ddhruvkr", "id": 6061911, "node_id": "MDQ6VXNlcjYwNjE5MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ddhruvkr", "html_url": "https://github.com/ddhruvkr", "followers_url": "https://api.github.com/users/ddhruvkr/followers", "following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}", "gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}", "starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions", "organizations_url": "https://api.github.com/users/ddhruvkr/orgs", "repos_url": "https://api.github.com/users/ddhruvkr/repos", "events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}", "received_events_url": "https://api.github.com/users/ddhruvkr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,277,515,000
1,613,577,387,000
1,613,577,387,000
CONTRIBUTOR
null
Adding the SARI metric, which is used in the evaluation of text simplification. This is required as part of the GEM benchmark.
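A quick usage sketch (argument names follow the metric's intended interface; treat them as an assumption if your version differs): SARI scores system outputs against both the original sources and one or more reference simplifications.

```python
from datasets import load_metric

sari = load_metric("sari")
sources = ["About 95 species are currently accepted."]
predictions = ["About 95 species are currently known."]
references = [["About 95 species are currently known.",
               "About 95 species are now accepted."]]

# SARI needs the source sentence, the model output, and the references.
print(sari.compute(sources=sources, predictions=predictions, references=references))
```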
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1875", "html_url": "https://github.com/huggingface/datasets/pull/1875", "diff_url": "https://github.com/huggingface/datasets/pull/1875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1875.patch", "merged_at": 1613577386000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1874/comments
https://api.github.com/repos/huggingface/datasets/issues/1874/events
https://github.com/huggingface/datasets/pull/1874
807,786,094
MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy
1,874
Adding Europarl Bilingual dataset
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.", "I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos", "I...
1,613,235,724,000
1,614,854,302,000
1,614,854,302,000
CONTRIBUTOR
null
Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases, about 1 in 10M, some keys reference nonexistent sentences). I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
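Assuming the config naming follows the other OPUS-style datasets in the repo (an assumption worth checking against the dataset card), a language pair would be loaded like this:

```python
from datasets import load_dataset

# Load the English-Italian pair; other pairs use the same lang1/lang2 pattern.
dataset = load_dataset("europarl_bilingual", lang1="en", lang2="it", split="train")
print(dataset[0]["translation"])  # {'en': '...', 'it': '...'}
```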
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1874/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874", "html_url": "https://github.com/huggingface/datasets/pull/1874", "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "merged_at": 1614854302000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1873/comments
https://api.github.com/repos/huggingface/datasets/issues/1873/events
https://github.com/huggingface/datasets/pull/1873
807,750,745
MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy
1,873
add iapp_wiki_qa_squad
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,223,267,000
1,613,485,318,000
1,613,485,318,000
CONTRIBUTOR
null
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles.
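Since the data follows the SQuAD schema, downstream code written for SQuAD should work with minimal changes; a hedged sketch (assuming the standard SQuAD feature names carry over):

```python
from datasets import load_dataset

ds = load_dataset("iapp_wiki_qa_squad", split="train")
sample = ds[0]
# SQuAD-style records: a context passage, a question, and answer spans.
print(sample["question"])
print(sample["answers"]["text"], sample["answers"]["answer_start"])
```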
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1873/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1873", "html_url": "https://github.com/huggingface/datasets/pull/1873", "diff_url": "https://github.com/huggingface/datasets/pull/1873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1873.patch", "merged_at": 1613485318000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1872/comments
https://api.github.com/repos/huggingface/datasets/issues/1872/events
https://github.com/huggingface/datasets/issues/1872
807,711,935
MDU6SXNzdWU4MDc3MTE5MzU=
1,872
Adding a new column to the dataset after set_format was called
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column ...
1,613,207,675,000
1,617,112,905,000
1,617,112,905,000
NONE
null
Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`). Below is some pseudo code: ```python def augment_func(sample: Dict) -> Dict: # do something return { "some_integer_column1" : augmented_data["some_integer_column1"], # <-- tensor "some_integer_column2" : augmented_data["some_integer_column2"], # <-- tensor "NEW_COLUMN": targets, # <-- list of strings } data = datasets.load_dataset(__file__, data_dir="...", split="train") data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True) augmented_dataset = data.map(augment_func, batched=False) for sample in augmented_dataset: print(sample) # fails ``` and the exception: ```python Traceback (most recent call last): File "dataset.py", line 487, in <module> main() File "dataset.py", line 471, in main for sample in augmented_dataset: File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__ yield self._getitem( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem outputs = self._convert_outputs( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) TypeError: new(): invalid data type 'str' ``` Thanks!
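A minimal, self-contained workaround sketch consistent with how `set_format` propagates through `map` (toy columns stand in for the ones above): re-declare the formatted columns after `map`, so the new string column is returned as a plain Python object instead of being pushed through the torch converter.

```python
from datasets import Dataset

data = Dataset.from_dict({"some_integer_column1": [1, 2], "some_integer_column2": [3, 4]})
data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"],
                output_all_columns=True)

augmented = data.map(lambda ex: {"NEW_COLUMN": ["a", "b"]}, batched=False)

# After map, explicitly re-apply the format: only the integer columns become
# tensors, and NEW_COLUMN stays a list of strings.
augmented.set_format("torch", columns=["some_integer_column1", "some_integer_column2"],
                     output_all_columns=True)

for sample in augmented:
    print(sample)
```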
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/1872/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1871/comments
https://api.github.com/repos/huggingface/datasets/issues/1871/events
https://github.com/huggingface/datasets/pull/1871
807,697,671
MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz
1,871
Add newspop dataset
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the changes :)\r\nmerging" ]
1,613,201,483,000
1,615,198,365,000
1,615,198,365,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1871/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1871", "html_url": "https://github.com/huggingface/datasets/pull/1871", "diff_url": "https://github.com/huggingface/datasets/pull/1871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1871.patch", "merged_at": 1615198365000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1870/comments
https://api.github.com/repos/huggingface/datasets/issues/1870/events
https://github.com/huggingface/datasets/pull/1870
807,306,564
MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4
1,870
Implement Dataset add_item
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/3", "html_url": "https://github.com/huggingface/datasets/milestone/3", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "id": 6644287, "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "title": "1.7", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 3, "state": "closed", "created_at": 1617974191000, "updated_at": 1622478053000, "due_on": 1620975600000, "closed_at": 1622478053000 }
[ "Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.", "Sure ! I opened an issue #1877 so we can discuss this specific aspect :)", "I am going to implement this consolidation step ...
1,613,142,226,000
1,619,172,091,000
1,619,172,091,000
MEMBER
null
Implement `Dataset.add_item`. Close #1854.
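A usage sketch of the proposed method (the exact signature may differ in the merged version):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello"], "label": [0]})

# add_item appends a single example whose keys match the dataset's features.
ds = ds.add_item({"text": "world", "label": 1})
print(len(ds), ds[-1])  # 2 {'text': 'world', 'label': 1}
```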
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1870", "html_url": "https://github.com/huggingface/datasets/pull/1870", "diff_url": "https://github.com/huggingface/datasets/pull/1870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1870.patch", "merged_at": 1619172090000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1869/comments
https://api.github.com/repos/huggingface/datasets/issues/1869/events
https://github.com/huggingface/datasets/pull/1869
807,159,835
MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy
1,869
Remove outdated commands in favor of huggingface-cli
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,129,290,000
1,613,146,389,000
1,613,146,388,000
MEMBER
null
Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1869/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1869", "html_url": "https://github.com/huggingface/datasets/pull/1869", "diff_url": "https://github.com/huggingface/datasets/pull/1869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1869.patch", "merged_at": 1613146388000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1868/comments
https://api.github.com/repos/huggingface/datasets/issues/1868/events
https://github.com/huggingface/datasets/pull/1868
807,138,159
MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0
1,868
Update oscar sizes
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,613,127,335,000
1,613,127,787,000
1,613,127,786,000
MEMBER
null
This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1868", "html_url": "https://github.com/huggingface/datasets/pull/1868", "diff_url": "https://github.com/huggingface/datasets/pull/1868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1868.patch", "merged_at": 1613127786000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1867/comments
https://api.github.com/repos/huggingface/datasets/issues/1867/events
https://github.com/huggingface/datasets/issues/1867
807,127,181
MDU6SXNzdWU4MDcxMjcxODE=
1,867
ERROR WHEN USING SET_TRANSFORM()
{ "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/alexvaca0/followers", "following_url": "https://api.github.com/users/alexvaca0/following{/other_user}", "gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions", "organizations_url": "https://api.github.com/users/alexvaca0/orgs", "repos_url": "https://api.github.com/users/alexvaca0/repos", "events_url": "https://api.github.com/users/alexvaca0/events{/privacy}", "received_events_url": "https://api.github.com/users/alexvaca0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/...
1,613,126,311,000
1,614,607,464,000
1,614,168,043,000
NONE
null
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such a dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional argument: 'transform' [INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text. Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn main() File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main data_collator=data_collator, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__ self._remove_unused_columns(self.train_dataset, description="training") File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns dataset.set_format(type=dataset.format["type"], columns=columns) File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper out = func(self, *args, **kwargs) File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format _ = get_formatter(type, **format_kwargs) File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter return _FORMAT_TYPES[format_type](**format_kwargs) TypeError: __init__() missing 1 required positional argument: 'transform' ``` The code I'm using: ```{python} def tokenize_function(examples): # Remove empty lines examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()] return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length) datasets.set_transform(tokenize_function) data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=datasets["train"] if training_args.do_train else None, eval_dataset=datasets["val"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) ``` I've installed from source, master branch.
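Until the incompatibility is resolved, a workaround sketch (an assumption about the `transformers` version in use, based on the traceback above): disable the Trainer's column pruning, which is the step that re-applies `set_format` on the transformed dataset.

```python
from transformers import TrainingArguments

# Trainer._remove_unused_columns calls dataset.set_format under the hood,
# which clashes with datasets' on-the-fly transform; turning it off avoids the call.
training_args = TrainingArguments(
    output_dir="output",
    remove_unused_columns=False,
)
```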
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1867/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1866/comments
https://api.github.com/repos/huggingface/datasets/issues/1866/events
https://github.com/huggingface/datasets/pull/1866
807,017,816
MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1
1,866
Add dataset for Financial PhraseBank
{ "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "organizations_url": "https://api.github.com/users/frankier/orgs", "repos_url": "https://api.github.com/users/frankier/repos", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "received_events_url": "https://api.github.com/users/frankier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the feedback. All accepted and metadata regenerated." ]
1,613,115,056,000
1,613,571,756,000
1,613,571,756,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1866", "html_url": "https://github.com/huggingface/datasets/pull/1866", "diff_url": "https://github.com/huggingface/datasets/pull/1866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1866.patch", "merged_at": 1613571756000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1865/comments
https://api.github.com/repos/huggingface/datasets/issues/1865/events
https://github.com/huggingface/datasets/pull/1865
806,388,290
MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2
1,865
Updated OPUS Open Subtitles Dataset with metadata information
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "repos_url": "https://api.github.com/users/Valahaar/repos", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of th...
1,613,049,986,000
1,613,738,289,000
1,613,149,184,000
CONTRIBUTOR
null
Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor when doing `next(iter(dataset['train']))`). What step(s) did I miss? Questions: - Is it OK to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1865/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1865", "html_url": "https://github.com/huggingface/datasets/pull/1865", "diff_url": "https://github.com/huggingface/datasets/pull/1865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1865.patch", "merged_at": 1613149184000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1864/comments
https://api.github.com/repos/huggingface/datasets/issues/1864/events
https://github.com/huggingface/datasets/issues/1864
806,172,843
MDU6SXNzdWU4MDYxNzI4NDM=
1,864
Add Winogender Schemas
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias" ]
1,613,031,518,000
1,613,031,591,000
1,613,031,591,000
NONE
null
## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper:** https://arxiv.org/abs/1804.09301 - **Data:** https://github.com/rudinger/winogender-schemas (see data directory) - **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1864/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1863/comments
https://api.github.com/repos/huggingface/datasets/issues/1863/events
https://github.com/huggingface/datasets/issues/1863
806,171,311
MDU6SXNzdWU4MDYxNzEzMTE=
1,863
Add WikiCREM
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!", "Hi @udapy, are you working on this?" ]
1,613,031,360,000
1,615,102,033,000
null
NONE
null
## Adding a Dataset - **Name:** WikiCREM - **Description:** A large unsupervised corpus for coreference resolution. - **Paper:** https://arxiv.org/abs/1905.06290 - **Github repo:**: https://github.com/vid-koci/bert-commonsense - **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3 - **Motivation:** Coreference resolution, common sense reasoning Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1863/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1862/comments
https://api.github.com/repos/huggingface/datasets/issues/1862/events
https://github.com/huggingface/datasets/pull/1862
805,722,293
MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx
1,862
Fix writing GPU Faiss index
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,978,323,000
1,612,981,068,000
1,612,981,067,000
MEMBER
null
As reported by @corticalstack, there is currently an error when we try to save a Faiss index on GPU. I fixed that by checking the index's `getDevice()` method before calling `index_gpu_to_cpu`. Close #1859
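A sketch of the corrected save logic described above (paraphrased; the exact code lives in the PR diff):

```python
import faiss


def save_faiss_index(index, file: str):
    """Write a Faiss index to disk, moving it off the GPU first if needed."""
    # Recent Faiss versions expose the device through getDevice() rather than
    # a `device` attribute; CPU index classes simply don't have the method.
    if hasattr(index, "getDevice") and index.getDevice() > -1:
        index = faiss.index_gpu_to_cpu(index)
    faiss.write_index(index, file)
```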
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1862/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1862", "html_url": "https://github.com/huggingface/datasets/pull/1862", "diff_url": "https://github.com/huggingface/datasets/pull/1862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1862.patch", "merged_at": 1612981067000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1861/comments
https://api.github.com/repos/huggingface/datasets/issues/1861/events
https://github.com/huggingface/datasets/pull/1861
805,631,215
MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1
1,861
Fix Limit url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,971,896,000
1,612,973,700,000
1,612,973,699,000
MEMBER
null
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was recently removed on the master branch of the repo at https://github.com/ilmgut/limit_dataset. This PR uses the previous commit sha to download the file instead, as suggested by @Paethon. Close #1836
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861", "html_url": "https://github.com/huggingface/datasets/pull/1861", "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "merged_at": 1612973698000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1860/comments
https://api.github.com/repos/huggingface/datasets/issues/1860/events
https://github.com/huggingface/datasets/pull/1860
805,510,037
MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz
1,860
Add loading from the Datasets Hub + add relative paths in download manager
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documen...
1,612,963,451,000
1,613,157,210,000
1,613,157,209,000
MEMBER
null
With the new Datasets Hub on huggingface.co, it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_dataset("lhoestq/custom_squad") ``` To be able to use the data files that live right next to the dataset script in the repo on the Hub, I added relative-path support to the DownloadManager. For example, in the repo mentioned above, there are two json files that can be downloaded via ```python _URLS = { "train": "train-v1.1.json", "dev": "dev-v1.1.json", } downloaded_files = dl_manager.download_and_extract(_URLS) ``` To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote URL). I also had to add the auth header to the requests to huggingface.co for private dataset repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1860/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1860", "html_url": "https://github.com/huggingface/datasets/pull/1860", "diff_url": "https://github.com/huggingface/datasets/pull/1860.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1860.patch", "merged_at": 1613157209000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1859/comments
https://api.github.com/repos/huggingface/datasets/issues/1859/events
https://github.com/huggingface/datasets/issues/1859
805,479,025
MDU6SXNzdWU4MDU0NzkwMjU=
1,859
Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
{ "login": "corticalstack", "id": 3995321, "node_id": "MDQ6VXNlcjM5OTUzMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/corticalstack", "html_url": "https://github.com/corticalstack", "followers_url": "https://api.github.com/users/corticalstack/followers", "following_url": "https://api.github.com/users/corticalstack/following{/other_user}", "gists_url": "https://api.github.com/users/corticalstack/gists{/gist_id}", "starred_url": "https://api.github.com/users/corticalstack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/corticalstack/subscriptions", "organizations_url": "https://api.github.com/users/corticalstack/orgs", "repos_url": "https://api.github.com/users/corticalstack/repos", "events_url": "https://api.github.com/users/corticalstack/events{/privacy}", "received_events_url": "https://api.github.com/users/corticalstack/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR", "I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next...
1,612,960,860,000
1,612,981,932,000
1,612,981,067,000
NONE
null
Error serializing faiss index. Error as follows:

`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`

Note: `torch.cuda.is_available()` reports:
```
Cuda is available
cuda:0
```

Adding index with device=0 for GPU:
`dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)`

However, during a quick debug, `self.faiss_index` has no attr "device" when checked in `search.py`, method `save`, so it fails to transform the GPU index into a CPU index. If I add the index without a device, the index is saved OK.

```python
def save(self, file: str):
    """Serialize the FaissIndex on disk"""
    import faiss  # noqa: F811

    if (
        hasattr(self.faiss_index, "device")
        and self.faiss_index.device is not None
        and self.faiss_index.device > -1
    ):
        index = faiss.index_gpu_to_cpu(self.faiss_index)
    else:
        index = self.faiss_index
    faiss.write_index(index, file)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1859/timeline
null
null
null
false
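The maintainer's reply above notes that recent Faiss versions require `getDevice` rather than a `.device` attribute to detect a GPU index. Below is a minimal sketch of such a check, assuming only that Faiss GPU indexes expose `getDevice()`; it is not necessarily the exact patch that was merged.

```python
import faiss

def save_index(index, file: str):
    # CPU indexes have no getDevice(), so hasattr() discriminates GPU from CPU.
    if hasattr(index, "getDevice") and index.getDevice() > -1:
        index = faiss.index_gpu_to_cpu(index)  # serialize a CPU copy instead
    faiss.write_index(index, file)
```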
https://api.github.com/repos/huggingface/datasets/issues/1858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1858/comments
https://api.github.com/repos/huggingface/datasets/issues/1858/events
https://github.com/huggingface/datasets/pull/1858
805,477,774
MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx
1,858
Clean config getenvs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,960,754,000
1,612,972,350,000
1,612,972,349,000
MEMBER
null
Following #1848: remove double getenv calls and fix one issue with rarfile.

cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1858/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1858", "html_url": "https://github.com/huggingface/datasets/pull/1858", "diff_url": "https://github.com/huggingface/datasets/pull/1858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1858.patch", "merged_at": 1612972349000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1857/comments
https://api.github.com/repos/huggingface/datasets/issues/1857/events
https://github.com/huggingface/datasets/issues/1857
805,391,107
MDU6SXNzdWU4MDUzOTExMDc=
1,857
Unable to upload "community provided" dataset - 400 Client Error
{ "login": "mwrzalik", "id": 1376337, "node_id": "MDQ6VXNlcjEzNzYzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mwrzalik", "html_url": "https://github.com/mwrzalik", "followers_url": "https://api.github.com/users/mwrzalik/followers", "following_url": "https://api.github.com/users/mwrzalik/following{/other_user}", "gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions", "organizations_url": "https://api.github.com/users/mwrzalik/orgs", "repos_url": "https://api.github.com/users/mwrzalik/repos", "events_url": "https://api.github.com/users/mwrzalik/events{/privacy}", "received_events_url": "https://api.github.com/users/mwrzalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c ma...
1,612,953,541,000
1,627,967,173,000
1,627,967,173,000
CONTRIBUTOR
null
Hi, I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:

```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username
About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username

Proceed? [Y/n] Y
Uploading... This might take a while if files are large
400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign
huggingface.co migrated to a new model hosting system.
You need to upgrade to transformers v3.5+ to upload new models.
More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you!
```

I'm using the latest releases of datasets and transformers.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1857/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1856/comments
https://api.github.com/repos/huggingface/datasets/issues/1856/events
https://github.com/huggingface/datasets/issues/1856
805,360,200
MDU6SXNzdWU4MDUzNjAyMDA=
1,856
load_dataset("amazon_polarity") NonMatchingChecksumError
{ "login": "yanxi0830", "id": 19946372, "node_id": "MDQ6VXNlcjE5OTQ2Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanxi0830", "html_url": "https://github.com/yanxi0830", "followers_url": "https://api.github.com/users/yanxi0830/followers", "following_url": "https://api.github.com/users/yanxi0830/following{/other_user}", "gists_url": "https://api.github.com/users/yanxi0830/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanxi0830/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanxi0830/subscriptions", "organizations_url": "https://api.github.com/users/yanxi0830/orgs", "repos_url": "https://api.github.com/users/yanxi0830/repos", "events_url": "https://api.github.com/users/yanxi0830/events{/privacy}", "received_events_url": "https://api.github.com/users/yanxi0830/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`", "+1 encountering this issue as well", "@l...
1,612,951,256,000
1,626,872,391,000
null
NONE
null
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.

To reproduce:
```python
load_dataset("amazon_polarity")
```

This will give the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input-3-8559a03fe0f8> in <module>()
----> 1 dataset = load_dataset("amazon_polarity")

3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     37     if len(bad_urls) > 0:
     38         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     40     logger.info("All the checksums matched successfully" + for_verification_name)
     41

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download']
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1856/timeline
null
null
null
false
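As the comments above suggest, the checksum mismatch typically comes from Google Drive's quota page being cached in place of the real file. A commonly suggested workaround for the `datasets` 1.x API (not a fix for the quota itself) is to force a fresh download or, as a last resort, skip verification:

```python
from datasets import load_dataset

# Re-download instead of reusing a possibly truncated cached file:
ds = load_dataset("amazon_polarity", download_mode="force_redownload")
# Or skip the checksum verification entirely:
# ds = load_dataset("amazon_polarity", ignore_verifications=True)
```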
https://api.github.com/repos/huggingface/datasets/issues/1855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1855/comments
https://api.github.com/repos/huggingface/datasets/issues/1855/events
https://github.com/huggingface/datasets/pull/1855
805,256,579
MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3
1,855
Minor fix in the docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,942,063,000
1,612,960,389,000
1,612,960,389,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1855/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855", "html_url": "https://github.com/huggingface/datasets/pull/1855", "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "merged_at": 1612960389000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1854/comments
https://api.github.com/repos/huggingface/datasets/issues/1854/events
https://github.com/huggingface/datasets/issues/1854
805,204,397
MDU6SXNzdWU4MDUyMDQzOTc=
1,854
Feature Request: Dataset.add_item
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\...
1,612,937,160,000
1,619,172,090,000
1,619,172,090,000
CONTRIBUTOR
null
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow and then `dataset.map(binarizer)`.

Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries.

### Desired API
```python
import numpy as np

tokenized: List[np.NDArray[np.int64]] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]

def build_dataset_from_tokenized(tokenized: List[np.NDArray[int]]) -> Dataset:
    """FIXME"""
    dataset = EmptyDataset()
    for t in tokenized:
        dataset.append(t)
    return dataset

ds = build_dataset_from_tokenized(tokenized)
assert (ds[0] == np.array([4,4,2])).all()
```

### What I tried
grep, google for "add one entry at a time", "datasets.append"

### Current Code
This code achieves the same result but doesn't fit into the `add_item` abstraction.

```python
dataset = load_dataset('text', data_files={'train': 'train.txt'})
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096)

def tokenize_function(examples):
    ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids']
    return {'input_ids': [x[1:] for x in ids]}

ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache)
print(ds['train'][0])  # => np array
```

Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1854/timeline
null
null
null
false
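Pending a real `add_item` API, the suggestion in the comments above is that a one-column `Dataset` built in a single call already covers this use case. A small sketch of that interim approach using the public `Dataset.from_dict` (the assertion mirrors the one in the issue):

```python
import numpy as np
from datasets import Dataset

tokenized = [np.array([4, 4, 2]), np.array([8, 6, 5, 5, 2]), np.array([3, 3, 31, 5])]

# Build the dataset in one call instead of appending item by item;
# variable-length rows are fine because Arrow stores them as lists.
ds = Dataset.from_dict({"input_ids": [t.tolist() for t in tokenized]})
assert ds[0]["input_ids"] == [4, 4, 2]
```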
https://api.github.com/repos/huggingface/datasets/issues/1853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1853/comments
https://api.github.com/repos/huggingface/datasets/issues/1853/events
https://github.com/huggingface/datasets/pull/1853
804,791,166
MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4
1,853
Configure library root logger at the module level
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,894,272,000
1,612,960,354,000
1,612,960,354,000
MEMBER
null
Configure the library root logger at the datasets.logging module level (singleton-like).

By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need of a global variable
- no need of a threading lock
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1853/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1853", "html_url": "https://github.com/huggingface/datasets/pull/1853", "diff_url": "https://github.com/huggingface/datasets/pull/1853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1853.patch", "merged_at": 1612960354000 }
true
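A minimal sketch of the run-once, module-level pattern this PR describes (names are illustrative, not the exact code of the PR): a module body executes only on first import, which is what makes the lock and the global flag unnecessary.

```python
import logging

# Executed exactly once, on first import of the logging module:
_library_root_logger = logging.getLogger("datasets")
_library_root_logger.setLevel(logging.WARNING)

def get_logger(name: str = None) -> logging.Logger:
    """Return a child of the library root logger (or the root itself)."""
    return logging.getLogger(name) if name else _library_root_logger
```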
https://api.github.com/repos/huggingface/datasets/issues/1852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1852/comments
https://api.github.com/repos/huggingface/datasets/issues/1852/events
https://github.com/huggingface/datasets/pull/1852
804,633,033
MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1
1,852
Add Arabic Speech Corpus
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,882,946,000
1,613,038,735,000
1,613,038,735,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1852", "html_url": "https://github.com/huggingface/datasets/pull/1852", "diff_url": "https://github.com/huggingface/datasets/pull/1852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1852.patch", "merged_at": 1613038734000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1851/comments
https://api.github.com/repos/huggingface/datasets/issues/1851/events
https://github.com/huggingface/datasets/pull/1851
804,523,174
MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5
1,851
set bert_score version dependency
{ "login": "pvl", "id": 3596, "node_id": "MDQ6VXNlcjM1OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvl", "html_url": "https://github.com/pvl", "followers_url": "https://api.github.com/users/pvl/followers", "following_url": "https://api.github.com/users/pvl/following{/other_user}", "gists_url": "https://api.github.com/users/pvl/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvl/subscriptions", "organizations_url": "https://api.github.com/users/pvl/orgs", "repos_url": "https://api.github.com/users/pvl/repos", "events_url": "https://api.github.com/users/pvl/events{/privacy}", "received_events_url": "https://api.github.com/users/pvl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,875,067,000
1,612,880,508,000
1,612,880,508,000
CONTRIBUTOR
null
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851", "html_url": "https://github.com/huggingface/datasets/pull/1851", "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "merged_at": 1612880508000 }
true
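A pin of this kind normally lives in `setup.py`; a sketch is shown below. The exact minimum version is an assumption for illustration only: the authoritative value is in the PR diff.

```python
# setup.py (sketch; see the PR diff for the real pinned version)
TESTS_REQUIRE = [
    "bert_score>=0.3.6",  # assumed floor: older releases break with datasets, see issue #843
    # ...
]
```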
https://api.github.com/repos/huggingface/datasets/issues/1850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1850/comments
https://api.github.com/repos/huggingface/datasets/issues/1850/events
https://github.com/huggingface/datasets/pull/1850
804,412,249
MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
1,850
Add cord 19 dataset
{ "login": "ggdupont", "id": 5583410, "node_id": "MDQ6VXNlcjU1ODM0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ggdupont", "html_url": "https://github.com/ggdupont", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "repos_url": "https://api.github.com/users/ggdupont/repos", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129", "@lhoestq FYI", "Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today", "Looks all good now ! Thanks...
1,612,866,128,000
1,612,883,786,000
1,612,883,786,000
CONTRIBUTOR
null
Initial version only reading the metadata in CSV.

### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.

### Extras:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850", "html_url": "https://github.com/huggingface/datasets/pull/1850", "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "merged_at": 1612883785000 }
true
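For readers unfamiliar with the checklist items above, here is a bare skeleton of the three methods every dataset script implements. This is a generic sketch, not the actual CORD-19 script (note the first method is named `_info` in the library, even though the checklist template writes `_infos()`):

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        # Declare the schema of the examples this script yields.
        return datasets.DatasetInfo(
            description="...",
            features=datasets.Features({"text": datasets.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        # Download the raw data and declare which splits exist.
        path = dl_manager.download_and_extract("https://example.com/metadata.csv")
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": path})]

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs matching the declared features.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```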
https://api.github.com/repos/huggingface/datasets/issues/1849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1849/comments
https://api.github.com/repos/huggingface/datasets/issues/1849/events
https://github.com/huggingface/datasets/issues/1849
804,292,971
MDU6SXNzdWU4MDQyOTI5NzE=
1,849
Add TIMIT
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[ "@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n", "Hey @vrindaprabhu - sure I'...
1,612,855,781,000
1,615,787,977,000
1,615,787,977,000
MEMBER
null
## Adding a Dataset
- **Name:** *TIMIT*
- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT
- **Data:** *https://deepai.org/dataset/timit*
- **Motivation:** Important speech dataset

If interested in tackling this issue, feel free to tag @patrickvonplaten

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1849/timeline
null
null
null
false
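The comment thread above asks how words, phonemes and speaker metadata could be arranged in the output. One hypothetical feature layout, purely for discussion (every field name here is illustrative, not the merged schema):

```python
import datasets

# Hypothetical TIMIT layout: one utterance per example, with word and
# phoneme annotations as parallel string sequences.
features = datasets.Features({
    "file": datasets.Value("string"),   # path to the audio clip
    "text": datasets.Value("string"),   # full orthographic transcription
    "words": datasets.Sequence(datasets.Value("string")),
    "phonemes": datasets.Sequence(datasets.Value("string")),
    "speaker_id": datasets.Value("string"),
    "dialect_region": datasets.Value("string"),
})
```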
https://api.github.com/repos/huggingface/datasets/issues/1848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1848/comments
https://api.github.com/repos/huggingface/datasets/issues/1848/events
https://github.com/huggingface/datasets/pull/1848
803,826,506
MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1
1,848
Refactoring: Create config module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,809,831,000
1,612,960,175,000
1,612,960,175,000
MEMBER
null
Refactor configuration settings into their own module.

This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1848/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848", "html_url": "https://github.com/huggingface/datasets/pull/1848", "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "merged_at": 1612960175000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1847/comments
https://api.github.com/repos/huggingface/datasets/issues/1847/events
https://github.com/huggingface/datasets/pull/1847
803,824,694
MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0
1,847
[Metrics] Add word error rate metric
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Feel free to merge once the CI is all green ;)" ]
1,612,809,675,000
1,612,893,201,000
1,612,893,201,000
MEMBER
null
This PR adds the word error rate (WER) metric to datasets: https://en.wikipedia.org/wiki/Word_error_rate. WER is the main metric used in ASR (automatic speech recognition).

`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1847/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847", "html_url": "https://github.com/huggingface/datasets/pull/1847", "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "merged_at": 1612893201000 }
true
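For context, `jiwer` exposes WER as a single function; a minimal usage sketch (the sentences and the resulting score are illustrative):

```python
from jiwer import wer

reference = "the cat sat on the mat"
hypothesis = "the cat sit on mat"

# WER = (substitutions + deletions + insertions) / number of reference words
error_rate = wer(reference, hypothesis)
print(error_rate)  # 2/6 ≈ 0.33: one substitution ("sit") and one deletion ("the")
```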
https://api.github.com/repos/huggingface/datasets/issues/1846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1846/comments
https://api.github.com/repos/huggingface/datasets/issues/1846/events
https://github.com/huggingface/datasets/pull/1846
803,806,380
MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy
1,846
Make DownloadManager downloaded/extracted paths accessible
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...", "There could ...
1,612,808,082,000
1,614,262,218,000
1,614,262,218,000
MEMBER
null
Make the file paths downloaded/extracted by DownloadManager accessible. Close #1831.

The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1846/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1846", "html_url": "https://github.com/huggingface/datasets/pull/1846", "diff_url": "https://github.com/huggingface/datasets/pull/1846.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1846.patch", "merged_at": 1614262218000 }
true
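A minimal sketch of the composition approach described above (class and attribute names are illustrative, not the merged code):

```python
class DownloadManager:
    def __init__(self):
        self.downloaded_paths = {}  # url -> downloaded local path
        self.extracted_paths = {}   # downloaded path -> extracted path

class DatasetBuilder:
    def download_and_prepare(self, dl_manager=None):
        # Composition: the builder keeps the manager, so callers can later
        # inspect builder.dl_manager.downloaded_paths / .extracted_paths.
        self.dl_manager = dl_manager or DownloadManager()
```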
https://api.github.com/repos/huggingface/datasets/issues/1845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1845/comments
https://api.github.com/repos/huggingface/datasets/issues/1845/events
https://github.com/huggingface/datasets/pull/1845
803,714,493
MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz
1,845
Enable logging propagation and remove logging handler
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- ...
1,612,801,333,000
1,612,880,558,000
1,612,880,557,000
MEMBER
null
We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691

But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826.

I also removed the handler that was added since, according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library):

> It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements.

It could have been useful if we wanted to have a custom formatter for the logging but I think it's more important to keep the logging as default to not interfere with the users' logging management. Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`.

cc @albertvillanova this should let you use capsys/caplog in pytest
cc @LysandreJik @sgugger if you want to do the same in `transformers`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1845", "html_url": "https://github.com/huggingface/datasets/pull/1845", "diff_url": "https://github.com/huggingface/datasets/pull/1845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1845.patch", "merged_at": 1612880557000 }
true
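The two changes described above amount to very little code; a sketch of the resulting library-side setup, following the logging how-to quoted in the PR:

```python
import logging

library_logger = logging.getLogger("datasets")
library_logger.propagate = True                   # let records reach the root logger
library_logger.addHandler(logging.NullHandler())  # the only handler a library should add
# Handler and formatter configuration is left to the application, e.g.:
# logging.basicConfig(level=logging.INFO)
```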
https://api.github.com/repos/huggingface/datasets/issues/1844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1844/comments
https://api.github.com/repos/huggingface/datasets/issues/1844/events
https://github.com/huggingface/datasets/issues/1844
803,588,125
MDU6SXNzdWU4MDM1ODgxMjU=
1,844
Update Open Subtitles corpus with original sentence IDs
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "repos_url": "https://api.github.com/users/Valahaar/repos", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles...
1,612,792,513,000
1,613,151,538,000
1,613,151,538,000
CONTRIBUTOR
null
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons:
- first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have);
- second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts.

I think I should tag @abhishekkrthakur as he's the one who added it in the first place. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1844/timeline
null
null
null
false
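To make the request concrete, one hypothetical shape for an example once the original IDs are exposed (every field name and value below is illustrative, not the merged schema):

```python
# Sketch: IDs carried alongside each bitext pair would enable both
# document-level grouping (via the media id) and cross-lingual alignment.
example = {
    "id": "0",
    "translation": {"en": "Hello.", "fr": "Bonjour."},
    "meta": {
        "media_id": 249516,                          # identifies the film/episode
        "subtitle_file_id": {"en": 123, "fr": 456},  # per-language subtitle file
        "sentence_ids": {"en": [1], "fr": [1]},      # per-language sentence indices
    },
}
```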
https://api.github.com/repos/huggingface/datasets/issues/1843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1843/comments
https://api.github.com/repos/huggingface/datasets/issues/1843/events
https://github.com/huggingface/datasets/issues/1843
803,565,393
MDU6SXNzdWU4MDM1NjUzOTM=
1,843
MustC Speech Translation
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[ "Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ", "That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `d...
1,612,790,865,000
1,621,004,014,000
null
MEMBER
null
## Adding a Dataset
- **Name:** *IWSLT19*
- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*
- **Homepage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation*
- **Data:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - all data under "Allowed Training Data" and "Development and Evaluation Data for TED/How2"
- **Motivation:** Important speech dataset

If interested in tackling this issue, feel free to tag @patrickvonplaten

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1843/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1842/comments
https://api.github.com/repos/huggingface/datasets/issues/1842/events
https://github.com/huggingface/datasets/issues/1842
803,563,149
MDU6SXNzdWU4MDM1NjMxNDk=
1,842
Add AMI Corpus
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[]
1,612,790,700,000
1,612,855,576,000
null
MEMBER
null
## Adding a Dataset
- **Name:** *AMI*
- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/
- **Data:** *http://groups.inf.ed.ac.uk/ami/download/* - Select all cases in 1) and select "Individual Headsets" & "Microphone array" for 2)
- **Motivation:** Important speech dataset

If interested in tackling this issue, feel free to tag @patrickvonplaten

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1842/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1841/comments
https://api.github.com/repos/huggingface/datasets/issues/1841/events
https://github.com/huggingface/datasets/issues/1841
803,561,123
MDU6SXNzdWU4MDM1NjExMjM=
1,841
Add ljspeech
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[]
1,612,790,546,000
1,615,787,942,000
1,615,787,942,000
MEMBER
null
## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.*
- **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/
- **Data:** *https://keithito.com/LJ-Speech-Dataset/*
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech

If interested in tackling this issue, feel free to tag @patrickvonplaten

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1841/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1840/comments
https://api.github.com/repos/huggingface/datasets/issues/1840/events
https://github.com/huggingface/datasets/issues/1840
803,560,039
MDU6SXNzdWU4MDM1NjAwMzk=
1,840
Add common voice
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[ "I have started working on adding this dataset.", "Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the downloa...
1,612,790,465,000
1,641,399,591,000
1,615,787,781,000
MEMBER
null
## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice

If interested in tackling this issue, feel free to tag @patrickvonplaten

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1840/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1839/comments
https://api.github.com/repos/huggingface/datasets/issues/1839/events
https://github.com/huggingface/datasets/issues/1839
803,559,164
MDU6SXNzdWU4MDM1NTkxNjQ=
1,839
Add Voxforge
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[]
1,612,790,396,000
1,612,790,911,000
null
MEMBER
null
## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of user-submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. The samples are split between train, validation and test so that samples from each speaker belong to exactly one split.* - **Paper:** *Homepage*: http://www.voxforge.org/ - **Data:** *http://www.voxforge.org/home/downloads* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/voxforge If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1839/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1838/comments
https://api.github.com/repos/huggingface/datasets/issues/1838/events
https://github.com/huggingface/datasets/issues/1838
803,557,521
MDU6SXNzdWU4MDM1NTc1MjE=
1,838
Add tedlium
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[ "Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0" ]
1,612,790,272,000
1,617,983,861,000
null
MEMBER
null
## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus consists of English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ and https://www.openslr.org/51/ - **Data:** http://www.openslr.org/7/ - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/tedlium If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1838/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1837/comments
https://api.github.com/repos/huggingface/datasets/issues/1837/events
https://github.com/huggingface/datasets/issues/1837
803,555,650
MDU6SXNzdWU4MDM1NTU2NTA=
1,837
Add VCTK
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[ "@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me k...
1,612,790,128,000
1,640,703,908,000
1,640,703,908,000
MEMBER
null
## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.* - **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443 - **Data:** https://datashare.ed.ac.uk/handle/10283/3443 - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1837/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1836/comments
https://api.github.com/repos/huggingface/datasets/issues/1836/events
https://github.com/huggingface/datasets/issues/1836
803,531,837
MDU6SXNzdWU4MDM1MzE4Mzc=
1,836
test.json has been removed from the limit dataset repo (breaks dataset)
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "organizations_url": "https://api.github.com/users/Paethon/orgs", "repos_url": "https://api.github.com/users/Paethon/repos", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "received_events_url": "https://api.github.com/users/Paethon/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Thanks for the heads up ! I'm opening a PR to fix that" ]
1,612,788,353,000
1,612,973,698,000
1,612,973,698,000
NONE
null
https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd8848f0f11527c77dcf168fefd2b23/data`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1836/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1835/comments
https://api.github.com/repos/huggingface/datasets/issues/1835/events
https://github.com/huggingface/datasets/issues/1835
803,524,790
MDU6SXNzdWU4MDM1MjQ3OTA=
1,835
Add CHiME4 dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[]
1,612,787,798,000
1,612,790,011,000
null
MEMBER
null
## Adding a Dataset - **Name:** CHiME4 - **Description:** CHiME4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR - **Paper:** Dataset comes from a challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results paper: - **Data:** http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/download.html - **Motivation:** So far there are very few speech datasets in `datasets`; only `librispeech_asr` so far. If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1835/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1834/comments
https://api.github.com/repos/huggingface/datasets/issues/1834/events
https://github.com/huggingface/datasets/pull/1834
803,517,094
MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4
1,834
Fixes base_url of limit dataset
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "organizations_url": "https://api.github.com/users/Paethon/orgs", "repos_url": "https://api.github.com/users/Paethon/repos", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "received_events_url": "https://api.github.com/users/Paethon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue." ]
1,612,787,195,000
1,612,788,170,000
1,612,788,170,000
NONE
null
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1834", "html_url": "https://github.com/huggingface/datasets/pull/1834", "diff_url": "https://github.com/huggingface/datasets/pull/1834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1834.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1833/comments
https://api.github.com/repos/huggingface/datasets/issues/1833/events
https://github.com/huggingface/datasets/pull/1833
803,120,978
MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx
1,833
Add OSCAR dataset card
{ "login": "pjox", "id": 635220, "node_id": "MDQ6VXNlcjYzNTIyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pjox", "html_url": "https://github.com/pjox", "followers_url": "https://api.github.com/users/pjox/followers", "following_url": "https://api.github.com/users/pjox/following{/other_user}", "gists_url": "https://api.github.com/users/pjox/gists{/gist_id}", "starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjox/subscriptions", "organizations_url": "https://api.github.com/users/pjox/orgs", "repos_url": "https://api.github.com/users/pjox/repos", "events_url": "https://api.github.com/users/pjox/events{/privacy}", "received_events_url": "https://api.github.com/users/pjox/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ", "I just merged the tables as suggested 😄 . However I noticed somet...
1,612,748,389,000
1,613,138,965,000
1,613,138,904,000
CONTRIBUTOR
null
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833", "html_url": "https://github.com/huggingface/datasets/pull/1833", "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "merged_at": 1613138904000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1832/comments
https://api.github.com/repos/huggingface/datasets/issues/1832/events
https://github.com/huggingface/datasets/issues/1832
802,880,897
MDU6SXNzdWU4MDI4ODA4OTc=
1,832
Looks like nokogumbo is up-to-date now, so this is no longer needed.
{ "login": "JimmyJim1", "id": 68724553, "node_id": "MDQ6VXNlcjY4NzI0NTUz", "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JimmyJim1", "html_url": "https://github.com/JimmyJim1", "followers_url": "https://api.github.com/users/JimmyJim1/followers", "following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}", "gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}", "starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions", "organizations_url": "https://api.github.com/users/JimmyJim1/orgs", "repos_url": "https://api.github.com/users/JimmyJim1/repos", "events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}", "received_events_url": "https://api.github.com/users/JimmyJim1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,680,727,000
1,612,805,249,000
1,612,805,249,000
NONE
null
Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1832/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1831/comments
https://api.github.com/repos/huggingface/datasets/issues/1831/events
https://github.com/huggingface/datasets/issues/1831
802,868,854
MDU6SXNzdWU4MDI4Njg4NTQ=
1,831
Some question about raw dataset download info in the project .
{ "login": "svjack", "id": 27874014, "node_id": "MDQ6VXNlcjI3ODc0MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/svjack", "html_url": "https://github.com/svjack", "followers_url": "https://api.github.com/users/svjack/followers", "following_url": "https://api.github.com/users/svjack/following{/other_user}", "gists_url": "https://api.github.com/users/svjack/gists{/gist_id}", "starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/svjack/subscriptions", "organizations_url": "https://api.github.com/users/svjack/orgs", "repos_url": "https://api.github.com/users/svjack/repos", "events_url": "https://api.github.com/users/svjack/events{/privacy}", "received_events_url": "https://api.github.com/users/svjack/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so ...
1,612,676,016,000
1,614,262,218,000
1,614,262,218,000
NONE
null
Hi, I reviewed the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py. The `_split_generators` function holds the actual logic for downloading the raw dataset with `dl_manager`, and the `Conll2003` class is used via `import_main_class` in the `load_dataset` function. My question is that, with this logic, it seems I cannot get at the raw dataset download location stored in the `downloaded_files` variable inside `_split_generators`. If someone wants to use huggingface datasets as a raw dataset downloader, how can they retrieve the raw dataset download path from the attributes of `datasets.dataset_dict.DatasetDict`?
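For reference, a minimal sketch of one possible workaround, assuming all that is needed are the locally cached copies of the raw files: instantiate a `DownloadManager` directly, outside of any dataset builder, and download the same URLs the script uses. The URL below is purely illustrative; note that a `DatasetDict` only exposes the processed Arrow files (via each split's `cache_files`), not the raw downloads.

```python
# Sketch: reuse the library's caching download logic outside of a dataset builder.
# The URL is a placeholder, not the real CoNLL-2003 source.
from datasets import DownloadManager

dl_manager = DownloadManager()
local_path = dl_manager.download("https://example.com/raw_corpus.txt")
print(local_path)  # path of the cached raw file on disk
```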
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1831/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1830/comments
https://api.github.com/repos/huggingface/datasets/issues/1830/events
https://github.com/huggingface/datasets/issues/1830
802,790,075
MDU6SXNzdWU4MDI3OTAwNzU=
1,830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
{ "login": "wumpusman", "id": 7662740, "node_id": "MDQ6VXNlcjc2NjI3NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wumpusman", "html_url": "https://github.com/wumpusman", "followers_url": "https://api.github.com/users/wumpusman/followers", "following_url": "https://api.github.com/users/wumpusman/following{/other_user}", "gists_url": "https://api.github.com/users/wumpusman/gists{/gist_id}", "starred_url": "https://api.github.com/users/wumpusman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wumpusman/subscriptions", "organizations_url": "https://api.github.com/users/wumpusman/orgs", "repos_url": "https://api.github.com/users/wumpusman/repos", "events_url": "https://api.github.com/users/wumpusman/events{/privacy}", "received_events_url": "https://api.github.com/users/wumpusman/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your `map` for the cache\r\n2. apply your function on e...
1,612,645,226,000
1,614,203,774,000
null
NONE
null
This could totally relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more), and the map function ran much slower: ``` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_unique = set(text.split(" ")) for i in words_unique: original_tokenizer.add_tokens(i) original_tokenizer.save_pretrained(path) tokenizer2 = GPT2Tokenizer.from_pretrained(os.path.join(experiment_path,experiment_name,"tokenizer_squad")) train_set_baby=Dataset.from_dict({"text":[train_set["text"][0][0:50]]}) ``` I then applied the dataset map function on a fairly small set of text: ``` %%time train_set_baby = train_set_baby.map(lambda d:tokenizer2(d["text"]),batched=True) ``` The run time for train_set_baby.map was 6 seconds, and the batch itself was 2.6 seconds **100% 1/1 [00:02<00:00, 2.60s/ba] CPU times: user 5.96 s, sys: 36 ms, total: 5.99 s Wall time: 5.99 s** In comparison, using (even after adding additional tokens): ` tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")` ``` %%time train_set_baby = train_set_baby.map(lambda d:tokenizer(d["text"]),batched=True) ``` the time is **100% 1/1 [00:00<00:00, 34.09ba/s] CPU times: user 68.1 ms, sys: 16 µs, total: 68.1 ms Wall time: 62.9 ms** It seems this might relate to the tokenizer save or load function; however, the issue appears to come up when I apply the loaded tokenizer to the map function. I should also add that playing around with the number of words I add to the tokenizer before I save it to disk and load it into memory appears to impact the time it takes to run the map function.
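A plausible explanation, offered here as an assumption rather than a confirmed diagnosis: `GPT2Tokenizer` is the slow pure-Python implementation while `GPT2TokenizerFast` is the Rust-backed one, and calling `add_tokens` once per word is slower than passing the whole list at once. A minimal sketch of the comparison; `words` and the save directory are placeholders:

```python
# Sketch: compare the slow and fast GPT-2 tokenizers and add tokens in one call.
# "my_tokenizer_dir" and `words` are placeholders, not the issue author's values.
from transformers import GPT2Tokenizer, GPT2TokenizerFast

words = ["foo", "bar", "baz"]

slow_tok = GPT2Tokenizer.from_pretrained("gpt2")      # pure-Python implementation
fast_tok = GPT2TokenizerFast.from_pretrained("gpt2")  # Rust-backed implementation
print(slow_tok.is_fast, fast_tok.is_fast)             # False, True

# Passing a list is much faster than calling add_tokens once per word.
fast_tok.add_tokens(words)
fast_tok.save_pretrained("my_tokenizer_dir")

# Reloading with the fast class keeps the speed advantage in `.map`.
reloaded = GPT2TokenizerFast.from_pretrained("my_tokenizer_dir")
```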
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1830/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1829/comments
https://api.github.com/repos/huggingface/datasets/issues/1829/events
https://github.com/huggingface/datasets/pull/1829
802,693,600
MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5
1,829
Add Tweet Eval Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,614,985,000
1,612,790,274,000
1,612,790,273,000
CONTRIBUTOR
null
Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels. 2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/mapping.txt). 3. I do not understand @abhishekkrthakur's example generator on #1407. Maybe he was trying to build up on code from some other dataset. Requesting @lhoestq to review.
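As an aside, a short sketch of how those name mappings can be recovered from the `ClassLabel` feature once the dataset is loaded, assuming a config named `emoji` as in the linked Tweet Eval repository:

```python
# Sketch: read label-name mappings from the ClassLabel feature instead of mapping.txt.
# Assumes a config named "emoji", matching the linked Tweet Eval repository.
from datasets import load_dataset

ds = load_dataset("tweet_eval", "emoji", split="train")
labels = ds.features["label"]
print(labels.names)                     # all class names
print(labels.int2str(0))                # integer id -> name
print(labels.str2int(labels.names[0]))  # name -> integer id
```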
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1829/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829", "html_url": "https://github.com/huggingface/datasets/pull/1829", "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "merged_at": 1612790273000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1828/comments
https://api.github.com/repos/huggingface/datasets/issues/1828/events
https://github.com/huggingface/datasets/pull/1828
802,449,234
MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2
1,828
Add CelebA Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification...
1,612,556,455,000
1,613,657,827,000
1,613,657,827,000
CONTRIBUTOR
null
Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1828/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1828", "html_url": "https://github.com/huggingface/datasets/pull/1828", "diff_url": "https://github.com/huggingface/datasets/pull/1828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1828.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1827/comments
https://api.github.com/repos/huggingface/datasets/issues/1827/events
https://github.com/huggingface/datasets/issues/1827
802,353,974
MDU6SXNzdWU4MDIzNTM5NzQ=
1,827
Regarding On-the-fly Data Loading
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature", "Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using t...
1,612,547,028,000
1,613,656,516,000
1,613,656,516,000
CONTRIBUTOR
null
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any given point. Thanks, Gunjan
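For context, a minimal sketch of what batch-wise access can look like, relying on the fact that `datasets` memory-maps its Arrow files, so indexing a batch does not pull the whole table into RAM; the dataset name and columns are placeholders:

```python
# Sketch: stream batches from a memory-mapped dataset with a PyTorch DataLoader.
# "imdb" and the column names are placeholders for the actual dataset at hand.
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("imdb", split="train")        # Arrow file stays memory-mapped on disk
ds.set_format(type="torch", columns=["label"])  # select the columns returned per item

loader = DataLoader(ds, batch_size=32, shuffle=True)
for batch in loader:
    pass  # only the current batch is resident in RAM
```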
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1827/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1826/comments
https://api.github.com/repos/huggingface/datasets/issues/1826/events
https://github.com/huggingface/datasets/pull/1826
802,074,744
MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2
1,826
Print error message with filename when malformed CSV
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,523,279,000
1,612,892,367,000
1,612,892,367,000
MEMBER
null
Print an error message specifying the filename when a CSV file is malformed. Close #1821
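The diff itself is not reproduced in this record, but the general pattern can be sketched as wrapping the pandas call and re-raising with the offending filename; this is an illustration of the idea, not the exact code merged in the PR:

```python
# Sketch of the pattern: surface the filename when pandas fails to parse a CSV.
# Illustrative only; not the exact implementation merged in the PR.
import pandas as pd

def read_csv_with_context(path, **read_csv_kwargs):
    try:
        return pd.read_csv(path, **read_csv_kwargs)
    except Exception as err:
        raise ValueError(f"Failed to read file '{path}' with error {type(err)}: {err}") from err
```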
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1826/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1826", "html_url": "https://github.com/huggingface/datasets/pull/1826", "diff_url": "https://github.com/huggingface/datasets/pull/1826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1826.patch", "merged_at": 1612892366000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1825/comments
https://api.github.com/repos/huggingface/datasets/issues/1825/events
https://github.com/huggingface/datasets/issues/1825
802,073,925
MDU6SXNzdWU4MDIwNzM5MjU=
1,825
Datasets library not suitable for huge text datasets.
{ "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/alexvaca0/followers", "following_url": "https://api.github.com/users/alexvaca0/following{/other_user}", "gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions", "organizations_url": "https://api.github.com/users/alexvaca0/orgs", "repos_url": "https://api.github.com/users/alexvaca0/repos", "events_url": "https://api.github.com/users/alexvaca0/events{/privacy}", "received_events_url": "https://api.github.com/users/alexvaca0/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which...
1,612,523,210,000
1,617,113,041,000
1,615,887,840,000
NONE
null
Hi, I'm trying to use the datasets library to load a 187GB dataset of pure text, with the intention of building a language model. The problem is that the 187GB grows to several TB when processed by datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really designed for datasets this big, but for fine-tuning datasets: this process alone takes so much time, usually on expensive machines (due to the need for TPUs/GPUs) that are meanwhile not being used for training. It would possibly be more efficient in such cases to tokenize each batch at training time (receive batch - tokenize batch - train with batch), so that the whole time the machine is up it's being used for training. Moreover, the pyarrow objects created from a 187GB dataset are huge: we always get OOM or "No space left on device" errors when only 10-12% of the dataset has been processed, and that part alone already occupies 2.1TB on disk, which is many times the disk usage of the pure text (and this doesn't make sense, as tokenized text should be lighter than pure text). Any suggestions?
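One commonly suggested alternative, sketched below under the assumption that tokenizing at training time is acceptable: skip the `.map` pre-tokenization entirely and tokenize each batch inside a `DataLoader` collate function, so no tokenized copy is ever written to disk. File paths and hyperparameters are placeholders:

```python
# Sketch: tokenize on the fly in the collate function instead of pre-tokenizing with .map.
# "corpus.txt" and the hyperparameters are placeholders.
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

ds = load_dataset("text", data_files={"train": ["corpus.txt"]})["train"]

def collate(examples):
    texts = [ex["text"] for ex in examples]
    return tokenizer(texts, truncation=True, max_length=512,
                     padding="max_length", return_tensors="pt")

loader = DataLoader(ds, batch_size=8, shuffle=True, collate_fn=collate)
for batch in loader:
    pass  # train step here; tokenization happens batch by batch
```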
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1825/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1824/comments
https://api.github.com/repos/huggingface/datasets/issues/1824/events
https://github.com/huggingface/datasets/pull/1824
802,048,281
MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3
1,824
Add OSCAR dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:", "Next week !", "Closing in favor of #1833" ]
1,612,521,026,000
1,620,239,054,000
1,612,783,833,000
MEMBER
null
I started adding the dataset card for OSCAR ! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular, the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB. Since the Data Instances section is very long, the user has to click to expand the info. I was able to generate it thanks to the tools made by @madlag and @yjernite :D Cc @pjox could you help me with the other sections ? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1824/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1824", "html_url": "https://github.com/huggingface/datasets/pull/1824", "diff_url": "https://github.com/huggingface/datasets/pull/1824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1824.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1823/comments
https://api.github.com/repos/huggingface/datasets/issues/1823/events
https://github.com/huggingface/datasets/pull/1823
802,042,181
MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx
1,823
Add FewRel Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?", "Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What d...
1,612,520,523,000
1,614,599,780,000
1,614,594,099,000
CONTRIBUTOR
null
Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `"relation"` in the dataset. Additionally, for `pubmed_unsupervised`, I kept `"relation":""` in the dictionary. Please recommend better alternatives, if any. Thanks, Gunjan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1823/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1823", "html_url": "https://github.com/huggingface/datasets/pull/1823", "diff_url": "https://github.com/huggingface/datasets/pull/1823.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1823.patch", "merged_at": 1614594099000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1822/comments
https://api.github.com/repos/huggingface/datasets/issues/1822/events
https://github.com/huggingface/datasets/pull/1822
802,003,835
MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz
1,822
Add Hindi Discourse Analysis Natural Language Inference Dataset
{ "login": "avinsit123", "id": 33565881, "node_id": "MDQ6VXNlcjMzNTY1ODgx", "avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinsit123", "html_url": "https://github.com/avinsit123", "followers_url": "https://api.github.com/users/avinsit123/followers", "following_url": "https://api.github.com/users/avinsit123/following{/other_user}", "gists_url": "https://api.github.com/users/avinsit123/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinsit123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinsit123/subscriptions", "organizations_url": "https://api.github.com/users/avinsit123/orgs", "repos_url": "https://api.github.com/users/avinsit123/repos", "events_url": "https://api.github.com/users/avinsit123/events{/privacy}", "received_events_url": "https://api.github.com/users/avinsit123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Could you also run `make style` to fix the CI check on code formatting ?", "@lhoestq completed and resolved all comments." ]
1,612,517,454,000
1,613,383,059,000
1,613,383,059,000
CONTRIBUTOR
null
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - HomePage : https://github.com/midas-research/hindi-nli-data - Paper : https://www.aclweb.org/anthology/2020.aacl-main.71 - Point of Contact : https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in the Hindi language. The Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis are written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - Entailed means that the hypothesis can be inferred from the premise, and not-entailed means that it cannot. - The dataset can be used to train models for Natural Language Inference tasks in the Hindi language. ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages - Dataset is in Hindi ## Dataset Structure - Data is structured in TSV format. - train, test and dev splits are in separate files ### Data Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1} ``` ### Data Fields - Each row contains 4 columns - premise, hypothesis, label and topic. ### Data Splits - Train : 31892 - Valid : 9460 - Test : 9970 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems - In this recasting process, we build template hypotheses for each class in the label taxonomy - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to the paper https://www.aclweb.org/anthology/2020.aacl-main.71 ### Source Data The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - Initial data was collected by members of MIDAS Lab from Hindi websites. They crowd-sourced the data annotation process, selected two random stories from our corpus, and had the three annotators work on them independently and classify each sentence based on the discourse mode.
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ - The discourse is further classified into "Argumentative", "Descriptive", "Dialogic", "Informative" and "Narrative" - 5 classes. #### Who are the source language producers? Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ ### Annotations #### Annotation process The annotation process has been described in the Dataset Creation section. #### Who are the annotators? Annotation is done automatically by the machine via the corresponding recasting process. ### Personal and Sensitive Information No personal and sensitive information is mentioned in the dataset. ## Considerations for Using the Data Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases No known biases exist in the dataset. Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations. The size of the data may not be enough to train large models ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo https://github.com/midas-research/hindi-nli-data that: - This corpus can be used freely for research purposes. - The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Please contact the authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. 
Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ``` ### Contributions Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
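For quick reference, a minimal loading sketch. The Hub identifier `hda_nli_hindi` is an assumption, not stated in this card; check the Hub for the actual name.

```python
# Sketch only: "hda_nli_hindi" is a hypothetical dataset identifier.
from datasets import load_dataset

dataset = load_dataset("hda_nli_hindi")
# Each example has the four fields described above.
print(dataset["train"][0])  # {'premise': ..., 'hypothesis': ..., 'label': ..., 'topic': ...}
```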
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1822/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1822", "html_url": "https://github.com/huggingface/datasets/pull/1822", "diff_url": "https://github.com/huggingface/datasets/pull/1822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1822.patch", "merged_at": 1613383059000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1821/comments
https://api.github.com/repos/huggingface/datasets/issues/1821/events
https://github.com/huggingface/datasets/issues/1821
801,747,647
MDU6SXNzdWU4MDE3NDc2NDc=
1,821
Provide better exception message when one of many files results in an exception
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nOn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pand...
1,612,486,143,000
1,612,892,367,000
1,612,892,367,000
NONE
null
I find that when I process many files, e.g.

```
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
```

I sometimes encounter an error due to one of the files being malformed (i.e. no data, or a comma in a field that isn't quoted, etc.). For example, this is the tail of an exception which I suspect is due to a stray comma.

> File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read
> File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory
> File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows
> File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows
> File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error
> pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 559, saw 3

It would be nice if the exception trace contained the name of the file being processed (I have 250 separate files!).
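As a stopgap until the loader surfaces the filename, a small sketch that pre-parses each file with pandas (which the `csv` loader uses under the hood) so the offending file is known before calling `load_dataset`:

```python
# Pre-flight check: parse each CSV individually so a ParserError can be
# attributed to a specific file. Glob patterns mirror the snippet above.
import glob
import pandas as pd

for path in glob.glob("train*.csv") + glob.glob("validation*.csv"):
    try:
        pd.read_csv(path)
    except Exception as e:  # e.g. pandas.errors.ParserError
        print(f"{path}: {e}")
```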
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1821/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1820/comments
https://api.github.com/repos/huggingface/datasets/issues/1820/events
https://github.com/huggingface/datasets/pull/1820
801,529,936
MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1
1,820
Add metrics usage examples and tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,463,030,000
1,612,533,601,000
1,612,533,600,000
MEMBER
null
All metrics finally have usage examples and proper fast + slow tests :) I added usage examples for every metric, and I use doctest to make sure they all work as expected. For "slow" metrics such as bert_score or bleurt, which require downloading and running a transformer model, the download + forward pass are only done in the slow test. In the fast test, on the other hand, the download + forward pass are monkey-patched. Metrics that need to be installed from GitHub are not added to setup.py because that prevents uploading the `datasets` package to PyPI. An additional-test-requirements.txt file is used instead. This file also includes `comet` so as not to have to resolve its *impossible* dependencies. Also, `comet` is not tested on Windows because one of its dependencies (fairseq) can't be installed in the CI for some reason.
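For illustration, a usage example in the spirit of the ones added here might look like the following sketch (using the `accuracy` metric; the exact doctest wording in the PR may differ):

```python
# Sketch of a metric usage example; API as in datasets 1.x.
from datasets import load_metric

metric = load_metric("accuracy")
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])
print(metric.compute())  # e.g. {'accuracy': 0.666...}
```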
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1820", "html_url": "https://github.com/huggingface/datasets/pull/1820", "diff_url": "https://github.com/huggingface/datasets/pull/1820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1820.patch", "merged_at": 1612533600000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1819/comments
https://api.github.com/repos/huggingface/datasets/issues/1819/events
https://github.com/huggingface/datasets/pull/1819
801,448,670
MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2
1,819
Fixed spelling `S3Fileystem` to `S3FileSystem`
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,612,456,606,000
1,612,457,547,000
1,612,457,546,000
MEMBER
null
Fixed documentation spelling errors. Wrong: `S3Fileystem`. Right: `S3FileSystem`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819", "html_url": "https://github.com/huggingface/datasets/pull/1819", "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "merged_at": 1612457546000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1818/comments
https://api.github.com/repos/huggingface/datasets/issues/1818/events
https://github.com/huggingface/datasets/issues/1818
800,958,776
MDU6SXNzdWU4MDA5NTg3NzY=
1,818
Loading local dataset raise requests.exceptions.ConnectTimeout
{ "login": "Alxe1", "id": 15032072, "node_id": "MDQ6VXNlcjE1MDMyMDcy", "avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alxe1", "html_url": "https://github.com/Alxe1", "followers_url": "https://api.github.com/users/Alxe1/followers", "following_url": "https://api.github.com/users/Alxe1/following{/other_user}", "gists_url": "https://api.github.com/users/Alxe1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Alxe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alxe1/subscriptions", "organizations_url": "https://api.github.com/users/Alxe1/orgs", "repos_url": "https://api.github.com/users/Alxe1/repos", "events_url": "https://api.github.com/users/Alxe1/events{/privacy}", "received_events_url": "https://api.github.com/users/Alxe1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts).\r\n\r\nThis should be fixed on master now. Feel free to install `datasets` from source to try it o...
1,612,418,123,000
1,612,531,415,000
null
NONE
null
Loading a local dataset:

```
dataset = load_dataset('json', data_files=["../../data/json.json"])
train = dataset["train"]
print(train.features)
train1 = train.map(lambda x: {"labels": 1})
print(train1[:2])
```

but it raised requests.exceptions.ConnectTimeout:

```
/Users/littlely/myvirtual/tf2/bin/python3.7 /Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py
Traceback (most recent call last):
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 84, in create_connection
    raise err
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
    chunked=chunked,
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 381, in _make_request
    self._validate_conn(conn)
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
    conn.connect()
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 309, in connect
    conn = self._new_conn()
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 167, in _new_conn
    % (self.host, self.timeout),
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py", line 12, in <module>
    dataset = load_dataset('json', data_files=["../../data/json.json"])
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 263, in prepare_module
    head_hf_s3(path, filename=name, dataset=dataset, max_retries=download_config.max_retries)
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 232, in head_hf_s3
    max_retries=max_retries,
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 523, in http_head
    max_retries=max_retries,
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 458, in _request_with_retry
    raise err
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 454, in _request_with_retry
    response = requests.request(verb.upper(), url, **params)
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 504, in send
    raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))

Process finished with exit code 1
```

Why does it try to connect to a remote URL when I load a local dataset, and how can I fix it?
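Until the fix lands, one possible network-free workaround is to build the dataset directly from pandas. This is only a sketch, and it assumes the file is in a format `pandas.read_json` understands (here, JSON Lines):

```python
# Workaround sketch: avoids the remote script lookup entirely.
import pandas as pd
from datasets import Dataset

df = pd.read_json("../../data/json.json", lines=True)  # assumes JSON Lines
dataset = Dataset.from_pandas(df)
print(dataset.features)
```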
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1818/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1817/comments
https://api.github.com/repos/huggingface/datasets/issues/1817/events
https://github.com/huggingface/datasets/issues/1817
800,870,652
MDU6SXNzdWU4MDA4NzA2NTI=
1,817
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500
{ "login": "LuCeHe", "id": 9610770, "node_id": "MDQ6VXNlcjk2MTA3NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LuCeHe", "html_url": "https://github.com/LuCeHe", "followers_url": "https://api.github.com/users/LuCeHe/followers", "following_url": "https://api.github.com/users/LuCeHe/following{/other_user}", "gists_url": "https://api.github.com/users/LuCeHe/gists{/gist_id}", "starred_url": "https://api.github.com/users/LuCeHe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LuCeHe/subscriptions", "organizations_url": "https://api.github.com/users/LuCeHe/orgs", "repos_url": "https://api.github.com/users/LuCeHe/repos", "events_url": "https://api.github.com/users/LuCeHe/events{/privacy}", "received_events_url": "https://api.github.com/users/LuCeHe/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi !\r\nThe error you have is due to the `input_ids` column not having the same number of examples as the other columns.\r\nIndeed you're concatenating the `input_ids` at this line:\r\n\r\nhttps://github.com/LuCeHe/GenericTools/blob/431835d8e13ec24dceb5ee4dc4ae58f0e873b091/KerasTools/lm_preprocessing.py#L134\r\n\r...
1,612,405,823,000
1,612,706,664,000
null
NONE
null
I am trying to preprocess any dataset in this package with the GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials, and here you can find the script that is failing right at the end: https://github.com/LuCeHe/GenericTools/blob/master/KerasTools/lm_preprocessing.py In the last iteration of the last dset.map, it gives the error that I copied in the title. Another concern I have: if I leave batch_size set to 1000 in the last .map, I'm afraid it's going to lose most of the text, so I'm considering setting both writer_batch_size and batch_size to 300K, but I'm not sure that's the best way to go. Can you help me? Thanks!
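The usual fix for this error is to make the mapped function return the same number of rows in every column, as in the standard chunking pattern from language-model preprocessing. A sketch follows; `block_size` and the `input_ids` field name are illustrative, not taken from the linked script:

```python
block_size = 1024

def group_texts(examples):
    # Concatenate all token lists in the batch, then split into fixed-size
    # blocks; every returned column has the same (new) number of rows, which
    # is what pyarrow requires.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

# Usage sketch:
# dset = dset.map(group_texts, batched=True)
```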
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1817/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1816/comments
https://api.github.com/repos/huggingface/datasets/issues/1816/events
https://github.com/huggingface/datasets/pull/1816
800,660,995
MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx
1,816
Doc2dial rc update to latest version
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "- update data loader and readme for latest version 1.0.1" ]
1,612,382,934,000
1,613,402,124,000
1,613,401,473,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1816", "html_url": "https://github.com/huggingface/datasets/pull/1816", "diff_url": "https://github.com/huggingface/datasets/pull/1816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1816.patch", "merged_at": 1613401473000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1815/comments
https://api.github.com/repos/huggingface/datasets/issues/1815/events
https://github.com/huggingface/datasets/pull/1815
800,610,017
MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1
1,815
Add CCAligned Multilingual Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For example the [bible_para](https://github.com/huggi...
1,612,378,792,000
1,614,601,983,000
1,614,594,981,000
CONTRIBUTOR
null
Hello, I'm trying to add the [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756. This dataset has two types - Document-Pairs and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to download one particular language and not all. To provide this feature, `load_dataset`'s `**config_kwargs` should allow arbitrary keyword args, in this case `language_code`. This will be needed before the dataset is downloaded and extracted. I'm expecting the usage to be something like `load_dataset('ccaligned_multilingual','documents',language_code='en_XX-af_ZA')`. Of course, at a later stage we can provide just two-character language codes. This also has an issue where one language has multiple files (`my_MM` and `my_MM_zaw` on the link), but before that the required functionality must be added to `load_dataset`. It would be great if someone could either tell me an alternative way to do this, or point me to where changes need to be made, if any, apart from the `BuilderConfig` definition (see the sketch below). Additionally, I believe the tests will also have to be modified if this change is made, since it would not be possible to test for arbitrary keyword arguments. A decent way to go about this would be to provide all the options in a list/dictionary for `language_code` and use that to test the arguments. In essence, this is similar to the pre-trained checkpoint dictionary in `transformers`. That means writing dataset-specific tests, or adding something new to the dataset generation script to make it easier for everyone to add keyword arguments without having to worry about the tests. Thanks, Gunjan Requesting @lhoestq / @yjernite to review.
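Following the `bible_para`-style pattern mentioned in review, a minimal sketch of a `BuilderConfig` subclass that accepts `language_code`; the class and field names are illustrative, not the final loader code:

```python
# Sketch: a config subclass lets extra kwargs flow through load_dataset.
import datasets

class CCAlignedConfig(datasets.BuilderConfig):
    """BuilderConfig for CCAligned; `language_code` names the pair to fetch."""

    def __init__(self, *, language_code=None, **kwargs):
        super().__init__(**kwargs)
        self.language_code = language_code

# Usage sketch:
# load_dataset("ccaligned_multilingual", "documents", language_code="en_XX-af_ZA")
```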
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1815", "html_url": "https://github.com/huggingface/datasets/pull/1815", "diff_url": "https://github.com/huggingface/datasets/pull/1815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1815.patch", "merged_at": 1614594981000 }
true