| Unnamed: 0 | url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4,600 | https://api.github.com/repos/huggingface/datasets/issues/292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/292/comments | https://api.github.com/repos/huggingface/datasets/issues/292/events | https://github.com/huggingface/datasets/pull/292 | 642,897,797 | MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2 | 292 | Update metadata for x_stance dataset | {'login': 'jvamvas', 'id': 5830820, 'node_id': 'MDQ6VXNlcjU4MzA4MjA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/5830820?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jvamvas', 'html_url': 'https://github.com/jvamvas', 'followers_url': 'https://api.github.com/users/jvamvas/followers', 'foll... | [] | closed | false | null | [] | null | ['Great! Thanks @jvamvas for these updates.\r\n'
'I have fixed a warning. The remaining test failure is due to an unrelated dataset.'
'We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?'] | 2020-06-22 09:13:26 | 2020-06-23 08:07:24 | 2020-06-23 08:07:24 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/292', 'html_url': 'https://github.com/huggingface/datasets/pull/292', 'diff_url': 'https://github.com/huggingface/datasets/pull/292.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/292.patch', 'merged_at': datetime.datetime(2020, 6, 23, 8... | Thank you for featuring the x_stance dataset in your library. This PR updates some metadata:
- Citation: Replace preprint with proceedings
- URL: Use a URL with long-term availability
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/292/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/292/timeline | null | null | true |
4,601 | https://api.github.com/repos/huggingface/datasets/issues/291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/291/comments | https://api.github.com/repos/huggingface/datasets/issues/291/events | https://github.com/huggingface/datasets/pull/291 | 642,688,450 | MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy | 291 | break statement not required | {'login': 'mayurnewase', 'id': 12967587, 'node_id': 'MDQ6VXNlcjEyOTY3NTg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/12967587?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mayurnewase', 'html_url': 'https://github.com/mayurnewase', 'followers_url': 'https://api.github.com/users/mayurnewase... | [] | closed | false | null | [] | null | ['I guess,test failing due to connection error?'
'We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?'
"If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r\nI gue... | 2020-06-22 01:40:55 | 2020-06-23 17:57:58 | 2020-06-23 09:37:02 | NONE | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/291', 'html_url': 'https://github.com/huggingface/datasets/pull/291', 'diff_url': 'https://github.com/huggingface/datasets/pull/291.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/291.patch', 'merged_at': None} | null | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/291/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/291/timeline | null | null | true |
4,602 | https://api.github.com/repos/huggingface/datasets/issues/290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/290/comments | https://api.github.com/repos/huggingface/datasets/issues/290/events | https://github.com/huggingface/datasets/issues/290 | 641,978,286 | MDU6SXNzdWU2NDE5NzgyODY= | 290 | ConnectionError - Eli5 dataset download | {'login': 'JovanNj', 'id': 8490096, 'node_id': 'MDQ6VXNlcjg0OTAwOTY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8490096?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/JovanNj', 'html_url': 'https://github.com/JovanNj', 'followers_url': 'https://api.github.com/users/JovanNj/followers', 'foll... | [] | closed | false | null | [] | null | ["It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue."
'It works now, thanks for prompt help!'] | 2020-06-19 13:40:33 | 2020-06-20 13:22:24 | 2020-06-20 13:22:24 | NONE | null | null | null | Hi, I have a problem with downloading Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow
I would appreciate if you could help me with this issue. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/290/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/290/timeline | null | completed | false |
4,603 | https://api.github.com/repos/huggingface/datasets/issues/289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/289/comments | https://api.github.com/repos/huggingface/datasets/issues/289/events | https://github.com/huggingface/datasets/pull/289 | 641,934,194 | MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3 | 289 | update xsum | {'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/maria... | [] | closed | false | null | [] | null | ['Looks cool!\r\n@mariamabarham can you add a detailed description here what exactly is changed and how the user can load xsum now?'
'And a rebase should solve the conflicts'
'This is a super useful PR :-) @sshleifer - maybe you can take a look at the updated version of xsum if you can use it for your use case. Now, ... | 2020-06-19 12:28:32 | 2020-06-22 13:27:26 | 2020-06-22 07:20:07 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/289', 'html_url': 'https://github.com/huggingface/datasets/pull/289', 'diff_url': 'https://github.com/huggingface/datasets/pull/289.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/289.patch', 'merged_at': datetime.datetime(2020, 6, 22, 7... | This PR makes the following update to the xsum dataset:
- Manual download is not required anymore
- dataset can be loaded as follow: `nlp.load_dataset('xsum')`
**Important**
Instead of using on outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/289/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/289/timeline | null | null | true |
4,604 | https://api.github.com/repos/huggingface/datasets/issues/288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/288/comments | https://api.github.com/repos/huggingface/datasets/issues/288/events | https://github.com/huggingface/datasets/issues/288 | 641,888,610 | MDU6SXNzdWU2NDE4ODg2MTA= | 288 | Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill' | {'login': 'wutong8023', 'id': 14964542, 'node_id': 'MDQ6VXNlcjE0OTY0NTQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/14964542?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/wutong8023', 'html_url': 'https://github.com/wutong8023', 'followers_url': 'https://api.github.com/users/wutong8023/fol... | [] | closed | false | null | [] | null | ['It looks like the bug comes from `dill`. Which version of `dill` are you using ?'
'Thank you. It is version 0.2.6, which version is better?'
'0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?'
'Thanks guys! I upgraded dill and it works.' 'Awesome'] | 2020-06-19 11:01:22 | 2020-06-21 09:05:11 | 2020-06-21 09:05:11 | NONE | null | null | null | /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/288/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/288/timeline | null | completed | false |
4,605 | https://api.github.com/repos/huggingface/datasets/issues/287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/287/comments | https://api.github.com/repos/huggingface/datasets/issues/287/events | https://github.com/huggingface/datasets/pull/287 | 641,800,227 | MDExOlB1bGxSZXF1ZXN0NDM2OTY0NTg0 | 287 | fix squad_v2 metric | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-19 08:24:46 | 2020-06-19 08:33:43 | 2020-06-19 08:33:41 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/287', 'html_url': 'https://github.com/huggingface/datasets/pull/287', 'diff_url': 'https://github.com/huggingface/datasets/pull/287.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/287.patch', 'merged_at': datetime.datetime(2020, 6, 19, 8... | Fix #280
The imports were wrong | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/287/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/287/timeline | null | null | true |
4,606 | https://api.github.com/repos/huggingface/datasets/issues/286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/286/comments | https://api.github.com/repos/huggingface/datasets/issues/286/events | https://github.com/huggingface/datasets/pull/286 | 641,585,758 | MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4 | 286 | Add ANLI dataset. | {'login': 'easonnie', 'id': 11016329, 'node_id': 'MDQ6VXNlcjExMDE2MzI5', 'avatar_url': 'https://avatars.githubusercontent.com/u/11016329?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/easonnie', 'html_url': 'https://github.com/easonnie', 'followers_url': 'https://api.github.com/users/easonnie/followers',... | [] | closed | false | null | [] | null | ["Awesome!! Thanks @easonnie.\r\nLet's wait for additional reviews maybe from @lhoestq @patrickvonplaten @jplu"] | 2020-06-18 22:27:30 | 2020-06-22 12:23:27 | 2020-06-22 12:23:27 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/286', 'html_url': 'https://github.com/huggingface/datasets/pull/286', 'diff_url': 'https://github.com/huggingface/datasets/pull/286.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/286.patch', 'merged_at': datetime.datetime(2020, 6, 22, 1... | I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and push the code for ANLI. Please let me know if there are any errors. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/286/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/286/timeline | null | null | true |
4,607 | https://api.github.com/repos/huggingface/datasets/issues/285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/285/comments | https://api.github.com/repos/huggingface/datasets/issues/285/events | https://github.com/huggingface/datasets/pull/285 | 641,360,702 | MDExOlB1bGxSZXF1ZXN0NDM2NjAyMjk4 | 285 | Consistent formatting of citations | {'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/maria... | [] | closed | false | null | [] | null | ['Circle CI shuold be green :-) '] | 2020-06-18 16:25:23 | 2020-06-22 08:09:25 | 2020-06-22 08:09:24 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/285', 'html_url': 'https://github.com/huggingface/datasets/pull/285', 'diff_url': 'https://github.com/huggingface/datasets/pull/285.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/285.patch', 'merged_at': datetime.datetime(2020, 6, 22, 8... | #283 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/285/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/285/timeline | null | null | true |
4,608 | https://api.github.com/repos/huggingface/datasets/issues/284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/284/comments | https://api.github.com/repos/huggingface/datasets/issues/284/events | https://github.com/huggingface/datasets/pull/284 | 641,337,217 | MDExOlB1bGxSZXF1ZXN0NDM2NTgxODQ2 | 284 | Fix manual download instructions | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | ['Verified that this works, thanks!'
"But I get\r\n```python\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py\r\n```\r\nWhen I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n\r\n\r\nBoth machines can ru... | 2020-06-18 15:59:57 | 2020-06-19 08:24:21 | 2020-06-19 08:24:19 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/284', 'html_url': 'https://github.com/huggingface/datasets/pull/284', 'diff_url': 'https://github.com/huggingface/datasets/pull/284.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/284.patch', 'merged_at': datetime.datetime(2020, 6, 19, 8... | This PR replaces the static `DatasetBulider` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`.
Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs.
After some brainstorming with @mariamabarham and @lhoestq... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/284/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/284/timeline | null | null | true |
4,609 | https://api.github.com/repos/huggingface/datasets/issues/283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/283/comments | https://api.github.com/repos/huggingface/datasets/issues/283/events | https://github.com/huggingface/datasets/issues/283 | 641,270,439 | MDU6SXNzdWU2NDEyNzA0Mzk= | 283 | Consistent formatting of citations | {'login': 'srush', 'id': 35882, 'node_id': 'MDQ6VXNlcjM1ODgy', 'avatar_url': 'https://avatars.githubusercontent.com/u/35882?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/srush', 'html_url': 'https://github.com/srush', 'followers_url': 'https://api.github.com/users/srush/followers', 'following_url': 'htt... | [] | closed | false | {'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/maria... | [{'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/mari... | null | [] | 2020-06-18 14:48:45 | 2020-06-22 17:30:46 | 2020-06-22 17:30:46 | CONTRIBUTOR | null | null | null | The citations are all of a different format, some have "```" and have text inside, others are proper bibtex.
Can we make it so that they all are proper citations, i.e. parse by the bibtex spec:
https://bibtexparser.readthedocs.io/en/master/ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/283/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/283/timeline | null | completed | false |
4,610 | https://api.github.com/repos/huggingface/datasets/issues/282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/282/comments | https://api.github.com/repos/huggingface/datasets/issues/282/events | https://github.com/huggingface/datasets/pull/282 | 641,217,759 | MDExOlB1bGxSZXF1ZXN0NDM2NDgxNzMy | 282 | Update dataset_info from gcs | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-18 13:41:15 | 2020-06-18 16:24:52 | 2020-06-18 16:24:51 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/282', 'html_url': 'https://github.com/huggingface/datasets/pull/282', 'diff_url': 'https://github.com/huggingface/datasets/pull/282.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/282.patch', 'merged_at': datetime.datetime(2020, 6, 18, 1... | Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contain the info for each config). Indeed local fi... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/282/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/282/timeline | null | null | true |
4,611 | https://api.github.com/repos/huggingface/datasets/issues/281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/281/comments | https://api.github.com/repos/huggingface/datasets/issues/281/events | https://github.com/huggingface/datasets/issues/281 | 641,067,856 | MDU6SXNzdWU2NDEwNjc4NTY= | 281 | Private/sensitive data | {'login': 'MFreidank', 'id': 6368040, 'node_id': 'MDQ6VXNlcjYzNjgwNDA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6368040?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/MFreidank', 'html_url': 'https://github.com/MFreidank', 'followers_url': 'https://api.github.com/users/MFreidank/followers... | [] | closed | false | null | [] | null | ["Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road."
'Hi @MFreidank, it is possible to load a dataset fr... | 2020-06-18 09:47:27 | 2020-06-20 13:15:12 | 2020-06-20 13:15:12 | NONE | null | null | null | Hi all,
Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch.
Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information.
Is there support/a plan to support such data with NLP, e.g. by readin... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/281/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/281/timeline | null | completed | false |
4,612 | https://api.github.com/repos/huggingface/datasets/issues/280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/280/comments | https://api.github.com/repos/huggingface/datasets/issues/280/events | https://github.com/huggingface/datasets/issues/280 | 640,677,615 | MDU6SXNzdWU2NDA2Nzc2MTU= | 280 | Error with SquadV2 Metrics | {'login': 'avinregmi', 'id': 32203792, 'node_id': 'MDQ6VXNlcjMyMjAzNzky', 'avatar_url': 'https://avatars.githubusercontent.com/u/32203792?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/avinregmi', 'html_url': 'https://github.com/avinregmi', 'followers_url': 'https://api.github.com/users/avinregmi/followe... | [] | closed | false | null | [] | null | [] | 2020-06-17 19:10:54 | 2020-06-19 08:33:41 | 2020-06-19 08:33:41 | NONE | null | null | null | I can't seem to import squad v2 metrics.
**squad_metric = nlp.load_metric('squad_v2')**
**This throws me an error.:**
```
ImportError Traceback (most recent call last)
<ipython-input-8-170b6a170555> in <module>
----> 1 squad_metric = nlp.load_metric('squad_v2')
~/env/lib6... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/280/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/280/timeline | null | completed | false |
4,613 | https://api.github.com/repos/huggingface/datasets/issues/279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/279/comments | https://api.github.com/repos/huggingface/datasets/issues/279/events | https://github.com/huggingface/datasets/issues/279 | 640,611,692 | MDU6SXNzdWU2NDA2MTE2OTI= | 279 | Dataset Preprocessing Cache with .map() function not working as expected | {'login': 'sarahwie', 'id': 8027676, 'node_id': 'MDQ6VXNlcjgwMjc2NzY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8027676?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sarahwie', 'html_url': 'https://github.com/sarahwie', 'followers_url': 'https://api.github.com/users/sarahwie/followers', '... | [] | closed | false | null | [] | null | ["When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re-pr... | 2020-06-17 17:17:21 | 2021-07-06 21:43:28 | 2021-04-18 23:43:49 | NONE | null | null | null | I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system.
Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I ... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/279/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/279/timeline | null | completed | false |
4,614 | https://api.github.com/repos/huggingface/datasets/issues/278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/278/comments | https://api.github.com/repos/huggingface/datasets/issues/278/events | https://github.com/huggingface/datasets/issues/278 | 640,518,917 | MDU6SXNzdWU2NDA1MTg5MTc= | 278 | MemoryError when loading German Wikipedia | {'login': 'gregburman', 'id': 4698028, 'node_id': 'MDQ6VXNlcjQ2OTgwMjg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/4698028?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/gregburman', 'html_url': 'https://github.com/gregburman', 'followers_url': 'https://api.github.com/users/gregburman/follo... | [] | closed | false | null | [] | null | ['Hi !\r\n\r\nAs you noticed, "big" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don\'t have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download and ... | 2020-06-17 15:06:21 | 2020-06-19 12:53:02 | 2020-06-19 12:53:02 | NONE | null | null | null | Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :)
I'm trying to download the German Wikipedia dataset as follows:
```
wiki = nlp.load_dataset("wikip... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/278/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/278/timeline | null | completed | false |
4,615 | https://api.github.com/repos/huggingface/datasets/issues/277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/277/comments | https://api.github.com/repos/huggingface/datasets/issues/277/events | https://github.com/huggingface/datasets/issues/277 | 640,163,053 | MDU6SXNzdWU2NDAxNjMwNTM= | 277 | Empty samples in glue/qqp | {'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richardd... | [] | closed | false | null | [] | null | ['We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?'
"Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. "] | 2020-06-17 05:54:52 | 2020-06-21 00:21:45 | 2020-06-21 00:21:45 | CONTRIBUTOR | null | null | null | ```
qqp = nlp.load_dataset('glue', 'qqp')
print(qqp['train'][310121])
print(qqp['train'][362225])
```
```
{'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137}
{'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246}
```
Notice that que... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/277/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/277/timeline | null | completed | false |
4,616 | https://api.github.com/repos/huggingface/datasets/issues/276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/276/comments | https://api.github.com/repos/huggingface/datasets/issues/276/events | https://github.com/huggingface/datasets/pull/276 | 639,490,858 | MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5 | 276 | Fix metric compute (original_instructions missing) | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | ['Awesome! This is working now:\r\n\r\n```python\r\nimport nlp \r\nseqeval = nlp.load_metric("seqeval") \r\ny_true = [[\'O\', \'O\', \'O\', \'B-MISC\', \'I-MISC\', \'I-MISC\', \'O\'], [\'B-PER\', \'I-PER\', \'O\']] \r\ny_pred = [[\'O\', \'O\', \'B-MISC\', \'I-MISC\', \'I-MISC\', \'I-MISC\', \'O\'], [\'B-PER\', \'I-PER\... | 2020-06-16 08:52:01 | 2020-06-18 07:41:45 | 2020-06-18 07:41:44 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/276', 'html_url': 'https://github.com/huggingface/datasets/pull/276', 'diff_url': 'https://github.com/huggingface/datasets/pull/276.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/276.patch', 'merged_at': datetime.datetime(2020, 6, 18, 7... | When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset.
However metrics load data the same way but don't need instructions (we use one single file).
In this PR I just make `original_instructions` optional when reading files to load a `Datas... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/276/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/276/timeline | null | null | true |
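The fix described in PR #276 above — making `original_instructions` optional so that metrics, which read one single file, never have to supply it — can be sketched in plain Python. The function and field names below are illustrative placeholders, not the library's actual internals:

```python
def read_files(files, original_instructions=None):
    """Load records from one or more files.

    `original_instructions` is only needed when a dataset was stored
    together with split instructions; a metric reads a single file and
    simply omits the argument instead of crashing with a TypeError.
    """
    records = []
    for f in files:
        records.extend(f["rows"])
    if original_instructions is not None:
        # A dataset re-applies the instructions it was saved with.
        records = records[: original_instructions["take"]]
    return records

# Metric-style call: one file, no instructions.
predictions_file = [{"rows": [0, 1, 0, 1]}]
print(read_files(predictions_file))              # -> [0, 1, 0, 1]
# Dataset-style call: instructions are honored when given.
print(read_files(predictions_file, {"take": 2}))  # -> [0, 1]
```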
4,617 | https://api.github.com/repos/huggingface/datasets/issues/275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/275/comments | https://api.github.com/repos/huggingface/datasets/issues/275/events | https://github.com/huggingface/datasets/issues/275 | 639,439,052 | MDU6SXNzdWU2Mzk0MzkwNTI= | 275 | NonMatchingChecksumError when loading pubmed dataset | {'login': 'DavideStenner', 'id': 48441753, 'node_id': 'MDQ6VXNlcjQ4NDQxNzUz', 'avatar_url': 'https://avatars.githubusercontent.com/u/48441753?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/DavideStenner', 'html_url': 'https://github.com/DavideStenner', 'followers_url': 'https://api.github.com/users/David... | [{'id': 2067388877, 'node_id': 'MDU6TGFiZWwyMDY3Mzg4ODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug', 'name': 'dataset bug', 'color': '2edb81', 'default': False, 'description': 'A bug in a dataset script provided in the library'}] | closed | false | null | [] | null | ['For some reason the files are not available for unauthenticated users right now (like the download service of this package). Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n`.
The error is:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-2-7742dea167d0> in <module... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/275/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/275/timeline | null | completed | false |
4,618 | https://api.github.com/repos/huggingface/datasets/issues/274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/274/comments | https://api.github.com/repos/huggingface/datasets/issues/274/events | https://github.com/huggingface/datasets/issues/274 | 639,156,625 | MDU6SXNzdWU2MzkxNTY2MjU= | 274 | PG-19 | {'login': 'lucidrains', 'id': 108653, 'node_id': 'MDQ6VXNlcjEwODY1Mw==', 'avatar_url': 'https://avatars.githubusercontent.com/u/108653?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lucidrains', 'html_url': 'https://github.com/lucidrains', 'followers_url': 'https://api.github.com/users/lucidrains/followe... | [{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}] | closed | false | null | [] | null | ['Sounds good! Do you want to give it a try?'
"Ok, I'll see if I can figure it out tomorrow!"
"Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each boo... | 2020-06-15 21:02:26 | 2020-07-06 15:35:02 | 2020-07-06 15:35:02 | CONTRIBUTOR | null | null | null | Hi, and thanks for all your open-sourced work, as always!
I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/274/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/274/timeline | null | completed | false |
4,619 | https://api.github.com/repos/huggingface/datasets/issues/273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/273/comments | https://api.github.com/repos/huggingface/datasets/issues/273/events | https://github.com/huggingface/datasets/pull/273 | 638,968,054 | MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4 | 273 | update cos_e to add cos_e v1.0 | {'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/maria... | [] | closed | false | null | [] | null | [] | 2020-06-15 16:03:22 | 2020-06-16 08:25:54 | 2020-06-16 08:25:52 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/273', 'html_url': 'https://github.com/huggingface/datasets/pull/273', 'diff_url': 'https://github.com/huggingface/datasets/pull/273.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/273.patch', 'merged_at': datetime.datetime(2020, 6, 16, 8... | This PR updates the cos_e dataset to add v1.0 as requested here #163
@nazneenrajani | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/273/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/273/timeline | null | null | true |
4,620 | https://api.github.com/repos/huggingface/datasets/issues/272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/272/comments | https://api.github.com/repos/huggingface/datasets/issues/272/events | https://github.com/huggingface/datasets/pull/272 | 638,307,313 | MDExOlB1bGxSZXF1ZXN0NDM0MTExOTQ3 | 272 | asd | {'login': 'sn696', 'id': 66900970, 'node_id': 'MDQ6VXNlcjY2OTAwOTcw', 'avatar_url': 'https://avatars.githubusercontent.com/u/66900970?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sn696', 'html_url': 'https://github.com/sn696', 'followers_url': 'https://api.github.com/users/sn696/followers', 'following_... | [] | closed | false | null | [] | null | [] | 2020-06-14 08:20:38 | 2020-06-14 09:16:41 | 2020-06-14 09:16:41 | NONE | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/272', 'html_url': 'https://github.com/huggingface/datasets/pull/272', 'diff_url': 'https://github.com/huggingface/datasets/pull/272.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/272.patch', 'merged_at': None} | null | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/272/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/272/timeline | null | null | true |
4,621 | https://api.github.com/repos/huggingface/datasets/issues/271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/271/comments | https://api.github.com/repos/huggingface/datasets/issues/271/events | https://github.com/huggingface/datasets/pull/271 | 638,135,754 | MDExOlB1bGxSZXF1ZXN0NDMzOTg3NDkw | 271 | Fix allociné dataset configuration | {'login': 'TheophileBlard', 'id': 37028092, 'node_id': 'MDQ6VXNlcjM3MDI4MDky', 'avatar_url': 'https://avatars.githubusercontent.com/u/37028092?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/TheophileBlard', 'html_url': 'https://github.com/TheophileBlard', 'followers_url': 'https://api.github.com/users/Th... | [] | closed | false | null | [] | null | ["Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n```python\r\ndataset = load_dataset('allocine')\r\n```\r\nand it works.\r\n\r\nMaybe we should take that into account in the nlp viewer @srush ?"
'@lhoestq Just to understand the exact s... | 2020-06-13 10:12:10 | 2020-06-18 07:41:21 | 2020-06-18 07:41:20 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/271', 'html_url': 'https://github.com/huggingface/datasets/pull/271', 'diff_url': 'https://github.com/huggingface/datasets/pull/271.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/271.patch', 'merged_at': None} | This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with:
```python
dataset = load_dataset('allocine', 'allocine')
```
This is redundant, as there is only one "dataset configuration", and should only be:
```python
dataset = load_dataset('allocine')
```
This ... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/271/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/271/timeline | null | null | true |
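The single-configuration fallback discussed in the Allociné patch above can be modeled in plain Python. This is an illustrative sketch of the resolution logic only — `resolve_config` is a hypothetical helper, not the library's real code:

```python
def resolve_config(available_configs, requested=None):
    """Pick a dataset configuration, falling back to the only one."""
    if requested is not None:
        if requested not in available_configs:
            raise ValueError(f"unknown config: {requested!r}")
        return requested
    if len(available_configs) == 1:
        # e.g. load_dataset('allocine') -- no need to repeat the name.
        return available_configs[0]
    raise ValueError("a config name is required when several are defined")

print(resolve_config(["allocine"]))                 # -> allocine
print(resolve_config(["cs-en", "de-en"], "de-en"))  # -> de-en
```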
4,622 | https://api.github.com/repos/huggingface/datasets/issues/270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/270/comments | https://api.github.com/repos/huggingface/datasets/issues/270/events | https://github.com/huggingface/datasets/issues/270 | 638,121,617 | MDU6SXNzdWU2MzgxMjE2MTc= | 270 | c4 dataset is not viewable in nlpviewer demo | {'login': 'rajarsheem', 'id': 6441313, 'node_id': 'MDQ6VXNlcjY0NDEzMTM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6441313?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/rajarsheem', 'html_url': 'https://github.com/rajarsheem', 'followers_url': 'https://api.github.com/users/rajarsheem/follo... | [{'id': 2107841032, 'node_id': 'MDU6TGFiZWwyMTA3ODQxMDMy', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer', 'name': 'nlp-viewer', 'color': '94203D', 'default': False, 'description': ''}] | closed | false | null | [] | null | ['C4 is too large to be shown in the viewer'] | 2020-06-13 08:26:16 | 2020-10-27 15:35:29 | 2020-10-27 15:35:13 | NONE | null | null | null | I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/)
```python
ModuleNotFoundError: No module named 'langdetect'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__d... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/270/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/270/timeline | null | completed | false |
4,623 | https://api.github.com/repos/huggingface/datasets/issues/269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/269/comments | https://api.github.com/repos/huggingface/datasets/issues/269/events | https://github.com/huggingface/datasets/issues/269 | 638,106,774 | MDU6SXNzdWU2MzgxMDY3NzQ= | 269 | Error in metric.compute: missing `original_instructions` argument | {'login': 'zphang', 'id': 1668462, 'node_id': 'MDQ6VXNlcjE2Njg0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1668462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/zphang', 'html_url': 'https://github.com/zphang', 'followers_url': 'https://api.github.com/users/zphang/followers', 'followin... | [{'id': 2067393914, 'node_id': 'MDU6TGFiZWwyMDY3MzkzOTE0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/metric%20bug', 'name': 'metric bug', 'color': '25b21e', 'default': False, 'description': 'A bug in a metric script'}] | closed | false | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'f... | null | [] | 2020-06-13 06:26:54 | 2020-06-18 07:41:44 | 2020-06-18 07:41:44 | NONE | null | null | null | I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example:
```python
import nlp
rte_metric = nlp.load_metric('glue', name="rte")
rte_metric.compute(
[0, 0, 1, 1],
[0, 1, 0, 1],
)
```
```
181 # Read the predictio... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/269/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/269/timeline | null | completed | false |
4,624 | https://api.github.com/repos/huggingface/datasets/issues/268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/268/comments | https://api.github.com/repos/huggingface/datasets/issues/268/events | https://github.com/huggingface/datasets/pull/268 | 637,848,056 | MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1 | 268 | add Rotten Tomatoes Movie Review sentences sentiment dataset | {'login': 'jxmorris12', 'id': 13238952, 'node_id': 'MDQ6VXNlcjEzMjM4OTUy', 'avatar_url': 'https://avatars.githubusercontent.com/u/13238952?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jxmorris12', 'html_url': 'https://github.com/jxmorris12', 'followers_url': 'https://api.github.com/users/jxmorris12/fol... | [] | closed | false | null | [] | null | ['@jplu @thomwolf @patrickvonplaten @lhoestq -- How do I request reviewers? Thanks.'] | 2020-06-12 15:53:59 | 2020-06-18 07:46:24 | 2020-06-18 07:46:23 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/268', 'html_url': 'https://github.com/huggingface/datasets/pull/268', 'diff_url': 'https://github.com/huggingface/datasets/pull/268.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/268.patch', 'merged_at': datetime.datetime(2020, 6, 18, 7... | Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/ | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/268/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/268/timeline | null | null | true |
4,625 | https://api.github.com/repos/huggingface/datasets/issues/267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/267/comments | https://api.github.com/repos/huggingface/datasets/issues/267/events | https://github.com/huggingface/datasets/issues/267 | 637,415,545 | MDU6SXNzdWU2Mzc0MTU1NDU= | 267 | How can I load/find WMT en-romanian? | {'login': 'sshleifer', 'id': 6045025, 'node_id': 'MDQ6VXNlcjYwNDUwMjU=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6045025?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sshleifer', 'html_url': 'https://github.com/sshleifer', 'followers_url': 'https://api.github.com/users/sshleifer/followers... | [] | closed | false | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [{'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/u... | null | ['I will take a look :-) '] | 2020-06-12 01:09:37 | 2020-06-19 08:24:19 | 2020-06-19 08:24:19 | CONTRIBUTOR | null | null | null | I believe it is from `wmt16`
When I run
```python
wmt = nlp.load_dataset('wmt16')
```
I get:
```python
AssertionError: The dataset wmt16 with config cs-en requires manual data.
Please follow the manual download instructions: Some of the wmt configs here, require a manual download.
Please look into wm... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/267/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/267/timeline | null | completed | false |
4,626 | https://api.github.com/repos/huggingface/datasets/issues/266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/266/comments | https://api.github.com/repos/huggingface/datasets/issues/266/events | https://github.com/huggingface/datasets/pull/266 | 637,156,392 | MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw | 266 | Add sort, shuffle, test_train_split and select methods | {'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', '... | [] | closed | false | null | [] | null | ['Nice !\r\n\r\nAlso it looks like we can have a train_test_split method for free:\r\n```python\r\ntrain_indices, test_indices = train_test_split(range(len(dataset)))\r\ntrain = dataset.sort(indices=train_indices)\r\ntest = dataset.sort(indices=test_indices)\r\n```\r\n\r\nand a shuffling method for free:\r\n```python\r... | 2020-06-11 16:22:20 | 2020-06-18 16:23:25 | 2020-06-18 16:23:24 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/266', 'html_url': 'https://github.com/huggingface/datasets/pull/266', 'diff_url': 'https://github.com/huggingface/datasets/pull/266.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/266.patch', 'merged_at': datetime.datetime(2020, 6, 18, 1... | Add a bunch of methods to reorder/split/select rows in a dataset:
- `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be sm... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/266/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/266/timeline | null | null | true |
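The row-selection semantics described in PR #266 above — `select` accepting an index list of any length, duplicates allowed, with `shuffle` and `train_test_split` built on top of it — can be modeled over a plain Python list. This is a hedged sketch of the described behavior, not the library's implementation:

```python
import random

def select(rows, indices):
    """Build a new row list from any in-range indices (duplicates OK)."""
    assert all(0 <= i < len(rows) for i in indices)
    return [rows[i] for i in indices]

def shuffle(rows, seed=None):
    # Shuffling is just `select` with a permuted index list.
    indices = list(range(len(rows)))
    random.Random(seed).shuffle(indices)
    return select(rows, indices)

def train_test_split(rows, test_size=0.25, seed=None):
    shuffled = shuffle(rows, seed)
    cut = int(len(rows) * (1 - test_size))
    return shuffled[:cut], shuffled[cut:]

data = ["a", "b", "c", "d"]
print(select(data, [3, 0, 0]))  # duplicates are fine -> ['d', 'a', 'a']
train, test = train_test_split(data, test_size=0.25, seed=0)
print(len(train), len(test))    # -> 3 1
```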
4,627 | https://api.github.com/repos/huggingface/datasets/issues/265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/265/comments | https://api.github.com/repos/huggingface/datasets/issues/265/events | https://github.com/huggingface/datasets/pull/265 | 637,139,220 | MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz | 265 | Add pyarrow warning colab | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-11 15:57:51 | 2020-08-02 18:14:36 | 2020-06-12 08:14:16 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/265', 'html_url': 'https://github.com/huggingface/datasets/pull/265', 'diff_url': 'https://github.com/huggingface/datasets/pull/265.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/265.patch', 'merged_at': datetime.datetime(2020, 6, 12, 8... | When a user installs `nlp` on google colab, then google colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow.
This is an issue because `nlp` requires the updated version to work correctly.
In this PR I added an error that is shown to the user in google colab if... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/265/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/265/timeline | null | null | true |
4,628 | https://api.github.com/repos/huggingface/datasets/issues/264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/264/comments | https://api.github.com/repos/huggingface/datasets/issues/264/events | https://github.com/huggingface/datasets/pull/264 | 637,106,170 | MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4 | 264 | Fix small issues creating dataset | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-11 15:20:16 | 2020-06-12 08:15:57 | 2020-06-12 08:15:56 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/264', 'html_url': 'https://github.com/huggingface/datasets/pull/264', 'diff_url': 'https://github.com/huggingface/datasets/pull/264.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/264.patch', 'merged_at': datetime.datetime(2020, 6, 12, 8... | Fix many small issues mentioned in #249:
- don't force to install apache beam for commands
- fix None cache dir when using `dl_manager.download_custom`
- added new extras in `setup.py` named `dev` that contains tests and quality dependencies
- mock dataset sizes when running tests with dummy data
- add a note abou... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/264/reactions', 'total_count': 2, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/264/timeline | null | null | true |
4,629 | https://api.github.com/repos/huggingface/datasets/issues/263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/263/comments | https://api.github.com/repos/huggingface/datasets/issues/263/events | https://github.com/huggingface/datasets/issues/263 | 637,028,015 | MDU6SXNzdWU2MzcwMjgwMTU= | 263 | [Feature request] Support for external modality for language datasets | {'login': 'aleSuglia', 'id': 1479733, 'node_id': 'MDQ6VXNlcjE0Nzk3MzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1479733?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/aleSuglia', 'html_url': 'https://github.com/aleSuglia', 'followers_url': 'https://api.github.com/users/aleSuglia/followers... | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}
{'id': 2067400324, 'node_id': 'MDU6TGFiZWwyMDY3NDAwMzI0', 'url': 'https://api.git... | closed | false | null | [] | null | ['Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn\'t have built-in support for generic "tensors" in records but there might be ways to do that in a clean way. We\'ll... | 2020-06-11 13:42:18 | 2022-02-10 13:26:35 | 2022-02-10 13:26:35 | CONTRIBUTOR | null | null | null | # Background
In recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller,2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https://arxiv.org/abs/2004.10... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/263/reactions', 'total_count': 23, '+1': 18, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 4} | https://api.github.com/repos/huggingface/datasets/issues/263/timeline | null | completed | false |
4,630 | https://api.github.com/repos/huggingface/datasets/issues/262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/262/comments | https://api.github.com/repos/huggingface/datasets/issues/262/events | https://github.com/huggingface/datasets/pull/262 | 636,702,849 | MDExOlB1bGxSZXF1ZXN0NDMyODI3Mzcz | 262 | Add new dataset ANLI Round 1 | {'login': 'easonnie', 'id': 11016329, 'node_id': 'MDQ6VXNlcjExMDE2MzI5', 'avatar_url': 'https://avatars.githubusercontent.com/u/11016329?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/easonnie', 'html_url': 'https://github.com/easonnie', 'followers_url': 'https://api.github.com/users/easonnie/followers',... | [] | closed | false | null | [] | null | ['Hello ! Thanks for adding this one :)\r\n\r\nThis looks great, you just have to do the last steps to make the CI pass.\r\nI can see that two things are missing:\r\n1. the dummy data that is used to test that the script is working as expected\r\n2. the json file with all the infos about the dataset\r\n\r\nYou can see ... | 2020-06-11 04:14:57 | 2020-06-12 22:03:03 | 2020-06-12 22:03:03 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/262', 'html_url': 'https://github.com/huggingface/datasets/pull/262', 'diff_url': 'https://github.com/huggingface/datasets/pull/262.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/262.patch', 'merged_at': None} | Adding new dataset [ANLI](https://github.com/facebookresearch/anli/).
I'm not familiar with how to add a new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future with potentially different formats. I think it will be better to separate them. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/262/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/262/timeline | null | null | true |
4,631 | https://api.github.com/repos/huggingface/datasets/issues/261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/261/comments | https://api.github.com/repos/huggingface/datasets/issues/261/events | https://github.com/huggingface/datasets/issues/261 | 636,372,380 | MDU6SXNzdWU2MzYzNzIzODA= | 261 | Downloading dataset error with pyarrow.lib.RecordBatch | {'login': 'cuent', 'id': 5248968, 'node_id': 'MDQ6VXNlcjUyNDg5Njg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/5248968?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/cuent', 'html_url': 'https://github.com/cuent', 'followers_url': 'https://api.github.com/users/cuent/followers', 'following_ur... | [] | closed | false | null | [] | null | ["When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly in... | 2020-06-10 16:04:19 | 2020-06-11 14:35:12 | 2020-06-11 14:35:12 | NONE | null | null | null | I am trying to download `sentiment140` and I have the following error
```
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/261/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/261/timeline | null | completed | false |
4,632 | https://api.github.com/repos/huggingface/datasets/issues/260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/260/comments | https://api.github.com/repos/huggingface/datasets/issues/260/events | https://github.com/huggingface/datasets/pull/260 | 636,261,118 | MDExOlB1bGxSZXF1ZXN0NDMyNDY3NDM5 | 260 | Consistency fixes | {'login': 'julien-c', 'id': 326577, 'node_id': 'MDQ6VXNlcjMyNjU3Nw==', 'avatar_url': 'https://avatars.githubusercontent.com/u/326577?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/julien-c', 'html_url': 'https://github.com/julien-c', 'followers_url': 'https://api.github.com/users/julien-c/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-10 13:44:42 | 2020-06-11 10:34:37 | 2020-06-11 10:34:36 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/260', 'html_url': 'https://github.com/huggingface/datasets/pull/260', 'diff_url': 'https://github.com/huggingface/datasets/pull/260.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/260.patch', 'merged_at': datetime.datetime(2020, 6, 11, 1... | A few bugs I've found while hacking | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/260/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/260/timeline | null | null | true |
4,633 | https://api.github.com/repos/huggingface/datasets/issues/259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/259/comments | https://api.github.com/repos/huggingface/datasets/issues/259/events | https://github.com/huggingface/datasets/issues/259 | 636,239,529 | MDU6SXNzdWU2MzYyMzk1Mjk= | 259 | documentation missing how to split a dataset | {'login': 'fotisj', 'id': 2873355, 'node_id': 'MDQ6VXNlcjI4NzMzNTU=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2873355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/fotisj', 'html_url': 'https://github.com/fotisj', 'followers_url': 'https://api.github.com/users/fotisj/followers', 'followin... | [] | closed | false | null | [] | null | ["this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`"
"Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb', split='test[:5000]')` (works also with percentages).\r\n\r\nHowever... | 2020-06-10 13:18:13 | 2020-06-18 22:20:24 | 2020-06-18 22:20:24 | NONE | null | null | null | I am trying to understand how to split a dataset (as arrow_dataset).
I know I can do something like this to access a split which is already in the original dataset:
`ds_test = nlp.load_dataset('imdb', split='test')`
But how can I split ds_test into a test and a validation set (without reading the data into m... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/259/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/259/timeline | null | completed | false |
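The percent-slicing specs quoted in issue 259 above (e.g. `'test[:25%]+test[50%:75%]'`) can be illustrated with a small standalone sketch. The helper below is hypothetical — it is not the `nlp`/`datasets` implementation — and only shows how a percent slice maps to row indices; the library's exact rounding rules may differ.

```python
import re

def percent_slice_to_range(spec, num_rows):
    """Map a split spec like 'test[:25%]' or 'test[50%:75%]' to row indices.

    Hypothetical helper, NOT the nlp/datasets implementation; the real
    library's rounding behavior may differ.
    """
    name, start, end = re.fullmatch(r"(\w+)\[(\d*)%?:(\d*)%?\]", spec).groups()
    lo = num_rows * int(start or 0) // 100   # empty start means 0%
    hi = num_rows * int(end or 100) // 100   # empty end means 100%
    return range(lo, hi)

# A '+'-joined spec selects the union of the slices, as in the comment above.
spec = "test[:25%]+test[50%:75%]"
rows = [i for part in spec.split("+") for i in percent_slice_to_range(part, 100)]
```

With 100 rows, this spec selects rows 0–24 and 50–74, i.e. half of the split — a test set and a validation set carved out of `test` without touching `train`.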
4,634 | https://api.github.com/repos/huggingface/datasets/issues/258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/258/comments | https://api.github.com/repos/huggingface/datasets/issues/258/events | https://github.com/huggingface/datasets/issues/258 | 635,859,525 | MDU6SXNzdWU2MzU4NTk1MjU= | 258 | Why is the dataset after tokenization far larger than the original one? | {'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richardd... | [] | closed | false | null | [] | null | ['Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=["title", "text"]` in the arguments of `.map`... | 2020-06-10 01:27:07 | 2020-06-10 12:46:34 | 2020-06-10 12:46:34 | CONTRIBUTOR | null | null | null | I tokenize the wiki dataset with `map` and cache the results.
```
def tokenize_tfm(example):
    example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text']))
    return example
wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train']
wiki.map(token... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/258/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/258/timeline | null | completed | false |
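The maintainer's reply quoted in issue 258 above — `.map` adds `input_ids` while keeping every existing column unless `remove_columns` is passed — can be mimicked with plain dictionaries. This is a toy sketch of that behavior, not the library's Arrow-backed implementation.

```python
def map_rows(rows, fn, remove_columns=()):
    """Toy version of Dataset.map over a list of row dicts: fn returns new
    fields; existing columns are kept unless listed in remove_columns."""
    out = []
    for row in rows:
        new_row = {**row, **fn(row)}       # merge new fields into the row
        for col in remove_columns:
            new_row.pop(col, None)          # drop stashed columns
        out.append(new_row)
    return out

rows = [{"title": "Wiki page", "text": "a long article body"}]

# Default: 'title' and 'text' survive alongside 'input_ids', so the table grows.
grown = map_rows(rows, lambda r: {"input_ids": [101, 102]})

# With remove_columns, only the tokenized column is stored.
slim = map_rows(rows, lambda r: {"input_ids": [101, 102]},
                remove_columns=["title", "text"])
```

The size blow-up in the issue comes from the `grown` case: the cached Arrow file holds the full original text plus the token ids.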
4,635 | https://api.github.com/repos/huggingface/datasets/issues/257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/257/comments | https://api.github.com/repos/huggingface/datasets/issues/257/events | https://github.com/huggingface/datasets/issues/257 | 635,620,979 | MDU6SXNzdWU2MzU2MjA5Nzk= | 257 | Tokenizer pickling issue fix not landed in `nlp` yet? | {'login': 'sarahwie', 'id': 8027676, 'node_id': 'MDQ6VXNlcjgwMjc2NzY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8027676?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sarahwie', 'html_url': 'https://github.com/sarahwie', 'followers_url': 'https://api.github.com/users/sarahwie/followers', '... | [] | closed | false | null | [] | null | ['Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`'
'If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. Serialization differences between the 3rd party `dataclasses` package for 3.6 and ... | 2020-06-09 17:12:34 | 2020-06-10 21:45:32 | 2020-06-09 17:26:53 | NONE | null | null | null | Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function:
```
dataset = nlp.load_dataset('cos_e')
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir)
for split in datase... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/257/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/257/timeline | null | completed | false |
4,636 | https://api.github.com/repos/huggingface/datasets/issues/256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/256/comments | https://api.github.com/repos/huggingface/datasets/issues/256/events | https://github.com/huggingface/datasets/issues/256 | 635,596,295 | MDU6SXNzdWU2MzU1OTYyOTU= | 256 | [Feature request] Add a feature to dataset | {'login': 'sarahwie', 'id': 8027676, 'node_id': 'MDQ6VXNlcjgwMjc2NzY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8027676?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sarahwie', 'html_url': 'https://github.com/sarahwie', 'followers_url': 'https://api.github.com/users/sarahwie/followers', '... | [] | closed | false | null | [] | null | ['Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)'
"Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prior to ... | 2020-06-09 16:38:12 | 2020-06-09 16:51:42 | 2020-06-09 16:51:42 | NONE | null | null | null | Is there a straightforward way to add a field to the arrow_dataset, prior to performing map? | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/256/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/256/timeline | null | completed | false |
4,637 | https://api.github.com/repos/huggingface/datasets/issues/255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/255/comments | https://api.github.com/repos/huggingface/datasets/issues/255/events | https://github.com/huggingface/datasets/pull/255 | 635,300,822 | MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0 | 255 | Add dataset/piaf | {'login': 'RachelKer', 'id': 36986299, 'node_id': 'MDQ6VXNlcjM2OTg2Mjk5', 'avatar_url': 'https://avatars.githubusercontent.com/u/36986299?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/RachelKer', 'html_url': 'https://github.com/RachelKer', 'followers_url': 'https://api.github.com/users/RachelKer/followe... | [] | closed | false | null | [] | null | ['Very nice !'] | 2020-06-09 10:16:01 | 2020-06-12 08:31:27 | 2020-06-12 08:31:27 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/255', 'html_url': 'https://github.com/huggingface/datasets/pull/255', 'diff_url': 'https://github.com/huggingface/datasets/pull/255.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/255.patch', 'merged_at': datetime.datetime(2020, 6, 12, 8... | Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf) | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/255/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/255/timeline | null | null | true |
4,638 | https://api.github.com/repos/huggingface/datasets/issues/254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/254/comments | https://api.github.com/repos/huggingface/datasets/issues/254/events | https://github.com/huggingface/datasets/issues/254 | 635,057,568 | MDU6SXNzdWU2MzUwNTc1Njg= | 254 | [Feature request] Be able to remove a specific sample of the dataset | {'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers',... | [] | closed | false | null | [] | null | ['Oh yes you can now do that with the `dataset.filter()` method that was added in #214 '] | 2020-06-09 02:22:13 | 2020-06-09 08:41:38 | 2020-06-09 08:41:38 | NONE | null | null | null | As mentioned in #117, it's currently not possible to remove a sample of the dataset.
But it is an important use case: after applying some preprocessing, some samples might be empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so when iterating the datase... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/254/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/254/timeline | null | completed | false |
4,639 | https://api.github.com/repos/huggingface/datasets/issues/253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/253/comments | https://api.github.com/repos/huggingface/datasets/issues/253/events | https://github.com/huggingface/datasets/pull/253 | 634,791,939 | MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz | 253 | add flue dataset | {'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/maria... | [] | closed | false | null | [] | null | ['The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? '
"Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortunatel... | 2020-06-08 17:11:09 | 2020-07-16 07:50:59 | 2020-07-16 07:50:59 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/253', 'html_url': 'https://github.com/huggingface/datasets/pull/253', 'diff_url': 'https://github.com/huggingface/datasets/pull/253.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/253.patch', 'merged_at': None} | This PR adds the Flue dataset as requested in issue #223. @lbourdois made a detailed description in that issue.
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/253/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/253/timeline | null | null | true |
4,640 | https://api.github.com/repos/huggingface/datasets/issues/252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/252/comments | https://api.github.com/repos/huggingface/datasets/issues/252/events | https://github.com/huggingface/datasets/issues/252 | 634,563,239 | MDU6SXNzdWU2MzQ1NjMyMzk= | 252 | NonMatchingSplitsSizesError error when reading the IMDB dataset | {'login': 'antmarakis', 'id': 17463361, 'node_id': 'MDQ6VXNlcjE3NDYzMzYx', 'avatar_url': 'https://avatars.githubusercontent.com/u/17463361?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/antmarakis', 'html_url': 'https://github.com/antmarakis', 'followers_url': 'https://api.github.com/users/antmarakis/fol... | [] | closed | false | null | [] | null | ["I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?"
'I updated it, that was it, thanks!'
'Hello, I am facing the same... | 2020-06-08 12:26:24 | 2021-08-27 15:20:58 | 2020-06-08 14:01:26 | NONE | null | null | null | Hi!
I am trying to load the `imdb` dataset with this line:
`dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')`
but I am getting the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mounts/Users/cisintern/antmarakis/anaconda3/... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/252/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/252/timeline | null | completed | false |
4,641 | https://api.github.com/repos/huggingface/datasets/issues/251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/251/comments | https://api.github.com/repos/huggingface/datasets/issues/251/events | https://github.com/huggingface/datasets/pull/251 | 634,544,977 | MDExOlB1bGxSZXF1ZXN0NDMxMDgwMDkw | 251 | Better access to all dataset information | {'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', '... | [] | closed | false | null | [] | null | [] | 2020-06-08 11:56:50 | 2020-06-12 08:13:00 | 2020-06-12 08:12:58 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/251', 'html_url': 'https://github.com/huggingface/datasets/pull/251', 'diff_url': 'https://github.com/huggingface/datasets/pull/251.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/251.patch', 'merged_at': datetime.datetime(2020, 6, 12, 8... | Moves all the dataset info down one level from `dataset.info.XXX` to `dataset.XXX`
This way it's easier to access `dataset.features['label']`, for instance
Also, add the original split instructions used to create the dataset in `dataset.split`
Ex:
```
from nlp import load_dataset
stsb = load_dataset('glue', name=... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/251/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/251/timeline | null | null | true |
4,642 | https://api.github.com/repos/huggingface/datasets/issues/250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/250/comments | https://api.github.com/repos/huggingface/datasets/issues/250/events | https://github.com/huggingface/datasets/pull/250 | 634,416,751 | MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4 | 250 | Remove checksum download in c4 | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | ['Commenting again in case [previous thread](https://github.com/huggingface/nlp/pull/233) was inactive.\r\n\r\n@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI\'m using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset("c4", \'en\', dat... | 2020-06-08 09:13:00 | 2020-08-25 07:04:56 | 2020-06-08 09:16:59 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/250', 'html_url': 'https://github.com/huggingface/datasets/pull/250', 'diff_url': 'https://github.com/huggingface/datasets/pull/250.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/250.patch', 'merged_at': datetime.datetime(2020, 6, 8, 9,... | There was a line from the original tfds script that was still there and causing issues when loading the c4 script.
This one should fix #233 and allow anyone to load the c4 script to generate the dataset | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/250/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/250/timeline | null | null | true |
4,643 | https://api.github.com/repos/huggingface/datasets/issues/249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/249/comments | https://api.github.com/repos/huggingface/datasets/issues/249/events | https://github.com/huggingface/datasets/issues/249 | 633,393,443 | MDU6SXNzdWU2MzMzOTM0NDM= | 249 | [Dataset created] some critical small issues when I was creating a dataset | {'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richardd... | [] | closed | false | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'f... | null | ['Thanks for noticing all these :) They should be easy to fix indeed'
'Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon.'] | 2020-06-07 12:58:54 | 2020-06-12 08:28:51 | 2020-06-12 08:28:51 | CONTRIBUTOR | null | null | null | Hi, I successfully created a dataset and has made a pr #248.
But I have encountered several problems when I was creating it, and those should be easy to fix.
1. Not found dataset_info.json
should be fixed by #241; eager for it to be merged.
2. Forced to install `apache_beam`
If we should install it, then it m... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/249/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/249/timeline | null | completed | false |
4,644 | https://api.github.com/repos/huggingface/datasets/issues/248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/248/comments | https://api.github.com/repos/huggingface/datasets/issues/248/events | https://github.com/huggingface/datasets/pull/248 | 633,390,427 | MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0 | 248 | add Toronto BooksCorpus | {'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richardd... | [] | closed | false | null | [] | null | ['Thanks for adding this one !\r\n\r\nAbout the three points you mentioned:\r\n1. I think the `toronto_books_corpus` branch can be removed @mariamabarham ? \r\n2. You can use the download manager to download from google drive. For you case you can just do something like \r\n```python\r\nURL = "https://drive.google.com/... | 2020-06-07 12:54:56 | 2020-06-12 08:45:03 | 2020-06-12 08:45:02 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/248', 'html_url': 'https://github.com/huggingface/datasets/pull/248', 'diff_url': 'https://github.com/huggingface/datasets/pull/248.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/248.patch', 'merged_at': datetime.datetime(2020, 6, 12, 8... | 1. I knew there is a branch `toronto_books_corpus`
- After I downloaded it, I found it is all non-English and only has one row.
- It seems that it cites the wrong paper
- according to papers using it, it is called `BooksCorpus`, not `TorontoBooksCorpus`
2. It uses a text mirror on Google Drive
- `bookscorpu... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/248/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/248/timeline | null | null | true |
4,645 | https://api.github.com/repos/huggingface/datasets/issues/247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/247/comments | https://api.github.com/repos/huggingface/datasets/issues/247/events | https://github.com/huggingface/datasets/pull/247 | 632,380,078 | MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2 | 247 | Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | ['That\'s great!\r\n\r\nI think it would be nice to test "deterministic-ness" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/trac... | 2020-06-06 11:02:10 | 2020-06-08 09:18:16 | 2020-06-08 09:18:14 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/247', 'html_url': 'https://github.com/huggingface/datasets/pull/247', 'diff_url': 'https://github.com/huggingface/datasets/pull/247.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/247.patch', 'merged_at': datetime.datetime(2020, 6, 8, 9,... | This PR makes all datasets loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements.
Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu ?
**Important**
It does break backward c... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/247/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/247/timeline | null | null | true |
4,646 | https://api.github.com/repos/huggingface/datasets/issues/246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/246/comments | https://api.github.com/repos/huggingface/datasets/issues/246/events | https://github.com/huggingface/datasets/issues/246 | 632,380,054 | MDU6SXNzdWU2MzIzODAwNTQ= | 246 | What is the best way to cache a dataset? | {'login': 'Mistobaan', 'id': 112599, 'node_id': 'MDQ6VXNlcjExMjU5OQ==', 'avatar_url': 'https://avatars.githubusercontent.com/u/112599?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Mistobaan', 'html_url': 'https://github.com/Mistobaan', 'followers_url': 'https://api.github.com/users/Mistobaan/followers',... | [] | closed | false | null | [] | null | ['Everything is already cached by default in 🤗nlp (in particular dataset\nloading and all the “map()” operations) so I don’t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it’s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <notifications@github.com> wrote:\n\n> For e... | 2020-06-06 11:02:07 | 2020-07-09 09:15:07 | 2020-07-09 09:15:07 | NONE | null | null | null | For example if I want to use streamlit with a nlp dataset:
```
@st.cache
def load_data():
    return nlp.load_dataset('squad')
```
This code raises the error "uncachable object"
Right now I just fixed it with a constant for my specific case:
```
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
```... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/246/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/246/timeline | null | completed | false |
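The `hash_funcs` workaround quoted in issue 246 above can be sketched as a plain-Python memoizer: the caller supplies a hash function per argument type for objects the default hashing cannot handle. This is only an illustration of the idea — streamlit's real `@st.cache` does far more — and it also shows the caveat of hashing to a constant: every call collides on the same cache entry.

```python
import functools

def cache(hash_funcs=None):
    """Minimal memoizer with per-type hash overrides (illustrative only,
    not streamlit's implementation)."""
    hash_funcs = dict(hash_funcs or {})
    def decorator(fn):
        store = {}
        @functools.wraps(fn)
        def wrapper(*args):
            # Use the caller-supplied hash for matching types, else built-in hash.
            key = tuple(hash_funcs.get(type(a), hash)(a) for a in args)
            if key not in store:
                store[key] = fn(*args)
            return store[key]
        return wrapper
    return decorator

calls = []

@cache(hash_funcs={list: lambda _: 0})  # lists are unhashable; hash to a constant
def load_data(config):
    calls.append(config)
    return len(config)

first = load_data([1, 2])      # computed once
second = load_data([9, 9, 9])  # constant hash -> stale cache hit
```

Because the constant hash collapses all list arguments onto one key, the second call returns the stale first result — which is exactly why the issue author notes the constant-hash fix only works "for my specific case".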
4,647 | https://api.github.com/repos/huggingface/datasets/issues/245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/245/comments | https://api.github.com/repos/huggingface/datasets/issues/245/events | https://github.com/huggingface/datasets/issues/245 | 631,985,108 | MDU6SXNzdWU2MzE5ODUxMDg= | 245 | SST-2 test labels are all -1 | {'login': 'jxmorris12', 'id': 13238952, 'node_id': 'MDQ6VXNlcjEzMjM4OTUy', 'avatar_url': 'https://avatars.githubusercontent.com/u/13238952?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jxmorris12', 'html_url': 'https://github.com/jxmorris12', 'followers_url': 'https://api.github.com/users/jxmorris12/fol... | [] | closed | false | null | [] | null | ["this also happened to me with `nlp.load_dataset('glue', 'mnli')`"
"Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened to m... | 2020-06-05 21:41:42 | 2021-12-08 00:47:32 | 2020-06-06 16:56:41 | CONTRIBUTOR | null | null | null | I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1.
```
>>> import nlp
>>> glue = nlp.load_dataset('glue', 'sst2')
>>> glue
{'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'st... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/245/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/245/timeline | null | completed | false |
4,648 | https://api.github.com/repos/huggingface/datasets/issues/244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/244/comments | https://api.github.com/repos/huggingface/datasets/issues/244/events | https://github.com/huggingface/datasets/pull/244 | 631,869,155 | MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx | 244 | Add Allociné Dataset | {'login': 'TheophileBlard', 'id': 37028092, 'node_id': 'MDQ6VXNlcjM3MDI4MDky', 'avatar_url': 'https://avatars.githubusercontent.com/u/37028092?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/TheophileBlard', 'html_url': 'https://github.com/TheophileBlard', 'followers_url': 'https://api.github.com/users/Th... | [] | closed | false | null | [] | null | ['great work @TheophileBlard '
'LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? '
'It was pretty easy actually. Documentation is on point !'] | 2020-06-05 19:19:26 | 2020-06-11 07:47:26 | 2020-06-11 07:47:26 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/244', 'html_url': 'https://github.com/huggingface/datasets/pull/244', 'diff_url': 'https://github.com/huggingface/datasets/pull/244.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/244.patch', 'merged_at': datetime.datetime(2020, 6, 11, 7... | This is a french binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine.
Basically, it's a French "IMDB" dataset, with more reviews.
More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/244/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/244/timeline | null | null | true |
4,649 | https://api.github.com/repos/huggingface/datasets/issues/243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/243/comments | https://api.github.com/repos/huggingface/datasets/issues/243/events | https://github.com/huggingface/datasets/pull/243 | 631,735,848 | MDExOlB1bGxSZXF1ZXN0NDI4NTY2MTEy | 243 | Specify utf-8 encoding for GLUE | {'login': 'patpizio', 'id': 15801338, 'node_id': 'MDQ6VXNlcjE1ODAxMzM4', 'avatar_url': 'https://avatars.githubusercontent.com/u/15801338?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patpizio', 'html_url': 'https://github.com/patpizio', 'followers_url': 'https://api.github.com/users/patpizio/followers',... | [] | closed | false | null | [] | null | ['Thanks for fixing the encoding :)'] | 2020-06-05 16:33:00 | 2020-06-17 21:16:06 | 2020-06-08 08:42:01 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/243', 'html_url': 'https://github.com/huggingface/datasets/pull/243', 'diff_url': 'https://github.com/huggingface/datasets/pull/243.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/243.patch', 'merged_at': datetime.datetime(2020, 6, 8, 8,... | #242
This makes the GLUE-MNLI dataset readable on my machine, not sure if it's a Windows-only bug. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/243/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/243/timeline | null | null | true |
4,650 | https://api.github.com/repos/huggingface/datasets/issues/242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/242/comments | https://api.github.com/repos/huggingface/datasets/issues/242/events | https://github.com/huggingface/datasets/issues/242 | 631,733,683 | MDU6SXNzdWU2MzE3MzM2ODM= | 242 | UnicodeDecodeError when downloading GLUE-MNLI | {'login': 'patpizio', 'id': 15801338, 'node_id': 'MDQ6VXNlcjE1ODAxMzM4', 'avatar_url': 'https://avatars.githubusercontent.com/u/15801338?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patpizio', 'html_url': 'https://github.com/patpizio', 'followers_url': 'https://api.github.com/users/patpizio/followers',... | [] | closed | false | null | [] | null | ['It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure'
"On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts would al... | 2020-06-05 16:30:01 | 2020-06-09 16:06:47 | 2020-06-08 08:45:03 | CONTRIBUTOR | null | null | null | When I run
```python
dataset = nlp.load_dataset('glue', 'mnli')
```
I get an encoding error (could it be because I'm using Windows?) :
```python
# Lots of error log lines later...
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/242/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/242/timeline | null | completed | false |
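The root cause in these two encoding issues, and the fix merged for GLUE, comes down to opening text files with an explicit encoding instead of the platform locale default. A minimal stdlib sketch (the file contents below are made up for the demo, not actual GLUE data):

```python
import os
import tempfile

# On Windows, open() without an explicit encoding falls back to the locale
# codec (e.g. cp1252), so UTF-8 data files containing non-ASCII characters
# raise UnicodeDecodeError. Passing encoding="utf-8" on both write and read
# makes the script platform-independent.
tmp = tempfile.NamedTemporaryFile(
    mode="w", suffix=".tsv", delete=False, encoding="utf-8"
)
tmp.write("premise\thypothesis\tlabel\ncafé\tcafe\tentailment\n")
tmp.close()

with open(tmp.name, encoding="utf-8") as f:  # explicit encoding
    rows = [line.split("\t") for line in f.read().splitlines()]

os.unlink(tmp.name)
```

Without `encoding="utf-8"` on the read, the same script can work on Linux/macOS (where UTF-8 is usually the default) and fail on Windows, which matches the "Windows-only bug" suspicion above.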
4,651 | https://api.github.com/repos/huggingface/datasets/issues/241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/241/comments | https://api.github.com/repos/huggingface/datasets/issues/241/events | https://github.com/huggingface/datasets/pull/241 | 631,703,079 | MDExOlB1bGxSZXF1ZXN0NDI4NTQwMDM0 | 241 | Fix empty cache dir | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | ['Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think'
"> Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think\r\n\r\nNo it shouldn't force to redownloa... | 2020-06-05 15:45:22 | 2020-06-08 08:35:33 | 2020-06-08 08:35:31 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/241', 'html_url': 'https://github.com/huggingface/datasets/pull/241', 'diff_url': 'https://github.com/huggingface/datasets/pull/241.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/241.patch', 'merged_at': datetime.datetime(2020, 6, 8, 8,... | If the cache dir of a dataset is empty, the dataset fails to load and throws a FileNotFounfError. We could end up with empty cache dir because there was a line in the code that created the cache dir without using a temp dir. Using a temp dir is useful as it gets renamed to the real cache dir only if the full process is... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/241/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/241/timeline | null | null | true |
4,652 | https://api.github.com/repos/huggingface/datasets/issues/240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/240/comments | https://api.github.com/repos/huggingface/datasets/issues/240/events | https://github.com/huggingface/datasets/issues/240 | 631,434,677 | MDU6SXNzdWU2MzE0MzQ2Nzc= | 240 | Deterministic dataset loading | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | ['Yes good point !'
'I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok with i... | 2020-06-05 09:03:26 | 2020-06-08 09:18:14 | 2020-06-08 09:18:14 | MEMBER | null | null | null | When calling:
```python
import nlp
dataset = nlp.load_dataset("trivia_qa", split="validation[:1%]")
```
the resulting dataset is not deterministic over different google colabs.
After talking to @thomwolf, I suspect the reason to be the use of `glob.glob` in line:
https://github.com/huggingface/nlp/blob/2e0... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/240/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/240/timeline | null | completed | false |
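The `sorted(glob.glob())` fix proposed in the comments can be demonstrated in isolation; the shard file names below are invented for the example:

```python
import glob
import os
import tempfile

# glob.glob() yields paths in arbitrary, filesystem-dependent order, so two
# machines can assemble the same shards in different orders and end up with
# differently-ordered datasets. Wrapping the call in sorted() makes the
# order -- and hence the resulting dataset -- deterministic.
d = tempfile.mkdtemp()
for shard in ("web-part2.json", "wikipedia-part1.json", "search-part3.json"):
    open(os.path.join(d, shard), "w").close()

shards = sorted(glob.glob(os.path.join(d, "*-part*.json")))
names = [os.path.basename(p) for p in shards]
```

The sorted order is stable across runs and machines, whereas the raw `glob.glob()` order depends on how the filesystem lists directory entries.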
4,653 | https://api.github.com/repos/huggingface/datasets/issues/239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/239/comments | https://api.github.com/repos/huggingface/datasets/issues/239/events | https://github.com/huggingface/datasets/issues/239 | 631,340,440 | MDU6SXNzdWU2MzEzNDA0NDA= | 239 | [Creating new dataset] Not found dataset_info.json | {'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richardd... | [] | closed | false | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'f... | null | ['I think you can just `rm` this directory and it should be good :)'
'@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?'
"Yes I have an idea of what's going on. I'm sure I can fix that"
'Hi, I rebase my local copy to `fix-empty-cache-dir`, and try to run aga... | 2020-06-05 06:15:04 | 2020-06-07 13:01:04 | 2020-06-07 13:01:04 | CONTRIBUTOR | null | null | null | Hi, I am trying to create Toronto Book Corpus. #131
I ran
`~/nlp % python nlp-cli test datasets/bookcorpus --save_infos --all_configs`
but this doesn't create `dataset_info.json` and try to use it
```
INFO:nlp.load:Checking datasets/bookcorpus/bookcorpus.py for additional imports.
INFO:filelock:Lock 1397953257... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/239/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/239/timeline | null | completed | false |
4,654 | https://api.github.com/repos/huggingface/datasets/issues/238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/238/comments | https://api.github.com/repos/huggingface/datasets/issues/238/events | https://github.com/huggingface/datasets/issues/238 | 631,260,143 | MDU6SXNzdWU2MzEyNjAxNDM= | 238 | [Metric] Bertscore : Warning : Empty candidate sentence; Setting recall to be 0. | {'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers',... | [{'id': 2067393914, 'node_id': 'MDU6TGFiZWwyMDY3MzkzOTE0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/metric%20bug', 'name': 'metric bug', 'color': '25b21e', 'default': False, 'description': 'A bug in a metric script'}] | closed | false | null | [] | null | ["This print statement comes from the official implementation of bert_score (see [here](https://github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py#L343)). The warning shows up only if the attention mask outputs no candidate.\r\nRight now we want to only use official code for metrics to have fair evaluations,... | 2020-06-05 02:14:47 | 2020-06-29 17:10:19 | 2020-06-29 17:10:19 | NONE | null | null | null | When running BERT-Score, I'm meeting this warning :
> Warning: Empty candidate sentence; Setting recall to be 0.
Code :
```
import nlp
metric = nlp.load_metric("bertscore")
scores = metric.compute(["swag", "swags"], ["swags", "totally something different"], lang="en", device=0)
```
---
**What am I do... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/238/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/238/timeline | null | completed | false |
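As the maintainers note, the warning is printed by the underlying bert_score package whenever a candidate is empty after tokenization. One pragmatic guard, sketched here as a hypothetical helper (it is not part of the nlp API), is to drop empty candidates before handing the pairs to the metric:

```python
def drop_empty_candidates(predictions, references):
    """Filter out pairs whose candidate is empty or whitespace-only; such
    candidates make bert_score emit 'Empty candidate sentence' and force
    their recall to 0. Hypothetical helper for illustration only."""
    kept = [(p, r) for p, r in zip(predictions, references) if p.strip()]
    if not kept:
        return [], []
    preds, refs = zip(*kept)
    return list(preds), list(refs)

preds, refs = drop_empty_candidates(
    ["swag", "", "   ", "swags"],
    ["swags", "ref for empty", "ref for blank", "totally something different"],
)
```

Whether silently dropping such pairs is appropriate depends on the evaluation; keeping them with zero recall may be the honest choice for a generation system that sometimes produces nothing.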
4,655 | https://api.github.com/repos/huggingface/datasets/issues/237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/237/comments | https://api.github.com/repos/huggingface/datasets/issues/237/events | https://github.com/huggingface/datasets/issues/237 | 631,199,940 | MDU6SXNzdWU2MzExOTk5NDA= | 237 | Can't download MultiNLI | {'login': 'patpizio', 'id': 15801338, 'node_id': 'MDQ6VXNlcjE1ODAxMzM4', 'avatar_url': 'https://avatars.githubusercontent.com/u/15801338?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patpizio', 'html_url': 'https://github.com/patpizio', 'followers_url': 'https://api.github.com/users/patpizio/followers',... | [] | closed | false | null | [] | null | ["You should use `load_dataset('glue', 'mnli')`"
"Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242)... | 2020-06-04 23:05:21 | 2020-06-06 10:51:34 | 2020-06-06 10:51:34 | CONTRIBUTOR | null | null | null | When I try to download MultiNLI with
```python
dataset = load_dataset('multi_nli')
```
I get this long error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-13-3b11f6be4cb9> in <m... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/237/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/237/timeline | null | completed | false |
4,656 | https://api.github.com/repos/huggingface/datasets/issues/236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/236/comments | https://api.github.com/repos/huggingface/datasets/issues/236/events | https://github.com/huggingface/datasets/pull/236 | 631,099,875 | MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4 | 236 | CompGuessWhat?! dataset | {'login': 'aleSuglia', 'id': 1479733, 'node_id': 'MDQ6VXNlcjE0Nzk3MzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1479733?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/aleSuglia', 'html_url': 'https://github.com/aleSuglia', 'followers_url': 'https://api.github.com/users/aleSuglia/followers... | [] | closed | false | null | [] | null | ['Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset("compguesswhat", "compguesswhat-gameplay") \r\nnlp.load_dataset(... | 2020-06-04 19:45:50 | 2020-06-11 09:43:42 | 2020-06-11 07:45:21 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/236', 'html_url': 'https://github.com/huggingface/datasets/pull/236', 'diff_url': 'https://github.com/huggingface/datasets/pull/236.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/236.patch', 'merged_at': datetime.datetime(2020, 6, 11, 7... | Hello,
Thanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https://compguesswhat.github.io](https://compguesswhat.github.io)).
This pull-request adds the CompGuessWhat?! ... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/236/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/236/timeline | null | null | true |
4,657 | https://api.github.com/repos/huggingface/datasets/issues/235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/235/comments | https://api.github.com/repos/huggingface/datasets/issues/235/events | https://github.com/huggingface/datasets/pull/235 | 630,952,297 | MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0 | 235 | Add experimental datasets | {'login': 'yjernite', 'id': 10469459, 'node_id': 'MDQ6VXNlcjEwNDY5NDU5', 'avatar_url': 'https://avatars.githubusercontent.com/u/10469459?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yjernite', 'html_url': 'https://github.com/yjernite', 'followers_url': 'https://api.github.com/users/yjernite/followers',... | [] | closed | false | null | [] | null | ["I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so I d... | 2020-06-04 15:54:56 | 2020-06-12 15:38:55 | 2020-06-12 15:38:55 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/235', 'html_url': 'https://github.com/huggingface/datasets/pull/235', 'diff_url': 'https://github.com/huggingface/datasets/pull/235.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/235.patch', 'merged_at': datetime.datetime(2020, 6, 12, 1... | ## Adding an *experimental datasets* folder
After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to ... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/235/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/235/timeline | null | null | true |
4,658 | https://api.github.com/repos/huggingface/datasets/issues/234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/234/comments | https://api.github.com/repos/huggingface/datasets/issues/234/events | https://github.com/huggingface/datasets/issues/234 | 630,534,427 | MDU6SXNzdWU2MzA1MzQ0Mjc= | 234 | Huggingface NLP, Uploading custom dataset | {'login': 'Nouman97', 'id': 42269506, 'node_id': 'MDQ6VXNlcjQyMjY5NTA2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42269506?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Nouman97', 'html_url': 'https://github.com/Nouman97', 'followers_url': 'https://api.github.com/users/Nouman97/followers',... | [] | closed | false | null | [] | null | ["What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`"
'To load a dataset you need to have a script that defines the format of the examples, the splits and the way to gener... | 2020-06-04 05:59:06 | 2020-07-06 09:33:26 | 2020-07-06 09:33:26 | NONE | null | null | null | Hello,
Does anyone know how we can load a custom dataset using the nlp.load_dataset command? Let's say that I have a dataset in the same format as squad-v1.1; how am I supposed to load it using Hugging Face nlp?
Thank you! | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/234/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/234/timeline | null | completed | false |
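For context on what "the same format as squad-v1.1" means, the layout nests answers under data -> paragraphs -> qas. The stdlib-only reader below just illustrates that structure with a made-up sample; the supported route in the library is writing a dataset script, as the maintainers explain in the comments:

```python
import json

# Walk a SQuAD-v1.1-shaped dict and yield flat (context, question, answers)
# examples. Purely illustrative; not an nlp-library API.
def iter_squad_examples(raw):
    for article in raw["data"]:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                yield {
                    "context": paragraph["context"],
                    "question": qa["question"],
                    "answers": [a["text"] for a in qa["answers"]],
                }

sample = json.loads("""
{"version": "1.1", "data": [{"title": "Demo", "paragraphs": [{
    "context": "The nlp library wraps datasets.",
    "qas": [{"id": "q1", "question": "What does the nlp library wrap?",
             "answers": [{"text": "datasets", "answer_start": 22}]}]}]}]}
""")
examples = list(iter_squad_examples(sample))
```

A dataset script for such a file would essentially run this same triple loop inside its example generator.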
4,659 | https://api.github.com/repos/huggingface/datasets/issues/233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/233/comments | https://api.github.com/repos/huggingface/datasets/issues/233/events | https://github.com/huggingface/datasets/issues/233 | 630,432,132 | MDU6SXNzdWU2MzA0MzIxMzI= | 233 | Fail to download c4 english corpus | {'login': 'donggyukimc', 'id': 16605764, 'node_id': 'MDQ6VXNlcjE2NjA1NzY0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16605764?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/donggyukimc', 'html_url': 'https://github.com/donggyukimc', 'followers_url': 'https://api.github.com/users/donggyukimc... | [] | closed | false | null | [] | null | ['Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You can f... | 2020-06-04 01:06:38 | 2021-01-08 07:17:32 | 2020-06-08 09:16:59 | NONE | null | null | null | i run following code to download c4 English corpus.
```
dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner'
, data_dir='/mypath')
```
and I got the following failure:
```
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/.... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/233/reactions', 'total_count': 3, '+1': 3, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/233/timeline | null | completed | false |
4,660 | https://api.github.com/repos/huggingface/datasets/issues/232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/232/comments | https://api.github.com/repos/huggingface/datasets/issues/232/events | https://github.com/huggingface/datasets/pull/232 | 630,029,568 | MDExOlB1bGxSZXF1ZXN0NDI3MjI5NDcy | 232 | Nlp cli fix endpoints | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | ['LGTM 👍 '] | 2020-06-03 14:10:39 | 2020-06-08 09:02:58 | 2020-06-08 09:02:57 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/232', 'html_url': 'https://github.com/huggingface/datasets/pull/232', 'diff_url': 'https://github.com/huggingface/datasets/pull/232.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/232.patch', 'merged_at': datetime.datetime(2020, 6, 8, 9,... | With this PR users will be able to upload their own datasets and metrics.
As mentioned in #181, I had to use the new endpoints and revert the use of dataclasses (just in case we have changes in the API in the future).
We now distinguish commands for datasets and commands for metrics:
```bash
nlp-cli upload_data... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/232/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/232/timeline | null | null | true |
4,661 | https://api.github.com/repos/huggingface/datasets/issues/231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/231/comments | https://api.github.com/repos/huggingface/datasets/issues/231/events | https://github.com/huggingface/datasets/pull/231 | 629,988,694 | MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz | 231 | Add .download to MockDownloadManager | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-03 13:20:00 | 2020-06-03 14:25:56 | 2020-06-03 14:25:55 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/231', 'html_url': 'https://github.com/huggingface/datasets/pull/231', 'diff_url': 'https://github.com/huggingface/datasets/pull/231.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/231.patch', 'merged_at': datetime.datetime(2020, 6, 3, 14... | One method from the DownloadManager was missing and some users couldn't run the tests because of that.
@yjernite | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/231/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/231/timeline | null | null | true |
4,662 | https://api.github.com/repos/huggingface/datasets/issues/230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/230/comments | https://api.github.com/repos/huggingface/datasets/issues/230/events | https://github.com/huggingface/datasets/pull/230 | 629,983,684 | MDExOlB1bGxSZXF1ZXN0NDI3MTkzMTQ0 | 230 | Don't force to install apache beam for wikipedia dataset | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-06-03 13:13:07 | 2020-06-03 14:34:09 | 2020-06-03 14:34:07 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/230', 'html_url': 'https://github.com/huggingface/datasets/pull/230', 'diff_url': 'https://github.com/huggingface/datasets/pull/230.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/230.patch', 'merged_at': datetime.datetime(2020, 6, 3, 14... | As pointed out in #227, we shouldn't force users to install apache beam if the processed dataset can be downloaded. I moved the imports of some datasets to avoid this problem | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/230/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/230/timeline | null | null | true |
4,663 | https://api.github.com/repos/huggingface/datasets/issues/229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/229/comments | https://api.github.com/repos/huggingface/datasets/issues/229/events | https://github.com/huggingface/datasets/pull/229 | 629,956,490 | MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5 | 229 | Rename dataset_infos.json to dataset_info.json | {'login': 'aswin-giridhar', 'id': 11817160, 'node_id': 'MDQ6VXNlcjExODE3MTYw', 'avatar_url': 'https://avatars.githubusercontent.com/u/11817160?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/aswin-giridhar', 'html_url': 'https://github.com/aswin-giridhar', 'followers_url': 'https://api.github.com/users/as... | [] | closed | false | null | [] | null | ["\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewer c... | 2020-06-03 12:31:44 | 2020-06-03 12:52:54 | 2020-06-03 12:48:33 | NONE | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/229', 'html_url': 'https://github.com/huggingface/datasets/pull/229', 'diff_url': 'https://github.com/huggingface/datasets/pull/229.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/229.patch', 'merged_at': None} | As the file required for the viewing in the live nlp viewer is named as dataset_info.json | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/229/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/229/timeline | null | null | true |
4,664 | https://api.github.com/repos/huggingface/datasets/issues/228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/228/comments | https://api.github.com/repos/huggingface/datasets/issues/228/events | https://github.com/huggingface/datasets/issues/228 | 629,952,402 | MDU6SXNzdWU2Mjk5NTI0MDI= | 228 | Not able to access the XNLI dataset | {'login': 'aswin-giridhar', 'id': 11817160, 'node_id': 'MDQ6VXNlcjExODE3MTYw', 'avatar_url': 'https://avatars.githubusercontent.com/u/11817160?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/aswin-giridhar', 'html_url': 'https://github.com/aswin-giridhar', 'followers_url': 'https://api.github.com/users/as... | [{'id': 2107841032, 'node_id': 'MDU6TGFiZWwyMTA3ODQxMDMy', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer', 'name': 'nlp-viewer', 'color': '94203D', 'default': False, 'description': ''}] | closed | false | {'login': 'srush', 'id': 35882, 'node_id': 'MDQ6VXNlcjM1ODgy', 'avatar_url': 'https://avatars.githubusercontent.com/u/35882?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/srush', 'html_url': 'https://github.com/srush', 'followers_url': 'https://api.github.com/users/srush/followers', 'following_url': 'htt... | [{'login': 'srush', 'id': 35882, 'node_id': 'MDQ6VXNlcjM1ODgy', 'avatar_url': 'https://avatars.githubusercontent.com/u/35882?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/srush', 'html_url': 'https://github.com/srush', 'followers_url': 'https://api.github.com/users/srush/followers', 'following_url': 'ht... | null | ['Added pull request to change the name of the file from dataset_infos.json to dataset_info.json'
'Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? '
"Update: The dataset_info.json error is gone, bu... | 2020-06-03 12:25:14 | 2020-07-17 17:44:22 | 2020-07-17 17:44:22 | NONE | null | null | null | When I try to access the XNLI dataset, I get the following error. The option of plain_text get selected automatically and then I get the following error.
```
FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json'
Traceback:
File "/... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/228/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/228/timeline | null | completed | false |
4,665 | https://api.github.com/repos/huggingface/datasets/issues/227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/227/comments | https://api.github.com/repos/huggingface/datasets/issues/227/events | https://github.com/huggingface/datasets/issues/227 | 629,845,704 | MDU6SXNzdWU2Mjk4NDU3MDQ= | 227 | Should we still have to force to install apache_beam to download wikipedia ? | {'login': 'richarddwang', 'id': 17963619, 'node_id': 'MDQ6VXNlcjE3OTYzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/17963619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/richarddwang', 'html_url': 'https://github.com/richarddwang', 'followers_url': 'https://api.github.com/users/richardd... | [] | closed | false | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'f... | null | ["Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies"
'Got it, feel free to close this issue when you think it’s resolved.'
'It should be good now :)'] | 2020-06-03 09:33:20 | 2020-06-03 15:25:41 | 2020-06-03 15:25:41 | CONTRIBUTOR | null | null | null | Hi, first thanks to @lhoestq 's revolutionary work, I successfully downloaded processed wikipedia according to the doc. 😍😍😍
But on the first try, it told me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be needed according to #204; that was kind of confusing at the time.
Maybe we s... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/227/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/227/timeline | null | completed | false |
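The fix discussed here (moving heavy imports out of module scope so they only run when actually needed) is the standard deferred-import pattern. A sketch under assumptions, with a made-up helper name, not the library's real implementation:

```python
import importlib

def lazy_import(module_name, hint=""):
    """Defer importing a heavy optional dependency (such as apache_beam)
    until a code path actually needs it, so users who only download the
    already-processed dataset never have to install it."""
    try:
        return importlib.import_module(module_name)
    except ImportError as err:
        raise ImportError(
            f"Optional dependency '{module_name}' is not installed. {hint}"
        ) from err

# Only the "process the dataset from scratch" code path would call this;
# downloading the preprocessed files never triggers the import.
json_module = lazy_import("json")
```

The error message raised on failure can then tell users precisely when the dependency is required ("only if you rebuild the dataset with Beam"), instead of failing at import time for everyone.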
4,666 | https://api.github.com/repos/huggingface/datasets/issues/226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/226/comments | https://api.github.com/repos/huggingface/datasets/issues/226/events | https://github.com/huggingface/datasets/pull/226 | 628,344,520 | MDExOlB1bGxSZXF1ZXN0NDI1OTA0MjEz | 226 | add BlendedSkillTalk dataset | {'login': 'mariamabarham', 'id': 38249783, 'node_id': 'MDQ6VXNlcjM4MjQ5Nzgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/38249783?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariamabarham', 'html_url': 'https://github.com/mariamabarham', 'followers_url': 'https://api.github.com/users/maria... | [] | closed | false | null | [] | null | ['Awesome :D'] | 2020-06-01 10:54:45 | 2020-06-03 14:37:23 | 2020-06-03 14:37:22 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/226', 'html_url': 'https://github.com/huggingface/datasets/pull/226', 'diff_url': 'https://github.com/huggingface/datasets/pull/226.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/226.patch', 'merged_at': datetime.datetime(2020, 6, 3, 14... | This PR add the BlendedSkillTalk dataset, which is used to fine tune the blenderbot. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/226/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/226/timeline | null | null | true |
4,667 | https://api.github.com/repos/huggingface/datasets/issues/225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/225/comments | https://api.github.com/repos/huggingface/datasets/issues/225/events | https://github.com/huggingface/datasets/issues/225 | 628,083,366 | MDU6SXNzdWU2MjgwODMzNjY= | 225 | [ROUGE] Different scores with `files2rouge` | {'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers',... | [{'id': 2067400959, 'node_id': 'MDU6TGFiZWwyMDY3NDAwOTU5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion', 'name': 'Metric discussion', 'color': 'd722e8', 'default': False, 'description': 'Discussions on the metrics'}] | closed | false | {'login': 'yjernite', 'id': 10469459, 'node_id': 'MDQ6VXNlcjEwNDY5NDU5', 'avatar_url': 'https://avatars.githubusercontent.com/u/10469459?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yjernite', 'html_url': 'https://github.com/yjernite', 'followers_url': 'https://api.github.com/users/yjernite/followers',... | [{'login': 'yjernite', 'id': 10469459, 'node_id': 'MDQ6VXNlcjEwNDY5NDU5', 'avatar_url': 'https://avatars.githubusercontent.com/u/10469459?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yjernite', 'html_url': 'https://github.com/yjernite', 'followers_url': 'https://api.github.com/users/yjernite/followers'... 
| null | ["@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing before computing the P/R/F scores. If I ... | 2020-06-01 00:50:36 | 2020-06-03 15:27:18 | 2020-06-03 15:27:18 | NONE | null | null | null | It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`.
Here is a self-contained notebook to reproduce both scores : https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing
---
`nlp` : (Only mid F-scores)
>rouge1 0.33508031962733364
rouge2 0.145743337761... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/225/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/225/timeline | null | completed | false |
4,668 | https://api.github.com/repos/huggingface/datasets/issues/224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/224/comments | https://api.github.com/repos/huggingface/datasets/issues/224/events | https://github.com/huggingface/datasets/issues/224 | 627,791,693 | MDU6SXNzdWU2Mjc3OTE2OTM= | 224 | [Feature Request/Help] BLEURT model -> PyTorch | {'login': 'adamwlev', 'id': 6889910, 'node_id': 'MDQ6VXNlcjY4ODk5MTA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6889910?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/adamwlev', 'html_url': 'https://github.com/adamwlev', 'followers_url': 'https://api.github.com/users/adamwlev/followers', '... | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | closed | false | {'login': 'yjernite', 'id': 10469459, 'node_id': 'MDQ6VXNlcjEwNDY5NDU5', 'avatar_url': 'https://avatars.githubusercontent.com/u/10469459?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yjernite', 'html_url': 'https://github.com/yjernite', 'followers_url': 'https://api.github.com/users/yjernite/followers',... | [{'login': 'yjernite', 'id': 10469459, 'node_id': 'MDQ6VXNlcjEwNDY5NDU5', 'avatar_url': 'https://avatars.githubusercontent.com/u/10469459?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yjernite', 'html_url': 'https://github.com/yjernite', 'followers_url': 'https://api.github.com/users/yjernite/followers'... | null | ['Is there any update on this? \r\n\r\nThanks!'
"Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?"
"We currently provide a wra... | 2020-05-30 18:30:40 | 2021-09-02 15:02:17 | 2021-01-04 09:53:32 | NONE | null | null | null | Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Tw... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/224/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/224/timeline | null | completed | false |
4,669 | https://api.github.com/repos/huggingface/datasets/issues/223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/223/comments | https://api.github.com/repos/huggingface/datasets/issues/223/events | https://github.com/huggingface/datasets/issues/223 | 627,683,386 | MDU6SXNzdWU2Mjc2ODMzODY= | 223 | [Feature request] Add FLUE dataset | {'login': 'lbourdois', 'id': 58078086, 'node_id': 'MDQ6VXNlcjU4MDc4MDg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/58078086?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lbourdois', 'html_url': 'https://github.com/lbourdois', 'followers_url': 'https://api.github.com/users/lbourdois/followe... | [{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}] | closed | false | null | [] | null | ['Hi @lbourdois, yes please share it with us'
'@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre-trai... | 2020-05-30 08:52:15 | 2020-12-03 13:39:33 | 2020-12-03 13:39:33 | NONE | null | null | null | Hi,
I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French.
In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned.
If it is not the case, I can provide each of the cleaned FLUE datasets (in the form... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/223/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/223/timeline | null | completed | false |
4,670 | https://api.github.com/repos/huggingface/datasets/issues/222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/222/comments | https://api.github.com/repos/huggingface/datasets/issues/222/events | https://github.com/huggingface/datasets/issues/222 | 627,586,690 | MDU6SXNzdWU2Mjc1ODY2OTA= | 222 | Colab Notebook breaks when downloading the squad dataset | {'login': 'carlos-aguayo', 'id': 338917, 'node_id': 'MDQ6VXNlcjMzODkxNw==', 'avatar_url': 'https://avatars.githubusercontent.com/u/338917?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/carlos-aguayo', 'html_url': 'https://github.com/carlos-aguayo', 'followers_url': 'https://api.github.com/users/carlos-ag... | [] | closed | false | null | [] | null | ["The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`"
'It still breaks very near the end\r\n\r\n
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/222/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/222/timeline | null | completed | false |
4,671 | https://api.github.com/repos/huggingface/datasets/issues/221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/221/comments | https://api.github.com/repos/huggingface/datasets/issues/221/events | https://github.com/huggingface/datasets/pull/221 | 627,300,648 | MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0 | 221 | Fix tests/test_dataset_common.py | {'login': 'tayciryahmed', 'id': 13635495, 'node_id': 'MDQ6VXNlcjEzNjM1NDk1', 'avatar_url': 'https://avatars.githubusercontent.com/u/13635495?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/tayciryahmed', 'html_url': 'https://github.com/tayciryahmed', 'followers_url': 'https://api.github.com/users/taycirya... | [] | closed | false | null | [] | null | ['Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?'] | 2020-05-29 14:12:15 | 2020-06-01 12:20:42 | 2020-05-29 15:02:23 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/221', 'html_url': 'https://github.com/huggingface/datasets/pull/221', 'diff_url': 'https://github.com/huggingface/datasets/pull/221.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/221.patch', 'merged_at': datetime.datetime(2020, 5, 29, 1... | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/ma... 
| {'url': 'https://api.github.com/repos/huggingface/datasets/issues/221/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/221/timeline | null | null | true |
4,672 | https://api.github.com/repos/huggingface/datasets/issues/220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/220/comments | https://api.github.com/repos/huggingface/datasets/issues/220/events | https://github.com/huggingface/datasets/pull/220 | 627,280,683 | MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy | 220 | dataset_arcd | {'login': 'tayciryahmed', 'id': 13635495, 'node_id': 'MDQ6VXNlcjEzNjM1NDk1', 'avatar_url': 'https://avatars.githubusercontent.com/u/13635495?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/tayciryahmed', 'html_url': 'https://github.com/tayciryahmed', 'followers_url': 'https://api.github.com/users/taycirya... | [] | closed | false | null | [] | null | ['you can rebase from master to fix the CI error :)' 'Awesome !'] | 2020-05-29 13:46:50 | 2020-05-29 14:58:40 | 2020-05-29 14:57:21 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/220', 'html_url': 'https://github.com/huggingface/datasets/pull/220', 'diff_url': 'https://github.com/huggingface/datasets/pull/220.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/220.patch', 'merged_at': datetime.datetime(2020, 5, 29, 1... | Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/220/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 1, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/220/timeline | null | null | true |
4,673 | https://api.github.com/repos/huggingface/datasets/issues/219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/219/comments | https://api.github.com/repos/huggingface/datasets/issues/219/events | https://github.com/huggingface/datasets/pull/219 | 627,235,893 | MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx | 219 | force mwparserfromhell as third party | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-29 12:33:17 | 2020-05-29 13:30:13 | 2020-05-29 13:30:12 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/219', 'html_url': 'https://github.com/huggingface/datasets/pull/219', 'diff_url': 'https://github.com/huggingface/datasets/pull/219.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/219.patch', 'merged_at': datetime.datetime(2020, 5, 29, 1... | This should fix your env because you had `mwparserfromhell ` as a first party for `isort` @patrickvonplaten | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/219/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/219/timeline | null | null | true |
4,674 | https://api.github.com/repos/huggingface/datasets/issues/218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/218/comments | https://api.github.com/repos/huggingface/datasets/issues/218/events | https://github.com/huggingface/datasets/pull/218 | 627,173,407 | MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz | 218 | Add Natual Questions and C4 scripts | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-29 10:40:30 | 2020-05-29 12:31:01 | 2020-05-29 12:31:00 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/218', 'html_url': 'https://github.com/huggingface/datasets/pull/218', 'diff_url': 'https://github.com/huggingface/datasets/pull/218.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/218.patch', 'merged_at': datetime.datetime(2020, 5, 29, 1... | Scripts are ready !
However, they are not processed nor directly available from GCP yet. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/218/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/218/timeline | null | null | true |
4,675 | https://api.github.com/repos/huggingface/datasets/issues/217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/217/comments | https://api.github.com/repos/huggingface/datasets/issues/217/events | https://github.com/huggingface/datasets/issues/217 | 627,128,403 | MDU6SXNzdWU2MjcxMjg0MDM= | 217 | Multi-task dataset mixing | {'login': 'ghomasHudson', 'id': 13795113, 'node_id': 'MDQ6VXNlcjEzNzk1MTEz', 'avatar_url': 'https://avatars.githubusercontent.com/u/13795113?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ghomasHudson', 'html_url': 'https://github.com/ghomasHudson', 'followers_url': 'https://api.github.com/users/ghomasHu... | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}
{'id': 2067400324, 'node_id': 'MDU6TGFiZWwyMDY3NDAwMzI0', 'url': 'https://api.git... | open | false | null | [] | null | ['I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had "multiple" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypot... | 2020-05-29 09:22:26 | 2020-10-26 08:46:33 | null | CONTRIBUTOR | null | null | null | It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sam... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/217/reactions', 'total_count': 10, '+1': 10, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/217/timeline | null | null | false |
4,676 | https://api.github.com/repos/huggingface/datasets/issues/216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/216/comments | https://api.github.com/repos/huggingface/datasets/issues/216/events | https://github.com/huggingface/datasets/issues/216 | 626,896,890 | MDU6SXNzdWU2MjY4OTY4OTA= | 216 | ❓ How to get ROUGE-2 with the ROUGE metric ? | {'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers',... | [] | closed | false | null | [] | null | ["ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird"
'For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric(\'rouge\')\r\nwith open("pred.txt") as p, open("ref.txt") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=["rouge2"... | 2020-05-28 23:47:32 | 2020-06-01 00:04:35 | 2020-06-01 00:04:35 | NONE | null | null | null | I'm trying to use ROUGE metric, but I don't know how to get the ROUGE-2 metric.
---
I compute scores with :
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
rouge.add([lp], [lg])
score = rouge.compute()
```
... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/216/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/216/timeline | null | completed | false |
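Per the maintainers' reply quoted above, passing `rouge_types=["rouge2"]` to `compute` selects ROUGE-2. As a self-contained illustration of what ROUGE-2 F1 measures (bigram overlap between prediction and reference), here is a toy implementation — a sketch for intuition, not the metric's actual code:

```python
from collections import Counter


def bigrams(tokens):
    """Multiset of adjacent token pairs."""
    return Counter(zip(tokens, tokens[1:]))


def rouge2_f1(prediction, reference):
    """Illustrative ROUGE-2 F1: harmonic mean of bigram precision and recall."""
    p, r = bigrams(prediction.split()), bigrams(reference.split())
    if not p or not r:
        return 0.0
    overlap = sum((p & r).values())  # clipped bigram matches
    precision = overlap / sum(p.values())
    recall = overlap / sum(r.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Real ROUGE implementations differ mainly in pre-processing (stemming, tokenization), which is one source of score discrepancies between packages.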
4,677 | https://api.github.com/repos/huggingface/datasets/issues/215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/215/comments | https://api.github.com/repos/huggingface/datasets/issues/215/events | https://github.com/huggingface/datasets/issues/215 | 626,867,879 | MDU6SXNzdWU2MjY4Njc4Nzk= | 215 | NonMatchingSplitsSizesError when loading blog_authorship_corpus | {'login': 'cedricconol', 'id': 52105365, 'node_id': 'MDQ6VXNlcjUyMTA1MzY1', 'avatar_url': 'https://avatars.githubusercontent.com/u/52105365?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/cedricconol', 'html_url': 'https://github.com/cedricconol', 'followers_url': 'https://api.github.com/users/cedricconol... | [{'id': 2067388877, 'node_id': 'MDU6TGFiZWwyMDY3Mzg4ODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug', 'name': 'dataset bug', 'color': '2edb81', 'default': False, 'description': 'A bug in a dataset script provided in the library'}] | closed | false | null | [] | null | ["I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInfo(n... | 2020-05-28 22:55:19 | 2022-02-10 13:05:45 | 2022-02-10 13:05:45 | NONE | null | null | null | Getting this error when i run `nlp.load_dataset('blog_authorship_corpus')`.
```
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train',
num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'),
'recorded... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/215/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/215/timeline | null | completed | false |
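The `NonMatchingSplitsSizesError` above comes from checking the recorded split sizes against the expected ones shipped with the dataset script. A hypothetical sketch of that verification step (field layout is illustrative, not the library's actual `SplitInfo`):

```python
def verify_splits(expected, recorded):
    """Raise if any split's recorded (num_bytes, num_examples) differs from the expected pair."""
    bad = [name for name in expected if expected[name] != recorded.get(name)]
    if bad:
        raise ValueError(f"NonMatchingSplitsSizesError for splits: {bad}")
```

When the upstream data changes, the check fails until the expected sizes in the dataset's metadata are regenerated.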
4,678 | https://api.github.com/repos/huggingface/datasets/issues/214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/214/comments | https://api.github.com/repos/huggingface/datasets/issues/214/events | https://github.com/huggingface/datasets/pull/214 | 626,641,549 | MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx | 214 | [arrow_dataset.py] add new filter function | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | ["I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet."
'In... | 2020-05-28 16:21:40 | 2020-05-29 11:43:29 | 2020-05-29 11:32:20 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/214', 'html_url': 'https://github.com/huggingface/datasets/pull/214', 'diff_url': 'https://github.com/huggingface/datasets/pull/214.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/214.patch', 'merged_at': datetime.datetime(2020, 5, 29, 1... | The `.map()` function is super useful, but can IMO a bit tedious when filtering certain examples.
I think, filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit than the `.map()` function.
Here is a ... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/214/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/214/timeline | null | null | true |
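The semantics of the proposed `.filter()` can be illustrated with plain Python lists of example dicts — a sketch of the behavior only, not the actual Arrow-backed implementation:

```python
def filter_examples(examples, predicate):
    """Keep only the examples for which predicate(example) is truthy."""
    return [ex for ex in examples if predicate(ex)]


data = [{"text": "short"}, {"text": "a much longer example"}]
kept = filter_examples(data, lambda ex: len(ex["text"].split()) > 1)
```

In the real library the predicate runs over Arrow-backed rows and writes a new cached dataset rather than a Python list.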
4,679 | https://api.github.com/repos/huggingface/datasets/issues/213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/213/comments | https://api.github.com/repos/huggingface/datasets/issues/213/events | https://github.com/huggingface/datasets/pull/213 | 626,587,995 | MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3 | 213 | better message if missing beam options | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-28 15:06:57 | 2020-05-29 09:51:17 | 2020-05-29 09:51:16 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/213', 'html_url': 'https://github.com/huggingface/datasets/pull/213', 'diff_url': 'https://github.com/huggingface/datasets/pull/213.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/213.patch', 'merged_at': datetime.datetime(2020, 5, 29, 9... | WDYT @yjernite ?
For example:
```python
dataset = nlp.load_dataset('wikipedia', '20200501.aa')
```
Raises:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to ru... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/213/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/213/timeline | null | null | true |
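The improved message shown above follows a common pattern: validate the Beam configuration up front and fail with an actionable error instead of deep inside the pipeline. A hypothetical sketch (names are illustrative, not the library's internals):

```python
class MissingBeamOptions(ValueError):
    """Raised when a Beam-based dataset is built without a runner or options."""


def check_beam_config(beam_runner=None, beam_options=None, name="wikipedia/20200501.aa"):
    """Fail early, telling the user exactly which argument to pass."""
    if beam_runner is None and beam_options is None:
        raise MissingBeamOptions(
            f"Trying to generate '{name}' using Apache Beam, yet no Beam Runner "
            "or PipelineOptions() has been provided. For small datasets you can "
            "pass e.g. beam_runner='DirectRunner'."
        )
```

The key design choice is raising at `load_dataset` time, where the user can still act on the message.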
4,680 | https://api.github.com/repos/huggingface/datasets/issues/212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/212/comments | https://api.github.com/repos/huggingface/datasets/issues/212/events | https://github.com/huggingface/datasets/pull/212 | 626,580,198 | MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy | 212 | have 'add' and 'add_batch' for metrics | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-28 14:56:47 | 2020-05-29 10:41:05 | 2020-05-29 10:41:04 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/212', 'html_url': 'https://github.com/huggingface/datasets/pull/212', 'diff_url': 'https://github.com/huggingface/datasets/pull/212.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/212.patch', 'merged_at': datetime.datetime(2020, 5, 29, 1... | This should fix #116
Previously the `.add` method of metrics expected a batch of examples.
Now `.add` expects one prediction/reference and `.add_batch` expects a batch.
I think it is more coherent with the way the ArrowWriter works. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/212/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/212/timeline | null | null | true |
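The `add` / `add_batch` split described in this PR can be mimicked by any accumulating metric. A toy exact-match metric as a sketch (not the library's actual `Metric` class):

```python
class ExactMatch:
    """Toy metric: .add takes one prediction/reference, .add_batch takes lists."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def add(self, prediction, reference):
        """Accumulate a single example."""
        self.correct += int(prediction == reference)
        self.total += 1

    def add_batch(self, predictions, references):
        """Accumulate a batch of aligned examples."""
        for p, r in zip(predictions, references):
            self.add(p, r)

    def compute(self):
        """Final score over everything accumulated so far."""
        return self.correct / self.total if self.total else 0.0
```

Mirroring the ArrowWriter's single-row vs. batch write paths keeps the two APIs coherent, as the PR notes.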
4,681 | https://api.github.com/repos/huggingface/datasets/issues/211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/211/comments | https://api.github.com/repos/huggingface/datasets/issues/211/events | https://github.com/huggingface/datasets/issues/211 | 626,565,994 | MDU6SXNzdWU2MjY1NjU5OTQ= | 211 | [Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}] | closed | false | {'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', '... | [{'login': 'thomwolf', 'id': 7353373, 'node_id': 'MDQ6VXNlcjczNTMzNzM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7353373?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomwolf', 'html_url': 'https://github.com/thomwolf', 'followers_url': 'https://api.github.com/users/thomwolf/followers', ... 
| null | ['Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it\'s cached ... | 2020-05-28 14:38:14 | 2020-07-23 10:15:16 | 2020-07-23 10:15:16 | MEMBER | null | null | null | Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers a `ArrowInvalid: Could not convert TagMe with type str: converting to n... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/211/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/211/timeline | null | completed | false |
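Errors like `Could not convert TagMe with type str: converting to null type` arise when a column mixes incompatible value types (here, strings alongside lists), so Arrow cannot infer a single schema. A naive pure-Python sketch of that inference step — not pyarrow's actual logic:

```python
def infer_column_type(values):
    """All non-None values in a column must share one Python type, else fail."""
    types = {type(v) for v in values if v is not None}
    if len(types) > 1:
        names = sorted(t.__name__ for t in types)
        raise TypeError(f"Could not convert column with mixed types: {names}")
    return types.pop() if types else type(None)
```

The usual fix is to normalize the offending field (e.g. always a list, possibly empty) before writing.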
4,682 | https://api.github.com/repos/huggingface/datasets/issues/210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/210/comments | https://api.github.com/repos/huggingface/datasets/issues/210/events | https://github.com/huggingface/datasets/pull/210 | 626,504,243 | MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz | 210 | fix xnli metric kwargs description | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-28 13:21:44 | 2020-05-28 13:22:11 | 2020-05-28 13:22:10 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/210', 'html_url': 'https://github.com/huggingface/datasets/pull/210', 'diff_url': 'https://github.com/huggingface/datasets/pull/210.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/210.patch', 'merged_at': datetime.datetime(2020, 5, 28, 1... | The text was wrong as noticed in #202 | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/210/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/210/timeline | null | null | true |
4,683 | https://api.github.com/repos/huggingface/datasets/issues/209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/209/comments | https://api.github.com/repos/huggingface/datasets/issues/209/events | https://github.com/huggingface/datasets/pull/209 | 626,405,849 | MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4 | 209 | Add a Google Drive exception for small files | {'login': 'airKlizz', 'id': 25703835, 'node_id': 'MDQ6VXNlcjI1NzAzODM1', 'avatar_url': 'https://avatars.githubusercontent.com/u/25703835?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/airKlizz', 'html_url': 'https://github.com/airKlizz', 'followers_url': 'https://api.github.com/users/airKlizz/followers',... | [] | closed | false | null | [] | null | ['Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp'
'Nice ! ' '``make style`` done! Thanks for the approvals.'] | 2020-05-28 10:40:17 | 2020-05-28 15:15:04 | 2020-05-28 15:15:04 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/209', 'html_url': 'https://github.com/huggingface/datasets/pull/209', 'diff_url': 'https://github.com/huggingface/datasets/pull/209.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/209.patch', 'merged_at': datetime.datetime(2020, 5, 28, 1... | I tried to use the ``nlp`` library to load personnal datasets. I mainly copy-paste the code for ``multi-news`` dataset because my files are stored on Google Drive.
One of my dataset is small (< 25Mo) so it can be verified by Drive without asking the authorization to the user. This makes the download starts directly... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/209/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/209/timeline | null | null | true |
4,684 | https://api.github.com/repos/huggingface/datasets/issues/208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/208/comments | https://api.github.com/repos/huggingface/datasets/issues/208/events | https://github.com/huggingface/datasets/pull/208 | 626,398,519 | MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx | 208 | [Dummy data] insert config name instead of config | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | [] | 2020-05-28 10:28:19 | 2020-05-28 12:48:01 | 2020-05-28 12:48:00 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/208', 'html_url': 'https://github.com/huggingface/datasets/pull/208', 'diff_url': 'https://github.com/huggingface/datasets/pull/208.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/208.patch', 'merged_at': datetime.datetime(2020, 5, 28, 1... | Thanks @yjernite for letting me know. In the dummy data command the config name should be passed to the dataset builder and not the config itself.
Also, @lhoestq fixed small import bug introduced by beam command I think. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/208/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/208/timeline | null | null | true |
4,685 | https://api.github.com/repos/huggingface/datasets/issues/207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/207/comments | https://api.github.com/repos/huggingface/datasets/issues/207/events | https://github.com/huggingface/datasets/issues/207 | 625,932,200 | MDU6SXNzdWU2MjU5MzIyMDA= | 207 | Remove test set from NLP viewer | {'login': 'chrisdonahue', 'id': 748399, 'node_id': 'MDQ6VXNlcjc0ODM5OQ==', 'avatar_url': 'https://avatars.githubusercontent.com/u/748399?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/chrisdonahue', 'html_url': 'https://github.com/chrisdonahue', 'followers_url': 'https://api.github.com/users/chrisdonahue... | [{'id': 2107841032, 'node_id': 'MDU6TGFiZWwyMTA3ODQxMDMy', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer', 'name': 'nlp-viewer', 'color': '94203D', 'default': False, 'description': ''}] | closed | false | null | [] | null | ['~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)'
'Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.'
'We do no ... | 2020-05-27 18:32:07 | 2022-02-10 13:17:45 | 2022-02-10 13:17:45 | NONE | null | null | null | While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and smal... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/207/reactions', 'total_count': 3, '+1': 3, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/207/timeline | null | completed | false |
4,686 | https://api.github.com/repos/huggingface/datasets/issues/206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/206/comments | https://api.github.com/repos/huggingface/datasets/issues/206/events | https://github.com/huggingface/datasets/issues/206 | 625,842,989 | MDU6SXNzdWU2MjU4NDI5ODk= | 206 | [Question] Combine 2 datasets which have the same columns | {'login': 'airKlizz', 'id': 25703835, 'node_id': 'MDQ6VXNlcjI1NzAzODM1', 'avatar_url': 'https://avatars.githubusercontent.com/u/25703835?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/airKlizz', 'html_url': 'https://github.com/airKlizz', 'followers_url': 'https://api.github.com/users/airKlizz/followers',... | [] | closed | false | null | [] | null | ['We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.'
'Ok great! I will look at it. Thanks'] | 2020-05-27 16:25:52 | 2020-06-10 09:11:14 | 2020-06-10 09:11:14 | CONTRIBUTOR | null | null | null | Hi,
I am using ``nlp`` to load personal datasets. I created summarization datasets in multi-languages based on wikinews. I have one dataset for english and one for german (french is getting to be ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/206/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/206/timeline | null | completed | false |
4,687 | https://api.github.com/repos/huggingface/datasets/issues/205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/205/comments | https://api.github.com/repos/huggingface/datasets/issues/205/events | https://github.com/huggingface/datasets/pull/205 | 625,839,335 | MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1 | 205 | Better arrow dataset iter | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-27 16:20:21 | 2020-05-27 16:39:58 | 2020-05-27 16:39:56 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/205', 'html_url': 'https://github.com/huggingface/datasets/pull/205', 'diff_url': 'https://github.com/huggingface/datasets/pull/205.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/205.patch', 'merged_at': datetime.datetime(2020, 5, 27, 1... | I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow).
With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/205/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/205/timeline | null | null | true |
4,688 | https://api.github.com/repos/huggingface/datasets/issues/204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/204/comments | https://api.github.com/repos/huggingface/datasets/issues/204/events | https://github.com/huggingface/datasets/pull/204 | 625,655,849 | MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw | 204 | Add Dataflow support + Wikipedia + Wiki40b | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-27 12:32:49 | 2020-05-28 08:10:35 | 2020-05-28 08:10:34 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/204', 'html_url': 'https://github.com/huggingface/datasets/pull/204', 'diff_url': 'https://github.com/huggingface/datasets/pull/204.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/204.patch', 'merged_at': datetime.datetime(2020, 5, 28, 8... | # Add Dataflow support + Wikipedia + Wiki40b
## Support datasets processing with Apache Beam
Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows processing datasets on many execution engines like Dataflow, Spark, Flink, etc.
To process such da... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/204/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/204/timeline | null | null | true |
4,689 | https://api.github.com/repos/huggingface/datasets/issues/203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/203/comments | https://api.github.com/repos/huggingface/datasets/issues/203/events | https://github.com/huggingface/datasets/pull/203 | 625,515,488 | MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3 | 203 | Raise an error if no config name for datasets like glue | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | [] | 2020-05-27 09:03:58 | 2020-05-27 16:40:39 | 2020-05-27 16:40:38 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/203', 'html_url': 'https://github.com/huggingface/datasets/pull/203', 'diff_url': 'https://github.com/huggingface/datasets/pull/203.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/203.patch', 'merged_at': datetime.datetime(2020, 5, 27, 1... | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example for glue there are cola, sst2, mrpc etc.
Currently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to p... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/203/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/203/timeline | null | null | true |
4,690 | https://api.github.com/repos/huggingface/datasets/issues/202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/202/comments | https://api.github.com/repos/huggingface/datasets/issues/202/events | https://github.com/huggingface/datasets/issues/202 | 625,493,983 | MDU6SXNzdWU2MjU0OTM5ODM= | 202 | Mistaken `_KWARGS_DESCRIPTION` for XNLI metric | {'login': 'phiyodr', 'id': 33572125, 'node_id': 'MDQ6VXNlcjMzNTcyMTI1', 'avatar_url': 'https://avatars.githubusercontent.com/u/33572125?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/phiyodr', 'html_url': 'https://github.com/phiyodr', 'followers_url': 'https://api.github.com/users/phiyodr/followers', 'fo... | [] | closed | false | null | [] | null | ['Indeed, good catch ! thanks\r\nFixing it right now'] | 2020-05-27 08:34:42 | 2020-05-28 13:22:36 | 2020-05-28 13:22:36 | NONE | null | null | null | Hi!
The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric:
... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/202/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/202/timeline | null | completed | false |
4,691 | https://api.github.com/repos/huggingface/datasets/issues/201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/201/comments | https://api.github.com/repos/huggingface/datasets/issues/201/events | https://github.com/huggingface/datasets/pull/201 | 625,235,430 | MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw | 201 | Fix typo in README | {'login': 'LysandreJik', 'id': 30755778, 'node_id': 'MDQ6VXNlcjMwNzU1Nzc4', 'avatar_url': 'https://avatars.githubusercontent.com/u/30755778?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/LysandreJik', 'html_url': 'https://github.com/LysandreJik', 'followers_url': 'https://api.github.com/users/LysandreJik... | [] | closed | false | null | [] | null | ['Amazing, @LysandreJik!' 'Really did my best!'] | 2020-05-26 22:18:21 | 2020-05-26 23:40:31 | 2020-05-26 23:00:56 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/201', 'html_url': 'https://github.com/huggingface/datasets/pull/201', 'diff_url': 'https://github.com/huggingface/datasets/pull/201.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/201.patch', 'merged_at': datetime.datetime(2020, 5, 26, 2... | null | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/201/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/201/timeline | null | null | true |
4,692 | https://api.github.com/repos/huggingface/datasets/issues/200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/200/comments | https://api.github.com/repos/huggingface/datasets/issues/200/events | https://github.com/huggingface/datasets/pull/200 | 625,226,638 | MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0 | 200 | [ArrowWriter] Set schema at first write example | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | ['Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?'] | 2020-05-26 21:59:48 | 2020-05-27 09:07:54 | 2020-05-27 09:07:53 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/200', 'html_url': 'https://github.com/huggingface/datasets/pull/200', 'diff_url': 'https://github.com/huggingface/datasets/pull/200.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/200.patch', 'merged_at': datetime.datetime(2020, 5, 27, 9... | Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so).
I noticed that it was not done if the first example is added via `.write`, so I added it for coherence. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/200/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/200/timeline | null | null | true |
4,693 | https://api.github.com/repos/huggingface/datasets/issues/199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/199/comments | https://api.github.com/repos/huggingface/datasets/issues/199/events | https://github.com/huggingface/datasets/pull/199 | 625,217,440 | MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx | 199 | Fix GermEval 2014 dataset infos | {'login': 'stefan-it', 'id': 20651387, 'node_id': 'MDQ6VXNlcjIwNjUxMzg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/20651387?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stefan-it', 'html_url': 'https://github.com/stefan-it', 'followers_url': 'https://api.github.com/users/stefan-it/followe... | [] | closed | false | null | [] | null | ['Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)'
'Oh good catch ! This should fix it indeed'] | 2020-05-26 21:41:44 | 2020-05-26 21:50:24 | 2020-05-26 21:50:24 | CONTRIBUTOR | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/199', 'html_url': 'https://github.com/huggingface/datasets/pull/199', 'diff_url': 'https://github.com/huggingface/datasets/pull/199.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/199.patch', 'merged_at': datetime.datetime(2020, 5, 26, 2... | Hi,
this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/199/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/199/timeline | null | null | true |
4,694 | https://api.github.com/repos/huggingface/datasets/issues/198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/198/comments | https://api.github.com/repos/huggingface/datasets/issues/198/events | https://github.com/huggingface/datasets/issues/198 | 625,200,627 | MDU6SXNzdWU2MjUyMDA2Mjc= | 198 | Index outside of table length | {'login': 'casajarm', 'id': 305717, 'node_id': 'MDQ6VXNlcjMwNTcxNw==', 'avatar_url': 'https://avatars.githubusercontent.com/u/305717?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/casajarm', 'html_url': 'https://github.com/casajarm', 'followers_url': 'https://api.github.com/users/casajarm/followers', 'fo... | [] | closed | false | null | [] | null | ['Sounds like something related to the nlp viewer @srush ' 'Fixed. '] | 2020-05-26 21:09:40 | 2020-05-26 22:43:49 | 2020-05-26 22:43:49 | NONE | null | null | null | The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).
> ValueError: Index (2000) outside of table length (2000).
> Traceback:
> File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _ru... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/198/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/198/timeline | null | completed | false |
4,695 | https://api.github.com/repos/huggingface/datasets/issues/197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/197/comments | https://api.github.com/repos/huggingface/datasets/issues/197/events | https://github.com/huggingface/datasets/issues/197 | 624,966,904 | MDU6SXNzdWU2MjQ5NjY5MDQ= | 197 | Scientific Papers only downloading Pubmed | {'login': 'antmarakis', 'id': 17463361, 'node_id': 'MDQ6VXNlcjE3NDYzMzYx', 'avatar_url': 'https://avatars.githubusercontent.com/u/17463361?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/antmarakis', 'html_url': 'https://github.com/antmarakis', 'followers_url': 'https://api.github.com/users/antmarakis/fol... | [] | closed | false | null | [] | null | ["Hi so there are indeed two configurations in the datasets as you can see [here](https://github.com/huggingface/nlp/blob/master/datasets/scientific_papers/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.load_... | 2020-05-26 15:18:47 | 2020-05-28 08:19:28 | 2020-05-28 08:19:28 | NONE | null | null | null | Hi!
I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following:
```
dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.')
Downloading: 10... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/197/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/197/timeline | null | completed | false |
4,696 | https://api.github.com/repos/huggingface/datasets/issues/196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/196/comments | https://api.github.com/repos/huggingface/datasets/issues/196/events | https://github.com/huggingface/datasets/pull/196 | 624,901,266 | MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw | 196 | Check invalid config name | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [] | closed | false | null | [] | null | ["I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n"
"> I think that's not related to t... | 2020-05-26 13:52:51 | 2020-05-26 21:04:56 | 2020-05-26 21:04:55 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/196', 'html_url': 'https://github.com/huggingface/datasets/pull/196', 'diff_url': 'https://github.com/huggingface/datasets/pull/196.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/196.patch', 'merged_at': datetime.datetime(2020, 5, 26, 2... | As said in #194, we should raise an error if the config name has bad characters.
Bad characters are those that are not allowed for directory names on Windows. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/196/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/196/timeline | null | null | true |
4,697 | https://api.github.com/repos/huggingface/datasets/issues/195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/195/comments | https://api.github.com/repos/huggingface/datasets/issues/195/events | https://github.com/huggingface/datasets/pull/195 | 624,858,686 | MDExOlB1bGxSZXF1ZXN0NDIzMTg1NTAy | 195 | [Dummy data command] add new case to command | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | ['@lhoestq - tiny change in the dummy data command, should be good to merge.'] | 2020-05-26 12:50:47 | 2020-05-26 14:38:28 | 2020-05-26 14:38:27 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/195', 'html_url': 'https://github.com/huggingface/datasets/pull/195', 'diff_url': 'https://github.com/huggingface/datasets/pull/195.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/195.patch', 'merged_at': datetime.datetime(2020, 5, 26, 1... | Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data. | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/195/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/195/timeline | null | null | true |
4,698 | https://api.github.com/repos/huggingface/datasets/issues/194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/194/comments | https://api.github.com/repos/huggingface/datasets/issues/194/events | https://github.com/huggingface/datasets/pull/194 | 624,854,897 | MDExOlB1bGxSZXF1ZXN0NDIzMTgyNDM5 | 194 | Add Dataset: Qanta | {'login': 'patrickvonplaten', 'id': 23423619, 'node_id': 'MDQ6VXNlcjIzNDIzNjE5', 'avatar_url': 'https://avatars.githubusercontent.com/u/23423619?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickvonplaten', 'html_url': 'https://github.com/patrickvonplaten', 'followers_url': 'https://api.github.com/us... | [] | closed | false | null | [] | null | ['@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.'
"It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `/` etc.\r\n\r\nI'll add som... | 2020-05-26 12:44:35 | 2020-05-26 16:58:17 | 2020-05-26 13:16:20 | MEMBER | null | false | {'url': 'https://api.github.com/repos/huggingface/datasets/pulls/194', 'html_url': 'https://github.com/huggingface/datasets/pull/194', 'diff_url': 'https://github.com/huggingface/datasets/pull/194.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/194.patch', 'merged_at': datetime.datetime(2020, 5, 26, 1... | Fixes dummy data for #169 @EntilZha | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/194/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/194/timeline | null | null | true |
4,699 | https://api.github.com/repos/huggingface/datasets/issues/193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/193/comments | https://api.github.com/repos/huggingface/datasets/issues/193/events | https://github.com/huggingface/datasets/issues/193 | 624,655,558 | MDU6SXNzdWU2MjQ2NTU1NTg= | 193 | [Tensorflow] Use something else than `from_tensor_slices()` | {'login': 'astariul', 'id': 43774355, 'node_id': 'MDQ6VXNlcjQzNzc0MzU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43774355?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/astariul', 'html_url': 'https://github.com/astariul', 'followers_url': 'https://api.github.com/users/astariul/followers',... | [] | closed | false | {'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo... | [{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'f... | null | ["I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try."
'Is `tf.data.Dataset.from_generator` working on TPU ?'
'`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile "/usr/local/lib/python3.6/contextlib.py", line 88, in __exit__\r\n next(self.gen)\r\n F... | 2020-05-26 07:19:14 | 2020-10-27 15:28:11 | 2020-10-27 15:28:11 | NONE | null | null | null | In the example notebook, the TF Dataset is built using `from_tensor_slices()` :
```python
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
label... | {'url': 'https://api.github.com/repos/huggingface/datasets/issues/193/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0} | https://api.github.com/repos/huggingface/datasets/issues/193/timeline | null | completed | false |