url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.28B | node_id stringlengths 18 32 | number int64 1 4.53k | title stringlengths 1 276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments list | created_at int64 1,587B 1,656B | updated_at int64 1,587B 1,656B | closed_at int64 1,587B 1,656B ⌀ | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 1 value | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/970/comments | https://api.github.com/repos/huggingface/datasets/issues/970/events | https://github.com/huggingface/datasets/pull/970 | 754,697,489 | MDExOlB1bGxSZXF1ZXN0NTMwNTUxNTkz | 970 | Add SWAG | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [] | 1,606,854,065,000 | 1,606,902,916,000 | 1,606,902,915,000 | MEMBER | null | Commonsense NLI -> https://rowanzellers.com/swag/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/970/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/970",
"html_url": "https://github.com/huggingface/datasets/pull/970",
"diff_url": "https://github.com/huggingface/datasets/pull/970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/970.patch",
"merged_at": 1606902915000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/969/comments | https://api.github.com/repos/huggingface/datasets/issues/969/events | https://github.com/huggingface/datasets/pull/969 | 754,681,940 | MDExOlB1bGxSZXF1ZXN0NTMwNTM4ODQz | 969 | Add wiki auto dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yje... | [] | closed | false | null | [] | null | [] | 1,606,852,691,000 | 1,606,925,954,000 | 1,606,925,954,000 | MEMBER | null | This PR adds the WikiAuto sentence simplification dataset
https://github.com/chaojiang06/wiki-auto
This is also a prospective GEM task, hence the README.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/969/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/969",
"html_url": "https://github.com/huggingface/datasets/pull/969",
"diff_url": "https://github.com/huggingface/datasets/pull/969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/969.patch",
"merged_at": 1606925954000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/968/comments | https://api.github.com/repos/huggingface/datasets/issues/968/events | https://github.com/huggingface/datasets/pull/968 | 754,659,015 | MDExOlB1bGxSZXF1ZXN0NTMwNTIwMjEz | 968 | ADD Afrikaans NER | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [
"One trick if you want to add other datasets: consider running these commands each time you want to add a new dataset\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my_dataset_name>\r\n```"
] | 1,606,850,583,000 | 1,606,902,088,000 | 1,606,902,088,000 | CONTRIBUTOR | null | Afrikaans NER corpus | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/968/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/968",
"html_url": "https://github.com/huggingface/datasets/pull/968",
"diff_url": "https://github.com/huggingface/datasets/pull/968.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/968.patch",
"merged_at": 1606902088000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/967/comments | https://api.github.com/repos/huggingface/datasets/issues/967/events | https://github.com/huggingface/datasets/pull/967 | 754,578,988 | MDExOlB1bGxSZXF1ZXN0NTMwNDU0OTI3 | 967 | Add CS Restaurants dataset | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"Oh yeah, for some reason I thought you had to do it after the merge, I'll get on it",
"Weird, now the CI seems to fail because of other datasets (XGLUE, Norwegian_NER)",
"Yea you just need to rebase from master",
"Re-opening a PR without the messed-up rebase"
] | 1,606,843,057,000 | 1,606,931,864,000 | 1,606,931,845,000 | MEMBER | null | This PR adds the Czech restaurants dataset for Czech NLG. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/967/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/967",
"html_url": "https://github.com/huggingface/datasets/pull/967",
"diff_url": "https://github.com/huggingface/datasets/pull/967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/967.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/966/comments | https://api.github.com/repos/huggingface/datasets/issues/966/events | https://github.com/huggingface/datasets/pull/966 | 754,558,686 | MDExOlB1bGxSZXF1ZXN0NTMwNDM4NDE4 | 966 | Add CLINC150 Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Looks like your PR now shows changes in many other files than the ones for CLINC150.\r\nFeel free to create another branch and another PR",
"created new [PR](https://github.com/huggingface/datasets/pull/1016)\r\n\r\nclosing this!"
] | 1,606,841,413,000 | 1,606,934,743,000 | 1,606,934,730,000 | CONTRIBUTOR | null | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/966/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/966",
"html_url": "https://github.com/huggingface/datasets/pull/966",
"diff_url": "https://github.com/huggingface/datasets/pull/966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/966.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/965/comments | https://api.github.com/repos/huggingface/datasets/issues/965/events | https://github.com/huggingface/datasets/pull/965 | 754,553,169 | MDExOlB1bGxSZXF1ZXN0NTMwNDMzODQ2 | 965 | Add CLINC150 Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [] | 1,606,840,980,000 | 1,606,841,476,000 | 1,606,841,355,000 | CONTRIBUTOR | null | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/965/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/965",
"html_url": "https://github.com/huggingface/datasets/pull/965",
"diff_url": "https://github.com/huggingface/datasets/pull/965.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/965.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/964/comments | https://api.github.com/repos/huggingface/datasets/issues/964/events | https://github.com/huggingface/datasets/pull/964 | 754,474,660 | MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy | 964 | Adding the WebNLG dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yje... | [] | closed | false | null | [] | null | [
"This task is part of the GEM suite, so it will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :) "
] | 1,606,835,123,000 | 1,606,930,445,000 | 1,606,930,445,000 | MEMBER | null | This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.
More information can be found [here](https://webnlg-challenge.loria.fr/)
Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/964/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/964",
"html_url": "https://github.com/huggingface/datasets/pull/964",
"diff_url": "https://github.com/huggingface/datasets/pull/964.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/964.patch",
"merged_at": 1606930445000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/963/comments | https://api.github.com/repos/huggingface/datasets/issues/963/events | https://github.com/huggingface/datasets/pull/963 | 754,451,234 | MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4 | 963 | add CODAH dataset | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [] | 1,606,833,425,000 | 1,606,916,758,000 | 1,606,915,285,000 | MEMBER | null | Adding CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/963/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/963",
"html_url": "https://github.com/huggingface/datasets/pull/963",
"diff_url": "https://github.com/huggingface/datasets/pull/963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/963.patch",
"merged_at": 1606915285000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/962/comments | https://api.github.com/repos/huggingface/datasets/issues/962/events | https://github.com/huggingface/datasets/pull/962 | 754,441,428 | MDExOlB1bGxSZXF1ZXN0NTMwMzQxMDA2 | 962 | Add Danish Political Comments Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [] | 1,606,832,912,000 | 1,606,991,515,000 | 1,606,991,514,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/962/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/962",
"html_url": "https://github.com/huggingface/datasets/pull/962",
"diff_url": "https://github.com/huggingface/datasets/pull/962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/962.patch",
"merged_at": 1606991514000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/961/comments | https://api.github.com/repos/huggingface/datasets/issues/961/events | https://github.com/huggingface/datasets/issues/961 | 754,434,398 | MDU6SXNzdWU3NTQ0MzQzOTg= | 961 | sample multiple datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | open | false | null | [] | null | [
"here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in a distributed fashion,\r\nto save on memory I tried to us... | 1,606,832,402,000 | 1,606,872,764,000 | null | CONTRIBUTOR | null | Hi
I am dealing with multiple datasets, and I need a dataloader over them with the condition that in each batch the data samples come from only one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, let's say 2x dataset1, 1x dataset2; could you point me how I c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/961/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/960/comments | https://api.github.com/repos/huggingface/datasets/issues/960/events | https://github.com/huggingface/datasets/pull/960 | 754,422,710 | MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx | 960 | Add code to automate parts of the dataset card | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,606,831,491,000 | 1,619,423,761,000 | 1,619,423,761,000 | MEMBER | null | Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/960/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/960",
"html_url": "https://github.com/huggingface/datasets/pull/960",
"diff_url": "https://github.com/huggingface/datasets/pull/960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/960.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/959/comments | https://api.github.com/repos/huggingface/datasets/issues/959/events | https://github.com/huggingface/datasets/pull/959 | 754,418,610 | MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1 | 959 | Add Tunizi Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [] | 1,606,831,179,000 | 1,607,005,301,000 | 1,607,005,300,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/959/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/959",
"html_url": "https://github.com/huggingface/datasets/pull/959",
"diff_url": "https://github.com/huggingface/datasets/pull/959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/959.patch",
"merged_at": 1607005300000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/958/comments | https://api.github.com/repos/huggingface/datasets/issues/958/events | https://github.com/huggingface/datasets/pull/958 | 754,404,095 | MDExOlB1bGxSZXF1ZXN0NTMwMzA5ODkz | 958 | dataset(ncslgr): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [] | closed | false | null | [] | null | [
"@lhoestq I added the README files, and now the tests fail... (check commit history, only changed MD file)\r\nThe tests seem a bit unstable",
"the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | 1,606,830,077,000 | 1,607,358,939,000 | 1,607,358,939,000 | CONTRIBUTOR | null | clean #789 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/958/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/958",
"html_url": "https://github.com/huggingface/datasets/pull/958",
"diff_url": "https://github.com/huggingface/datasets/pull/958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/958.patch",
"merged_at": 1607358939000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/957/comments | https://api.github.com/repos/huggingface/datasets/issues/957/events | https://github.com/huggingface/datasets/pull/957 | 754,380,073 | MDExOlB1bGxSZXF1ZXN0NTMwMjg5OTk4 | 957 | Isixhosa ner corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [] | 1,606,828,116,000 | 1,606,846,498,000 | 1,606,846,498,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/957/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/957",
"html_url": "https://github.com/huggingface/datasets/pull/957",
"diff_url": "https://github.com/huggingface/datasets/pull/957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/957.patch",
"merged_at": 1606846498000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/956/comments | https://api.github.com/repos/huggingface/datasets/issues/956/events | https://github.com/huggingface/datasets/pull/956 | 754,368,378 | MDExOlB1bGxSZXF1ZXN0NTMwMjgwMzU1 | 956 | Add Norwegian NER | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | closed | false | null | [] | null | [
"Merging this one, good job and thank you @jplu :) "
] | 1,606,827,062,000 | 1,606,899,191,000 | 1,606,846,161,000 | CONTRIBUTOR | null | This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset.
I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/956/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/956",
"html_url": "https://github.com/huggingface/datasets/pull/956",
"diff_url": "https://github.com/huggingface/datasets/pull/956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/956.patch",
"merged_at": 1606846161000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/955/comments | https://api.github.com/repos/huggingface/datasets/issues/955/events | https://github.com/huggingface/datasets/pull/955 | 754,367,291 | MDExOlB1bGxSZXF1ZXN0NTMwMjc5NDQw | 955 | Added PragmEval benchmark | {
"login": "sileod",
"id": 9168444,
"node_id": "MDQ6VXNlcjkxNjg0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9168444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sileod",
"html_url": "https://github.com/sileod",
"followers_url": "https://api.github.com/users/sileod/foll... | [] | closed | false | null | [] | null | [
"> Really cool ! Thanks for adding this one :)\r\n> Good job at adding all those citations for each task\r\n> \r\n> Looks like the dummy data test doesn't pass. Maybe some files are missing in the dummy_data.zip files ?\r\n> The error reports `pragmeval/verifiability/train.tsv` to be missing\r\n> \r\n> Also could y... | 1,606,826,955,000 | 1,607,078,612,000 | 1,606,988,207,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/955/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/955",
"html_url": "https://github.com/huggingface/datasets/pull/955",
"diff_url": "https://github.com/huggingface/datasets/pull/955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/955.patch",
"merged_at": 1606988207000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/954/comments | https://api.github.com/repos/huggingface/datasets/issues/954/events | https://github.com/huggingface/datasets/pull/954 | 754,362,012 | MDExOlB1bGxSZXF1ZXN0NTMwMjc1MDY4 | 954 | add prachathai67k | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Test failing for same issues as https://github.com/huggingface/datasets/pull/939\r\nPlease advise.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py... | 1,606,826,455,000 | 1,606,885,931,000 | 1,606,884,232,000 | CONTRIBUTOR | null | `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The prachathai-67k dataset was scraped from the news site Prachathai.
We filtered out articles with fewer than 500 characters of body text, which were mostly images and cartoons.
It contains 67,889 articles with 12 curated tags ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/954/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/954",
"html_url": "https://github.com/huggingface/datasets/pull/954",
"diff_url": "https://github.com/huggingface/datasets/pull/954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/954.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/953/comments | https://api.github.com/repos/huggingface/datasets/issues/953/events | https://github.com/huggingface/datasets/pull/953 | 754,359,942 | MDExOlB1bGxSZXF1ZXN0NTMwMjczMzg5 | 953 | added health_fact dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nInitially I tried int(-1) only in place of nan labels and missing values, but I kept getting this error ```pyarrow.lib.ArrowTypeError: Expected bytes, got a 'int' object```, maybe because I'm sending int values (-1) to columns which are string type"
] | 1,606,826,264,000 | 1,606,864,293,000 | 1,606,864,293,000 | CONTRIBUTOR | null | Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/953/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/953",
"html_url": "https://github.com/huggingface/datasets/pull/953",
"diff_url": "https://github.com/huggingface/datasets/pull/953.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/953.patch",
"merged_at": 1606864293000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/952/comments | https://api.github.com/repos/huggingface/datasets/issues/952/events | https://github.com/huggingface/datasets/pull/952 | 754,357,270 | MDExOlB1bGxSZXF1ZXN0NTMwMjcxMTQz | 952 | Add orange sum | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [] | 1,606,826,014,000 | 1,606,837,440,000 | 1,606,837,440,000 | CONTRIBUTOR | null | Add OrangeSum a french abstractive summarization dataset.
Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/952/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/952",
"html_url": "https://github.com/huggingface/datasets/pull/952",
"diff_url": "https://github.com/huggingface/datasets/pull/952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/952.patch",
"merged_at": 1606837440000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/951/comments | https://api.github.com/repos/huggingface/datasets/issues/951/events | https://github.com/huggingface/datasets/pull/951 | 754,349,979 | MDExOlB1bGxSZXF1ZXN0NTMwMjY1MTY0 | 951 | Prachathai67k | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k"
] | 1,606,825,312,000 | 1,606,825,793,000 | 1,606,825,706,000 | CONTRIBUTOR | null | Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articl... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/951/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/951",
"html_url": "https://github.com/huggingface/datasets/pull/951",
"diff_url": "https://github.com/huggingface/datasets/pull/951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/951.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/950/comments | https://api.github.com/repos/huggingface/datasets/issues/950/events | https://github.com/huggingface/datasets/pull/950 | 754,318,686 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4OTQx | 950 | Support .xz file format | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,606,822,488,000 | 1,606,829,958,000 | 1,606,829,958,000 | MEMBER | null | Add support to extract/uncompress files in .xz format. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/950/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/950",
"html_url": "https://github.com/huggingface/datasets/pull/950",
"diff_url": "https://github.com/huggingface/datasets/pull/950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/950.patch",
"merged_at": 1606829958000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/949/comments | https://api.github.com/repos/huggingface/datasets/issues/949/events | https://github.com/huggingface/datasets/pull/949 | 754,317,777 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4MTky | 949 | Add GermaNER Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [
"@lhoestq added. "
] | 1,606,822,411,000 | 1,607,004,401,000 | 1,607,004,400,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/949/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/949",
"html_url": "https://github.com/huggingface/datasets/pull/949",
"diff_url": "https://github.com/huggingface/datasets/pull/949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/949.patch",
"merged_at": 1607004400000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/948/comments | https://api.github.com/repos/huggingface/datasets/issues/948/events | https://github.com/huggingface/datasets/pull/948 | 754,306,260 | MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz | 948 | docs(ADD_NEW_DATASET): correct indentation for script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [] | closed | false | null | [] | null | [] | 1,606,821,458,000 | 1,606,821,918,000 | 1,606,821,918,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/948/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/948",
"html_url": "https://github.com/huggingface/datasets/pull/948",
"diff_url": "https://github.com/huggingface/datasets/pull/948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/948.patch",
"merged_at": 1606821918000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/947/comments | https://api.github.com/repos/huggingface/datasets/issues/947/events | https://github.com/huggingface/datasets/pull/947 | 754,286,658 | MDExOlB1bGxSZXF1ZXN0NTMwMjEyMjc3 | 947 | Add europeana newspapers | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | closed | false | null | [] | null | [] | 1,606,819,938,000 | 1,606,902,155,000 | 1,606,902,129,000 | CONTRIBUTOR | null | This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/947/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/947",
"html_url": "https://github.com/huggingface/datasets/pull/947",
"diff_url": "https://github.com/huggingface/datasets/pull/947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/947.patch",
"merged_at": 1606902129000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/946/comments | https://api.github.com/repos/huggingface/datasets/issues/946/events | https://github.com/huggingface/datasets/pull/946 | 754,278,632 | MDExOlB1bGxSZXF1ZXN0NTMwMjA1Nzgw | 946 | add PEC dataset | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"The checks failed again even if I didn't make any changes.",
"you just need to rebase from master to fix the CI :)",
"Sorry for the mess, I'm confused by the rebase and thus created a new branch."
] | 1,606,819,301,000 | 1,606,963,634,000 | 1,606,963,634,000 | CONTRIBUTOR | null | A persona-based empathetic conversation dataset published at EMNLP 2020. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/946/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/946",
"html_url": "https://github.com/huggingface/datasets/pull/946",
"diff_url": "https://github.com/huggingface/datasets/pull/946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/946.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/945/comments | https://api.github.com/repos/huggingface/datasets/issues/945/events | https://github.com/huggingface/datasets/pull/945 | 754,273,920 | MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1 | 945 | Adding Babi dataset - English version | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [] | closed | false | null | [] | null | [
"Replaced by #1126"
] | 1,606,818,936,000 | 1,607,096,585,000 | 1,607,096,574,000 | MEMBER | null | Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/945/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/945",
"html_url": "https://github.com/huggingface/datasets/pull/945",
"diff_url": "https://github.com/huggingface/datasets/pull/945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/945.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/944/comments | https://api.github.com/repos/huggingface/datasets/issues/944/events | https://github.com/huggingface/datasets/pull/944 | 754,228,947 | MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5 | 944 | Add German Legal Entity Recognition Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [
"thanks ! merging this one"
] | 1,606,815,502,000 | 1,607,000,816,000 | 1,607,000,815,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/944/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/944",
"html_url": "https://github.com/huggingface/datasets/pull/944",
"diff_url": "https://github.com/huggingface/datasets/pull/944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/944.patch",
"merged_at": 1607000814000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/943/comments | https://api.github.com/repos/huggingface/datasets/issues/943/events | https://github.com/huggingface/datasets/pull/943 | 754,192,491 | MDExOlB1bGxSZXF1ZXN0NTMwMTM2ODM3 | 943 | The FLUE Benchmark | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | closed | false | null | [] | null | [] | 1,606,813,250,000 | 1,606,836,278,000 | 1,606,836,270,000 | CONTRIBUTOR | null | This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content.
Two datasets are missing, the French Treebank that we can use only for research purpose and we are not allowed to distribute, and the Word Sense disambigu... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/943/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/943",
"html_url": "https://github.com/huggingface/datasets/pull/943",
"diff_url": "https://github.com/huggingface/datasets/pull/943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/943.patch",
"merged_at": 1606836270000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/942/comments | https://api.github.com/repos/huggingface/datasets/issues/942/events | https://github.com/huggingface/datasets/issues/942 | 754,162,318 | MDU6SXNzdWU3NTQxNjIzMTg= | 942 | D | {
"login": "CryptoMiKKi",
"id": 74238514,
"node_id": "MDQ6VXNlcjc0MjM4NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CryptoMiKKi",
"html_url": "https://github.com/CryptoMiKKi",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [] | 1,606,810,630,000 | 1,607,013,773,000 | 1,607,013,773,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/942/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/941/comments | https://api.github.com/repos/huggingface/datasets/issues/941/events | https://github.com/huggingface/datasets/pull/941 | 754,141,321 | MDExOlB1bGxSZXF1ZXN0NTMwMDk0MTI2 | 941 | Add People's Daily NER dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"> LGTM thanks :)\n> \n> \n> \n> Before we merge, could you add a dataset card ? see here for more info: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\n> \n> \n> \n> Note that only the tags at the top of the dataset card are mandatory, if you feel ... | 1,606,808,933,000 | 1,606,934,563,000 | 1,606,934,561,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/941/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/941",
"html_url": "https://github.com/huggingface/datasets/pull/941",
"diff_url": "https://github.com/huggingface/datasets/pull/941.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/941.patch",
"merged_at": 1606934561000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/940/comments | https://api.github.com/repos/huggingface/datasets/issues/940/events | https://github.com/huggingface/datasets/pull/940 | 754,010,753 | MDExOlB1bGxSZXF1ZXN0NTI5OTc3OTQ2 | 940 | Add MSRA NER dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"LGTM, don't forget the tags ;)"
] | 1,606,798,931,000 | 1,607,074,180,000 | 1,606,807,553,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/940/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/940",
"html_url": "https://github.com/huggingface/datasets/pull/940",
"diff_url": "https://github.com/huggingface/datasets/pull/940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/940.patch",
"merged_at": 1606807553000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/939/comments | https://api.github.com/repos/huggingface/datasets/issues/939/events | https://github.com/huggingface/datasets/pull/939 | 753,965,405 | MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz | 939 | add wisesight_sentiment | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILE... | 1,606,791,999,000 | 1,606,884,758,000 | 1,606,883,751,000 | CONTRIBUTOR | null | Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
Model Card:
---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/939/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/939",
"html_url": "https://github.com/huggingface/datasets/pull/939",
"diff_url": "https://github.com/huggingface/datasets/pull/939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/939.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/938/comments | https://api.github.com/repos/huggingface/datasets/issues/938/events | https://github.com/huggingface/datasets/pull/938 | 753,940,979 | MDExOlB1bGxSZXF1ZXN0NTI5OTIxNzU5 | 938 | V-1.0.0 of isizulu_ner_corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [
"closing since it's been added in #957 "
] | 1,606,788,272,000 | 1,606,865,676,000 | 1,606,865,676,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/938/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/938/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/938",
"html_url": "https://github.com/huggingface/datasets/pull/938",
"diff_url": "https://github.com/huggingface/datasets/pull/938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/938.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/937/comments | https://api.github.com/repos/huggingface/datasets/issues/937/events | https://github.com/huggingface/datasets/issues/937 | 753,921,078 | MDU6SXNzdWU3NTM5MjEwNzg= | 937 | Local machine/cluster Beam Datasets example/tutorial | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to ma... | 1,606,785,103,000 | 1,608,731,696,000 | null | NONE | null | Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get eit... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/937/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/936/comments | https://api.github.com/repos/huggingface/datasets/issues/936/events | https://github.com/huggingface/datasets/pull/936 | 753,915,603 | MDExOlB1bGxSZXF1ZXN0NTI5OTAxODMw | 936 | Added HANS parses and categories | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [] | 1,606,784,296,000 | 1,606,828,781,000 | 1,606,828,780,000 | MEMBER | null | This pull request adds HANS missing information: the sentence parses, as well as the heuristic category. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/936/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/936/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/936",
"html_url": "https://github.com/huggingface/datasets/pull/936",
"diff_url": "https://github.com/huggingface/datasets/pull/936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/936.patch",
"merged_at": 1606828780000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/935/comments | https://api.github.com/repos/huggingface/datasets/issues/935/events | https://github.com/huggingface/datasets/pull/935 | 753,863,055 | MDExOlB1bGxSZXF1ZXN0NTI5ODU5MjM4 | 935 | add PIB dataset | {
"login": "vasudevgupta7",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasudevgupta7",
"html_url": "https://github.com/vasudevgupta7",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks",
"Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets/pib/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets/p... | 1,606,776,943,000 | 1,606,864,631,000 | 1,606,864,631,000 | CONTRIBUTOR | null | This pull request will add PIB dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/935/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/935",
"html_url": "https://github.com/huggingface/datasets/pull/935",
"diff_url": "https://github.com/huggingface/datasets/pull/935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/935.patch",
"merged_at": 1606864631000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/934/comments | https://api.github.com/repos/huggingface/datasets/issues/934/events | https://github.com/huggingface/datasets/pull/934 | 753,860,095 | MDExOlB1bGxSZXF1ZXN0NTI5ODU2ODY4 | 934 | small updates to the "add new dataset" guide | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"cc @yjernite @lhoestq @thomwolf "
] | 1,606,776,550,000 | 1,606,798,582,000 | 1,606,778,040,000 | MEMBER | null | small updates (corrections/typos) to the "add new dataset" guide | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/934/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/934",
"html_url": "https://github.com/huggingface/datasets/pull/934",
"diff_url": "https://github.com/huggingface/datasets/pull/934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/934.patch",
"merged_at": 1606778040000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/933/comments | https://api.github.com/repos/huggingface/datasets/issues/933/events | https://github.com/huggingface/datasets/pull/933 | 753,854,272 | MDExOlB1bGxSZXF1ZXN0NTI5ODUyMTI1 | 933 | Add NumerSense | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/... | [] | closed | false | null | [] | null | [] | 1,606,775,793,000 | 1,606,854,350,000 | 1,606,852,316,000 | CONTRIBUTOR | null | Adds the NumerSense dataset
- Webpage/leaderboard: https://inklab.usc.edu/NumerSense/
- Paper: https://arxiv.org/abs/2005.00683
- Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/933/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/933",
"html_url": "https://github.com/huggingface/datasets/pull/933",
"diff_url": "https://github.com/huggingface/datasets/pull/933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/933.patch",
"merged_at": 1606852316000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/932/comments | https://api.github.com/repos/huggingface/datasets/issues/932/events | https://github.com/huggingface/datasets/pull/932 | 753,840,300 | MDExOlB1bGxSZXF1ZXN0NTI5ODQwNjQ3 | 932 | adding metooma dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/aka... | [] | closed | false | null | [] | null | [
"This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and gu... | 1,606,774,189,000 | 1,606,869,474,000 | 1,606,869,474,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/932/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/932",
"html_url": "https://github.com/huggingface/datasets/pull/932",
"diff_url": "https://github.com/huggingface/datasets/pull/932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/932.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/931/comments | https://api.github.com/repos/huggingface/datasets/issues/931/events | https://github.com/huggingface/datasets/pull/931 | 753,818,193 | MDExOlB1bGxSZXF1ZXN0NTI5ODIzMDYz | 931 | [WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [] | open | false | null | [] | null | [] | 1,606,771,821,000 | 1,606,771,821,000 | null | MEMBER | null | I get a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1`
Didn't manage to see how to solve that.
Putting it aside for now.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/931/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/931/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/931",
"html_url": "https://github.com/huggingface/datasets/pull/931",
"diff_url": "https://github.com/huggingface/datasets/pull/931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/931.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/930/comments | https://api.github.com/repos/huggingface/datasets/issues/930/events | https://github.com/huggingface/datasets/pull/930 | 753,801,204 | MDExOlB1bGxSZXF1ZXN0NTI5ODA5MzM1 | 930 | Lambada | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [] | 1,606,770,153,000 | 1,606,783,032,000 | 1,606,783,031,000 | MEMBER | null | Added LAMBADA dataset.
A couple of points of attention (mostly because I am not sure)
- The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples.
- The dev and test splits don't have the `category` field so I put `None` by defaul... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/930/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/930",
"html_url": "https://github.com/huggingface/datasets/pull/930",
"diff_url": "https://github.com/huggingface/datasets/pull/930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/930.patch",
"merged_at": 1606783031000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/929/comments | https://api.github.com/repos/huggingface/datasets/issues/929/events | https://github.com/huggingface/datasets/pull/929 | 753,737,794 | MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3 | 929 | Add weibo NER dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [] | 1,606,764,167,000 | 1,607,002,615,000 | 1,607,002,614,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/929/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/929",
"html_url": "https://github.com/huggingface/datasets/pull/929",
"diff_url": "https://github.com/huggingface/datasets/pull/929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/929.patch",
"merged_at": 1607002614000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/928/comments | https://api.github.com/repos/huggingface/datasets/issues/928/events | https://github.com/huggingface/datasets/pull/928 | 753,722,324 | MDExOlB1bGxSZXF1ZXN0NTI5NzQ1OTIx | 928 | Add the Multilingual Amazon Reviews Corpus | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/... | [] | closed | false | null | [] | null | [] | 1,606,762,686,000 | 1,606,838,670,000 | 1,606,838,667,000 | CONTRIBUTOR | null | - **Name:** *Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`)
- **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese.
- **Paper:** https://arxiv.org/abs/2010.02573
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` us... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/928/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/928/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/928",
"html_url": "https://github.com/huggingface/datasets/pull/928",
"diff_url": "https://github.com/huggingface/datasets/pull/928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/928.patch",
"merged_at": 1606838667000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/927/comments | https://api.github.com/repos/huggingface/datasets/issues/927/events | https://github.com/huggingface/datasets/issues/927 | 753,679,020 | MDU6SXNzdWU3NTM2NzkwMjA= | 927 | Hello | {
"login": "k125-ak",
"id": 75259546,
"node_id": "MDQ6VXNlcjc1MjU5NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k125-ak",
"html_url": "https://github.com/k125-ak",
"followers_url": "https://api.github.com/users/k125-a... | [] | closed | false | null | [] | null | [] | 1,606,758,605,000 | 1,606,758,630,000 | 1,606,758,630,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/927/timeline | null | completed | null | null | false | |
https://api.github.com/repos/huggingface/datasets/issues/926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/926/comments | https://api.github.com/repos/huggingface/datasets/issues/926/events | https://github.com/huggingface/datasets/pull/926 | 753,676,069 | MDExOlB1bGxSZXF1ZXN0NTI5NzA4MTcy | 926 | add inquisitive | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?",
"> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should defin... | 1,606,758,322,000 | 1,606,916,722,000 | 1,606,916,413,000 | MEMBER | null | Adding inquisitive qg dataset
More info: https://github.com/wjko2/INQUISITIVE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/926/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/926",
"html_url": "https://github.com/huggingface/datasets/pull/926",
"diff_url": "https://github.com/huggingface/datasets/pull/926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/926.patch",
"merged_at": 1606916413000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/925/comments | https://api.github.com/repos/huggingface/datasets/issues/925/events | https://github.com/huggingface/datasets/pull/925 | 753,672,661 | MDExOlB1bGxSZXF1ZXN0NTI5NzA1MzM4 | 925 | Add Turku NLP Corpus for Finnish NER | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [
"> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. Do you want me to make it smaller or keep it like this?\r\n\r\n"
] | 1,606,758,019,000 | 1,607,004,431,000 | 1,607,004,430,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/925/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/925",
"html_url": "https://github.com/huggingface/datasets/pull/925",
"diff_url": "https://github.com/huggingface/datasets/pull/925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/925.patch",
"merged_at": 1607004430000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/924/comments | https://api.github.com/repos/huggingface/datasets/issues/924/events | https://github.com/huggingface/datasets/pull/924 | 753,631,951 | MDExOlB1bGxSZXF1ZXN0NTI5NjcyMzgw | 924 | Add DART | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"LGTM!"
] | 1,606,754,557,000 | 1,606,878,822,000 | 1,606,878,821,000 | MEMBER | null | - **Name:** *DART*
- **Description:** *DART is a large dataset for open-domain structured data record to text generation.*
- **Paper:** *https://arxiv.org/abs/2007.02871*
- **Data:** *https://github.com/Yale-LILY/dart#leaderboard*
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/924/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/924",
"html_url": "https://github.com/huggingface/datasets/pull/924",
"diff_url": "https://github.com/huggingface/datasets/pull/924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/924.patch",
"merged_at": 1606878821000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/923/comments | https://api.github.com/repos/huggingface/datasets/issues/923/events | https://github.com/huggingface/datasets/pull/923 | 753,569,220 | MDExOlB1bGxSZXF1ZXN0NTI5NjIyMDQx | 923 | Add CC-100 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"Hello @lhoestq, I would just like to ask you if it is OK to include this feature 9f32ba1 in this PR, or whether you would prefer to have it in a separate one.\r\n\r\nI was wondering whether to also include a test, but I did not find any tests for the other file formats...",
"Hi ! Sure that would be valuable to support .x... | 1,606,749,802,000 | 1,618,925,657,000 | 1,618,925,657,000 | MEMBER | null | Add CC-100.
Close #773 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/923/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/923",
"html_url": "https://github.com/huggingface/datasets/pull/923",
"diff_url": "https://github.com/huggingface/datasets/pull/923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/923.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/922/comments | https://api.github.com/repos/huggingface/datasets/issues/922/events | https://github.com/huggingface/datasets/pull/922 | 753,559,130 | MDExOlB1bGxSZXF1ZXN0NTI5NjEzOTA4 | 922 | Add XOR QA Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Hi @sumanthd17 \r\n\r\nLooks like a good start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)",
"I followed the instructions mentioned there but my datas... | 1,606,749,054,000 | 1,606,878,741,000 | 1,606,878,741,000 | CONTRIBUTOR | null | Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/922/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/922",
"html_url": "https://github.com/huggingface/datasets/pull/922",
"diff_url": "https://github.com/huggingface/datasets/pull/922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/922.patch",
"merged_at": 1606878741000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/920/comments | https://api.github.com/repos/huggingface/datasets/issues/920/events | https://github.com/huggingface/datasets/pull/920 | 753,445,747 | MDExOlB1bGxSZXF1ZXN0NTI5NTIzMTgz | 920 | add dream dataset | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If you can't fill some fields then just leave `[N/A]`\r\n\r\nQuick amendment: `[N/A]` is for fields that are not relevant: if you can'... | 1,606,740,014,000 | 1,607,013,912,000 | 1,606,923,552,000 | MEMBER | null | Adding Dream: a Dataset for Dialogue-Based Reading Comprehension
More details:
https://dataset.org/dream/
https://github.com/nlpdata/dream | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/920/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/920",
"html_url": "https://github.com/huggingface/datasets/pull/920",
"diff_url": "https://github.com/huggingface/datasets/pull/920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/920.patch",
"merged_at": 1606923552000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/919/comments | https://api.github.com/repos/huggingface/datasets/issues/919/events | https://github.com/huggingface/datasets/issues/919 | 753,434,472 | MDU6SXNzdWU3NTM0MzQ0NzI= | 919 | wrong length with datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | closed | false | null | [] | null | [
"Also, I cannot first convert it to torch format, since the huggingface seq2seq_trainer code processes the datasets afterwards, during the data collator function, to optimize for TPUs. ",
"sorry, I mixed up the length of the dataset with the length of the dataloader, closed. thanks "
] | 1,606,739,019,000 | 1,606,739,847,000 | 1,606,739,846,000 | CONTRIBUTOR | null | Hi
I have an MRPC dataset which I convert to seq2seq format; it is then of this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)
`
I feed it to a dataloader:
```
dataloader = DataLoader(
train_dataset,
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/919/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/918/comments | https://api.github.com/repos/huggingface/datasets/issues/918/events | https://github.com/huggingface/datasets/pull/918 | 753,397,440 | MDExOlB1bGxSZXF1ZXN0NTI5NDgzOTk4 | 918 | Add conll2002 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,606,735,775,000 | 1,606,761,270,000 | 1,606,761,269,000 | MEMBER | null | Adding the Conll2002 dataset for NER.
More info here : https://www.clips.uantwerpen.be/conll2002/ner/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/918/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/918",
"html_url": "https://github.com/huggingface/datasets/pull/918",
"diff_url": "https://github.com/huggingface/datasets/pull/918.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/918.patch",
"merged_at": 1606761269000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/917/comments | https://api.github.com/repos/huggingface/datasets/issues/917/events | https://github.com/huggingface/datasets/pull/917 | 753,391,591 | MDExOlB1bGxSZXF1ZXN0NTI5NDc5MTIy | 917 | Addition of Concode Dataset | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"Testing command doesn't work\r\n###trace\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n========================================================= short test summary info ========================================================== \r\nERROR tests/test_dataset_common.py - absl.testing.parameterized.No... | 1,606,735,259,000 | 1,609,210,536,000 | 1,609,210,536,000 | CONTRIBUTOR | null | ##Overview
Concode Dataset contains pairs of Nl Queries and the corresponding Code.(Contextual Code Generation)
Reference Links
Paper Link = https://arxiv.org/pdf/1904.09086.pdf
Github Link = https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
"url": "https://api.github.com/repos/huggingface/datasets/issues/917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/917/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/917",
"html_url": "https://github.com/huggingface/datasets/pull/917",
"diff_url": "https://github.com/huggingface/datasets/pull/917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/917.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/916/comments | https://api.github.com/repos/huggingface/datasets/issues/916/events | https://github.com/huggingface/datasets/pull/916 | 753,376,643 | MDExOlB1bGxSZXF1ZXN0NTI5NDY3MTkx | 916 | Add Swedish NER Corpus | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://ap... | [] | closed | false | null | [] | null | [
"Yes the use of configs is optional",
"@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[Mo... | 1,606,733,991,000 | 1,606,878,650,000 | 1,606,878,649,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/916/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/916",
"html_url": "https://github.com/huggingface/datasets/pull/916",
"diff_url": "https://github.com/huggingface/datasets/pull/916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/916.patch",
"merged_at": 1606878649000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/915/comments | https://api.github.com/repos/huggingface/datasets/issues/915/events | https://github.com/huggingface/datasets/issues/915 | 753,118,481 | MDU6SXNzdWU3NTMxMTg0ODE= | 915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | {
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhu... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6... | open | false | null | [] | null | [
"This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?",
"@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equiv... | 1,606,708,246,000 | 1,608,786,709,000 | null | NONE | null | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the finge... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/915/timeline | null | null | null | null | false |
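The fingerprinting mechanism discussed in the issue above (hashing a chain of transformations so that identical processing yields an identical cache entry) can be sketched as follows. This is a minimal illustration, not the library's actual implementation: it uses the standard-library `hashlib` instead of the `xxhash` package the issue mentions, and the `fingerprint` helper and its arguments are assumptions made for the example.

```python
import hashlib

def fingerprint(previous_fingerprint: str, transform_name: str, args: tuple) -> str:
    # Hash the previous fingerprint together with the transform name and its
    # arguments, so the same chain of transforms yields the same hash.
    h = hashlib.sha256()
    h.update(previous_fingerprint.encode("utf-8"))
    h.update(transform_name.encode("utf-8"))
    h.update(repr(args).encode("utf-8"))
    return h.hexdigest()[:16]

# Identical transform chains produce identical fingerprints...
fp1 = fingerprint("base", "map", (("tokenize",),))
fp2 = fingerprint("base", "map", (("tokenize",),))
# ...but reordering transforms changes the hash even when the transforms are
# commutative, which is exactly the duplicated-cache problem the issue raises.
fp3 = fingerprint(fingerprint("base", "shuffle", ()), "map", (("tokenize",),))
```

A pure hash cannot recognize idempotent or commutative chains; the issue's proposal of encoding (and normalizing) the chain instead would address that.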
https://api.github.com/repos/huggingface/datasets/issues/914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/914/comments | https://api.github.com/repos/huggingface/datasets/issues/914/events | https://github.com/huggingface/datasets/pull/914 | 752,956,106 | MDExOlB1bGxSZXF1ZXN0NTI5MTM2Njk3 | 914 | Add list_github_datasets api for retrieving dataset name list in github repo | {
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhu... | [] | closed | false | null | [] | null | [
"We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?",
"> We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip... | 1,606,668,135,000 | 1,606,893,676,000 | 1,606,893,676,000 | NONE | null | Thank you for your great effort on unifying data processing for NLP!
This PR tries to add a new API, `list_github_datasets`, to the `inspect` module. The reason for it is that the current `list_datasets` API needs to access https://huggingface.co/api/datasets to get a large JSON. However, this connection can be rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/914/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/914",
"html_url": "https://github.com/huggingface/datasets/pull/914",
"diff_url": "https://github.com/huggingface/datasets/pull/914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/914.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/913/comments | https://api.github.com/repos/huggingface/datasets/issues/913/events | https://github.com/huggingface/datasets/pull/913 | 752,892,020 | MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3 | 913 | My new dataset PEC | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"How to resolve these failed checks?",
"Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor exa... | 1,606,648,237,000 | 1,606,819,313,000 | 1,606,819,313,000 | CONTRIBUTOR | null | A new dataset PEC published in EMNLP 2020. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/913/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/913",
"html_url": "https://github.com/huggingface/datasets/pull/913",
"diff_url": "https://github.com/huggingface/datasets/pull/913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/913.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/911/comments | https://api.github.com/repos/huggingface/datasets/issues/911/events | https://github.com/huggingface/datasets/issues/911 | 752,806,215 | MDU6SXNzdWU3NTI4MDYyMTU= | 911 | datasets module not found | {
"login": "sbassam",
"id": 15836274,
"node_id": "MDQ6VXNlcjE1ODM2Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbassam",
"html_url": "https://github.com/sbassam",
"followers_url": "https://api.github.com/users/sbassa... | [] | closed | false | null | [] | null | [
"nvm, I'd made an assumption that the library gets installed with transformers. "
] | 1,606,613,055,000 | 1,606,660,389,000 | 1,606,660,389,000 | NONE | null | Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/911/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/911/timeline | null | completed | null | null | false |
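As the reporter concluded, `datasets` must be installed separately (it does not come bundled with `transformers`). A quick standard-library way to check for a package before importing it, shown here as a small sketch:

```python
import importlib.util

def is_installed(package_name: str) -> bool:
    # find_spec returns None when the package cannot be found by the importer.
    return importlib.util.find_spec(package_name) is not None

if not is_installed("datasets"):
    print("datasets is missing; install it with: pip install datasets")
```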
https://api.github.com/repos/huggingface/datasets/issues/910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/910/comments | https://api.github.com/repos/huggingface/datasets/issues/910/events | https://github.com/huggingface/datasets/issues/910 | 752,772,723 | MDU6SXNzdWU3NTI3NzI3MjM= | 910 | Grindr meeting app web.Grindr | {
"login": "jackin34",
"id": 75184749,
"node_id": "MDQ6VXNlcjc1MTg0NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/75184749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackin34",
"html_url": "https://github.com/jackin34",
"followers_url": "https://api.github.com/users/jac... | [] | closed | false | null | [] | null | [] | 1,606,599,383,000 | 1,606,644,711,000 | 1,606,644,711,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/910/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/909/comments | https://api.github.com/repos/huggingface/datasets/issues/909/events | https://github.com/huggingface/datasets/pull/909 | 752,508,299 | MDExOlB1bGxSZXF1ZXN0NTI4ODE1NDYz | 909 | Add FiNER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps://github.com/huggingface/datasets/blob/mas... | 1,606,521,260,000 | 1,607,360,183,000 | 1,607,360,183,000 | CONTRIBUTOR | null | Hi,
this PR adds "A Finnish News Corpus for Named Entity Recognition" as new `finer` dataset.
The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub](https://github.com/mpsilfve/finer-data).
Notice: they provide two test sets. The additional te... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/909/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/909",
"html_url": "https://github.com/huggingface/datasets/pull/909",
"diff_url": "https://github.com/huggingface/datasets/pull/909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/909.patch",
"merged_at": 1607360183000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/908/comments | https://api.github.com/repos/huggingface/datasets/issues/908/events | https://github.com/huggingface/datasets/pull/908 | 752,428,652 | MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz | 908 | Add dependency on black for tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [
"Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..."
] | 1,606,504,368,000 | 1,606,513,613,000 | 1,606,513,612,000 | MEMBER | null | Add package 'black' as an installation requirement for tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/908/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/908",
"html_url": "https://github.com/huggingface/datasets/pull/908",
"diff_url": "https://github.com/huggingface/datasets/pull/908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/908.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/907/comments | https://api.github.com/repos/huggingface/datasets/issues/907/events | https://github.com/huggingface/datasets/pull/907 | 752,422,351 | MDExOlB1bGxSZXF1ZXN0NTI4NzQ4ODMx | 907 | Remove os.path.join from all URLs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,606,503,330,000 | 1,606,690,100,000 | 1,606,690,099,000 | MEMBER | null | Remove `os.path.join` from all URLs in dataset scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/907/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/907/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/907",
"html_url": "https://github.com/huggingface/datasets/pull/907",
"diff_url": "https://github.com/huggingface/datasets/pull/907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/907.patch",
"merged_at": 1606690099000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/906/comments | https://api.github.com/repos/huggingface/datasets/issues/906/events | https://github.com/huggingface/datasets/pull/906 | 752,403,395 | MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0 | 906 | Fix url with backslash in windows for blimp and pg19 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,606,499,951,000 | 1,606,501,196,000 | 1,606,501,196,000 | MEMBER | null | Following #903, I also fixed blimp and pg19, which were using `os.path.join` to create URLs
cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/906/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/906",
"html_url": "https://github.com/huggingface/datasets/pull/906",
"diff_url": "https://github.com/huggingface/datasets/pull/906.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/906.patch",
"merged_at": 1606501195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/905/comments | https://api.github.com/repos/huggingface/datasets/issues/905/events | https://github.com/huggingface/datasets/pull/905 | 752,395,456 | MDExOlB1bGxSZXF1ZXN0NTI4NzI3OTEy | 905 | Disallow backslash in urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Looks like the test doesn't detect all the problems fixed by #907 , I'll fix that",
"Ok found why it doesn't detect the problems fixed by #907 . That's because for all those datasets the urls are actually fine (no backslash) on windows, even if it uses `os.path.join`.\r\n\r\nThis is because of the behavior of `o... | 1,606,498,708,000 | 1,606,690,117,000 | 1,606,690,116,000 | MEMBER | null | Following #903 @albertvillanova noticed that there are sometimes bad usage of `os.path.join` in datasets scripts to create URLS. However this should be avoided since it doesn't work on windows.
I'm suggesting a test to make sure that none of the URLs in the dataset scripts contain backslashes.
The tests ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/905/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/905",
"html_url": "https://github.com/huggingface/datasets/pull/905",
"diff_url": "https://github.com/huggingface/datasets/pull/905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/905.patch",
"merged_at": 1606690116000
} | true |
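A check like the one this PR proposes could look roughly as follows. This is a hypothetical sketch, not the PR's actual test: the regex, helper name, and sample strings are all assumptions for illustration.

```python
import re

def find_backslash_urls(source_code: str) -> list:
    """Return URL string literals in source_code that contain a backslash."""
    urls = re.findall(r'"(https?://[^"]*)"', source_code)
    return [url for url in urls if "\\" in url]

# A hypothetical offending script line (not taken from the real repo):
offender = '_URL = "https://example.com\\data\\train.json"'
clean = '_URL = "https://example.com/data/train.json"'
bad = find_backslash_urls(offender)  # one bad URL found
ok = find_backslash_urls(clean)      # empty list
```

As the follow-up comments note, a scan of the scripts alone misses cases where `os.path.join` only produces backslashes at runtime on Windows, so the real fix also involves how the URLs are built.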
https://api.github.com/repos/huggingface/datasets/issues/904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/904/comments | https://api.github.com/repos/huggingface/datasets/issues/904/events | https://github.com/huggingface/datasets/pull/904 | 752,372,743 | MDExOlB1bGxSZXF1ZXN0NTI4NzA5NTUx | 904 | Very detailed step-by-step on how to add a dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [] | closed | false | null | [] | null | [
"Awesome! Thanks @lhoestq "
] | 1,606,495,521,000 | 1,606,730,187,000 | 1,606,730,186,000 | MEMBER | null | Add very detailed step-by-step instructions to add a new dataset to the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/904/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/904",
"html_url": "https://github.com/huggingface/datasets/pull/904",
"diff_url": "https://github.com/huggingface/datasets/pull/904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/904.patch",
"merged_at": 1606730186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/903/comments | https://api.github.com/repos/huggingface/datasets/issues/903/events | https://github.com/huggingface/datasets/pull/903 | 752,360,614 | MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3 | 903 | Fix URL with backslash in Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [
"@lhoestq I was indeed working on that... to make another commit on this feature branch...",
"But as you prefer... nevermind! :)",
"Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happ... | 1,606,494,384,000 | 1,606,500,286,000 | 1,606,500,286,000 | MEMBER | null | On Windows, `os.path.join` generates URLs containing backslashes when the first "path" component does not end with a slash.
In general, `os.path.join` should be avoided to generate URLs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/903/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/903",
"html_url": "https://github.com/huggingface/datasets/pull/903",
"diff_url": "https://github.com/huggingface/datasets/pull/903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/903.patch",
"merged_at": 1606500286000
} | true |
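The behavior described in this PR can be avoided with the standard library: `posixpath.join` always uses forward slashes regardless of platform, making it a safe way to build URLs where `os.path.join` is not. A minimal sketch (the base URL is a placeholder for illustration):

```python
import posixpath

base_url = "https://example.com/data"  # hypothetical base URL
# posixpath.join always produces forward slashes, unlike os.path.join,
# which uses "\" as the separator on Windows.
url = posixpath.join(base_url, "train", "file.json")
```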
https://api.github.com/repos/huggingface/datasets/issues/902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/902/comments | https://api.github.com/repos/huggingface/datasets/issues/902/events | https://github.com/huggingface/datasets/pull/902 | 752,345,739 | MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw | 902 | Follow cache_dir parameter to gcs downloader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,606,492,926,000 | 1,606,690,134,000 | 1,606,690,133,000 | MEMBER | null | As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).
Fix #900 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/902/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/902",
"html_url": "https://github.com/huggingface/datasets/pull/902",
"diff_url": "https://github.com/huggingface/datasets/pull/902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/902.patch",
"merged_at": 1606690133000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/901/comments | https://api.github.com/repos/huggingface/datasets/issues/901/events | https://github.com/huggingface/datasets/pull/901 | 752,233,851 | MDExOlB1bGxSZXF1ZXN0NTI4NTk3NDU5 | 901 | Addition of Nl2Bash Dataset | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"Hello, thanks. I had a talk with the dataset authors, found out that the data is now obsolete and they'll get a stable version soon. So temporarily closing the PR.\r\n Although I have a question: what should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do...
The NL2Bash data contains over 10,000 instances of Linux shell commands and their corresponding natural language descriptions provided by experts from the Tellina system. The dataset features 100+ commonly used shell utilities.
## Footnotes
The following dataset marks the first ML on source code related... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/901/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/901",
"html_url": "https://github.com/huggingface/datasets/pull/901",
"diff_url": "https://github.com/huggingface/datasets/pull/901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/901.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/900/comments | https://api.github.com/repos/huggingface/datasets/issues/900/events | https://github.com/huggingface/datasets/issues/900 | 752,214,066 | MDU6SXNzdWU3NTIyMTQwNjY= | 900 | datasets.load_dataset() custom chaching directory bug | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.gi... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Thanks for reporting ! I'm looking into it."
] | 1,606,479,533,000 | 1,606,690,133,000 | 1,606,690,133,000 | NONE | null | Hello,
I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to
`~/.cache`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```p... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/900/timeline | null | completed | null | null | false |
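For reference, the `cache_dir` argument the report refers to is passed as sketched below. The dataset name and paths are placeholders, and the `load_dataset` call itself is commented out because it downloads data over the network:

```python
import os

# Default cache location that downloads fall back to when cache_dir is
# not honored (the behavior reported in this issue):
default_cache = os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "datasets")

# Intended usage with a custom cache directory (placeholder path):
# from datasets import load_dataset
# dataset = load_dataset("natural_questions", cache_dir="/path/to/custom/cache")
```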
https://api.github.com/repos/huggingface/datasets/issues/899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/899/comments | https://api.github.com/repos/huggingface/datasets/issues/899/events | https://github.com/huggingface/datasets/pull/899 | 752,191,227 | MDExOlB1bGxSZXF1ZXN0NTI4NTYzNzYz | 899 | Allow arrow based builder in auto dummy data generation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,606,477,178,000 | 1,606,483,809,000 | 1,606,483,808,000 | MEMBER | null | Following #898 I added support for arrow based builder for the auto dummy data generator | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/899/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/899",
"html_url": "https://github.com/huggingface/datasets/pull/899",
"diff_url": "https://github.com/huggingface/datasets/pull/899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/899.patch",
"merged_at": 1606483808000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/898/comments | https://api.github.com/repos/huggingface/datasets/issues/898/events | https://github.com/huggingface/datasets/pull/898 | 752,148,284 | MDExOlB1bGxSZXF1ZXN0NTI4NTI4MDY1 | 898 | Adding SQA dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [] | closed | false | null | [] | null | [
"This dataset seems to have around 1000 configs. Therefore, when creating the dummy data we end up with hundreds of MB of dummy data, which we don't want to add to the repo.\r\nLet's put this PR on hold for now and find a solution after next week's sprint",
"Closing in favor of #1566 "
] | 1,606,472,958,000 | 1,608,036,880,000 | 1,608,036,859,000 | MEMBER | null | As discussed in #880
Seems like automatic dummy-data generation doesn't work if the builder is an `ArrowBasedBuilder`; do you think you could take a look @lhoestq ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/898/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/898",
"html_url": "https://github.com/huggingface/datasets/pull/898",
"diff_url": "https://github.com/huggingface/datasets/pull/898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/898.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/897/comments | https://api.github.com/repos/huggingface/datasets/issues/897/events | https://github.com/huggingface/datasets/issues/897 | 752,100,256 | MDU6SXNzdWU3NTIxMDAyNTY= | 897 | Dataset viewer issues | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users... | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?",
"Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewe... | 1,606,468,474,000 | 1,635,671,521,000 | 1,635,671,521,000 | CONTRIBUTOR | null | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. T... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/897/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/896/comments | https://api.github.com/repos/huggingface/datasets/issues/896/events | https://github.com/huggingface/datasets/pull/896 | 751,834,265 | MDExOlB1bGxSZXF1ZXN0NTI4MjcyMjc0 | 896 | Add template and documentation for dataset card | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yje... | [] | closed | false | null | [] | null | [] | 1,606,426,225,000 | 1,606,525,815,000 | 1,606,525,815,000 | MEMBER | null | This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora
New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will have to index the datasets and... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/896/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/896",
"html_url": "https://github.com/huggingface/datasets/pull/896",
"diff_url": "https://github.com/huggingface/datasets/pull/896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/896.patch",
"merged_at": 1606525814000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/895/comments | https://api.github.com/repos/huggingface/datasets/issues/895/events | https://github.com/huggingface/datasets/pull/895 | 751,782,295 | MDExOlB1bGxSZXF1ZXN0NTI4MjMyMjU3 | 895 | Better messages regarding split naming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,606,416,946,000 | 1,606,483,860,000 | 1,606,483,859,000 | MEMBER | null | I made explicit the error message when a bad split name is used.
Also I wanted to allow the `-` symbol for split names but actually this symbol is used to name the arrow files `{dataset_name}-{dataset_split}.arrow` so we should probably keep it this way, i.e. not allowing the `-` symbol in split names. Moreover in t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/895/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/895",
"html_url": "https://github.com/huggingface/datasets/pull/895",
"diff_url": "https://github.com/huggingface/datasets/pull/895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/895.patch",
"merged_at": 1606483859000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/894/comments | https://api.github.com/repos/huggingface/datasets/issues/894/events | https://github.com/huggingface/datasets/pull/894 | 751,734,905 | MDExOlB1bGxSZXF1ZXN0NTI4MTkzNzQy | 894 | Allow several tags sets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Closing since we don't need to update the tags of those three datasets (for each one of them there is only one tag set)"
] | 1,606,410,253,000 | 1,620,239,057,000 | 1,606,508,149,000 | MEMBER | null | Hi !
Currently we have three dataset cards: snli, cnn_dailymail and allocine.
For each one of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc.
For certain datasets like `glue` for example, there exist several configurations: `sst2`, `mnl... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/894/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/894/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/894",
"html_url": "https://github.com/huggingface/datasets/pull/894",
"diff_url": "https://github.com/huggingface/datasets/pull/894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/894.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/893/comments | https://api.github.com/repos/huggingface/datasets/issues/893/events | https://github.com/huggingface/datasets/pull/893 | 751,703,696 | MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx | 893 | add metrec: arabic poetry dataset | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"@lhoestq removed prints and added the dataset card. ",
"@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ",
"Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- ... | 1,606,407,016,000 | 1,606,839,895,000 | 1,606,835,707,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/893/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/893",
"html_url": "https://github.com/huggingface/datasets/pull/893",
"diff_url": "https://github.com/huggingface/datasets/pull/893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/893.patch",
"merged_at": 1606835707000
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/892/comments | https://api.github.com/repos/huggingface/datasets/issues/892/events | https://github.com/huggingface/datasets/pull/892 | 751,658,262 | MDExOlB1bGxSZXF1ZXN0NTI4MTMxNTE1 | 892 | Add a few datasets of reference in the documentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Looks good to me. Do we also support TSV in this helper (explain if it should be text or CSV) and in the dummy-data creator?",
"snli is basically based on tsv files (but named as .txt) and it is in the list of datasets of reference.\r\nThe dummy data creator supports tsv",
"merging this one.\r\nIf you think of... | 1,606,402,959,000 | 1,606,500,525,000 | 1,606,500,524,000 | MEMBER | null | I started making a small list of various datasets of reference in the documentation.
Since many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from.
Let me know what you think, and if you have ideas of other datasets that we may add to this list, please l... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/892/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/892",
"html_url": "https://github.com/huggingface/datasets/pull/892",
"diff_url": "https://github.com/huggingface/datasets/pull/892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/892.patch",
"merged_at": 1606500524000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/891/comments | https://api.github.com/repos/huggingface/datasets/issues/891/events | https://github.com/huggingface/datasets/pull/891 | 751,576,869 | MDExOlB1bGxSZXF1ZXN0NTI4MDY1MTQ3 | 891 | gitignore .python-version | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [] | 1,606,395,958,000 | 1,606,397,307,000 | 1,606,397,306,000 | MEMBER | null | ignore `.python-version` added by `pyenv` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/891/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/891",
"html_url": "https://github.com/huggingface/datasets/pull/891",
"diff_url": "https://github.com/huggingface/datasets/pull/891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/891.patch",
"merged_at": 1606397306000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/890/comments | https://api.github.com/repos/huggingface/datasets/issues/890/events | https://github.com/huggingface/datasets/pull/890 | 751,534,050 | MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3 | 890 | Add LER | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [
"Thanks for the comments. I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem is respectively:\r\nwould reformat /Users/joelniklaus/NextCloud/PhDJoelNiklaus/Code/datasets/datasets/ler/ler.py\r\nOh no! 💥 💔 💥\r\n1 file ... | 1,606,391,903,000 | 1,606,829,615,000 | 1,606,829,176,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/890/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/890",
"html_url": "https://github.com/huggingface/datasets/pull/890",
"diff_url": "https://github.com/huggingface/datasets/pull/890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/890.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/889/comments | https://api.github.com/repos/huggingface/datasets/issues/889/events | https://github.com/huggingface/datasets/pull/889 | 751,115,691 | MDExOlB1bGxSZXF1ZXN0NTI3NjkwODE2 | 889 | Optional per-dataset default config name | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/... | [] | closed | false | null | [] | null | [
"I like the idea ! And the approach is right imo\r\n\r\nNote that by changing this we will have to add a way for users to get the config lists of a dataset. In the current user workflow, the user could see the list of the config when the missing config error is raised but now it won't be the case because of the def... | 1,606,338,150,000 | 1,606,757,253,000 | 1,606,757,247,000 | CONTRIBUTOR | null | This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/889/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/889",
"html_url": "https://github.com/huggingface/datasets/pull/889",
"diff_url": "https://github.com/huggingface/datasets/pull/889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/889.patch",
"merged_at": 1606757247000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/888/comments | https://api.github.com/repos/huggingface/datasets/issues/888/events | https://github.com/huggingface/datasets/issues/888 | 750,944,422 | MDU6SXNzdWU3NTA5NDQ0MjI= | 888 | Nested lists are zipped unexpectedly | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [] | closed | false | null | [] | null | [
"Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details",
"Thanks.\r\nThis is a bit (very) confusing, but I guess if ... | 1,606,320,466,000 | 1,606,325,439,000 | 1,606,325,439,000 | CONTRIBUTOR | null | I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/888/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/887/comments | https://api.github.com/repos/huggingface/datasets/issues/887/events | https://github.com/huggingface/datasets/issues/887 | 750,868,831 | MDU6SXNzdWU3NTA4Njg4MzE= | 887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since ... | 1,606,314,741,000 | 1,631,207,020,000 | null | CONTRIBUTOR | null | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/887/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/886/comments | https://api.github.com/repos/huggingface/datasets/issues/886/events | https://github.com/huggingface/datasets/pull/886 | 750,829,314 | MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5 | 886 | Fix wikipedia custom config | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https://github.com/huggingface/datasets/issues/577#issuecomment-868122769)"
] | 1,606,311,852,000 | 1,624,598,656,000 | 1,606,318,933,000 | MEMBER | null | It should be possible to use the wikipedia dataset with any `language` and `date`.
However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason.
I fixed that and was able to run
```python
from datasets import load_dataset
load_dataset("./datasets/wi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/886/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/886/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/886",
"html_url": "https://github.com/huggingface/datasets/pull/886",
"diff_url": "https://github.com/huggingface/datasets/pull/886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/886.patch",
"merged_at": 1606318933000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/885/comments | https://api.github.com/repos/huggingface/datasets/issues/885/events | https://github.com/huggingface/datasets/issues/885 | 750,789,052 | MDU6SXNzdWU3NTA3ODkwNTI= | 885 | Very slow cold-start | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Good point!",
"Yes indeed. We can probably improve that by using lazy imports",
"#1690 added fast start-up of the library "
] | 1,606,308,478,000 | 1,610,537,485,000 | 1,610,537,485,000 | CONTRIBUTOR | null | Hi,
I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant.
When I load a metric, or a dataset, its fine that it takes time.
The following ranges from 3 to 9 seconds:
```
python -m timeit -n 1 -r 1 'from datasets import load_dataset'
```
edi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/885/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/885/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/884/comments | https://api.github.com/repos/huggingface/datasets/issues/884/events | https://github.com/huggingface/datasets/pull/884 | 749,862,034 | MDExOlB1bGxSZXF1ZXN0NTI2NjA5MDc1 | 884 | Auto generate dummy data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"I took your comments into account.\r\nAlso now after compressing the dummy_data.zip file it runs a dummy data test (=make sure each split has at least 1 example using the dummy data)",
"I just tested the tool with some datasets and found out that it's not working for datasets that download files using `download_... | 1,606,235,494,000 | 1,606,400,327,000 | 1,606,400,326,000 | MEMBER | null | When adding a new dataset to the library, dummy data creation can take some time.
To make things easier I added a command line tool that automatically generates dummy data when possible.
The tool only supports certain data file types: txt, csv, tsv, jsonl, json and xml.
Here are some examples:
```
python data... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/884/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/884",
"html_url": "https://github.com/huggingface/datasets/pull/884",
"diff_url": "https://github.com/huggingface/datasets/pull/884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/884.patch",
"merged_at": 1606400326000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/883/comments | https://api.github.com/repos/huggingface/datasets/issues/883/events | https://github.com/huggingface/datasets/issues/883 | 749,750,801 | MDU6SXNzdWU3NDk3NTA4MDE= | 883 | Downloading/caching only a part of a datasets' dataset. | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.gi... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892912,
"node_id": "MDU6... | open | false | null | [] | null | [
"Not at the moment but we could likely support this feature.",
"?",
"I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources."
] | 1,606,227,918,000 | 1,606,485,115,000 | null | NONE | null | Hi,
I want to use the validation data *only* (of natural question).
I don't want to have the whole dataset cached in my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/883/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/882/comments | https://api.github.com/repos/huggingface/datasets/issues/882/events | https://github.com/huggingface/datasets/pull/882 | 749,662,188 | MDExOlB1bGxSZXF1ZXN0NTI2NDQyMjA2 | 882 | Update README.md | {
"login": "vaibhavad",
"id": 32997732,
"node_id": "MDQ6VXNlcjMyOTk3NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/32997732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaibhavad",
"html_url": "https://github.com/vaibhavad",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [] | 1,606,220,632,000 | 1,611,916,867,000 | 1,611,916,867,000 | CONTRIBUTOR | null | "no label" is "-" in the original dataset but "-1" in Huggingface distribution. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/882/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/882",
"html_url": "https://github.com/huggingface/datasets/pull/882",
"diff_url": "https://github.com/huggingface/datasets/pull/882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/882.patch",
"merged_at": 1611916866000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/881/comments | https://api.github.com/repos/huggingface/datasets/issues/881/events | https://github.com/huggingface/datasets/pull/881 | 749,548,107 | MDExOlB1bGxSZXF1ZXN0NTI2MzQ5MDM2 | 881 | Use GCP download url instead of tensorflow custom download for boolq | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,606,211,231,000 | 1,606,212,754,000 | 1,606,212,753,000 | MEMBER | null | BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket.
It prevented the dataset to be downloaded twice because of a FileAlreadyExistsError.
Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls and ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/881/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/881",
"html_url": "https://github.com/huggingface/datasets/pull/881",
"diff_url": "https://github.com/huggingface/datasets/pull/881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/881.patch",
"merged_at": 1606212753000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/880/comments | https://api.github.com/repos/huggingface/datasets/issues/880/events | https://github.com/huggingface/datasets/issues/880 | 748,949,606 | MDU6SXNzdWU3NDg5NDk2MDY= | 880 | Add SQA | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ",
"@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinat... | 1,606,149,115,000 | 1,608,731,904,000 | 1,608,731,903,000 | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/r... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/880/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/879/comments | https://api.github.com/repos/huggingface/datasets/issues/879/events | https://github.com/huggingface/datasets/issues/879 | 748,848,847 | MDU6SXNzdWU3NDg4NDg4NDc= | 879 | boolq does not load | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.",
"... | 1,606,141,708,000 | 1,606,485,071,000 | null | CONTRIBUTOR | null | Hi
I am getting these errors trying to load boolq, thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
d... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/879/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/878/comments | https://api.github.com/repos/huggingface/datasets/issues/878/events | https://github.com/huggingface/datasets/issues/878 | 748,621,981 | MDU6SXNzdWU3NDg2MjE5ODE= | 878 | Loading Data From S3 Path in Sagemaker | {
"login": "mahesh1amour",
"id": 42795522,
"node_id": "MDQ6VXNlcjQyNzk1NTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/42795522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahesh1amour",
"html_url": "https://github.com/mahesh1amour",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892912,
"node_id": "MDU6... | open | false | null | [] | null | [
"This would be a neat feature",
"> neat feature\r\n\r\nI dint get these clearly, can you please elaborate like how to work on these ",
"It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?",
"Thanks thomwolf and julien-c\r\n\r\nI'm still confusion on what you... | 1,606,123,042,000 | 1,608,717,188,000 | null | NONE | null | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/878/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/877/comments | https://api.github.com/repos/huggingface/datasets/issues/877/events | https://github.com/huggingface/datasets/issues/877 | 748,234,438 | MDU6SXNzdWU3NDgyMzQ0Mzg= | 877 | DataLoader(datasets) become more and more slowly within iterations | {
"login": "shexuan",
"id": 25664170,
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shexuan",
"html_url": "https://github.com/shexuan",
"followers_url": "https://api.github.com/users/shexua... | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not",
"> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset... | 1,606,048,870,000 | 1,606,664,712,000 | 1,606,664,712,000 | NONE | null | Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
# do some thing for each line
```
In the beginning, th... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/877/timeline | null | completed | null | null | false |
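The slowdown report in issue 877 above comes down to instrumenting an iteration loop and watching throughput per window of items (the maintainer asks whether the raw dataset, without the DataLoader, iterates at constant speed). A minimal stdlib sketch of that measurement — independent of `datasets` and `torch`; the `profile_rate` helper, its window size, and the `rates` list are illustrative, not part of either library:

```python
import time

def profile_rate(iterable, window, rates):
    """Yield items from `iterable`, appending items/sec per `window` items to `rates`.

    A roughly flat `rates` list means iteration speed is constant; a steadily
    decreasing one reproduces the "slower and slower" symptom from the issue
    and points at the consumer (e.g. the DataLoader) rather than the dataset.
    """
    t0 = time.perf_counter()
    for i, item in enumerate(iterable, 1):
        yield item
        if i % window == 0:
            t1 = time.perf_counter()
            rates.append(window / (t1 - t0))  # items/sec for this window
            t0 = t1

# Example: a constant-speed iterable should give three similar rate samples.
rates = []
consumed = sum(1 for _ in profile_rate(range(300_000), 100_000, rates))
print(consumed, len(rates))  # → 300000 3
```

Wrapping first the raw dataset and then `DataLoader(dataset)` with such a helper is one way to localize where the degradation happens.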
https://api.github.com/repos/huggingface/datasets/issues/876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/876/comments | https://api.github.com/repos/huggingface/datasets/issues/876/events | https://github.com/huggingface/datasets/issues/876 | 748,195,104 | MDU6SXNzdWU3NDgxOTUxMDQ= | 876 | imdb dataset cannot be loaded | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | closed | false | null | [] | null | [
"It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n``... | 1,606,033,483,000 | 1,637,924,836,000 | 1,608,831,527,000 | CONTRIBUTOR | null | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
I am getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/876/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/875/comments | https://api.github.com/repos/huggingface/datasets/issues/875/events | https://github.com/huggingface/datasets/issues/875 | 748,194,311 | MDU6SXNzdWU3NDgxOTQzMTE= | 875 | bug in boolq dataset loading | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | closed | false | null | [] | null | [
"I just opened a PR to fix this.\r\nThanks for reporting !"
] | 1,606,033,114,000 | 1,606,212,753,000 | 1,606,212,753,000 | CONTRIBUTOR | null | Hi
I am trying to load the boolq dataset:
```
import datasets
datasets.load_dataset("boolq")
```
I am getting the following errors, thanks for your help
```
>>> import datasets
2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/875/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/874/comments | https://api.github.com/repos/huggingface/datasets/issues/874/events | https://github.com/huggingface/datasets/issues/874 | 748,193,140 | MDU6SXNzdWU3NDgxOTMxNDA= | 874 | trec dataset unavailable | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | closed | false | null | [] | null | [
"This was fixed in #740 \r\nCould you try to update `datasets` and try again ?",
"This has been fixed in datasets 1.1.3"
] | 1,606,032,576,000 | 1,606,485,402,000 | 1,606,485,402,000 | CONTRIBUTOR | null | Hi
When I try to load the trec dataset I am getting these errors, thanks for your help
`datasets.load_dataset("trec", split="train")
`
```
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/874/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/873/comments | https://api.github.com/repos/huggingface/datasets/issues/873/events | https://github.com/huggingface/datasets/issues/873 | 747,959,523 | MDU6SXNzdWU3NDc5NTk1MjM= | 873 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error | {
"login": "vishal-burman",
"id": 19861874,
"node_id": "MDQ6VXNlcjE5ODYxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishal-burman",
"html_url": "https://github.com/vishal-burman",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"I get the same error. It was fixed some days ago, but again it appears",
"Hi @mrm8488 it's working again today without any fix so I am closing this issue.",
"I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is alr... | 1,605,940,245,000 | 1,651,735,199,000 | 1,606,047,485,000 | NONE | null | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/873/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/872/comments | https://api.github.com/repos/huggingface/datasets/issues/872/events | https://github.com/huggingface/datasets/pull/872 | 747,653,697 | MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx | 872 | Add IndicGLUE dataset and Metrics | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"thanks ! merging now"
] | 1,605,892,174,000 | 1,606,323,671,000 | 1,606,317,967,000 | CONTRIBUTOR | null | Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/872/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/872",
"html_url": "https://github.com/huggingface/datasets/pull/872",
"diff_url": "https://github.com/huggingface/datasets/pull/872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/872.patch",
"merged_at": 1606317967000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/871/comments | https://api.github.com/repos/huggingface/datasets/issues/871/events | https://github.com/huggingface/datasets/issues/871 | 747,470,136 | MDU6SXNzdWU3NDc0NzAxMzY= | 871 | terminate called after throwing an instance of 'google::protobuf::FatalException' | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/... | [] | closed | false | null | [] | null | [
"Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)",
"closing now, figured out this is because the max length of decoder w... | 1,605,876,984,000 | 1,607,807,792,000 | 1,607,807,792,000 | CONTRIBUTOR | null | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks
100%|█████████████████████████████████████████████████████████████████████████████████████████████... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/871/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/870/comments | https://api.github.com/repos/huggingface/datasets/issues/870/events | https://github.com/huggingface/datasets/issues/870 | 747,021,996 | MDU6SXNzdWU3NDcwMjE5OTY= | 870 | [Feature Request] Add optional parameter in text loading script to preserve linebreaks | {
"login": "jncasey",
"id": 31020859,
"node_id": "MDQ6VXNlcjMxMDIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jncasey",
"html_url": "https://github.com/jncasey",
"followers_url": "https://api.github.com/users/jncase... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)",
"Resolved via #1913."
] | 1,605,829,891,000 | 1,654,097,153,000 | 1,654,097,152,000 | NONE | null | I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data.
I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great.
But the first time I processed all of ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/870/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/869/comments | https://api.github.com/repos/huggingface/datasets/issues/869/events | https://github.com/huggingface/datasets/pull/869 | 746,495,711 | MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw | 869 | Update ner datasets infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
":+1: Thanks for fixing it!"
] | 1,605,785,283,000 | 1,605,795,258,000 | 1,605,795,257,000 | MEMBER | null | Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel)
I also fixed the ner types of conll2003 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/869/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/869",
"html_url": "https://github.com/huggingface/datasets/pull/869",
"diff_url": "https://github.com/huggingface/datasets/pull/869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/869.patch",
"merged_at": 1605795257000
} | true |