Schema (field: dtype, observed lengths/values):
url: large_string, lengths 58-61
repository_url: large_string, 1 value
labels_url: large_string, lengths 72-75
comments_url: large_string, lengths 67-70
events_url: large_string, lengths 65-68
html_url: large_string, lengths 46-51
id: int64, 599M-4.37B
node_id: large_string, lengths 18-32
number: int64, 1-8.17k
title: large_string, lengths 1-290
user: dict
labels: list, lengths 0-4
state: large_string, 2 values
locked: bool, 1 class
assignees: list, lengths 0-4
milestone: dict
comments: list, lengths 0-30
created_at: large_string (date), 2020-04-14 10:18:02 to 2026-05-01 20:27:58
updated_at: large_string (date), 2020-04-27 16:04:17 to 2026-05-02 02:19:38
closed_at: large_string, lengths 20-20
assignee: dict
author_association: large_string, 4 values
issue_field_values: list, lengths 0-0
type: float64
active_lock_reason: float64
sub_issues_summary: dict
issue_dependencies_summary: dict
body: large_string, lengths 0-228k
closed_by: dict
reactions: dict
timeline_url: large_string, lengths 67-70
performed_via_github_app: float64
state_reason: large_string, 4 values
pinned_comment: float64
draft: float64, 0-1
pull_request: dict
is_pull_request: bool, 2 classes
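The rows that follow list one value per line in the schema's field order. As a minimal sketch of what a single record looks like when keyed by those field names (values copied from the first row below; only a subset of the 37 fields is shown, and this is an illustrative plain dict, not the library's own record type):

```python
# One issue record from this dump, keyed by the schema's field names
# (subset of the 37 fields, for brevity).
record = {
    "url": "https://api.github.com/repos/huggingface/datasets/issues/876",
    "number": 876,
    "title": "imdb dataset cannot be loaded",
    "state": "closed",
    "locked": False,
    "author_association": "CONTRIBUTOR",
    "state_reason": "completed",
    "is_pull_request": False,
}

# The is_pull_request flag distinguishes plain issues from pull requests,
# since both share this schema.
issues_only = [r for r in [record] if not r["is_pull_request"]]
print(len(issues_only))  # 1
```

Filtering on `is_pull_request` is how the issue rows and PR rows below can be separated, since both kinds of record carry the same fields.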

url: https://api.github.com/repos/huggingface/datasets/issues/876
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/876/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/876/events
html_url: https://github.com/huggingface/datasets/issues/876
id: 748,195,104
node_id: MDU6SXNzdWU3NDgxOTUxMDQ=
number: 876
title: imdb dataset cannot be loaded
user: { "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n``...
created_at: 2020-11-22T08:24:43Z
updated_at: 2024-05-10T03:03:29Z
closed_at: 2020-12-24T17:38:47Z
assignee: null
author_association: CONTRIBUTOR
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/...
closed_by: { "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/876/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/875
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/875/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/875/events
html_url: https://github.com/huggingface/datasets/issues/875
id: 748,194,311
node_id: MDU6SXNzdWU3NDgxOTQzMTE=
number: 875
title: bug in boolq dataset loading
user: { "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "I just opened a PR to fix this.\r\nThanks for reporting !" ]
created_at: 2020-11-22T08:18:34Z
updated_at: 2020-11-24T10:12:33Z
closed_at: 2020-11-24T10:12:33Z
assignee: null
author_association: CONTRIBUTOR
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda...
closed_by: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/875/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/874
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/874/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/874/events
html_url: https://github.com/huggingface/datasets/issues/874
id: 748,193,140
node_id: MDU6SXNzdWU3NDgxOTMxNDA=
number: 874
title: trec dataset unavailable
user: { "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]
created_at: 2020-11-22T08:09:36Z
updated_at: 2020-11-27T13:56:42Z
closed_at: 2020-11-27T13:56:42Z
assignee: null
author_association: CONTRIBUTOR
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ...
closed_by: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/874/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/873
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/873/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/873/events
html_url: https://github.com/huggingface/datasets/issues/873
id: 747,959,523
node_id: MDU6SXNzdWU3NDc5NTk1MjM=
number: 873
title: load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
user: { "login": "vishal-burman", "id": 19861874, "node_id": "MDQ6VXNlcjE5ODYxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishal-burman", "html_url": "https://github.com/vishal-burman", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "I get the same error. It was fixed some days ago, but again it appears", "Hi @mrm8488 it's working again today without any fix so I am closing this issue.", "I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is alr...
created_at: 2020-11-21T06:30:45Z
updated_at: 2023-08-03T12:07:03Z
closed_at: 2020-11-22T12:18:05Z
assignee: null
author_association: NONE
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() ...
closed_by: { "login": "vishal-burman", "id": 19861874, "node_id": "MDQ6VXNlcjE5ODYxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishal-burman", "html_url": "https://github.com/vishal-burman", "followers_url": "https://api.githu...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/873/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/872
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/872/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/872/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/872/events
html_url: https://github.com/huggingface/datasets/pull/872
id: 747,653,697
node_id: MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx
number: 872
title: Add IndicGLUE dataset and Metrics
user: { "login": "sumanthd17", "id": 28291870, "node_id": "MDQ6VXNlcjI4MjkxODcw", "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sumanthd17", "html_url": "https://github.com/sumanthd17", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "thanks ! merging now" ]
created_at: 2020-11-20T17:09:34Z
updated_at: 2020-11-25T17:01:11Z
closed_at: 2020-11-25T15:26:07Z
assignee: null
author_association: CONTRIBUTOR
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
body: Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
closed_by: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/872/timeline
performed_via_github_app: null
state_reason: null
pinned_comment: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/872", "html_url": "https://github.com/huggingface/datasets/pull/872", "diff_url": "https://github.com/huggingface/datasets/pull/872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/872.patch", "merged_at": "2020-11-25T15:26:07Z...
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/871
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/871/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/871/events
html_url: https://github.com/huggingface/datasets/issues/871
id: 747,470,136
node_id: MDU6SXNzdWU3NDc0NzAxMzY=
number: 871
title: terminate called after throwing an instance of 'google::protobuf::FatalException'
user: { "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)", "closing now, figured out this is because the max length of decoder w...
created_at: 2020-11-20T12:56:24Z
updated_at: 2020-12-12T21:16:32Z
closed_at: 2020-12-12T21:16:32Z
assignee: null
author_association: CONTRIBUTOR
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|█████████████████████████████████████████████████████████████████████████████████████████████...
closed_by: { "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/871/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/870
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/870/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/870/events
html_url: https://github.com/huggingface/datasets/issues/870
id: 747,021,996
node_id: MDU6SXNzdWU3NDcwMjE5OTY=
number: 870
title: [Feature Request] Add optional parameter in text loading script to preserve linebreaks
user: { "login": "jncasey", "id": 31020859, "node_id": "MDQ6VXNlcjMxMDIwODU5", "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jncasey", "html_url": "https://github.com/jncasey", "followers_url": "https://api.github.com/users/jncase...
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: closed
locked: false
assignees: []
milestone: null
comments: [ "Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)", "Resolved via #1913." ]
created_at: 2020-11-19T23:51:31Z
updated_at: 2022-06-01T15:25:53Z
closed_at: 2022-06-01T15:25:52Z
assignee: null
author_association: NONE
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of ...
closed_by: { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/870/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/869
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/869/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/869/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/869/events
html_url: https://github.com/huggingface/datasets/pull/869
id: 746,495,711
node_id: MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw
number: 869
title: Update ner datasets infos
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ ":+1: Thanks for fixing it!" ]
created_at: 2020-11-19T11:28:03Z
updated_at: 2020-11-19T14:14:18Z
closed_at: 2020-11-19T14:14:17Z
assignee: null
author_association: MEMBER
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
body: Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel) I also fixed the ner types of conll2003
closed_by: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/869/timeline
performed_via_github_app: null
state_reason: null
pinned_comment: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/869", "html_url": "https://github.com/huggingface/datasets/pull/869", "diff_url": "https://github.com/huggingface/datasets/pull/869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/869.patch", "merged_at": "2020-11-19T14:14:17Z...
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/868
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/868/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/868/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/868/events
html_url: https://github.com/huggingface/datasets/pull/868
id: 745,889,882
node_id: MDExOlB1bGxSZXF1ZXN0NTIzMzc2MzQ3
number: 868
title: Consistent metric outputs
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: [ { "id": 4190228726, "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate", "name": "transfer-to-evaluate", "color": "E3165C", "default": false, "description": "" } ]
state: closed
locked: false
assignees: []
milestone: null
comments: [ "I keep this PR in stand-by for next week's datasets sprint. If the next release is 2.0.0 then we can include it given that it's breaking for many metrics", "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
created_at: 2020-11-18T18:05:59Z
updated_at: 2023-09-24T09:50:25Z
closed_at: 2023-07-11T09:35:52Z
assignee: null
author_association: MEMBER
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
body: To automate the use of metrics, they should return consistent outputs. In particular I'm working on adding a conversion of metrics to keras metrics. To achieve this we need two things: - have each metric return dictionaries of string -> floats since each keras metrics should return one float - define in the metric ...
closed_by: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/868/timeline
performed_via_github_app: null
state_reason: null
pinned_comment: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/868", "html_url": "https://github.com/huggingface/datasets/pull/868", "diff_url": "https://github.com/huggingface/datasets/pull/868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/868.patch", "merged_at": null }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/867
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/867/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/867/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/867/events
html_url: https://github.com/huggingface/datasets/pull/867
id: 745,773,955
node_id: MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4
number: 867
title: Fix some metrics feature types
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: []
created_at: 2020-11-18T15:46:11Z
updated_at: 2020-11-19T17:35:58Z
closed_at: 2020-11-19T17:35:57Z
assignee: null
author_association: MEMBER
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
body: Replace `int` feature type to `int32` since `int` is not a pyarrow dtype in those metrics: - accuracy - precision - recall - f1 I also added the sklearn citation and used keyword arguments to remove future warnings
closed_by: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/867/timeline
performed_via_github_app: null
state_reason: null
pinned_comment: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/867", "html_url": "https://github.com/huggingface/datasets/pull/867", "diff_url": "https://github.com/huggingface/datasets/pull/867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/867.patch", "merged_at": "2020-11-19T17:35:57Z...
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/866
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/866/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/866/events
html_url: https://github.com/huggingface/datasets/issues/866
id: 745,719,222
node_id: MDU6SXNzdWU3NDU3MTkyMjI=
number: 866
title: OSCAR from Inria group
user: { "login": "jchwenger", "id": 34098722, "node_id": "MDQ6VXNlcjM0MDk4NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jchwenger", "html_url": "https://github.com/jchwenger", "followers_url": "https://api.github.com/users/...
labels: [ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
state: closed
locked: false
assignees: []
milestone: null
comments: [ "PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though", "Grand, thanks for this!" ]
created_at: 2020-11-18T14:40:54Z
updated_at: 2020-11-18T15:01:30Z
closed_at: 2020-11-18T15:01:30Z
assignee: null
author_association: NONE
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: ## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la...
closed_by: { "login": "jchwenger", "id": 34098722, "node_id": "MDQ6VXNlcjM0MDk4NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jchwenger", "html_url": "https://github.com/jchwenger", "followers_url": "https://api.github.com/users/...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/866/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/865
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/865/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/865/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/865/events
html_url: https://github.com/huggingface/datasets/issues/865
id: 745,430,497
node_id: MDU6SXNzdWU3NDU0MzA0OTc=
number: 865
title: Have Trouble importing `datasets`
user: { "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: [ "I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise." ]
created_at: 2020-11-18T08:04:41Z
updated_at: 2020-11-18T08:16:35Z
closed_at: 2020-11-18T08:16:35Z
assignee: null
author_association: CONTRIBUTOR
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in ...
closed_by: { "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/865/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/864
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/864/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/864/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/864/events
html_url: https://github.com/huggingface/datasets/issues/864
id: 745,322,357
node_id: MDU6SXNzdWU3NDUzMjIzNTc=
number: 864
title: Unable to download cnn_dailymail dataset
user: { "login": "rohitashwa1907", "id": 46031058, "node_id": "MDQ6VXNlcjQ2MDMxMDU4", "avatar_url": "https://avatars.githubusercontent.com/u/46031058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohitashwa1907", "html_url": "https://github.com/rohitashwa1907", "followers_url": "https://api.gi...
labels: [ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
state: closed
locked: false
assignees: [ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
milestone: null
comments: [ "Same error here!\r\n", "Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2", "I'm looking at it right now", "I couldn't reproduce unfortunately. I tried\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cnn_dailymai...
created_at: 2020-11-18T04:38:02Z
updated_at: 2020-11-20T05:22:11Z
closed_at: 2020-11-20T05:22:10Z
assignee: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
author_association: NONE
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: { "total": 0, "completed": 0, "percent_completed": 0 }
issue_dependencies_summary: { "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
body: ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` -------------------------------------------------------------...
closed_by: { "login": "rohitashwa1907", "id": 46031058, "node_id": "MDQ6VXNlcjQ2MDMxMDU4", "avatar_url": "https://avatars.githubusercontent.com/u/46031058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohitashwa1907", "html_url": "https://github.com/rohitashwa1907", "followers_url": "https://api.gi...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/864/timeline
performed_via_github_app: null
state_reason: completed
pinned_comment: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/863
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/863/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/863/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/863/events
html_url: https://github.com/huggingface/datasets/pull/863
id: 744,954,534
node_id: MDExOlB1bGxSZXF1ZXN0NTIyNTk0Mjg1
number: 863
title: Add clear_cache parameter in the test command
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignees: []
milestone: null
comments: []
created_at: 2020-11-17T17:52:29Z
updated_at: 2020-11-18T14:44:25Z
closed_at: 2020-11-18T14:44:24Z
assignee: null
author_association: MEMBER
issue_field_values: []
type: null
active_lock_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
body: For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space. I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. It should enable an easier gen...
closed_by: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/863/timeline
performed_via_github_app: null
state_reason: null
pinned_comment: null
draft: 0
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/863", "html_url": "https://github.com/huggingface/datasets/pull/863", "diff_url": "https://github.com/huggingface/datasets/pull/863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/863.patch", "merged_at": "2020-11-18T14:44:24Z...
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/862/comments
https://api.github.com/repos/huggingface/datasets/issues/862/events
https://github.com/huggingface/datasets/pull/862
744,906,131
MDExOlB1bGxSZXF1ZXN0NTIyNTUzMzY1
862
Update head requests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-11-17T16:49:06Z
2020-11-18T14:43:53Z
2020-11-18T14:43:50Z
null
MEMBER
[]
null
null
null
null
Get requests and Head requests didn't have the same parameters.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/862/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/862", "html_url": "https://github.com/huggingface/datasets/pull/862", "diff_url": "https://github.com/huggingface/datasets/pull/862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/862.patch", "merged_at": "2020-11-18T14:43:50Z...
true
https://api.github.com/repos/huggingface/datasets/issues/861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/861/comments
https://api.github.com/repos/huggingface/datasets/issues/861/events
https://github.com/huggingface/datasets/issues/861
744,753,458
MDU6SXNzdWU3NDQ3NTM0NTg=
861
Possible Bug: Small training/dataset file creates gigantic output
{ "login": "NebelAI", "id": 7240417, "node_id": "MDQ6VXNlcjcyNDA0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NebelAI", "html_url": "https://github.com/NebelAI", "followers_url": "https://api.github.com/users/NebelAI/...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6...
closed
false
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is w...
2020-11-17T13:48:59Z
2021-03-30T14:04:04Z
2021-03-22T12:04:55Z
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/861/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/860/comments
https://api.github.com/repos/huggingface/datasets/issues/860/events
https://github.com/huggingface/datasets/issues/860
744,750,691
MDU6SXNzdWU3NDQ3NTA2OTE=
860
wmt16 cs-en does not download
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
[]
null
[ "We now host this file, so downloading should be more robust." ]
2020-11-17T13:45:35Z
2022-10-05T12:27:00Z
2022-10-05T12:26:59Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/hom...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/860/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/859/comments
https://api.github.com/repos/huggingface/datasets/issues/859/events
https://github.com/huggingface/datasets/pull/859
743,917,091
MDExOlB1bGxSZXF1ZXN0NTIxNzI4MDM4
859
Integrate file_lock inside the lib for better logging control
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-11-16T15:13:39Z
2020-11-16T17:06:44Z
2020-11-16T17:06:42Z
null
MEMBER
[]
null
null
null
null
Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors. For example ```python import logging logging.basicConfig(level=logging.INFO) import datasets datasets.set_verbo...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/859/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/859/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/859", "html_url": "https://github.com/huggingface/datasets/pull/859", "diff_url": "https://github.com/huggingface/datasets/pull/859.diff", "patch_url": "https://github.com/huggingface/datasets/pull/859.patch", "merged_at": "2020-11-16T17:06:42Z...
true
https://api.github.com/repos/huggingface/datasets/issues/858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/858/comments
https://api.github.com/repos/huggingface/datasets/issues/858/events
https://github.com/huggingface/datasets/pull/858
743,904,516
MDExOlB1bGxSZXF1ZXN0NTIxNzE3ODQ4
858
Add SemEval-2010 task 8
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/us...
[]
closed
false
[]
null
[ "Added dummy data and encoding to open(). Now everything should be fine, hopefully :)" ]
2020-11-16T14:57:57Z
2020-11-26T17:28:55Z
2020-11-26T17:28:55Z
null
CONTRIBUTOR
[]
null
null
null
null
Hi, I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it. Cheers, Joel
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/858/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/858", "html_url": "https://github.com/huggingface/datasets/pull/858", "diff_url": "https://github.com/huggingface/datasets/pull/858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/858.patch", "merged_at": "2020-11-26T17:28:55Z...
true
https://api.github.com/repos/huggingface/datasets/issues/857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/857/comments
https://api.github.com/repos/huggingface/datasets/issues/857/events
https://github.com/huggingface/datasets/pull/857
743,863,214
MDExOlB1bGxSZXF1ZXN0NTIxNjg0ODIx
857
Use pandas reader in csv
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-11-16T14:05:45Z
2020-11-19T17:35:40Z
2020-11-19T17:35:38Z
null
MEMBER
[]
null
null
null
null
The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ). To fix that I switched to the pandas csv reader. The new reader is compatible with all the pandas parameters to read csv files. Moreover it reads csv by chunk in order to save RAM, while the pyarrow one loads everything in memory. Fix #836...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/857/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/857", "html_url": "https://github.com/huggingface/datasets/pull/857", "diff_url": "https://github.com/huggingface/datasets/pull/857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/857.patch", "merged_at": "2020-11-19T17:35:38Z...
true
https://api.github.com/repos/huggingface/datasets/issues/856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/856/comments
https://api.github.com/repos/huggingface/datasets/issues/856/events
https://github.com/huggingface/datasets/pull/856
743,799,239
MDExOlB1bGxSZXF1ZXN0NTIxNjMzNTYz
856
Add open book corpus
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoj...
[]
closed
false
[]
null
[ "@lhoestq I fixed issues except for the dummy_data zip file. But I think I know why is it happening. So when unzipping dummy_data.zip it gets save in /tmp directory where glob doesn't pick it up. For regular downloads, the archive gets unzipped in ~/.cache/huggingface. Could that be a reason?", "Nice thanks :)\r\...
2020-11-16T12:30:02Z
2024-01-04T13:20:51Z
2020-11-17T15:22:18Z
null
CONTRIBUTOR
[]
null
null
null
null
Adds book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27) @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen to be easily located alphabetically...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/856/reactions", "total_count": 6, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/856/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/856", "html_url": "https://github.com/huggingface/datasets/pull/856", "diff_url": "https://github.com/huggingface/datasets/pull/856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/856.patch", "merged_at": "2020-11-17T15:22:17Z...
true
https://api.github.com/repos/huggingface/datasets/issues/855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/855/comments
https://api.github.com/repos/huggingface/datasets/issues/855/events
https://github.com/huggingface/datasets/pull/855
743,690,839
MDExOlB1bGxSZXF1ZXN0NTIxNTQ2Njkx
855
Fix kor nli csv reader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-11-16T09:53:41Z
2020-11-16T13:59:14Z
2020-11-16T13:59:12Z
null
MEMBER
[]
null
null
null
null
The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason. I fixed that by iterating through the lines directly instead of using a csv reader. I also changed the feature names to match the other NLI datasets (i.e. use "premise"...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/855/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/855/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/855", "html_url": "https://github.com/huggingface/datasets/pull/855", "diff_url": "https://github.com/huggingface/datasets/pull/855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/855.patch", "merged_at": "2020-11-16T13:59:12Z...
true
https://api.github.com/repos/huggingface/datasets/issues/854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/854/comments
https://api.github.com/repos/huggingface/datasets/issues/854/events
https://github.com/huggingface/datasets/issues/854
743,675,376
MDU6SXNzdWU3NDM2NzUzNzY=
854
wmt16 does not download
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
[]
null
[ "Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks ", "It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).\r\nI searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the...
2020-11-16T09:31:51Z
2022-10-05T12:27:42Z
2022-10-05T12:27:42Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/854/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/853/comments
https://api.github.com/repos/huggingface/datasets/issues/853/events
https://github.com/huggingface/datasets/issues/853
743,426,583
MDU6SXNzdWU3NDM0MjY1ODM=
853
concatenate_datasets support axis=0 or 1 ?
{ "login": "renqingcolin", "id": 12437751, "node_id": "MDQ6VXNlcjEyNDM3NzUx", "avatar_url": "https://avatars.githubusercontent.com/u/12437751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/renqingcolin", "html_url": "https://github.com/renqingcolin", "followers_url": "https://api.github.c...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892884, "node_id": "MDU6...
closed
false
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_colum...
2020-11-16T02:46:23Z
2021-04-19T16:07:18Z
2021-04-19T16:07:18Z
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/853/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/852/comments
https://api.github.com/repos/huggingface/datasets/issues/852/events
https://github.com/huggingface/datasets/issues/852
743,396,240
MDU6SXNzdWU3NDMzOTYyNDA=
852
wmt cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[]
2020-11-16T01:04:41Z
2020-11-16T09:31:58Z
2020-11-16T09:31:58Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/852/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/850/comments
https://api.github.com/repos/huggingface/datasets/issues/850/events
https://github.com/huggingface/datasets/pull/850
742,369,419
MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3
850
Create ClassLabel for labelling tasks datasets
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", ...
[]
closed
false
[]
null
[ "@lhoestq Better?" ]
2020-11-13T11:07:22Z
2020-11-16T10:32:05Z
2020-11-16T10:31:58Z
null
CONTRIBUTOR
[]
null
null
null
null
This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking.
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/850/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/850", "html_url": "https://github.com/huggingface/datasets/pull/850", "diff_url": "https://github.com/huggingface/datasets/pull/850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/850.patch", "merged_at": "2020-11-16T10:31:58Z...
true
https://api.github.com/repos/huggingface/datasets/issues/849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/849/comments
https://api.github.com/repos/huggingface/datasets/issues/849/events
https://github.com/huggingface/datasets/issues/849
742,263,333
MDU6SXNzdWU3NDIyNjMzMzM=
849
Load amazon dataset
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.gi...
[]
closed
false
[]
null
[ "Thanks for reporting !\r\nWe plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.\r\n\r\nAlso I think the bullet points formatting has been fixed" ]
2020-11-13T08:34:24Z
2020-11-17T07:22:59Z
2020-11-17T07:22:59Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amaz...
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.gi...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/849/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/848/comments
https://api.github.com/repos/huggingface/datasets/issues/848/events
https://github.com/huggingface/datasets/issues/848
742,240,942
MDU6SXNzdWU3NDIyNDA5NDI=
848
Error when concatenate_datasets
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexua...
[]
closed
false
[]
null
[ "As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n\r\nThe indices mapping correspond to a mapping on top of the data table tha...
2020-11-13T07:56:02Z
2020-11-13T17:40:59Z
2020-11-13T15:55:10Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------...
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexua...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/848/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/847/comments
https://api.github.com/repos/huggingface/datasets/issues/847/events
https://github.com/huggingface/datasets/issues/847
742,179,495
MDU6SXNzdWU3NDIxNzk0OTU=
847
multiprocessing in dataset map "can only test a child process"
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.g...
[]
closed
false
[]
null
[ "It looks like an issue with wandb/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?", "It l...
2020-11-13T06:01:04Z
2022-10-05T12:22:51Z
2022-10-05T12:22:51Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/847/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/847/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/846/comments
https://api.github.com/repos/huggingface/datasets/issues/846/events
https://github.com/huggingface/datasets/issues/846
741,885,174
MDU6SXNzdWU3NDE4ODUxNzQ=
846
Add HoVer multi-hop fact verification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies?", "Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), the...
2020-11-12T19:55:46Z
2020-12-10T21:47:33Z
2020-12-10T21:47:33Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/846/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/845/comments
https://api.github.com/repos/huggingface/datasets/issues/845/events
https://github.com/huggingface/datasets/pull/845
741,841,350
MDExOlB1bGxSZXF1ZXN0NTIwMDg1NDMy
845
amazon description fields as bullets
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
[]
closed
false
[]
null
[]
2020-11-12T18:50:41Z
2020-11-12T18:50:54Z
2020-11-12T18:50:54Z
null
CONTRIBUTOR
[]
null
null
null
null
One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown.
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/845/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/845", "html_url": "https://github.com/huggingface/datasets/pull/845", "diff_url": "https://github.com/huggingface/datasets/pull/845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/845.patch", "merged_at": "2020-11-12T18:50:54Z...
true
https://api.github.com/repos/huggingface/datasets/issues/844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/844/comments
https://api.github.com/repos/huggingface/datasets/issues/844/events
https://github.com/huggingface/datasets/pull/844
741,835,661
MDExOlB1bGxSZXF1ZXN0NTIwMDgwNzM5
844
add newlines to amazon desc
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
[]
closed
false
[]
null
[]
2020-11-12T18:41:20Z
2020-11-12T18:42:25Z
2020-11-12T18:42:21Z
null
CONTRIBUTOR
[]
null
null
null
null
Just a quick formatting fix to hopefully make it render nicer on Viewer
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/844/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/844", "html_url": "https://github.com/huggingface/datasets/pull/844", "diff_url": "https://github.com/huggingface/datasets/pull/844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/844.patch", "merged_at": "2020-11-12T18:42:21Z...
true
https://api.github.com/repos/huggingface/datasets/issues/843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/843/comments
https://api.github.com/repos/huggingface/datasets/issues/843/events
https://github.com/huggingface/datasets/issues/843
741,531,121
MDU6SXNzdWU3NDE1MzExMjE=
843
use_custom_baseline still produces errors for bertscore
{ "login": "penatbater", "id": 37921244, "node_id": "MDQ6VXNlcjM3OTIxMjQ0", "avatar_url": "https://avatars.githubusercontent.com/u/37921244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/penatbater", "html_url": "https://github.com/penatbater", "followers_url": "https://api.github.com/use...
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
[]
null
[ "Thanks for reporting ! That's a bug indeed\r\nIf you want to contribute, feel free to fix this issue and open a PR :)", "This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. ", ...
2020-11-12T11:44:32Z
2024-05-28T16:30:17Z
2021-02-09T14:21:48Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/843/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/842/comments
https://api.github.com/repos/huggingface/datasets/issues/842/events
https://github.com/huggingface/datasets/issues/842
741,208,428
MDU6SXNzdWU3NDEyMDg0Mjg=
842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.githu...
[]
open
false
[]
null
[ "Right now multiprocessing only runs on single node.\r\n\r\nHowever it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about p...
2020-11-12T02:04:38Z
2025-03-26T09:10:22Z
null
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ...
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/842/timeline
null
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/841/comments
https://api.github.com/repos/huggingface/datasets/issues/841/events
https://github.com/huggingface/datasets/issues/841
740,737,448
MDU6SXNzdWU3NDA3Mzc0NDg=
841
Can not reuse datasets already downloaded
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/fo...
[]
closed
false
[]
null
[ "It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'\r\nWhere and how to assign this ```wikipedia.py``` after I manually download it ?", "\r\ndownload the ```wikipedia.py``` at the working directory and go with ```dataset = load_dataset('wikipedia.py', '20200501.en')``` ...
2020-11-11T12:42:15Z
2020-11-11T18:17:16Z
2020-11-11T18:17:16Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but...
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/fo...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/841/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/840/comments
https://api.github.com/repos/huggingface/datasets/issues/840/events
https://github.com/huggingface/datasets/pull/840
740,632,771
MDExOlB1bGxSZXF1ZXN0NTE5MDg2NDUw
840
Update squad_v2.py
{ "login": "Javier-Jimenez99", "id": 38747614, "node_id": "MDQ6VXNlcjM4NzQ3NjE0", "avatar_url": "https://avatars.githubusercontent.com/u/38747614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Javier-Jimenez99", "html_url": "https://github.com/Javier-Jimenez99", "followers_url": "https://...
[]
closed
false
[]
null
[ "With this change all the checks are passed.", "Good" ]
2020-11-11T09:58:41Z
2020-11-11T15:29:34Z
2020-11-11T15:26:35Z
null
CONTRIBUTOR
[]
null
null
null
null
Change lines 100 and 102 to prevent overwriting ```predictions``` variable.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/840/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/840", "html_url": "https://github.com/huggingface/datasets/pull/840", "diff_url": "https://github.com/huggingface/datasets/pull/840.diff", "patch_url": "https://github.com/huggingface/datasets/pull/840.patch", "merged_at": "2020-11-11T15:26:35Z...
true
https://api.github.com/repos/huggingface/datasets/issues/839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/839/comments
https://api.github.com/repos/huggingface/datasets/issues/839/events
https://github.com/huggingface/datasets/issues/839
740,355,270
MDU6SXNzdWU3NDAzNTUyNzA=
839
XSum dataset missing spaces between sentences
{ "login": "loganlebanoff", "id": 10007282, "node_id": "MDQ6VXNlcjEwMDA3Mjgy", "avatar_url": "https://avatars.githubusercontent.com/u/10007282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loganlebanoff", "html_url": "https://github.com/loganlebanoff", "followers_url": "https://api.githu...
[]
open
false
[]
null
[]
2020-11-11T00:34:43Z
2020-11-11T00:34:43Z
null
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set): `The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like ...
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/839/timeline
null
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/838/comments
https://api.github.com/repos/huggingface/datasets/issues/838/events
https://github.com/huggingface/datasets/pull/838
740,328,382
MDExOlB1bGxSZXF1ZXN0NTE4ODM0NTE5
838
CNN/Dailymail Dataset Card
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.gi...
[]
closed
false
[]
null
[]
2020-11-10T23:56:43Z
2020-11-25T21:09:51Z
2020-11-25T21:09:50Z
null
CONTRIBUTOR
[]
null
null
null
null
Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/838/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/838", "html_url": "https://github.com/huggingface/datasets/pull/838", "diff_url": "https://github.com/huggingface/datasets/pull/838.diff", "patch_url": "https://github.com/huggingface/datasets/pull/838.patch", "merged_at": "2020-11-25T21:09:50Z...
true
https://api.github.com/repos/huggingface/datasets/issues/837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/837/comments
https://api.github.com/repos/huggingface/datasets/issues/837/events
https://github.com/huggingface/datasets/pull/837
740,250,215
MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5
837
AlloCiné dataset card
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.gi...
[]
closed
false
[]
null
[]
2020-11-10T21:19:53Z
2020-11-25T21:56:27Z
2020-11-25T21:56:27Z
null
CONTRIBUTOR
[]
null
null
null
null
Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creat...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/837/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/837", "html_url": "https://github.com/huggingface/datasets/pull/837", "diff_url": "https://github.com/huggingface/datasets/pull/837.diff", "patch_url": "https://github.com/huggingface/datasets/pull/837.patch", "merged_at": "2020-11-25T21:56:27Z...
true
https://api.github.com/repos/huggingface/datasets/issues/836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/836/comments
https://api.github.com/repos/huggingface/datasets/issues/836/events
https://github.com/huggingface/datasets/issues/836
740,187,613
MDU6SXNzdWU3NDAxODc2MTM=
836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
{ "login": "randubin", "id": 8919490, "node_id": "MDQ6VXNlcjg5MTk0OTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8919490?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randubin", "html_url": "https://github.com/randubin", "followers_url": "https://api.github.com/users/randu...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
[]
null
[ "Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?", "Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5", "I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nTh...
2020-11-10T19:35:40Z
2021-11-24T16:59:19Z
2020-11-19T17:35:38Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/836/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/835/comments
https://api.github.com/repos/huggingface/datasets/issues/835/events
https://github.com/huggingface/datasets/issues/835
740,102,210
MDU6SXNzdWU3NDAxMDIyMTA=
835
Wikipedia postprocessing
{ "login": "bminixhofer", "id": 13353204, "node_id": "MDQ6VXNlcjEzMzUzMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bminixhofer", "html_url": "https://github.com/bminixhofer", "followers_url": "https://api.github.com/...
[]
closed
false
[]
null
[ "Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool", ...
2020-11-10T17:26:38Z
2020-11-10T18:23:20Z
2020-11-10T17:49:21Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir...
{ "login": "bminixhofer", "id": 13353204, "node_id": "MDQ6VXNlcjEzMzUzMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bminixhofer", "html_url": "https://github.com/bminixhofer", "followers_url": "https://api.github.com/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/835/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/834/comments
https://api.github.com/repos/huggingface/datasets/issues/834/events
https://github.com/huggingface/datasets/issues/834
740,082,890
MDU6SXNzdWU3NDAwODI4OTA=
834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?", "Hi @KMFODA ! A version of WikiLingua is actually already accessible in ...
2020-11-10T17:00:43Z
2021-04-15T12:04:09Z
2021-04-15T12:01:38Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** h...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/834/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/833/comments
https://api.github.com/repos/huggingface/datasets/issues/833/events
https://github.com/huggingface/datasets/issues/833
740,079,692
MDU6SXNzdWU3NDAwNzk2OTI=
833
[GEM] add ASSET text simplification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[]
2020-11-10T16:56:30Z
2020-12-03T13:38:15Z
2020-12-03T13:38:15Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** ASSET - **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf - **Dat...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/833/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/832/comments
https://api.github.com/repos/huggingface/datasets/issues/832/events
https://github.com/huggingface/datasets/issues/832
740,077,228
MDU6SXNzdWU3NDAwNzcyMjg=
832
[GEM] add WikiAuto text simplification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[]
2020-11-10T16:53:23Z
2020-12-03T13:38:08Z
2020-12-03T13:38:08Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** WikiAuto - **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.70...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/832/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/831/comments
https://api.github.com/repos/huggingface/datasets/issues/831/events
https://github.com/huggingface/datasets/issues/831
740,071,697
MDU6SXNzdWU3NDAwNzE2OTc=
831
[GEM] Add WebNLG dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[]
2020-11-10T16:46:48Z
2020-12-03T13:38:01Z
2020-12-03T13:38:01Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** WebNLG - **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian - **Paper:** https://ww...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/831/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/830/comments
https://api.github.com/repos/huggingface/datasets/issues/830/events
https://github.com/huggingface/datasets/issues/830
740,065,376
MDU6SXNzdWU3NDAwNjUzNzY=
830
[GEM] add ToTTo Table-to-text dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "closed via #1098 " ]
2020-11-10T16:38:34Z
2020-12-10T13:06:02Z
2020-12-10T13:06:01Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** ToTTo - **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. - **Paper:** https://arxiv.o...
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://ap...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/830/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/829/comments
https://api.github.com/repos/huggingface/datasets/issues/829/events
https://github.com/huggingface/datasets/issues/829
740,061,699
MDU6SXNzdWU3NDAwNjE2OTk=
829
[GEM] add Schema-Guided Dialogue
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[]
2020-11-10T16:33:44Z
2020-12-03T13:37:50Z
2020-12-03T13:37:50Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** The Schema-Guided Dialogue Dataset - **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 d...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/829/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/828/comments
https://api.github.com/repos/huggingface/datasets/issues/828/events
https://github.com/huggingface/datasets/pull/828
740,008,683
MDExOlB1bGxSZXF1ZXN0NTE4NTcwMjY3
828
Add writer_batch_size attribute to GeneratorBasedBuilder
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-11-10T15:28:19Z
2020-11-10T16:27:36Z
2020-11-10T16:27:36Z
null
MEMBER
[]
null
null
null
null
As specified in #741 one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed the defaults buffer size is 10 000 examples but for multimodal datasets that contain images or videos we may want to reduce that.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/828/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/828", "html_url": "https://github.com/huggingface/datasets/pull/828", "diff_url": "https://github.com/huggingface/datasets/pull/828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/828.patch", "merged_at": "2020-11-10T16:27:35Z...
true
https://api.github.com/repos/huggingface/datasets/issues/827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/827/comments
https://api.github.com/repos/huggingface/datasets/issues/827/events
https://github.com/huggingface/datasets/issues/827
739,983,024
MDU6SXNzdWU3Mzk5ODMwMjQ=
827
[GEM] MultiWOZ dialogue dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hi @yjernite can I help in adding this dataset? \r\n\r\nI am excited about this because this will be my first contribution to the datasets library as well as to hugginface.", "Resolved via https://github.com/huggingface/datasets/pull/979" ]
2020-11-10T14:57:50Z
2022-10-05T12:31:13Z
2022-10-05T12:31:13Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/827/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/826/comments
https://api.github.com/repos/huggingface/datasets/issues/826/events
https://github.com/huggingface/datasets/issues/826
739,976,716
MDU6SXNzdWU3Mzk5NzY3MTY=
826
[GEM] Add E2E dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[]
2020-11-10T14:50:40Z
2020-12-03T13:37:57Z
2020-12-03T13:37:57Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** E2E NLG dataset (for End-to-end natural language generation) - **Description:**a dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, the datasets consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 refer...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/826/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/825/comments
https://api.github.com/repos/huggingface/datasets/issues/825/events
https://github.com/huggingface/datasets/pull/825
739,925,960
MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx
825
Add accuracy, precision, recall and F1 metrics
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", ...
[]
closed
false
[]
null
[]
2020-11-10T13:50:35Z
2020-11-11T19:23:48Z
2020-11-11T19:23:43Z
null
CONTRIBUTOR
[]
null
null
null
null
This PR adds several single metrics, namely: - Accuracy - Precision - Recall - F1 They all uses under the hood the sklearn metrics of the same name. They allow different useful features when training a multilabel/multiclass model: - have a macro/micro/per label/weighted/binary/per sample score - score only t...
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/825/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/825/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/825", "html_url": "https://github.com/huggingface/datasets/pull/825", "diff_url": "https://github.com/huggingface/datasets/pull/825.diff", "patch_url": "https://github.com/huggingface/datasets/pull/825.patch", "merged_at": "2020-11-11T19:23:43Z...
true
https://api.github.com/repos/huggingface/datasets/issues/824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/824/comments
https://api.github.com/repos/huggingface/datasets/issues/824/events
https://github.com/huggingface/datasets/issues/824
739,896,526
MDU6SXNzdWU3Mzk4OTY1MjY=
824
Discussion using datasets in offline mode
{ "login": "mandubian", "id": 77193, "node_id": "MDQ6VXNlcjc3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/77193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mandubian", "html_url": "https://github.com/mandubian", "followers_url": "https://api.github.com/users/mandubian/...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6...
closed
false
[]
null
[ "No comments ?", "I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the ...
2020-11-10T13:10:51Z
2023-10-26T09:26:26Z
2022-02-15T10:32:36Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/824/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/824/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/823/comments
https://api.github.com/repos/huggingface/datasets/issues/823/events
https://github.com/huggingface/datasets/issues/823
739,815,763
MDU6SXNzdWU3Mzk4MTU3NjM=
823
how processing in batch works in datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hi I don’t think this is a request for a dataset like you labeled it.\r\n\r\nI also think this would be better suited for the forum at https://discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.", ...
2020-11-10T11:11:17Z
2020-11-10T13:11:10Z
2020-11-10T13:11:09Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented ...
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/823/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/822/comments
https://api.github.com/repos/huggingface/datasets/issues/822/events
https://github.com/huggingface/datasets/issues/822
739,579,314
MDU6SXNzdWU3Mzk1NzkzMTQ=
822
datasets freezes
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
[]
null
[ "Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text col...
2020-11-10T05:10:19Z
2023-07-20T16:08:14Z
2023-07-20T16:08:13Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_datase...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/822/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/821/comments
https://api.github.com/repos/huggingface/datasets/issues/821/events
https://github.com/huggingface/datasets/issues/821
739,506,859
MDU6SXNzdWU3Mzk1MDY4NTk=
821
`kor_nli` dataset doesn't being loaded properly
{ "login": "sackoh", "id": 30492059, "node_id": "MDQ6VXNlcjMwNDkyMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/30492059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sackoh", "html_url": "https://github.com/sackoh", "followers_url": "https://api.github.com/users/sackoh/fo...
[]
closed
false
[]
null
[]
2020-11-10T02:04:12Z
2020-11-16T13:59:12Z
2020-11-16T13:59:12Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
There are two issues from `kor_nli` dataset 1. csv.DictReader failed to split features by tab - Should not exist `None` value in label feature, but there it is. ```python kor_nli_train['train'].unique('gold_label') # ['neutral', 'entailment', 'contradiction', None] ``` -...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/821/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/820/comments
https://api.github.com/repos/huggingface/datasets/issues/820/events
https://github.com/huggingface/datasets/pull/820
739,387,617
MDExOlB1bGxSZXF1ZXN0NTE4MDYwMjQ0
820
Update quail dataset to v1.3
{ "login": "ngdodd", "id": 4889636, "node_id": "MDQ6VXNlcjQ4ODk2MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngdodd", "html_url": "https://github.com/ngdodd", "followers_url": "https://api.github.com/users/ngdodd/foll...
[]
closed
false
[]
null
[]
2020-11-09T21:49:26Z
2020-11-10T09:06:35Z
2020-11-10T09:06:35Z
null
CONTRIBUTOR
[]
null
null
null
null
Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806).
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/820/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/820", "html_url": "https://github.com/huggingface/datasets/pull/820", "diff_url": "https://github.com/huggingface/datasets/pull/820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/820.patch", "merged_at": "2020-11-10T09:06:35Z...
true
https://api.github.com/repos/huggingface/datasets/issues/819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/819/comments
https://api.github.com/repos/huggingface/datasets/issues/819/events
https://github.com/huggingface/datasets/pull/819
739,250,624
MDExOlB1bGxSZXF1ZXN0NTE3OTQ2MjYy
819
Make save function use deterministic global vars order
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[ "Sorry, asking for help here, but the dill thread stop around 2013. Is it possible to use dill deterministically? I tried to monkeypatch the solution presented here into dill, but I suppose it requires forking their project.", "Hi ! What we did was to subclass `dill`'s Pickler to fix the non-deterministic behavio...
2020-11-09T18:12:03Z
2021-11-30T13:34:09Z
2020-11-11T15:20:51Z
null
MEMBER
[]
null
null
null
null
The `dumps` function need to be deterministic for the caching mechanism. However in #816 I noticed that one of dill's method to recursively check the globals of a function may return the globals in different orders each time it's used. To fix that I sort the globals by key in the `globs` dictionary. I had to add a re...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/819/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/819", "html_url": "https://github.com/huggingface/datasets/pull/819", "diff_url": "https://github.com/huggingface/datasets/pull/819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/819.patch", "merged_at": "2020-11-11T15:20:50Z...
true
https://api.github.com/repos/huggingface/datasets/issues/818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/818/comments
https://api.github.com/repos/huggingface/datasets/issues/818/events
https://github.com/huggingface/datasets/pull/818
739,173,861
MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0
818
Fix type hints pickling in python 3.6
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-11-09T16:27:47Z
2020-11-10T09:07:03Z
2020-11-10T09:07:02Z
null
MEMBER
[]
null
null
null
null
Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6. However cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway. The idea is just to implement the pickling/unpickling of parame...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/818/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/818/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/818", "html_url": "https://github.com/huggingface/datasets/pull/818", "diff_url": "https://github.com/huggingface/datasets/pull/818.diff", "patch_url": "https://github.com/huggingface/datasets/pull/818.patch", "merged_at": "2020-11-10T09:07:01Z...
true
https://api.github.com/repos/huggingface/datasets/issues/817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/817/comments
https://api.github.com/repos/huggingface/datasets/issues/817/events
https://github.com/huggingface/datasets/issues/817
739,145,369
MDU6SXNzdWU3MzkxNDUzNjk=
817
Add MRQA dataset
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/use...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Done! cf #1117 and #1022" ]
2020-11-09T15:52:19Z
2020-12-04T15:44:42Z
2020-12-04T15:44:41Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** MRQA - **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. Th...
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/817/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/816/comments
https://api.github.com/repos/huggingface/datasets/issues/816/events
https://github.com/huggingface/datasets/issues/816
739,102,686
MDU6SXNzdWU3MzkxMDI2ODY=
816
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[ "To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order" ]
2020-11-09T15:01:20Z
2020-11-11T15:20:50Z
2020-11-11T15:20:50Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However, the order of the keys in this dict is not deterministic and can cause caching issues. To fix that, one could register an implementati...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/816/timeline
null
completed
null
null
null
false
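The fix described in this issue (sorting the globals by key before dumping) can be sketched with the stdlib alone. `deterministic_dumps` and the toy globals dicts below are illustrative, not the actual `datasets` implementation, which subclasses dill's Pickler:

```python
import pickle

def deterministic_dumps(globs: dict) -> bytes:
    # Sort the globals by key before serializing, so the resulting
    # bytes no longer depend on the dict's (non-deterministic) key order.
    return pickle.dumps(dict(sorted(globs.items())))

# The same globals collected in two different orders now serialize identically,
# which is what a fingerprint-based cache needs.
first = deterministic_dumps({"len": len, "a": []})
second = deterministic_dumps({"a": [], "len": len})
assert first == second
```

With unsorted dicts, the two `pickle.dumps` calls could differ byte-for-byte, producing different cache fingerprints for the same function.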
https://api.github.com/repos/huggingface/datasets/issues/815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/815/comments
https://api.github.com/repos/huggingface/datasets/issues/815/events
https://github.com/huggingface/datasets/issues/815
738,842,092
MDU6SXNzdWU3Mzg4NDIwOTI=
815
Is dataset iterative or not?
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate the...
2020-11-09T09:11:48Z
2020-11-10T10:50:03Z
2020-11-10T10:50:03Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I want to use your library for large-scale training, but I am not sure whether it is implemented as iterative datasets or not. Could you provide me with an example of how I can use datasets as iterative datasets? Thanks
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/815/timeline
null
completed
null
null
null
false
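For context on the question above: as the maintainer's reply notes, `Dataset` implements `__iter__`, so `for example in dataset` just works. A map-style object only needs `__len__`/`__getitem__` (plus optionally `__iter__`) to support that pattern. `ToyDataset` below is a hypothetical stand-in, not the real `datasets.Dataset`:

```python
class ToyDataset:
    """Minimal stand-in (not the real `datasets.Dataset`) showing that
    __len__/__getitem__ plus __iter__ is all iteration requires."""

    def __init__(self, rows):
        self._rows = rows

    def __len__(self):
        return len(self._rows)

    def __getitem__(self, i):
        return self._rows[i]

    def __iter__(self):
        # Yield rows one at a time, index by index.
        for i in range(len(self)):
            yield self[i]

ds = ToyDataset([{"text": "a"}, {"text": "b"}])
texts = [example["text"] for example in ds]
assert texts == ["a", "b"]
```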
https://api.github.com/repos/huggingface/datasets/issues/814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/814/comments
https://api.github.com/repos/huggingface/datasets/issues/814/events
https://github.com/huggingface/datasets/issues/814
738,500,443
MDU6SXNzdWU3Mzg1MDA0NDM=
814
Joining multiple datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks " ]
2020-11-08T16:19:30Z
2020-11-08T19:38:48Z
2020-11-08T19:38:48Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I have multiple iterative datasets from your library with different sizes, and I want to join them so that each dataset is sampled equally (smaller datasets more often, larger ones less often). Could you tell me how to implement this in PyTorch? Thanks
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/814/timeline
null
completed
null
null
null
false
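The balanced-sampling scheme asked about above (each dataset drawn with equal probability regardless of its size) can be sketched with the stdlib. `sample_equally` is an illustrative helper; the solution the reporter found in the PyTorch forum thread uses PyTorch samplers instead:

```python
import random

def sample_equally(datasets, n, seed=0):
    # Pick a dataset uniformly at random at each step, then a random
    # example from it: small datasets get revisited more often,
    # large ones proportionally less, so all contribute equally.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        ds = rng.choice(datasets)
        samples.append(ds[rng.randrange(len(ds))])
    return samples

small = ["s0", "s1"]
large = [f"l{i}" for i in range(100)]
batch = sample_equally([small, large], n=1000)
# Roughly half the draws come from each dataset despite the 50x size gap.
```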
https://api.github.com/repos/huggingface/datasets/issues/813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/813/comments
https://api.github.com/repos/huggingface/datasets/issues/813/events
https://github.com/huggingface/datasets/issues/813
738,489,852
MDU6SXNzdWU3Mzg0ODk4NTI=
813
How to implement DistributedSampler with datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ", "Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to g...
2020-11-08T15:27:11Z
2022-10-05T12:54:23Z
2022-10-05T12:54:23Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them. I need a DistributedSampler to be able to train the models on TPUs, distributing the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using d...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/813/timeline
null
completed
null
null
null
false
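The sharding the reporter asks for (one chunk per host/core) can be sketched without PyTorch. `shard_indices` mirrors the strided scheme of `torch.utils.data.DistributedSampler` (minus shuffling and padding) and is an illustrative helper, not part of `datasets`:

```python
def shard_indices(num_examples: int, num_replicas: int, rank: int) -> list:
    # Strided sharding: replica `rank` takes every `num_replicas`-th
    # example, so the shards are disjoint and together cover the dataset.
    return list(range(rank, num_examples, num_replicas))

# 10 examples across 4 cores: shard sizes differ by at most one.
shards = [shard_indices(10, 4, r) for r in range(4)]
```

Each worker would then select its shard's examples from the dataset (e.g. via `dataset.select(indices)` in `datasets`, per the library docs at the time).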
https://api.github.com/repos/huggingface/datasets/issues/812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/812/comments
https://api.github.com/repos/huggingface/datasets/issues/812/events
https://github.com/huggingface/datasets/issues/812
738,340,217
MDU6SXNzdWU3MzgzNDAyMTc=
812
Too much logging
{ "login": "dspoka", "id": 6183050, "node_id": "MDQ6VXNlcjYxODMwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dspoka", "html_url": "https://github.com/dspoka", "followers_url": "https://api.github.com/users/dspoka/foll...
[]
closed
false
[]
null
[ "Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that", "+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these mess...
2020-11-07T23:56:30Z
2021-01-26T14:31:34Z
2020-11-16T17:06:42Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/812/timeline
null
completed
null
null
null
false
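Until the fix for the issue above landed, the noisy lock messages could be silenced from user code with the stdlib `logging` module. This targets the third-party `filelock` logger directly, independently of `datasets`' own verbosity helpers:

```python
import logging

# The INFO "Lock ... acquired/released" messages come from the "filelock"
# logger; raising its level hides them without touching other loggers.
logging.getLogger("filelock").setLevel(logging.WARNING)

assert not logging.getLogger("filelock").isEnabledFor(logging.INFO)
```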
https://api.github.com/repos/huggingface/datasets/issues/811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/811/comments
https://api.github.com/repos/huggingface/datasets/issues/811/events
https://github.com/huggingface/datasets/issues/811
738,280,132
MDU6SXNzdWU3MzgyODAxMzI=
811
nlp viewer error
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/fo...
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
[]
null
[ "and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n![image](https://user-images.githubusercontent.com/30210529/98557329-5c182800-22a4-11eb-9b01-5b910fb8fcd4.png)\r\n", "Is this the problem of my local computer or ??", "Related to:\r\n- #673" ]
2020-11-07T17:08:58Z
2022-02-15T10:51:44Z
2022-02-14T15:24:20Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hello, when I select amazon_us_reviews in nlp viewer, it shows error. https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews ![image](https://user-images.githubusercontent.com/30210529/98447334-4aa81200-2124-11eb-9dca-82c3ab34ccc2.png)
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/811/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/810/comments
https://api.github.com/repos/huggingface/datasets/issues/810/events
https://github.com/huggingface/datasets/pull/810
737,878,370
MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3
810
Fix seqeval metric
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
[]
closed
false
[]
null
[]
2020-11-06T16:11:43Z
2020-11-09T14:04:29Z
2020-11-09T14:04:28Z
null
CONTRIBUTOR
[]
null
null
null
null
The current seqeval metric returns the following error when computed: ``` ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix) 102 scores = {} 103 for type_...
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/810/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/810", "html_url": "https://github.com/huggingface/datasets/pull/810", "diff_url": "https://github.com/huggingface/datasets/pull/810.diff", "patch_url": "https://github.com/huggingface/datasets/pull/810.patch", "merged_at": "2020-11-09T14:04:27Z...
true
https://api.github.com/repos/huggingface/datasets/issues/809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/809/comments
https://api.github.com/repos/huggingface/datasets/issues/809/events
https://github.com/huggingface/datasets/issues/809
737,832,701
MDU6SXNzdWU3Mzc4MzI3MDE=
809
Add Google Taskmaster dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
[]
null
[ "Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?", "You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/h...
2020-11-06T15:10:41Z
2021-04-20T13:09:26Z
2021-04-20T13:09:26Z
null
MEMBER
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation...
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/809/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/808/comments
https://api.github.com/repos/huggingface/datasets/issues/808/events
https://github.com/huggingface/datasets/pull/808
737,638,942
MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0
808
dataset(dgs): initial dataset loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
[]
closed
false
[]
null
[ "Hi @AmitMY, \r\n\r\nWere you able to figure this out?", "I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as ...
2020-11-06T10:14:43Z
2021-03-23T06:18:55Z
2021-03-23T06:18:55Z
null
CONTRIBUTOR
[]
null
null
null
null
When trying to create dummy data I get: > Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. ...
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/808/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/808", "html_url": "https://github.com/huggingface/datasets/pull/808", "diff_url": "https://github.com/huggingface/datasets/pull/808.diff", "patch_url": "https://github.com/huggingface/datasets/pull/808.patch", "merged_at": null }
true

https://api.github.com/repos/huggingface/datasets/issues/807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/807/comments
https://api.github.com/repos/huggingface/datasets/issues/807/events
https://github.com/huggingface/datasets/issues/807
737,509,954
MDU6SXNzdWU3Mzc1MDk5NTQ=
807
load_dataset for LOCAL CSV files report CONNECTION ERROR
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexua...
[]
closed
false
[]
null
[ "Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?", "> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does y...
2020-11-06T06:33:04Z
2021-01-11T01:30:27Z
2020-11-14T05:30:34Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexua...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/807/timeline
null
completed
null
null
null
false
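The header-less 300x4 `test.csv` from the report above can be reproduced with the stdlib `csv` module instead of pandas. This is a sketch of the file being loaded, not of the `load_dataset` fix itself:

```python
import csv
import io

# Same contents as the issue body's
# pd.DataFrame(np.arange(1200).reshape(300, 4)).to_csv('test.csv', header=False, index=False)
buf = io.StringIO()
writer = csv.writer(buf)
for row in range(300):
    writer.writerow(range(row * 4, (row + 1) * 4))

lines = buf.getvalue().splitlines()
assert len(lines) == 300
```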
https://api.github.com/repos/huggingface/datasets/issues/806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/806/comments
https://api.github.com/repos/huggingface/datasets/issues/806/events
https://github.com/huggingface/datasets/issues/806
737,215,430
MDU6SXNzdWU3MzcyMTU0MzA=
806
Quail dataset urls are out of date
{ "login": "ngdodd", "id": 4889636, "node_id": "MDQ6VXNlcjQ4ODk2MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngdodd", "html_url": "https://github.com/ngdodd", "followers_url": "https://api.github.com/users/ngdodd/foll...
[]
closed
false
[]
null
[ "Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ", "Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata ...
2020-11-05T19:40:19Z
2020-11-10T14:02:51Z
2020-11-10T14:02:51Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.co...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/806/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/805/comments
https://api.github.com/repos/huggingface/datasets/issues/805/events
https://github.com/huggingface/datasets/issues/805
737,019,360
MDU6SXNzdWU3MzcwMTkzNjA=
805
On loading a metric from datasets, I get the following error
{ "login": "laibamehnaz", "id": 36405283, "node_id": "MDQ6VXNlcjM2NDA1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laibamehnaz", "html_url": "https://github.com/laibamehnaz", "followers_url": "https://api.github.com/...
[]
closed
false
[]
null
[ "Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```" ]
2020-11-05T15:14:38Z
2022-02-14T15:32:59Z
2022-02-14T15:32:59Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
`from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any help will be appreciated. Thank you.
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/805/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/804/comments
https://api.github.com/repos/huggingface/datasets/issues/804/events
https://github.com/huggingface/datasets/issues/804
736,858,507
MDU6SXNzdWU3MzY4NTg1MDc=
804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/use...
[]
closed
false
[]
null
[ "cc @yjernite is this expected ?", "Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface...
2020-11-05T11:38:01Z
2020-11-09T14:14:59Z
2020-11-09T14:14:58Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tas...
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/804/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/803/comments
https://api.github.com/repos/huggingface/datasets/issues/803/events
https://github.com/huggingface/datasets/pull/803
736,818,917
MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2
803
fix: typos in tutorial to map KILT and TriviaQA
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/use...
[]
closed
false
[]
null
[]
2020-11-05T10:42:00Z
2020-11-10T09:08:07Z
2020-11-10T09:08:07Z
null
CONTRIBUTOR
[]
null
null
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/803/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/803", "html_url": "https://github.com/huggingface/datasets/pull/803", "diff_url": "https://github.com/huggingface/datasets/pull/803.diff", "patch_url": "https://github.com/huggingface/datasets/pull/803.patch", "merged_at": "2020-11-10T09:08:07Z...
true
https://api.github.com/repos/huggingface/datasets/issues/802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/802/comments
https://api.github.com/repos/huggingface/datasets/issues/802/events
https://github.com/huggingface/datasets/pull/802
736,296,343
MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0
802
Add XGlue
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
[]
null
[ "Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc." ]
2020-11-04T17:29:54Z
2022-04-28T08:15:36Z
2020-12-01T15:58:27Z
null
CONTRIBUTOR
[]
null
null
null
null
Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for ```python load_dataset("xglue", "ner") # wo...
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/802/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/802", "html_url": "https://github.com/huggingface/datasets/pull/802", "diff_url": "https://github.com/huggingface/datasets/pull/802.diff", "patch_url": "https://github.com/huggingface/datasets/pull/802.patch", "merged_at": "2020-12-01T15:58:27Z...
true
https://api.github.com/repos/huggingface/datasets/issues/801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/801/comments
https://api.github.com/repos/huggingface/datasets/issues/801/events
https://github.com/huggingface/datasets/issues/801
735,790,876
MDU6SXNzdWU3MzU3OTA4NzY=
801
How to join two datasets?
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.githu...
[]
closed
false
[]
null
[ "Hi this is also my question. thanks ", "Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n", "Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining dataset...
2020-11-04T03:53:11Z
2020-12-23T14:02:58Z
2020-12-23T14:02:58Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/801/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/800/comments
https://api.github.com/repos/huggingface/datasets/issues/800/events
https://github.com/huggingface/datasets/pull/800
735,772,775
MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3
800
Update loading_metrics.rst
{ "login": "ayushidalmia", "id": 5400513, "node_id": "MDQ6VXNlcjU0MDA1MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/5400513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushidalmia", "html_url": "https://github.com/ayushidalmia", "followers_url": "https://api.github.com...
[]
closed
false
[]
null
[]
2020-11-04T02:57:11Z
2020-11-11T15:28:32Z
2020-11-11T15:28:32Z
null
CONTRIBUTOR
[]
null
null
null
null
Minor bug
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/800/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/800", "html_url": "https://github.com/huggingface/datasets/pull/800", "diff_url": "https://github.com/huggingface/datasets/pull/800.diff", "patch_url": "https://github.com/huggingface/datasets/pull/800.patch", "merged_at": "2020-11-11T15:28:32Z...
true
https://api.github.com/repos/huggingface/datasets/issues/799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/799/comments
https://api.github.com/repos/huggingface/datasets/issues/799/events
https://github.com/huggingface/datasets/pull/799
735,551,165
MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx
799
switch amazon reviews class label order
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
[]
closed
false
[]
null
[]
2020-11-03T18:38:58Z
2020-11-03T18:44:14Z
2020-11-03T18:44:10Z
null
CONTRIBUTOR
[]
null
null
null
null
Switches the label order to be more intuitive for amazon reviews, #791.
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/799/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/799", "html_url": "https://github.com/huggingface/datasets/pull/799", "diff_url": "https://github.com/huggingface/datasets/pull/799.diff", "patch_url": "https://github.com/huggingface/datasets/pull/799.patch", "merged_at": "2020-11-03T18:44:10Z...
true
https://api.github.com/repos/huggingface/datasets/issues/798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/798/comments
https://api.github.com/repos/huggingface/datasets/issues/798/events
https://github.com/huggingface/datasets/issues/798
735,518,805
MDU6SXNzdWU3MzU1MTg4MDU=
798
Cannot load TREC dataset: ConnectionError
{ "login": "kaletap", "id": 25740957, "node_id": "MDQ6VXNlcjI1NzQwOTU3", "avatar_url": "https://avatars.githubusercontent.com/u/25740957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaletap", "html_url": "https://github.com/kaletap", "followers_url": "https://api.github.com/users/kaleta...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
[]
null
[ "Hi ! Indeed there's an issue with those links.\r\nWe should probably use the target urls of the redirections instead", "Hi, the same issue here, could you tell me how to download it through datasets? thanks ", "Same issue. ", "Actually it's already fixed on the master branch since #740 \r\nI'll do the 1.1.3 ...
2020-11-03T17:45:22Z
2022-02-14T15:34:22Z
2022-02-14T15:34:22Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/798/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/798/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/797/comments
https://api.github.com/repos/huggingface/datasets/issues/797/events
https://github.com/huggingface/datasets/issues/797
735,420,332
MDU6SXNzdWU3MzU0MjAzMzI=
797
Token classification labels are strings and we don't have the list of labels
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067401494, "node_id": "MDU6...
closed
false
[]
null
[ "Indeed. Pinging @stefan-it here if he want to give an expert opinion :)", "Related is https://github.com/huggingface/datasets/pull/636", "Should definitely be a ClassLabel 👍 ", "Already done." ]
2020-11-03T15:33:30Z
2022-02-14T15:41:54Z
2022-02-14T15:41:53Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some types that gives easy acces...
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/797/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/795/comments
https://api.github.com/repos/huggingface/datasets/issues/795/events
https://github.com/huggingface/datasets/issues/795
735,198,265
MDU6SXNzdWU3MzUxOTgyNjU=
795
Descriptions of raw and processed versions of wikitext are inverted
{ "login": "fraboniface", "id": 16835358, "node_id": "MDQ6VXNlcjE2ODM1MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/16835358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fraboniface", "html_url": "https://github.com/fraboniface", "followers_url": "https://api.github.com/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
[]
null
[ "Yes indeed ! Thanks for reporting", "Fixed by:\r\n- #3241" ]
2020-11-03T10:24:51Z
2022-02-14T15:46:21Z
2022-02-14T15:46:21Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselv...
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/795/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/794/comments
https://api.github.com/repos/huggingface/datasets/issues/794/events
https://github.com/huggingface/datasets/issues/794
735,158,725
MDU6SXNzdWU3MzUxNTg3MjU=
794
self.options cannot be converted to a Python object for pickling
{ "login": "hzqjyyx", "id": 9635713, "node_id": "MDQ6VXNlcjk2MzU3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/9635713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hzqjyyx", "html_url": "https://github.com/hzqjyyx", "followers_url": "https://api.github.com/users/hzqjyyx/...
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
[]
null
[ "Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon" ]
2020-11-03T09:27:34Z
2020-11-19T17:35:38Z
2020-11-19T17:35:38Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, Currently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object. Here is a code snippet ```python from datasets import load_dataset from pyarrow.csv import ReadOptions load_dataset("csv", data_files=["out.csv"], read_options=ReadOpt...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/794/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/793/comments
https://api.github.com/repos/huggingface/datasets/issues/793/events
https://github.com/huggingface/datasets/pull/793
735,105,907
MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5
793
[Datasets] fix discofuse links
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
[]
null
[]
2020-11-03T08:03:45Z
2020-11-03T08:16:41Z
2020-11-03T08:16:40Z
null
CONTRIBUTOR
[]
null
null
null
null
The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558. The old links are broken I changed the links and created the new dataset_infos.json. Pinging @thomwolf @lhoestq for notification.
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/793/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/793", "html_url": "https://github.com/huggingface/datasets/pull/793", "diff_url": "https://github.com/huggingface/datasets/pull/793.diff", "patch_url": "https://github.com/huggingface/datasets/pull/793.patch", "merged_at": "2020-11-03T08:16:40Z...
true
https://api.github.com/repos/huggingface/datasets/issues/792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/792/comments
https://api.github.com/repos/huggingface/datasets/issues/792/events
https://github.com/huggingface/datasets/issues/792
734,693,652
MDU6SXNzdWU3MzQ2OTM2NTI=
792
KILT dataset: empty string in triviaqa input field
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/use...
[]
closed
false
[]
null
[ "Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))" ]
2020-11-02T17:33:54Z
2020-11-05T10:34:59Z
2020-11-05T10:34:59Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
# What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/Pa...
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/792/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/791/comments
https://api.github.com/repos/huggingface/datasets/issues/791/events
https://github.com/huggingface/datasets/pull/791
734,656,518
MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5
791
add amazon reviews
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
[]
closed
false
[]
null
[ "@patrickvonplaten Yeah this is adapted from tfds so a lot is just how they wrote the code. Addressed your comments and also simplified the weird `AmazonUSReviewsConfig` definition. Will merge once tests pass.", "Thanks for checking this one :) \r\nLooks good to me \r\n\r\nJust one question : is there a particula...
2020-11-02T16:42:57Z
2020-11-03T20:15:06Z
2020-11-03T16:43:57Z
null
CONTRIBUTOR
[]
null
null
null
null
Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/791/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/791/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/791", "html_url": "https://github.com/huggingface/datasets/pull/791", "diff_url": "https://github.com/huggingface/datasets/pull/791.diff", "patch_url": "https://github.com/huggingface/datasets/pull/791.patch", "merged_at": "2020-11-03T16:43:57Z...
true
https://api.github.com/repos/huggingface/datasets/issues/790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/790/comments
https://api.github.com/repos/huggingface/datasets/issues/790/events
https://github.com/huggingface/datasets/issues/790
734,470,197
MDU6SXNzdWU3MzQ0NzAxOTc=
790
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
{ "login": "shawwn", "id": 59632, "node_id": "MDQ6VXNlcjU5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shawwn", "html_url": "https://github.com/shawwn", "followers_url": "https://api.github.com/users/shawwn/followers", ...
[]
closed
false
[]
null
[ "I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now", "Closing this one.\r\nFeel free to re-open if you still have issues" ]
2020-11-02T12:36:35Z
2020-11-10T14:05:02Z
2020-11-10T14:05:02Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e "....
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/790/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/789/comments
https://api.github.com/repos/huggingface/datasets/issues/789/events
https://github.com/huggingface/datasets/pull/789
734,237,839
MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0
789
dataset(ncslgr): add initial loading script
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
[]
closed
false
[]
null
[ "Hi @AmitMY, sorry for leaving you hanging for a minute :) \r\n\r\nWe've developed a new pipeline for adding datasets with a few extra steps, including adding a dataset card. You can find the full process [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)\r\n\r\nWould you be up for addin...
2020-11-02T06:50:10Z
2020-12-01T13:41:37Z
2020-12-01T13:41:36Z
null
CONTRIBUTOR
[]
null
null
null
null
Its a small dataset, but its heavily annotated https://www.bu.edu/asllrp/ncslgr.html ![image](https://user-images.githubusercontent.com/5757359/97838609-3c539380-1ce9-11eb-885b-a15d4c91ea49.png)
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/789/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/789", "html_url": "https://github.com/huggingface/datasets/pull/789", "diff_url": "https://github.com/huggingface/datasets/pull/789.diff", "patch_url": "https://github.com/huggingface/datasets/pull/789.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/788/comments
https://api.github.com/repos/huggingface/datasets/issues/788/events
https://github.com/huggingface/datasets/issues/788
734,136,124
MDU6SXNzdWU3MzQxMzYxMjQ=
788
failed to reuse cache
{ "login": "WangHexie", "id": 31768052, "node_id": "MDQ6VXNlcjMxNzY4MDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangHexie", "html_url": "https://github.com/WangHexie", "followers_url": "https://api.github.com/users/...
[]
closed
false
[]
null
[]
2020-11-02T02:42:36Z
2020-11-02T12:26:15Z
2020-11-02T12:26:15Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
I wrapped `load_dataset` in a class method and cached the data in a directory. But when I import the class and use the method, the data still has to be downloaded again. The information (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown si...
{ "login": "WangHexie", "id": 31768052, "node_id": "MDQ6VXNlcjMxNzY4MDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangHexie", "html_url": "https://github.com/WangHexie", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/788/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/787/comments
https://api.github.com/repos/huggingface/datasets/issues/787/events
https://github.com/huggingface/datasets/pull/787
734,070,162
MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz
787
Adding nli_tr dataset
{ "login": "e-budur", "id": 2246791, "node_id": "MDQ6VXNlcjIyNDY3OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-budur", "html_url": "https://github.com/e-budur", "followers_url": "https://api.github.com/users/e-budur/...
[]
closed
false
[]
null
[ "Thank you @lhoestq for the time you take to review our pull request. We appreciate your help.\r\n\r\nWe've made the changes you described. Hope that it is ready for being merged. Please let me know if you have any additional requests for revisions. " ]
2020-11-01T21:49:44Z
2020-11-12T19:06:02Z
2020-11-12T19:06:02Z
null
CONTRIBUTOR
[]
null
null
null
null
Hello, In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented on a full paper at EMNLP 2020 this month. [[arXiv link] ](https://arxiv.org/pdf/2004.14963.pdf) The dataset is the neural machine transl...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/787/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/787", "html_url": "https://github.com/huggingface/datasets/pull/787", "diff_url": "https://github.com/huggingface/datasets/pull/787.diff", "patch_url": "https://github.com/huggingface/datasets/pull/787.patch", "merged_at": "2020-11-12T19:06:02Z...
true
https://api.github.com/repos/huggingface/datasets/issues/786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/786/comments
https://api.github.com/repos/huggingface/datasets/issues/786/events
https://github.com/huggingface/datasets/issues/786
733,761,717
MDU6SXNzdWU3MzM3NjE3MTc=
786
feat(dataset): multiprocessing _generate_examples
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
[]
closed
false
[]
null
[ "I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik", "`_generate_examples` can n...
2020-10-31T16:52:16Z
2023-01-16T10:59:13Z
2023-01-16T10:59:13Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Forking this out of #741; this issue is only regarding multiprocessing. I'd love it if there were a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case...
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/786/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/785/comments
https://api.github.com/repos/huggingface/datasets/issues/785/events
https://github.com/huggingface/datasets/pull/785
733,719,419
MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1
785
feat(aslg_pc12): add dev and test data splits
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
[]
closed
false
[]
null
[ "Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.g...
2020-10-31T13:25:38Z
2020-11-10T15:29:30Z
2020-11-10T15:29:30Z
null
CONTRIBUTOR
[]
null
null
null
null
For reproducibility's sake, it's best if there are defined dev and test splits. The original paper author did not define splits, neither for the entire dataset nor for the sample loaded via this library, so I decided to define: - 5/7th for train - 1/7th for dev - 1/7th for test
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/foll...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/785/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/785", "html_url": "https://github.com/huggingface/datasets/pull/785", "diff_url": "https://github.com/huggingface/datasets/pull/785.diff", "patch_url": "https://github.com/huggingface/datasets/pull/785.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/784/comments
https://api.github.com/repos/huggingface/datasets/issues/784/events
https://github.com/huggingface/datasets/issues/784
733,700,463
MDU6SXNzdWU3MzM3MDA0NjM=
784
Issue with downloading Wikipedia data for low resource language
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https:/...
[]
closed
false
[]
null
[ "Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ?", "@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n...
2020-10-31T11:40:00Z
2022-02-09T17:50:16Z
2020-11-25T15:42:13Z
null
NONE
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these tw...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/784/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/784/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/783/comments
https://api.github.com/repos/huggingface/datasets/issues/783/events
https://github.com/huggingface/datasets/pull/783
733,536,254
MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz
783
updated links to v1.3 of quail, fixed the description
{ "login": "annargrs", "id": 1450322, "node_id": "MDQ6VXNlcjE0NTAzMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1450322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/annargrs", "html_url": "https://github.com/annargrs", "followers_url": "https://api.github.com/users/annar...
[]
closed
false
[]
null
[ "we're using quail 1.3 now thanks.\r\nclosing this one" ]
2020-10-30T21:47:33Z
2020-11-29T23:05:19Z
2020-11-29T23:05:18Z
null
NONE
[]
null
null
null
null
updated links to v1.3 of quail, fixed the description
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/783/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/783", "html_url": "https://github.com/huggingface/datasets/pull/783", "diff_url": "https://github.com/huggingface/datasets/pull/783.diff", "patch_url": "https://github.com/huggingface/datasets/pull/783.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/782/comments
https://api.github.com/repos/huggingface/datasets/issues/782/events
https://github.com/huggingface/datasets/pull/782
733,316,463
MDExOlB1bGxSZXF1ZXN0NTEzMTE2MTM0
782
Fix metric deletion when attributes are missing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-10-30T16:16:10Z
2020-10-30T16:47:53Z
2020-10-30T16:47:52Z
null
MEMBER
[]
null
null
null
null
When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted. I just added `if hasattr(...)` to make sure it doesn't crash
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/782/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/782", "html_url": "https://github.com/huggingface/datasets/pull/782", "diff_url": "https://github.com/huggingface/datasets/pull/782.diff", "patch_url": "https://github.com/huggingface/datasets/pull/782.patch", "merged_at": "2020-10-30T16:47:52Z...
true
https://api.github.com/repos/huggingface/datasets/issues/781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/781/comments
https://api.github.com/repos/huggingface/datasets/issues/781/events
https://github.com/huggingface/datasets/pull/781
733,168,609
MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw
781
Add XNLI train set
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[ "Hi! Thanks for adding the translated MNLI! Do you know what translations system / model you used when you created the datasets in the other languages?", "According to the [paper](https://arxiv.org/pdf/1809.05053.pdf) it's the result of the work of professional translators ;)", "Thanks for getting back to me.\n...
2020-10-30T13:21:53Z
2022-06-09T23:26:46Z
2020-11-09T18:22:49Z
null
MEMBER
[]
null
null
null
null
I added the train set that was built using the translated MNLI. Now you can load the dataset specifying one language: ```python from datasets import load_dataset xnli_en = load_dataset("xnli", "en") print(xnli_en["train"][0]) # {'hypothesis': 'Product and geography are what make cream skimming work .', 'label':...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/781/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/781", "html_url": "https://github.com/huggingface/datasets/pull/781", "diff_url": "https://github.com/huggingface/datasets/pull/781.diff", "patch_url": "https://github.com/huggingface/datasets/pull/781.patch", "merged_at": "2020-11-09T18:22:49Z...
true
https://api.github.com/repos/huggingface/datasets/issues/780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/780/comments
https://api.github.com/repos/huggingface/datasets/issues/780/events
https://github.com/huggingface/datasets/pull/780
732,738,647
MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0
780
Add ASNQ dataset
{ "login": "mkserge", "id": 2992022, "node_id": "MDQ6VXNlcjI5OTIwMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mkserge", "html_url": "https://github.com/mkserge", "followers_url": "https://api.github.com/users/mkserge/...
[]
closed
false
[]
null
[ "Very nice !\r\nWhat do the `sentence1` and `sentence2` correspond to exactly ?\r\nAlso maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)", "> What do the `sentence1` and `sentence2` correspon...
2020-10-29T23:31:56Z
2020-11-10T09:26:23Z
2020-11-10T09:26:23Z
null
CONTRIBUTOR
[]
null
null
null
null
This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118 The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Mosch...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/780/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/780", "html_url": "https://github.com/huggingface/datasets/pull/780", "diff_url": "https://github.com/huggingface/datasets/pull/780.diff", "patch_url": "https://github.com/huggingface/datasets/pull/780.patch", "merged_at": "2020-11-10T09:26:23Z...
true
https://api.github.com/repos/huggingface/datasets/issues/779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/779/comments
https://api.github.com/repos/huggingface/datasets/issues/779/events
https://github.com/huggingface/datasets/pull/779
732,514,887
MDExOlB1bGxSZXF1ZXN0NTEyNDQzMjY0
779
Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales
{ "login": "rathoreanirudh", "id": 11327413, "node_id": "MDQ6VXNlcjExMzI3NDEz", "avatar_url": "https://avatars.githubusercontent.com/u/11327413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rathoreanirudh", "html_url": "https://github.com/rathoreanirudh", "followers_url": "https://api.gi...
[ { "id": 4190228726, "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate", "name": "transfer-to-evaluate", "color": "E3165C", "default": false, "description": "" } ]
closed
false
[]
null
[ "Hi ! This looks interesting, thanks for adding it :) \r\n\r\nFor metrics there should only be two features fields: references and predictions.\r\nBoth of them can be defined as you want using nested structures if you need to.\r\nAlso I'm not sure what goes into references and what goes into predictions, could you ...
2020-10-29T17:31:14Z
2023-07-11T09:36:30Z
2023-07-11T09:36:30Z
null
NONE
[]
null
null
null
null
This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020).
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/779/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/779", "html_url": "https://github.com/huggingface/datasets/pull/779", "diff_url": "https://github.com/huggingface/datasets/pull/779.diff", "patch_url": "https://github.com/huggingface/datasets/pull/779.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/778/comments
https://api.github.com/repos/huggingface/datasets/issues/778/events
https://github.com/huggingface/datasets/issues/778
732,449,652
MDU6SXNzdWU3MzI0NDk2NTI=
778
Unexpected behavior when loading cached csv file?
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/...
[]
closed
false
[]
null
[ "Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)", "Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! " ]
2020-10-29T16:06:10Z
2020-10-29T21:21:27Z
2020-10-29T21:21:27Z
null
CONTRIBUTOR
[]
null
null
{ "total": 0, "completed": 0, "percent_completed": 0 }
{ "blocked_by": 0, "total_blocked_by": 0, "blocking": 0, "total_blocking": 0 }
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be n...
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/778/timeline
null
completed
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/777/comments
https://api.github.com/repos/huggingface/datasets/issues/777/events
https://github.com/huggingface/datasets/pull/777
732,376,648
MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2
777
Better error message for uninitialized metric
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-10-29T14:42:50Z
2020-10-29T15:18:26Z
2020-10-29T15:18:24Z
null
MEMBER
[]
null
null
null
null
When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message Fix #729
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/777/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/777", "html_url": "https://github.com/huggingface/datasets/pull/777", "diff_url": "https://github.com/huggingface/datasets/pull/777.diff", "patch_url": "https://github.com/huggingface/datasets/pull/777.patch", "merged_at": "2020-10-29T15:18:23Z...
true
https://api.github.com/repos/huggingface/datasets/issues/776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/776/comments
https://api.github.com/repos/huggingface/datasets/issues/776/events
https://github.com/huggingface/datasets/pull/776
732,343,550
MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx
776
Allow custom split names in text dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[ "Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!" ]
2020-10-29T14:04:06Z
2020-10-30T13:46:45Z
2020-10-30T13:23:52Z
null
MEMBER
[]
null
null
null
null
The `text` dataset used to return only splits like train, test and validation. Other splits were ignored. Now any split name is allowed. I did the same for `json`, `pandas` and `csv` Fix #735
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/776/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/776", "html_url": "https://github.com/huggingface/datasets/pull/776", "diff_url": "https://github.com/huggingface/datasets/pull/776.diff", "patch_url": "https://github.com/huggingface/datasets/pull/776.patch", "merged_at": "2020-10-30T13:23:52Z...
true
https://api.github.com/repos/huggingface/datasets/issues/775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/775/comments
https://api.github.com/repos/huggingface/datasets/issues/775/events
https://github.com/huggingface/datasets/pull/775
732,287,504
MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3
775
Properly delete metrics when a process is killed
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
[]
null
[]
2020-10-29T12:52:07Z
2020-10-29T14:01:20Z
2020-10-29T14:01:19Z
null
MEMBER
[]
null
null
null
null
Tests are flaky when using metrics in a distributed setup. This is because of one test that makes sure that using two possibly incompatible metric computations (same exp id) either works or raises the right error. However if the error is raised, all the processes of the metric are killed, and the open files (arrow + loc...
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/775/timeline
null
null
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/775", "html_url": "https://github.com/huggingface/datasets/pull/775", "diff_url": "https://github.com/huggingface/datasets/pull/775.diff", "patch_url": "https://github.com/huggingface/datasets/pull/775.patch", "merged_at": "2020-10-29T14:01:19Z...
true