title: string (lengths 1–290)
body: string (lengths 0–228k)
html_url: string (lengths 46–51)
comments: list
pull_request: dict
number: int64 (1–5.59k)
is_pull_request: bool (2 classes)
Adding Enriched WebNLG dataset
This pull request adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset.
https://github.com/huggingface/datasets/pull/1206
[ "Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD", "Aaaaand it's rebase time - the new one is at #1264 !", "closing this one since a new PR was created" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1206", "html_url": "https://github.com/huggingface/datasets/pull/1206", "diff_url": "https://github.com/huggingface/datasets/pull/1206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1206.patch", "merged_at": null }
1,206
true
add lst20 with manual download
passed on local: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20 ``` Not sure how to test: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20 ``` ``` LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand. It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries. At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with 16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is considered large enough for developing joint neural models for NLP. Manually download at https://aiforthai.in.th/corpus.php ```
https://github.com/huggingface/datasets/pull/1205
[ "The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead", "@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https://github.com/huggingface/datasets/pull/1253 too. Will fix them before review." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1205", "html_url": "https://github.com/huggingface/datasets/pull/1205", "diff_url": "https://github.com/huggingface/datasets/pull/1205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1205.patch", "merged_at": "2020-12-09T16:33:10" }
1,205
true
adding meta_woz dataset
https://github.com/huggingface/datasets/pull/1204
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1204", "html_url": "https://github.com/huggingface/datasets/pull/1204", "diff_url": "https://github.com/huggingface/datasets/pull/1204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1204.patch", "merged_at": "2020-12-16T15:05:24" }
1,204
true
Add Neural Code Search Dataset
https://github.com/huggingface/datasets/pull/1203
[ "> Really good thanks !\r\n> \r\n> I left a few comments\r\n\r\nThanks, resolved them :) ", "looks like this PR includes changes about many other files than the ones for Code Search\r\n\r\ncan you create another branch and another PR please ?", "> looks like this PR includes changes about many other files than ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1203", "html_url": "https://github.com/huggingface/datasets/pull/1203", "diff_url": "https://github.com/huggingface/datasets/pull/1203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1203.patch", "merged_at": null }
1,203
true
Medical question pairs
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Dataset: https://github.com/curai/medical-question-pair-dataset Paper: https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view **No splits added**
https://github.com/huggingface/datasets/pull/1202
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1202", "html_url": "https://github.com/huggingface/datasets/pull/1202", "diff_url": "https://github.com/huggingface/datasets/pull/1202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1202.patch", "merged_at": null }
1,202
true
adding medical-questions-pairs
https://github.com/huggingface/datasets/pull/1201
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1201", "html_url": "https://github.com/huggingface/datasets/pull/1201", "diff_url": "https://github.com/huggingface/datasets/pull/1201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1201.patch", "merged_at": null }
1,201
true
Update ADD_NEW_DATASET.md
Windows needs special treatment again: unfortunately adding `torch` to the requirements does not work well (crashing the installation). Users should first install torch manually and then continue with the other commands. This issue arises all the time when adding torch as a dependency, but because so many novice users seem to participate in adding datasets, it may be useful to add an explicit note for Windows users to ensure that they do not run into issues.
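In practice, the Windows workaround described above would look something like the following (the torch install line is an assumption — use the exact command pytorch.org recommends for your Python/CUDA combination):

```shell
# Install torch manually first (exact wheel/index is an assumption - check pytorch.org),
# then install the repo's dev dependencies as usual.
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install -e ".[dev]"
```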
https://github.com/huggingface/datasets/pull/1200
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1200", "html_url": "https://github.com/huggingface/datasets/pull/1200", "diff_url": "https://github.com/huggingface/datasets/pull/1200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1200.patch", "merged_at": "2020-12-07T08:32:39" }
1,200
true
Turkish NER dataset, script works fine, couldn't generate dummy data
I've written the script (Turkish_NER.py) for this dataset. The dataset is a zip inside another zip, and it's extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After running the script with no error messages, I get the .arrow file of the dataset, LICENSE, and dataset_info.json.
https://github.com/huggingface/datasets/pull/1199
[ "the .DUMP file looks like a txt with one example per line so adding `--match_text_files *.DUMP --n_lines 50` to the dummy generation command might work .", "We can close this PR since a new PR was open at #1268 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1199", "html_url": "https://github.com/huggingface/datasets/pull/1199", "diff_url": "https://github.com/huggingface/datasets/pull/1199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1199.patch", "merged_at": null }
1,199
true
Add ALT
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
https://github.com/huggingface/datasets/pull/1198
[ "the `RemoteDatasetTest ` erros in the CI are fixed on master so it's fine", "used `Translation ` feature type and fixed few typos as you suggested.", "Sorry, I made a mistake. please see new PR here. https://github.com/huggingface/datasets/pull/1436" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1198", "html_url": "https://github.com/huggingface/datasets/pull/1198", "diff_url": "https://github.com/huggingface/datasets/pull/1198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1198.patch", "merged_at": null }
1,198
true
add taskmaster-2
Adding taskmaster-2 dataset. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020
https://github.com/huggingface/datasets/pull/1197
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1197", "html_url": "https://github.com/huggingface/datasets/pull/1197", "diff_url": "https://github.com/huggingface/datasets/pull/1197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1197.patch", "merged_at": "2020-12-07T15:22:43" }
1,197
true
Add IWSLT'15 English-Vietnamese machine translation Data
Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task, from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
https://github.com/huggingface/datasets/pull/1196
[ "Thanks ! feel free to ping me once you've added the tags in the dataset card :) ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1196", "html_url": "https://github.com/huggingface/datasets/pull/1196", "diff_url": "https://github.com/huggingface/datasets/pull/1196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1196.patch", "merged_at": "2020-12-11T18:26:51" }
1,196
true
addition of py_ast
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs were collected from GitHub repositories by removing duplicate files, removing project forks (copies of other existing repositories), keeping only programs that parse and have at most 30,000 nodes in the AST, and removing obfuscated files.
https://github.com/huggingface/datasets/pull/1195
[ "Hi @reshinthadithyan !\r\n\r\nAs mentioned on the Slack, it would be better in this case to parse the file lines into the following feature structure:\r\n```python\r\n\"ast\": datasets.Sequence(\r\n {\r\n \"type\": datasets.Value(\"string\"),\r\n \"value\": datasets.Value(\"string\"),\r\n \...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1195", "html_url": "https://github.com/huggingface/datasets/pull/1195", "diff_url": "https://github.com/huggingface/datasets/pull/1195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1195.patch", "merged_at": null }
1,195
true
Add msr_text_compression
Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)
https://github.com/huggingface/datasets/pull/1194
[ "the `RemoteDatasetTest ` error in the CI is fixed on master so it's fine" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1194", "html_url": "https://github.com/huggingface/datasets/pull/1194", "diff_url": "https://github.com/huggingface/datasets/pull/1194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1194.patch", "merged_at": "2020-12-09T10:53:45" }
1,194
true
add taskmaster-1
Adding Taskmaster-1 dataset https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019
https://github.com/huggingface/datasets/pull/1193
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1193", "html_url": "https://github.com/huggingface/datasets/pull/1193", "diff_url": "https://github.com/huggingface/datasets/pull/1193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1193.patch", "merged_at": "2020-12-07T15:08:39" }
1,193
true
Add NewsPH_NLI dataset
This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in a 70-15-15 split for training, validation, and testing. Link to the paper: https://arxiv.org/pdf/2010.11574.pdf Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1192
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1192", "html_url": "https://github.com/huggingface/datasets/pull/1192", "diff_url": "https://github.com/huggingface/datasets/pull/1192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1192.patch", "merged_at": "2020-12-07T15:39:43" }
1,192
true
Added Translator Human Parity Data For a Chinese-English news transla…
…tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab.
https://github.com/huggingface/datasets/pull/1191
[ "Can you run `make style` to format the code and fix the CI please ?", "> Can you run `make style` to format the code and fix the CI please ?\r\n\r\nI ran `make style` before this PR and just a few minutes ago. No changes to the code. Not sure why the CI is failing.", "Also, I attempted to see if I can get the ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1191", "html_url": "https://github.com/huggingface/datasets/pull/1191", "diff_url": "https://github.com/huggingface/datasets/pull/1191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1191.patch", "merged_at": "2020-12-09T13:22:45" }
1,191
true
Add Fake News Detection in Filipino dataset
This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpora in Filipino. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html Link to the dataset/repo: https://github.com/jcblaisecruz02/Tagalog-fake-news
https://github.com/huggingface/datasets/pull/1190
[ "Hi! I'm the author of this paper (surprised to see our datasets have been added already).\r\n\r\nThat paper link only leads to the conference index, here's a link to the actual paper: https://www.aclweb.org/anthology/2020.lrec-1.316/\r\n\r\nWould it be fine if I also edited your gsheet entry to reflect this change...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1190", "html_url": "https://github.com/huggingface/datasets/pull/1190", "diff_url": "https://github.com/huggingface/datasets/pull/1190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1190.patch", "merged_at": "2020-12-07T15:39:27" }
1,190
true
Add Dengue dataset in Filipino
This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled with one or more of five classes (a sample can belong to multiple classes). Collected as tweets. Link to the paper: https://ieeexplore.ieee.org/document/8459963 Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1189
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1189", "html_url": "https://github.com/huggingface/datasets/pull/1189", "diff_url": "https://github.com/huggingface/datasets/pull/1189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1189.patch", "merged_at": "2020-12-07T15:38:58" }
1,189
true
adding hind_encorp dataset
adding Hindi_Encorp05 dataset
https://github.com/huggingface/datasets/pull/1188
[ "help needed in dummy data", "extension of the file is .plaintext so dummy data generation is failing\r\n", "you can add the `--match_text_file \"*.plaintext\"` flag when generating the dummy data\r\n\r\nalso it looks like the PR is empty, is this expected ?", "yes it is expected because I made all my change...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1188", "html_url": "https://github.com/huggingface/datasets/pull/1188", "diff_url": "https://github.com/huggingface/datasets/pull/1188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1188.patch", "merged_at": null }
1,188
true
Added AQUA-RAT (Algebra Question Answering with Rationales) Dataset
https://github.com/huggingface/datasets/pull/1187
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1187", "html_url": "https://github.com/huggingface/datasets/pull/1187", "diff_url": "https://github.com/huggingface/datasets/pull/1187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1187.patch", "merged_at": "2020-12-07T15:37:12" }
1,187
true
all tests passed
need help creating dummy data
https://github.com/huggingface/datasets/pull/1186
[ "looks like this PR includes changes to 5000 files\r\ncould you create a new branch and a new PR ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1186", "html_url": "https://github.com/huggingface/datasets/pull/1186", "diff_url": "https://github.com/huggingface/datasets/pull/1186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1186.patch", "merged_at": null }
1,186
true
Add Hate Speech Dataset in Filipino
This PR adds the Hate Speech Dataset, a text classification dataset in Filipino, consisting of 10k tweets (training set) labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine presidential elections. Link to the paper: https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019 Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1185
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1185", "html_url": "https://github.com/huggingface/datasets/pull/1185", "diff_url": "https://github.com/huggingface/datasets/pull/1185.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1185.patch", "merged_at": "2020-12-07T15:35:33" }
1,185
true
Add Adversarial SQuAD dataset
# Adversarial SQuAD Adding the Adversarial [SQuAD](https://github.com/robinjia/adversarial-squad) dataset as part of the sprint 🎉 This dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. How to get the original SQuAD example id is explained in the readme under "Data Instances". The whole dataset is intended for use in evaluation (though it could of course also be used for training). So there is no classical train/val/test split, but a split based on the number of adversaries added. There are 2 splits of this dataset: - AddSent: Has up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. This adversary does not query the model in any way. - AddOneSent: Similar to AddSent, but just one candidate sentence was picked at random. This adversary does not query the model in any way. (The AddAny and AddCommon datasets mentioned in the paper are dynamically generated based on the model's output distribution and thus are not included here.) The failing test looks like some unrelated timeout thing and will probably clear if rerun. - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
https://github.com/huggingface/datasets/pull/1184
[ "the CI error was just a connection error due to all the activity on the repo this week ^^'\r\nI re-ran it so it should be good now", "I hadn't realized the problem with the dummies since it had passed without errors.\r\nSuggestion: maybe we can show the user a warning based on the generated dummy size.", "Than...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1184", "html_url": "https://github.com/huggingface/datasets/pull/1184", "diff_url": "https://github.com/huggingface/datasets/pull/1184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1184.patch", "merged_at": "2020-12-16T16:12:58" }
1,184
true
add mkb dataset
This PR will add Mann Ki Baat dataset (parallel data for Indian languages).
https://github.com/huggingface/datasets/pull/1183
[ "Could you update the languages tags before we merge @VasudevGupta7 ?", "done.", "thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1183", "html_url": "https://github.com/huggingface/datasets/pull/1183", "diff_url": "https://github.com/huggingface/datasets/pull/1183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1183.patch", "merged_at": "2020-12-09T09:38:50" }
1,183
true
ADD COVID-QA dataset
This PR adds the COVID-QA dataset, a question answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. Link to the paper: https://openreview.net/forum?id=JENSKEEzsoU Link to the dataset/repo: https://github.com/deepset-ai/COVID-QA
https://github.com/huggingface/datasets/pull/1182
[ "merging since the CI is fixed on master", "Wow, thanks for including this dataset from my side as well!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1182", "html_url": "https://github.com/huggingface/datasets/pull/1182", "diff_url": "https://github.com/huggingface/datasets/pull/1182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1182.patch", "merged_at": "2020-12-07T14:23:27" }
1,182
true
added emotions detection in arabic dataset
Dataset for Emotions detection in Arabic text more info: https://github.com/AmrMehasseb/Emotional-Tone
https://github.com/huggingface/datasets/pull/1181
[ "Hi @abdulelahsm did you manage to fix your issue ?\r\nFeel free to ping me if you have questions or if you're ready for a review", "@lhoestq fixed it! ready to merge. I hope haha", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1181", "html_url": "https://github.com/huggingface/datasets/pull/1181", "diff_url": "https://github.com/huggingface/datasets/pull/1181.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1181.patch", "merged_at": "2020-12-21T09:53:51" }
1,181
true
Add KorQuAD v2 Dataset
# The Korean Question Answering Dataset v2 Adding the [KorQuAD](https://korquad.github.io/) v2 dataset as part of the sprint 🎉 This dataset is very similar to SQuAD and is an extension of [squad_kor_v1](https://github.com/huggingface/datasets/pull/1178), which is why I added it as `squad_kor_v2`. - Crowd-generated questions and answers (one answer per question) for Wikipedia articles. Unlike v1, it includes the HTML structure and markup, which makes it a different enough dataset. (It doesn't share ids between v1 and v2 either.) - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could) Edit: 🤦 looks like the squad_kor_v1 commit sneaked in here too
https://github.com/huggingface/datasets/pull/1180
[ "looks like this PR also includes the changes for the V1\r\nCould you only include the files of the V2 ?", "hmm I have made the dummy data lighter retested on local and it passed not sure why it fails here?", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1180", "html_url": "https://github.com/huggingface/datasets/pull/1180", "diff_url": "https://github.com/huggingface/datasets/pull/1180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1180.patch", "merged_at": "2020-12-16T16:10:30" }
1,180
true
Small update to the doc: add flatten_indices in doc
Small update to the doc: add flatten_indices in doc
https://github.com/huggingface/datasets/pull/1179
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1179", "html_url": "https://github.com/huggingface/datasets/pull/1179", "diff_url": "https://github.com/huggingface/datasets/pull/1179.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1179.patch", "merged_at": "2020-12-07T13:42:56" }
1,179
true
Add KorQuAD v1 Dataset
# The Korean Question Answering Dataset Adding the [KorQuAD](https://korquad.github.io/KorQuad%201.0/) v1 dataset as part of the sprint 🎉 This dataset is very similar to SQuAD, which is why I added it as `squad_kor_v1`. There is also a v2, which I added [here](https://github.com/huggingface/datasets/pull/1180). - Crowd-generated questions and answers (one answer per question) for Wikipedia articles. - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
https://github.com/huggingface/datasets/pull/1178
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1178", "html_url": "https://github.com/huggingface/datasets/pull/1178", "diff_url": "https://github.com/huggingface/datasets/pull/1178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1178.patch", "merged_at": "2020-12-07T13:41:37" }
1,178
true
Add Korean NER dataset
This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
https://github.com/huggingface/datasets/pull/1177
[ "Closed via #1219 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1177", "html_url": "https://github.com/huggingface/datasets/pull/1177", "diff_url": "https://github.com/huggingface/datasets/pull/1177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1177.patch", "merged_at": null }
1,177
true
Add OpenPI Dataset
Add the OpenPI Dataset by AI2 (AllenAI)
https://github.com/huggingface/datasets/pull/1176
[ "Hi @Bharat123rox ! It looks like some of the dummy data is broken or missing. Did you auto-generate it? Does the local test pass for you?", "@yjernite requesting you to have a look as to why the tests are failing only on Windows, there seems to be a backslash error somewhere, could it be the result of `os.path.j...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1176", "html_url": "https://github.com/huggingface/datasets/pull/1176", "diff_url": "https://github.com/huggingface/datasets/pull/1176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1176.patch", "merged_at": null }
1,176
true
added ReDial dataset
Updating README. Dataset link: https://redialdata.github.io/website/datasheet
https://github.com/huggingface/datasets/pull/1175
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1175", "html_url": "https://github.com/huggingface/datasets/pull/1175", "diff_url": "https://github.com/huggingface/datasets/pull/1175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1175.patch", "merged_at": "2020-12-07T13:21:43" }
1,175
true
Add Universal Morphologies
Adding UniMorph universal morphology annotations for 110 languages, phew!!! One lemma per row with all possible forms and annotations. https://unimorph.github.io/
https://github.com/huggingface/datasets/pull/1174
[ "Sorry for the delay, changed the default language to \"ady\" (first alphabetical) and only downloading the relevant files for each config (dataset_infos is till 918KB though)", "Thanks for merging it ! Looks all good\r\n\r\nLooks like I didn't reply to your last message, sorry about that.\r\nFeel free to ping me...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1174", "html_url": "https://github.com/huggingface/datasets/pull/1174", "diff_url": "https://github.com/huggingface/datasets/pull/1174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1174.patch", "merged_at": "2021-01-26T16:41:48" }
1,174
true
add wikipedia biography dataset
My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). It passes all the tests.
https://github.com/huggingface/datasets/pull/1173
[ "Does anyone know why am I getting this \"Some checks were not successful\" message? For the _code_quality_ one, I have successfully run the flake8 command.", "Ok, I need to update the README.md, but don't know if that will fix the errors", "Hi @ACR0S , thanks for adding the dataset!\r\n\r\nIt looks like `black...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1173", "html_url": "https://github.com/huggingface/datasets/pull/1173", "diff_url": "https://github.com/huggingface/datasets/pull/1173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1173.patch", "merged_at": "2020-12-07T11:13:14" }
1,173
true
Add proto_qa dataset
Added dataset tags as required.
https://github.com/huggingface/datasets/pull/1172
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1172", "html_url": "https://github.com/huggingface/datasets/pull/1172", "diff_url": "https://github.com/huggingface/datasets/pull/1172.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1172.patch", "merged_at": "2020-12-07T11:12:24" }
1,172
true
Add imdb Urdu Reviews dataset.
Added the imdb Urdu reviews dataset. More info about the dataset over <a href="https://github.com/mirfan899/Urdu">here</a>.
https://github.com/huggingface/datasets/pull/1171
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1171", "html_url": "https://github.com/huggingface/datasets/pull/1171", "diff_url": "https://github.com/huggingface/datasets/pull/1171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1171.patch", "merged_at": "2020-12-07T11:11:16" }
1,171
true
Fix path handling for Windows
https://github.com/huggingface/datasets/pull/1170
[ "@lhoestq here's the fix!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1170", "html_url": "https://github.com/huggingface/datasets/pull/1170", "diff_url": "https://github.com/huggingface/datasets/pull/1170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1170.patch", "merged_at": "2020-12-07T10:47:23" }
1,170
true
Add Opus fiskmo dataset for Finnish and Swedish for MT task
Adding fiskmo, a massive parallel corpus for Finnish and Swedish. For more info: http://opus.nlpl.eu/fiskmo.php
https://github.com/huggingface/datasets/pull/1169
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1169", "html_url": "https://github.com/huggingface/datasets/pull/1169", "diff_url": "https://github.com/huggingface/datasets/pull/1169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1169.patch", "merged_at": "2020-12-07T11:04:11" }
1,169
true
Add Naver sentiment movie corpus
This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
https://github.com/huggingface/datasets/pull/1168
[ "Closed via #1252 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1168", "html_url": "https://github.com/huggingface/datasets/pull/1168", "diff_url": "https://github.com/huggingface/datasets/pull/1168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1168.patch", "merged_at": null }
1,168
true
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" post [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern. I guess the solution would entail wrapping a dataset into a PyTorch dataset. As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html):

```python
import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        # instead of doing this beforehand, I'd like to do tokenization on the fly
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)

train_dataset = SquadDataset(train_encodings)
```

How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers?

----

Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant:

```python
class CustomPytorchDataset(Dataset):
    def __init__(self):
        self.dataset = some_hf_dataset(...)
        self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    def __getitem__(self, batch_idx):
        instance = self.dataset[text_col][batch_idx]
        tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
        return tokenized_text

    def __len__(self):
        return len(self.dataset)

    @staticmethod
    def collate_fn(batch):
        # batch is a list, however it will always contain 1 item because we should not use the
        # batch_size argument, as batch_size is controlled by the sampler
        return {k: torch.tensor(v) for k, v in batch[0].items()}

torch_ds = CustomPytorchDataset()

# NOTE: batch_sampler returns lists of integers and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check by calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)

# NOTE: no `batch_size` here, as it is now controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
```
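Stripped to its essentials, the lazy pattern in the question can be sketched without torch or transformers at all. Everything below is a hypothetical stand-in: `toy_tokenize` replaces the fast tokenizer and `LazyDataset` replaces the wrapped 🤗 dataset, just to show where the tokenization call moves (into batched `__getitem__`, at access time):

```python
# Minimal sketch of on-the-fly batched tokenization (hypothetical stand-ins,
# no torch/transformers needed to run it).

corpus = ["hello world", "on the fly tokenization", "datasets and tokenizers"]
vocab = {}  # grown lazily, like a toy word-level tokenizer

def toy_tokenize(batch):
    # Vectorized over a list of strings, mirroring tokenizer(list_of_texts).
    ids = []
    for text in batch:
        ids.append([vocab.setdefault(tok, len(vocab)) for tok in text.split()])
    return {"input_ids": ids}

class LazyDataset:
    """Tokenizes a *batch* of indices at access time, not up front."""
    def __init__(self, texts):
        self.texts = texts

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, batch_idx):  # batch_idx is a list of ints, as a BatchSampler yields
        return toy_tokenize([self.texts[i] for i in batch_idx])

ds = LazyDataset(corpus)
batch = ds[[0, 2]]  # tokenization happens here, on the fly
print(batch["input_ids"])  # → [[0, 1], [2, 3, 4]]
```

With real libraries, `toy_tokenize` would be the fast tokenizer call and the index lists would come from a `BatchSampler`, as in the snippet above.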
https://github.com/huggingface/datasets/issues/1167
[ "We're working on adding on-the-fly transforms in datasets.\r\nCurrently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas.\r\nFor example\r\n```python\r\ndataset.set_format(\"torch\")\r\n```\r\napplies `torch.Tensor` to t...
null
1,167
false
Opus montenegrinsubs
Opus montenegrinsubs - language pair en-me. More info: http://opus.nlpl.eu/MontenegrinSubs.php
https://github.com/huggingface/datasets/pull/1166
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1166", "html_url": "https://github.com/huggingface/datasets/pull/1166", "diff_url": "https://github.com/huggingface/datasets/pull/1166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1166.patch", "merged_at": "2020-12-07T11:02:49" }
1,166
true
Add ar rest reviews
Added restaurant reviews in Arabic for sentiment analysis tasks.
https://github.com/huggingface/datasets/pull/1165
[ "Copy-pasted from the Slack discussion:\r\nthe annotation and language creators should be found , not unknown\r\nthe example should go under the \"Data Instances\" paragraph, not \"Data fields\"\r\ncan you remove the abstract from the citation and add it to the dataset description? More people will see that", "@y...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1165", "html_url": "https://github.com/huggingface/datasets/pull/1165", "diff_url": "https://github.com/huggingface/datasets/pull/1165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1165.patch", "merged_at": "2020-12-21T17:06:23" }
1,165
true
Add DaNe dataset
https://github.com/huggingface/datasets/pull/1164
[ "Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1164", "html_url": "https://github.com/huggingface/datasets/pull/1164", "diff_url": "https://github.com/huggingface/datasets/pull/1164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1164.patch", "merged_at": null }
1,164
true
Added memat : Xhosa-English parallel corpora
Added memat: Xhosa-English parallel corpora. For more info: http://opus.nlpl.eu/memat.php
https://github.com/huggingface/datasets/pull/1163
[ "The `RemoteDatasetTest` CI fail is fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1163", "html_url": "https://github.com/huggingface/datasets/pull/1163", "diff_url": "https://github.com/huggingface/datasets/pull/1163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1163.patch", "merged_at": "2020-12-07T10:40:24" }
1,163
true
Add Mocha dataset
More information: https://allennlp.org/mocha
https://github.com/huggingface/datasets/pull/1162
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1162", "html_url": "https://github.com/huggingface/datasets/pull/1162", "diff_url": "https://github.com/huggingface/datasets/pull/1162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1162.patch", "merged_at": "2020-12-07T10:09:39" }
1,162
true
Linguisticprobing
Adding Linguistic probing datasets from "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties" https://www.aclweb.org/anthology/P18-1198/
https://github.com/huggingface/datasets/pull/1161
[ "Thanks for your contribution, @sileod.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nAs you already created this dataset under your organization namespace (https://huggingface.co/datasets/metaeval/linguisticprobing),...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1161", "html_url": "https://github.com/huggingface/datasets/pull/1161", "diff_url": "https://github.com/huggingface/datasets/pull/1161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1161.patch", "merged_at": null }
1,161
true
adding TabFact dataset
Adding TabFact: A Large-scale Dataset for Table-based Fact Verification. https://github.com/wenhuchen/Table-Fact-Checking - The tables are stored as individual csv files, so need to download 16,573 🤯 csv files. As a result the `datasets_infos.json` file is huge (6.62 MB). - The original dataset has a nested structure where each table is one example and each table has multiple statements; the structure is flattened here so that each statement is one example.
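A hedged sketch of the flattening step described above (hypothetical field names, not the PR's actual code):

```python
def flatten_tables(tables):
    # one input record per table, several statements each ->
    # yield one example per (table, statement) pair
    for table in tables:
        for statement, label in zip(table["statements"], table["labels"]):
            yield {"table_id": table["id"], "statement": statement, "label": label}

examples = list(flatten_tables([
    {"id": "t1", "statements": ["s1", "s2"], "labels": [1, 0]},
    {"id": "t2", "statements": ["s3"], "labels": [1]},
]))
```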
https://github.com/huggingface/datasets/pull/1160
[ "FYI you guys are on GitHub's homepage 😍\r\n\r\n<img width=\"1589\" alt=\"Screenshot 2020-12-09 at 12 34 28\" src=\"https://user-images.githubusercontent.com/326577/101624883-a0ecc700-39e8-11eb-8a97-11af0d036536.png\">\r\n", "Yeayy 😍 🔥" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1160", "html_url": "https://github.com/huggingface/datasets/pull/1160", "diff_url": "https://github.com/huggingface/datasets/pull/1160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1160.patch", "merged_at": "2020-12-09T09:12:40" }
1,160
true
Add Roman Urdu dataset
This PR adds the [Roman Urdu dataset](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set#).
https://github.com/huggingface/datasets/pull/1159
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1159", "html_url": "https://github.com/huggingface/datasets/pull/1159", "diff_url": "https://github.com/huggingface/datasets/pull/1159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1159.patch", "merged_at": "2020-12-07T09:59:03" }
1,159
true
Add BBC Hindi NLI Dataset
# Dataset Card for BBC Hindi NLI Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - HomePage : https://github.com/midas-research/hindi-nli-data - Paper : "https://www.aclweb.org/anthology/2020.aacl-main.71" - Point of Contact : https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in Hindi Language. The BBC Hindi Dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Context and Hypothesis are written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - The dataset can be used to train models for Natural Language Inference tasks in Hindi Language. [More Information Needed] ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages The dataset is in Hindi ## Dataset Structure - Data is structured in TSV format. - Train and Test splits are in separate files ### Data Instances An example of 'train' looks as follows.
``` {'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'} ``` ### Data Fields - Each row contains 4 columns - Premise, Hypothesis, Label and Topic. ### Data Splits - Train : 15553 - Valid : 2581 - Test : 2593 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems. - In this recasting process, we build template hypotheses for each class in the label taxonomy. - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to the paper "https://www.aclweb.org/anthology/2020.aacl-main.71" ### Source Data The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, international, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia - We processed this dataset to combine two sets of relevant but low-prevalence classes. - Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international. - Likewise, we also merged samples from news, business, social, learning english, and institutional as news. - Lastly, we also removed the class multimedia because there were very few samples. #### Who are the source language producers? Please refer to this paper: "https://www.aclweb.org/anthology/2020.aacl-main.71" ### Annotations #### Annotation process The annotation process has been described in the Dataset Creation section. #### Who are the annotators? Annotation is done automatically.
### Personal and Sensitive Information No personal or sensitive information is mentioned in the dataset. ## Considerations for Using the Data Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo (https://github.com/avinsit123/hindi-nli-data) that: - This corpus can be used freely for research purposes. - The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Pls contact authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ```
https://github.com/huggingface/datasets/pull/1158
[ "Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ", "Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help", "@l...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1158", "html_url": "https://github.com/huggingface/datasets/pull/1158", "diff_url": "https://github.com/huggingface/datasets/pull/1158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1158.patch", "merged_at": "2021-02-05T09:48:31" }
1,158
true
Add dataset XhosaNavy English -Xhosa
Add dataset XhosaNavy English-Xhosa. More info: http://opus.nlpl.eu/XhosaNavy.php
https://github.com/huggingface/datasets/pull/1157
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1157", "html_url": "https://github.com/huggingface/datasets/pull/1157", "diff_url": "https://github.com/huggingface/datasets/pull/1157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1157.patch", "merged_at": "2020-12-07T09:11:33" }
1,157
true
add telugu-news corpus
Adding Telugu News Corpus to datasets.
https://github.com/huggingface/datasets/pull/1156
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1156", "html_url": "https://github.com/huggingface/datasets/pull/1156", "diff_url": "https://github.com/huggingface/datasets/pull/1156.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1156.patch", "merged_at": "2020-12-07T09:08:48" }
1,156
true
Add BSD
This PR adds BSD, the Japanese-English business dialogue corpus by [Rikters et al., 2020](https://www.aclweb.org/anthology/D19-5204.pdf).
https://github.com/huggingface/datasets/pull/1155
[ "Glad to have more Japanese data! Couple of comments:\r\n- the abbreviation might confuse some people as there is also an OPUS BSD corpus, would you mind renaming it as `bsd_ja_en`?\r\n- `flake8` is throwing some errors, you can run it locally (`flake8 datasets`) and fix what it tells you until it's happy :)\r\n- W...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1155", "html_url": "https://github.com/huggingface/datasets/pull/1155", "diff_url": "https://github.com/huggingface/datasets/pull/1155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1155.patch", "merged_at": "2020-12-07T09:27:46" }
1,155
true
Opus sardware
Added Opus sardware dataset for machine translation English to Sardinian. For more info: http://opus.nlpl.eu/sardware.php
https://github.com/huggingface/datasets/pull/1154
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1154", "html_url": "https://github.com/huggingface/datasets/pull/1154", "diff_url": "https://github.com/huggingface/datasets/pull/1154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1154.patch", "merged_at": "2020-12-05T17:05:45" }
1,154
true
Adding dataset for proto_qa in huggingface datasets library
Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning Followed all steps for adding a new dataset.
https://github.com/huggingface/datasets/pull/1153
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1153", "html_url": "https://github.com/huggingface/datasets/pull/1153", "diff_url": "https://github.com/huggingface/datasets/pull/1153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1153.patch", "merged_at": null }
1,153
true
hindi discourse analysis dataset commit
https://github.com/huggingface/datasets/pull/1152
[ "That's a great dataset to have! We need a couple more things to be good to go:\r\n- you should `make style` and `flake8 datasets` before pushing to make the code quality check happy :) \r\n- the dataset will need some dummy data which you should be able to auto-generate and test locally: https://github.com/hugging...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1152", "html_url": "https://github.com/huggingface/datasets/pull/1152", "diff_url": "https://github.com/huggingface/datasets/pull/1152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1152.patch", "merged_at": "2020-12-14T19:44:48" }
1,152
true
adding psc dataset
https://github.com/huggingface/datasets/pull/1151
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1151", "html_url": "https://github.com/huggingface/datasets/pull/1151", "diff_url": "https://github.com/huggingface/datasets/pull/1151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1151.patch", "merged_at": "2020-12-09T11:38:41" }
1,151
true
adding dyk dataset
https://github.com/huggingface/datasets/pull/1150
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1150", "html_url": "https://github.com/huggingface/datasets/pull/1150", "diff_url": "https://github.com/huggingface/datasets/pull/1150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1150.patch", "merged_at": "2020-12-05T16:52:19" }
1,150
true
Fix typo in the comment in _info function
https://github.com/huggingface/datasets/pull/1149
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1149", "html_url": "https://github.com/huggingface/datasets/pull/1149", "diff_url": "https://github.com/huggingface/datasets/pull/1149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1149.patch", "merged_at": "2020-12-05T16:19:26" }
1,149
true
adding polemo2 dataset
https://github.com/huggingface/datasets/pull/1148
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1148", "html_url": "https://github.com/huggingface/datasets/pull/1148", "diff_url": "https://github.com/huggingface/datasets/pull/1148.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1148.patch", "merged_at": "2020-12-05T16:51:38" }
1,148
true
Vinay/add/telugu books
Real data tests are failing as this dataset needs to be manually downloaded
https://github.com/huggingface/datasets/pull/1147
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1147", "html_url": "https://github.com/huggingface/datasets/pull/1147", "diff_url": "https://github.com/huggingface/datasets/pull/1147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1147.patch", "merged_at": "2020-12-05T16:36:03" }
1,147
true
Add LINNAEUS
https://github.com/huggingface/datasets/pull/1146
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1146", "html_url": "https://github.com/huggingface/datasets/pull/1146", "diff_url": "https://github.com/huggingface/datasets/pull/1146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1146.patch", "merged_at": "2020-12-05T16:35:53" }
1,146
true
Add Species-800
https://github.com/huggingface/datasets/pull/1145
[ "thanks @lhoestq ! I probably need to do the same change in the `SplitGenerator`s (lines 107, 110 and 113). I'll open a new PR for that", "Yes indeed ! Good catch 👍 \r\nFeel free to open a PR and ping me", "Hi , theres a issue pulling species_800 dataset , throws google drive error \r\n\r\nerror: \r\n\r\n```...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1145", "html_url": "https://github.com/huggingface/datasets/pull/1145", "diff_url": "https://github.com/huggingface/datasets/pull/1145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1145.patch", "merged_at": "2020-12-05T16:35:01" }
1,145
true
Add JFLEG
This PR adds [JFLEG](https://www.aclweb.org/anthology/E17-2037/), an English grammatical error correction benchmark. The tests were successful on real data, although it would be great if I can get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target sentences**. The original dataset comprises files in a flat structure, labelled by split then by source/target (e.g., dev.src, dev.ref0, ..., dev.ref3). Not sure what is the best way of adding this. I imagine I can treat each distinct source-target pair as its own split? But having so many copies of the source sentence feels redundant, and it would make it less convenient for end-users who might want to access multiple gold standard targets simultaneously.
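One way to avoid duplicating sources across four splits is to group each source sentence with all of its references in a single example. A sketch under assumed file contents (the actual JFLEG loader may differ):

```python
def merge_references(sources, *reference_files):
    # sources: list of source sentences; each reference_files[k] is the
    # k-th gold-standard correction, aligned line by line with sources
    for i, sentence in enumerate(sources):
        yield {"sentence": sentence, "corrections": [refs[i] for refs in reference_files]}

dev_src = ["He go to school ."]
dev_ref0, dev_ref1 = ["He goes to school ."], ["He went to school ."]
examples = list(merge_references(dev_src, dev_ref0, dev_ref1))
```

The `corrections` list maps naturally onto a `Sequence` of string features, so each example carries all gold targets at once.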
https://github.com/huggingface/datasets/pull/1144
[ "Hi @j-chim ! You're right it does feel redundant: your option works better, but I'd even suggest having the references in a Sequence feature, which you can declare as:\r\n```\r\n\t features=datasets.Features(\r\n {\r\n \"sentence\": datasets.Value(\"string\"),\r\n ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1144", "html_url": "https://github.com/huggingface/datasets/pull/1144", "diff_url": "https://github.com/huggingface/datasets/pull/1144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1144.patch", "merged_at": "2020-12-06T18:16:04" }
1,144
true
Add the Winograd Schema Challenge
Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples. - https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html The data format was a bit of a nightmare but I think I got it to a workable format.
https://github.com/huggingface/datasets/pull/1143
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1143", "html_url": "https://github.com/huggingface/datasets/pull/1143", "diff_url": "https://github.com/huggingface/datasets/pull/1143.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1143.patch", "merged_at": "2020-12-09T09:32:34" }
1,143
true
Fix PerSenT
New PR for dataset PerSenT
https://github.com/huggingface/datasets/pull/1142
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1142", "html_url": "https://github.com/huggingface/datasets/pull/1142", "diff_url": "https://github.com/huggingface/datasets/pull/1142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1142.patch", "merged_at": "2020-12-14T13:39:34" }
1,142
true
Add GitHub version of ETH Py150 Corpus
Add the redistributable version of **ETH Py150 Corpus**
https://github.com/huggingface/datasets/pull/1141
[ "The `RemoteDatasetTest` is fixed on master so it's fine", "thanks for rebasing :)\r\n\r\nCI is green now, merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1141", "html_url": "https://github.com/huggingface/datasets/pull/1141", "diff_url": "https://github.com/huggingface/datasets/pull/1141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1141.patch", "merged_at": "2020-12-07T10:00:24" }
1,141
true
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
https://github.com/huggingface/datasets/pull/1140
[ "@lhoestq have made the suggested changes in the README file.", "@lhoestq Created a new PR #1231 with only the relevant files.\r\nclosing this one :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1140", "html_url": "https://github.com/huggingface/datasets/pull/1140", "diff_url": "https://github.com/huggingface/datasets/pull/1140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1140.patch", "merged_at": null }
1,140
true
Add ReFreSD dataset
This PR adds the **ReFreSD dataset**. The original data is hosted [on this GitHub repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data. Need feedback on: - I couldn't generate the dummy data. The file we download is a tsv file, but without an extension; I suppose this is the problem. I'm sure there is a simple trick to make this work. - The feature names. - I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit. - There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels` but I'm sure there is a better name. - The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any idea? Also, I couldn't manage to make a `Sequence` of `int8` but I'm sure I've missed something simple. Thanks in advance
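For the rationales-as-strings issue, a small parsing helper is one option (a sketch; the exact string format in the TSV is an assumption here):

```python
def parse_rationale(raw):
    # e.g. "[0, 1, 1, 0]" or "0 1 1 0" -> list of ints,
    # suitable for a Sequence(Value("int8")) feature
    return [int(tok) for tok in raw.strip("[] ").replace(",", " ").split()]
```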
https://github.com/huggingface/datasets/pull/1139
[ "Cool dataset! Replying in-line:\r\n\r\n> This PR adds the **ReFreSD dataset**.\r\n> The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.\r\n> \r\n> Need feedback on:\r\n> \r\n> * I couldn't generate the dummy data. The ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1139", "html_url": "https://github.com/huggingface/datasets/pull/1139", "diff_url": "https://github.com/huggingface/datasets/pull/1139.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1139.patch", "merged_at": "2020-12-16T16:01:18" }
1,139
true
updated after the class name update
@lhoestq <---
https://github.com/huggingface/datasets/pull/1138
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1138", "html_url": "https://github.com/huggingface/datasets/pull/1138", "diff_url": "https://github.com/huggingface/datasets/pull/1138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1138.patch", "merged_at": "2020-12-05T15:43:32" }
1,138
true
add wmt mlqe 2020 shared task
First commit for Shared task 1 (wmt_mlqe_task1) of WMT20 MLQE (quality estimation of machine translation). Note that I copied the tags in the README for only one (of the 7 configurations): `en-de`. There is one configuration for each pair of languages.
https://github.com/huggingface/datasets/pull/1137
[ "re-created in #1218 because this was too messy" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1137", "html_url": "https://github.com/huggingface/datasets/pull/1137", "diff_url": "https://github.com/huggingface/datasets/pull/1137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1137.patch", "merged_at": null }
1,137
true
minor change in description in paws-x.py and updated dataset_infos
https://github.com/huggingface/datasets/pull/1136
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1136", "html_url": "https://github.com/huggingface/datasets/pull/1136", "diff_url": "https://github.com/huggingface/datasets/pull/1136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1136.patch", "merged_at": "2020-12-06T18:02:57" }
1,136
true
added paws
Updating README and tags for dataset card in a while
https://github.com/huggingface/datasets/pull/1135
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1135", "html_url": "https://github.com/huggingface/datasets/pull/1135", "diff_url": "https://github.com/huggingface/datasets/pull/1135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1135.patch", "merged_at": "2020-12-09T17:17:13" }
1,135
true
adding xquad-r dataset
https://github.com/huggingface/datasets/pull/1134
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1134", "html_url": "https://github.com/huggingface/datasets/pull/1134", "diff_url": "https://github.com/huggingface/datasets/pull/1134.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1134.patch", "merged_at": "2020-12-05T16:50:47" }
1,134
true
Adding XQUAD-R Dataset
https://github.com/huggingface/datasets/pull/1133
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1133", "html_url": "https://github.com/huggingface/datasets/pull/1133", "diff_url": "https://github.com/huggingface/datasets/pull/1133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1133.patch", "merged_at": null }
1,133
true
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
https://github.com/huggingface/datasets/pull/1132
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1132", "html_url": "https://github.com/huggingface/datasets/pull/1132", "diff_url": "https://github.com/huggingface/datasets/pull/1132.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1132.patch", "merged_at": null }
1,132
true
Adding XQUAD-R Dataset
https://github.com/huggingface/datasets/pull/1131
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1131", "html_url": "https://github.com/huggingface/datasets/pull/1131", "diff_url": "https://github.com/huggingface/datasets/pull/1131.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1131.patch", "merged_at": null }
1,131
true
adding discovery
https://github.com/huggingface/datasets/pull/1130
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1130", "html_url": "https://github.com/huggingface/datasets/pull/1130", "diff_url": "https://github.com/huggingface/datasets/pull/1130.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1130.patch", "merged_at": "2020-12-14T13:03:14" }
1,130
true
Adding initial version of cord-19 dataset
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass. ### TODO: - [x] add more metadata - [x] add full text - [x] add pre-computed document embedding
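The metadata-only first step could look roughly like this (a sketch using `csv.DictReader`; the `cord_uid` and `title` columns do exist in the CORD-19 metadata CSV, but the real loader's fields may differ):

```python
import csv
import io

def generate_examples(metadata_csv_text):
    # yield (key, example) pairs, one per row of metadata.csv
    reader = csv.DictReader(io.StringIO(metadata_csv_text))
    for i, row in enumerate(reader):
        yield i, {"cord_uid": row["cord_uid"], "title": row["title"]}

sample = "cord_uid,title\nug7v899j,Clinical features of culture-proven pneumonia\n"
rows = list(generate_examples(sample))
```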
https://github.com/huggingface/datasets/pull/1129
[ "Hi @ggdupont !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or when you're ready for a review", "> Hi @ggdupont !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or when you're ready for a r...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1129", "html_url": "https://github.com/huggingface/datasets/pull/1129", "diff_url": "https://github.com/huggingface/datasets/pull/1129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1129.patch", "merged_at": null }
1,129
true
Add xquad-r dataset
https://github.com/huggingface/datasets/pull/1128
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1128", "html_url": "https://github.com/huggingface/datasets/pull/1128", "diff_url": "https://github.com/huggingface/datasets/pull/1128.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1128.patch", "merged_at": null }
1,128
true
Add wikiqaar dataset
Arabic Wiki Question Answering Corpus.
https://github.com/huggingface/datasets/pull/1127
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1127", "html_url": "https://github.com/huggingface/datasets/pull/1127", "diff_url": "https://github.com/huggingface/datasets/pull/1127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1127.patch", "merged_at": "2020-12-07T16:39:41" }
1,127
true
Adding babi dataset
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment. Supersede #945 (problem with the rebase) and adresses the issues mentioned in the review (dummy data are smaller now and code comments are fixed).
https://github.com/huggingface/datasets/pull/1126
[ "This is ok now @lhoestq\r\n\r\nI've included the tweak to `dummy_data` to only use the data transmitted to `_generate_examples` by default (it only do that if it can find at least one path to an existing file in the `gen_kwargs` and this can be unactivated with a flag).\r\n\r\nShould I extract it in another PR or ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1126", "html_url": "https://github.com/huggingface/datasets/pull/1126", "diff_url": "https://github.com/huggingface/datasets/pull/1126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1126.patch", "merged_at": null }
1,126
true
Add Urdu fake news dataset.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
https://github.com/huggingface/datasets/pull/1125
[ "@lhoestq looks like a lot of files were updated... shall I create a new PR?", "Hi @chaitnayabasava ! you can try rebasing and see if that fixes the number of files changed, otherwise please do open a new PR with only the relevant files and close this one :) ", "Created a new PR #1230.\r\nclosing this one :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1125", "html_url": "https://github.com/huggingface/datasets/pull/1125", "diff_url": "https://github.com/huggingface/datasets/pull/1125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1125.patch", "merged_at": null }
1,125
true
Add Xitsonga Ner
Clean Xitsonga Ner PR
https://github.com/huggingface/datasets/pull/1124
[ "looks like this PR includes changes about many files other than the ones related to xitsonga NER\r\n\r\ncould you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1124", "html_url": "https://github.com/huggingface/datasets/pull/1124", "diff_url": "https://github.com/huggingface/datasets/pull/1124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1124.patch", "merged_at": null }
1,124
true
adding cdt dataset
https://github.com/huggingface/datasets/pull/1123
[ "the `ms_terms` formatting CI fails is fixed on master", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1123", "html_url": "https://github.com/huggingface/datasets/pull/1123", "diff_url": "https://github.com/huggingface/datasets/pull/1123.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1123.patch", "merged_at": "2020-12-04T17:05:56" }
1,123
true
Add Urdu fake news.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
https://github.com/huggingface/datasets/pull/1122
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1122", "html_url": "https://github.com/huggingface/datasets/pull/1122", "diff_url": "https://github.com/huggingface/datasets/pull/1122.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1122.patch", "merged_at": null }
1,122
true
adding cdt dataset
https://github.com/huggingface/datasets/pull/1121
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1121", "html_url": "https://github.com/huggingface/datasets/pull/1121", "diff_url": "https://github.com/huggingface/datasets/pull/1121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1121.patch", "merged_at": null }
1,121
true
Add conda environment activation
Added activation of Conda environment before installing.
https://github.com/huggingface/datasets/pull/1120
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1120", "html_url": "https://github.com/huggingface/datasets/pull/1120", "diff_url": "https://github.com/huggingface/datasets/pull/1120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1120.patch", "merged_at": "2020-12-04T16:40:57" }
1,120
true
Add Google Great Code Dataset
https://github.com/huggingface/datasets/pull/1119
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1119", "html_url": "https://github.com/huggingface/datasets/pull/1119", "diff_url": "https://github.com/huggingface/datasets/pull/1119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1119.patch", "merged_at": "2020-12-06T17:33:13" }
1,119
true
Add Tashkeela dataset
Arabic Vocalized Words Dataset.
https://github.com/huggingface/datasets/pull/1118
[ "Sorry @lhoestq for the trouble, sometime I forget to change the names :/", "> Sorry @lhoestq for the trouble, sometime I forget to change the names :/\r\n\r\nhaha it's ok ;)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1118", "html_url": "https://github.com/huggingface/datasets/pull/1118", "diff_url": "https://github.com/huggingface/datasets/pull/1118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1118.patch", "merged_at": "2020-12-04T15:46:50" }
1,118
true
Fix incorrect MRQA train+SQuAD URL
Fix issue #1115
https://github.com/huggingface/datasets/pull/1117
[ "Thanks ! could you regenerate the dataset_infos.json file ?\r\n\r\n```\r\ndatasets-cli test ./datasets/mrqa --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nalso cc @VictorSanh ", "Oooops, good catch @jimmycode ", "> Thanks ! could you regenerate the dataset_infos.json file ?\r\n> \r\n> ```\r\n>...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1117", "html_url": "https://github.com/huggingface/datasets/pull/1117", "diff_url": "https://github.com/huggingface/datasets/pull/1117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1117.patch", "merged_at": "2020-12-06T17:14:10" }
1,117
true
add dbpedia_14 dataset
This dataset corresponds to the DBpedia dataset requested in https://github.com/huggingface/datasets/issues/353.
https://github.com/huggingface/datasets/pull/1116
[ "Thanks for the review. \r\nCheers!", "Hi @hfawaz, this week we are doing the 🤗 `datasets` sprint (see some details [here](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176)).\r\n\r\nNothing more to do on your side but it means that if you regis...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1116", "html_url": "https://github.com/huggingface/datasets/pull/1116", "diff_url": "https://github.com/huggingface/datasets/pull/1116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1116.patch", "merged_at": "2020-12-05T15:36:23" }
1,116
true
Incorrect URL for MRQA SQuAD train subset
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53 The URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
https://github.com/huggingface/datasets/issues/1115
[ "good catch !" ]
null
1,115
false
Add sesotho ner corpus
Clean Sesotho PR
https://github.com/huggingface/datasets/pull/1114
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1114", "html_url": "https://github.com/huggingface/datasets/pull/1114", "diff_url": "https://github.com/huggingface/datasets/pull/1114.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1114.patch", "merged_at": "2020-12-04T15:02:07" }
1,114
true
add qed
adding QED: Dataset for Explanations in Question Answering https://github.com/google-research-datasets/QED https://arxiv.org/abs/2009.06354
https://github.com/huggingface/datasets/pull/1113
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1113", "html_url": "https://github.com/huggingface/datasets/pull/1113", "diff_url": "https://github.com/huggingface/datasets/pull/1113.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1113.patch", "merged_at": "2020-12-05T15:41:57" }
1,113
true
Initial version of cord-19 dataset from AllenAI with only the abstract
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [ ] Both tests for the real data and the dummy data pass. ### TODO: - [ ] add more metadata - [ ] add full text - [ ] add pre-computed document embedding
https://github.com/huggingface/datasets/pull/1112
[ "too ugly, I'll make a clean one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1112", "html_url": "https://github.com/huggingface/datasets/pull/1112", "diff_url": "https://github.com/huggingface/datasets/pull/1112.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1112.patch", "merged_at": null }
1,112
true
Add Siswati Ner corpus
Clean Siswati PR
https://github.com/huggingface/datasets/pull/1111
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1111", "html_url": "https://github.com/huggingface/datasets/pull/1111", "diff_url": "https://github.com/huggingface/datasets/pull/1111.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1111.patch", "merged_at": "2020-12-04T14:43:00" }
1,111
true
Using a feature named "_type" fails with certain operations
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"_type": ["whatever"]}).map() concatenate_datasets([ds]) # or simply Dataset(ds._data) ``` Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column. Not sure if you wish to support this specific column name, but if you do i would be happy to try a fix and provide a PR. I already had a look into it and i think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict. Best wishes and keep up the awesome work!
https://github.com/huggingface/datasets/issues/1110
[ "Thanks for reporting !\r\n\r\nIndeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json.\r\nWe can probably change `_type` to something that is less likely to collide with user feature names.\r\nIn this case we would want something backward ...
null
1,110
false
add woz_dialogue
Adding Wizard-of-Oz task oriented dialogue dataset https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz https://arxiv.org/abs/1604.04562
https://github.com/huggingface/datasets/pull/1109
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1109", "html_url": "https://github.com/huggingface/datasets/pull/1109", "diff_url": "https://github.com/huggingface/datasets/pull/1109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1109.patch", "merged_at": "2020-12-05T15:40:18" }
1,109
true
Add Sepedi NER corpus
Finally a clean PR for Sepedi
https://github.com/huggingface/datasets/pull/1108
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1108", "html_url": "https://github.com/huggingface/datasets/pull/1108", "diff_url": "https://github.com/huggingface/datasets/pull/1108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1108.patch", "merged_at": "2020-12-04T14:39:00" }
1,108
true
Add arsentd_lev dataset
Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) Paper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830) Homepage: http://oma-project.com/
https://github.com/huggingface/datasets/pull/1107
[ "thanks ! can you also regenerate the dataset_infos.json file please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1107", "html_url": "https://github.com/huggingface/datasets/pull/1107", "diff_url": "https://github.com/huggingface/datasets/pull/1107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1107.patch", "merged_at": "2020-12-05T15:38:09" }
1,107
true