| id (int64, 599M to 3.26B) | number (int64, 1 to 7.7k) | title (string, 1 to 290 chars) | body (string, 0 to 228k chars, nullable) | state (string, 2 classes) | html_url (string, 46 to 51 chars) | created_at (timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53) | updated_at (timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44) | closed_at (timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42, nullable) | user (dict) | labels (list, 0 to 4 items) | is_pull_request (bool, 2 classes) | comments (list, 0 items) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
758,229,304
| 1,234
|
Added ade_corpus_v2, with 3 configs for relation extraction and classification task
|
Adverse Drug Reaction Data: added the ADE-Corpus-V2 dataset, with configs for the different tasks over the given data.
|
closed
|
https://github.com/huggingface/datasets/pull/1234
| 2020-12-07T07:05:14
| 2020-12-14T17:49:14
| 2020-12-14T17:49:14
|
{
"login": "Nilanshrajput",
"id": 28673745,
"type": "User"
}
|
[] | true
|
[] |
758,188,699
| 1,233
|
Add Curiosity Dialogs Dataset
|
Add Facebook [Curiosity Dialogs](https://github.com/facebookresearch/curiosity) Dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/1233
| 2020-12-07T06:01:00
| 2020-12-20T13:34:09
| 2020-12-09T14:50:29
|
{
"login": "vineeths96",
"id": 50873201,
"type": "User"
}
|
[] | true
|
[] |
758,180,669
| 1,232
|
Add Grail QA dataset
|
For more information: https://dki-lab.github.io/GrailQA/
|
closed
|
https://github.com/huggingface/datasets/pull/1232
| 2020-12-07T05:46:45
| 2020-12-08T13:03:19
| 2020-12-08T13:03:19
|
{
"login": "mattbui",
"id": 46804938,
"type": "User"
}
|
[] | true
|
[] |
758,121,398
| 1,231
|
Add Urdu Sentiment Corpus (USC)
|
@lhoestq, opened a clean PR containing only relevant files.
old PR #1140
|
closed
|
https://github.com/huggingface/datasets/pull/1231
| 2020-12-07T03:25:20
| 2020-12-07T18:05:16
| 2020-12-07T16:43:23
|
{
"login": "chaitnayabasava",
"id": 44389205,
"type": "User"
}
|
[] | true
|
[] |
758,119,342
| 1,230
|
Add Urdu fake news dataset
|
@lhoestq, opened a clean PR containing only relevant files.
old PR #1125
|
closed
|
https://github.com/huggingface/datasets/pull/1230
| 2020-12-07T03:19:50
| 2020-12-07T18:04:55
| 2020-12-07T16:57:54
|
{
"login": "chaitnayabasava",
"id": 44389205,
"type": "User"
}
|
[] | true
|
[] |
758,100,707
| 1,229
|
Muchocine - Spanish movie reviews dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1229
| 2020-12-07T02:23:29
| 2020-12-21T10:09:09
| 2020-12-21T10:09:09
|
{
"login": "mapmeld",
"id": 643918,
"type": "User"
}
|
[] | true
|
[] |
|
758,049,068
| 1,228
|
add opus_100 dataset
|
This PR will add [opus100 dataset](http://opus.nlpl.eu/opus-100.php).
|
closed
|
https://github.com/huggingface/datasets/pull/1228
| 2020-12-06T23:17:24
| 2020-12-09T14:54:00
| 2020-12-09T14:54:00
|
{
"login": "thevasudevgupta",
"id": 53136577,
"type": "User"
}
|
[] | true
|
[] |
758,049,060
| 1,227
|
readme: remove link to Google's responsible AI practices
|
...maybe we'll find a company that really stands behind responsible AI practices ;)
|
closed
|
https://github.com/huggingface/datasets/pull/1227
| 2020-12-06T23:17:22
| 2020-12-07T08:35:19
| 2020-12-06T23:20:41
|
{
"login": "stefan-it",
"id": 20651387,
"type": "User"
}
|
[] | true
|
[] |
758,036,979
| 1,226
|
Add menyo_20k_mt dataset
|
Add menyo_20k_mt dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1226
| 2020-12-06T22:16:15
| 2020-12-10T19:22:14
| 2020-12-10T19:22:14
|
{
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
}
|
[] | true
|
[] |
758,035,501
| 1,225
|
Add Winobias dataset
|
Pardon me for different commits with same message. There were conflicts after I rebased master while simultaneously pushing my changes to local repo, hence the duplicate entries.
|
closed
|
https://github.com/huggingface/datasets/pull/1225
| 2020-12-06T22:08:20
| 2020-12-07T06:45:59
| 2020-12-07T06:40:50
|
{
"login": "akshayb7",
"id": 29649801,
"type": "User"
}
|
[] | true
|
[] |
758,022,998
| 1,224
|
adding conceptnet5
|
Adding the ConceptNet5 and OMCS txt files used to create the ConceptNet5 dataset. ConceptNet5 is a commonsense dataset. More info can be found here: https://github.com/commonsense/conceptnet5/wiki
|
closed
|
https://github.com/huggingface/datasets/pull/1224
| 2020-12-06T21:06:53
| 2020-12-09T16:38:16
| 2020-12-09T14:37:17
|
{
"login": "huu4ontocord",
"id": 8900094,
"type": "User"
}
|
[] | true
|
[] |
758,022,208
| 1,223
|
🇸🇪 Added Swedish Reviews dataset for sentiment classification in Sw…
|
perhaps: @lhoestq 🤗
|
closed
|
https://github.com/huggingface/datasets/pull/1223
| 2020-12-06T21:02:54
| 2020-12-08T10:54:56
| 2020-12-08T10:54:56
|
{
"login": "timpal0l",
"id": 6556710,
"type": "User"
}
|
[] | true
|
[] |
758,018,953
| 1,222
|
Add numeric fused head dataset
|
Adding the [NFH: Numeric Fused Head](https://nlp.biu.ac.il/~lazary/fh/) dataset.
Everything looks sensible and I've included both the identification and resolution tasks. I haven't personally used this dataset in my research so am unable to specify what the default configuration / supervised keys should be.
I've filled out the basic info on the model card to the best of my knowledge but it's a little tricky to understand exactly what the fields represent.
Dataset author: @yanaiela
|
closed
|
https://github.com/huggingface/datasets/pull/1222
| 2020-12-06T20:46:53
| 2020-12-08T11:17:56
| 2020-12-08T11:17:55
|
{
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
}
|
[] | true
|
[] |
758,016,032
| 1,221
|
Add HKCanCor
|
This PR adds the [Hong Kong Cantonese Corpus](http://compling.hss.ntu.edu.sg/hkcancor/), by [Luke and Wong 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf).
The dummy data included here was manually created, as the original dataset uses an XML-like format (see a copy hosted [here](https://github.com/fcbond/hkcancor/blob/master/sample/d1_v.txt) for example) that requires a few processing steps.
|
closed
|
https://github.com/huggingface/datasets/pull/1221
| 2020-12-06T20:32:07
| 2020-12-09T16:34:18
| 2020-12-09T16:34:18
|
{
"login": "j-chim",
"id": 22435209,
"type": "User"
}
|
[] | true
|
[] |
758,015,894
| 1,220
|
add Korean HateSpeech dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1220
| 2020-12-06T20:31:29
| 2020-12-08T15:21:09
| 2020-12-08T11:05:42
|
{
"login": "stevhliu",
"id": 59462357,
"type": "User"
}
|
[] | true
|
[] |
|
758,013,368
| 1,219
|
Add Korean NER dataset
|
Supersedes #1177
> This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
|
closed
|
https://github.com/huggingface/datasets/pull/1219
| 2020-12-06T20:19:06
| 2021-12-29T00:50:59
| 2020-12-08T10:25:33
|
{
"login": "jaketae",
"id": 25360440,
"type": "User"
}
|
[] | true
|
[] |
758,009,113
| 1,218
|
Add WMT20 MLQE 3 shared tasks
|
3 tasks for the WMT 20 MLQE shared tasks -> 3 different datasets
(I re-created #1137 because it was too messy).
Note that in `task3.py` (L199), I used `logging.warning` to report some missing data in the train set.
|
closed
|
https://github.com/huggingface/datasets/pull/1218
| 2020-12-06T19:59:12
| 2020-12-15T15:27:30
| 2020-12-15T15:27:29
|
{
"login": "VictorSanh",
"id": 16107619,
"type": "User"
}
|
[] | true
|
[] |
758,008,321
| 1,217
|
adding DataCommons fact checking
|
Adding the data from: https://datacommons.org/factcheck/
Had to cheat a bit with the dummy data, as the test doesn't recognize `.txt.gz`: had to rename the uncompressed files with the `.gz` extension manually without actually compressing them.
|
closed
|
https://github.com/huggingface/datasets/pull/1217
| 2020-12-06T19:56:12
| 2020-12-16T16:22:48
| 2020-12-16T16:22:48
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[] | true
|
[] |
758,005,982
| 1,216
|
Add limit
|
This PR adds [LiMiT](https://github.com/ilmgut/limit_dataset), a dataset for literal motion classification/extraction by [Manotas et al., 2020](https://www.aclweb.org/anthology/2020.findings-emnlp.88.pdf).
|
closed
|
https://github.com/huggingface/datasets/pull/1216
| 2020-12-06T19:46:18
| 2020-12-08T07:52:11
| 2020-12-08T07:52:11
|
{
"login": "j-chim",
"id": 22435209,
"type": "User"
}
|
[] | true
|
[] |
758,002,885
| 1,215
|
Add irc disentanglement
|
Added files for the IRC disentanglement dataset.
Was unable to test dummy data as a result of VPN/proxy issues.
|
closed
|
https://github.com/huggingface/datasets/pull/1215
| 2020-12-06T19:30:46
| 2020-12-16T16:18:25
| 2020-12-16T16:18:25
|
{
"login": "dhruvjoshi1998",
"id": 32560035,
"type": "User"
}
|
[] | true
|
[] |
758,002,786
| 1,214
|
adding medical-questions-pairs dataset
|
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.
Dataset : https://github.com/curai/medical-question-pair-dataset
Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view
|
closed
|
https://github.com/huggingface/datasets/pull/1214
| 2020-12-06T19:30:12
| 2020-12-09T14:42:53
| 2020-12-09T14:42:53
|
{
"login": "tuner007",
"id": 46425391,
"type": "User"
}
|
[] | true
|
[] |
757,983,884
| 1,213
|
add taskmaster3
|
Adding Taskmaster-3 dataset
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020.
The dataset structure is almost the same as the original dataset, with these two changes:
1. In the original dataset, each `apis` entry has an `args` field, which is a `dict` with variable keys representing the names and values of the args. Here that is converted to a `list` of `dict` with keys `arg_name` and `arg_value`. For example,
```python
args = {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}
```
becomes
```python
[
{
"arg_name": "name.movie",
"arg_value": "Mulan"
},
{
"arg_name": "name.theater",
"arg_value": "Mountain AMC 16"
}
]
```
2. Each `apis` entry has a `response`, which is also a `dict` with variable keys representing the response name/type and its value. As above, it is converted to a `list` of `dict` with keys `response_name` and `response_value`. A sketch of this conversion follows below.
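A minimal sketch of that conversion (the helper name is hypothetical; the loader in this PR may implement it differently):
```python
def dict_to_kv_list(d, key_name, value_name):
    """Turn a variable-key dict into a list of fixed-schema records."""
    return [{key_name: k, value_name: v} for k, v in d.items()]

args = {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}
dict_to_kv_list(args, "arg_name", "arg_value")
# -> [{"arg_name": "name.movie", "arg_value": "Mulan"},
#     {"arg_name": "name.theater", "arg_value": "Mountain AMC 16"}]
```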
|
closed
|
https://github.com/huggingface/datasets/pull/1213
| 2020-12-06T17:56:03
| 2020-12-09T11:05:10
| 2020-12-09T11:00:29
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
757,978,795
| 1,212
|
Add Sanskrit Classic texts in datasets
|
closed
|
https://github.com/huggingface/datasets/pull/1212
| 2020-12-06T17:31:31
| 2020-12-07T19:04:08
| 2020-12-07T19:04:08
|
{
"login": "parmarsuraj99",
"id": 9317265,
"type": "User"
}
|
[] | true
|
[] |
|
757,973,719
| 1,211
|
Add large spanish corpus
|
Adds a collection of Spanish corpora that can be useful for pretraining language models.
Following a nice suggestion from @yjernite, we provide the user with three main ways to preprocess / load:
* the whole corpus (17GB!)
* one specific sub-corpus
* the whole corpus, returned as a single split. This is useful if you want to cache the whole preprocessing step once and interact with individual sub-corpora (see the sketch below)
See the dataset card for more details.
Ready for review!
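As an illustration, the three access patterns might look like this (the config names below are hypothetical, not taken from this PR):
```python
from datasets import load_dataset

# the whole corpus (17GB!)
everything = load_dataset("large_spanish_corpus")
# one specific sub-corpus (config name hypothetical)
one_source = load_dataset("large_spanish_corpus", "DGT")
# the whole corpus as a single split (config name hypothetical)
combined = load_dataset("large_spanish_corpus", "combined")
```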
|
closed
|
https://github.com/huggingface/datasets/pull/1211
| 2020-12-06T17:06:50
| 2020-12-09T13:36:36
| 2020-12-09T13:36:36
|
{
"login": "lewtun",
"id": 26859204,
"type": "User"
}
|
[] | true
|
[] |
757,966,959
| 1,210
|
Add XSUM Hallucination Annotations Dataset
|
Adding Google [XSum Hallucination Annotations](https://github.com/google-research-datasets/xsum_hallucination_annotations) dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/1210
| 2020-12-06T16:40:19
| 2020-12-20T13:34:56
| 2020-12-16T16:57:11
|
{
"login": "vineeths96",
"id": 50873201,
"type": "User"
}
|
[] | true
|
[] |
757,965,934
| 1,209
|
[AfriBooms] Dataset exists already
|
When trying to add "AfriBooms" (https://docs.google.com/spreadsheets/d/12ShVow0M6RavnzbBEabm5j5dv12zBaf0y-niwEPPlo4/edit#gid=1386399609), I noticed that the dataset already exists as a config of Universal Dependencies (universal_dependencies.py). I checked, and the data matches exactly, so the new data link does not give any new data.
This PR improves the config's description a bit by linking to the paper.
|
closed
|
https://github.com/huggingface/datasets/pull/1209
| 2020-12-06T16:35:13
| 2020-12-07T16:52:24
| 2020-12-07T16:52:23
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true
|
[] |
757,961,368
| 1,208
|
Add HKCanCor
|
(Apologies, didn't manage the branches properly and the PR got too messy. Going to open a new PR with everything in order)
|
closed
|
https://github.com/huggingface/datasets/pull/1208
| 2020-12-06T16:14:43
| 2020-12-06T20:23:17
| 2020-12-06T20:21:54
|
{
"login": "j-chim",
"id": 22435209,
"type": "User"
}
|
[] | true
|
[] |
757,953,830
| 1,207
|
Add msr_genomics_kbcomp Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1207
| 2020-12-06T15:40:05
| 2020-12-07T15:55:17
| 2020-12-07T15:55:11
|
{
"login": "manandey",
"id": 6687858,
"type": "User"
}
|
[] | true
|
[] |
|
757,952,992
| 1,206
|
Adding Enriched WebNLG dataset
|
This pull requests adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1206
| 2020-12-06T15:36:20
| 2023-09-24T09:51:43
| 2020-12-09T09:40:32
|
{
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
}
|
[] | true
|
[] |
757,942,403
| 1,205
|
add lst20 with manual download
|
Passed locally:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20
```
Not sure how to test:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20
```
```
LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with
16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is
considered large enough for developing joint neural models for NLP.
Manually download at https://aiforthai.in.th/corpus.php
```
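Since the corpus must be downloaded manually, loading presumably follows the usual `data_dir` pattern for manual-download datasets (the local path is illustrative):
```python
from datasets import load_dataset

# after manually downloading the corpus from https://aiforthai.in.th/corpus.php
ds = load_dataset("lst20", data_dir="/path/to/LST20_Corpus")
```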
|
closed
|
https://github.com/huggingface/datasets/pull/1205
| 2020-12-06T14:49:10
| 2020-12-09T16:33:10
| 2020-12-09T16:33:10
|
{
"login": "cstorm125",
"id": 15519308,
"type": "User"
}
|
[] | true
|
[] |
757,939,475
| 1,204
|
adding meta_woz dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1204
| 2020-12-06T14:34:13
| 2020-12-16T15:05:25
| 2020-12-16T15:05:24
|
{
"login": "pacman100",
"id": 13534540,
"type": "User"
}
|
[] | true
|
[] |
|
757,935,170
| 1,203
|
Add Neural Code Search Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1203
| 2020-12-06T14:12:39
| 2020-12-09T16:40:15
| 2020-12-09T16:40:15
|
{
"login": "vinaykudari",
"id": 34424769,
"type": "User"
}
|
[] | true
|
[] |
|
757,934,408
| 1,202
|
Medical question pairs
|
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.
Dataset : https://github.com/curai/medical-question-pair-dataset
Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view
**No splits added**
|
closed
|
https://github.com/huggingface/datasets/pull/1202
| 2020-12-06T14:09:07
| 2020-12-06T17:41:28
| 2020-12-06T17:41:28
|
{
"login": "tuner007",
"id": 46425391,
"type": "User"
}
|
[] | true
|
[] |
757,927,941
| 1,201
|
adding medical-questions-pairs
|
closed
|
https://github.com/huggingface/datasets/pull/1201
| 2020-12-06T13:36:52
| 2020-12-06T13:39:44
| 2020-12-06T13:39:32
|
{
"login": "tuner007",
"id": 46425391,
"type": "User"
}
|
[] | true
|
[] |
|
757,926,823
| 1,200
|
Update ADD_NEW_DATASET.md
|
Windows needs special treatment again: unfortunately, adding `torch` to the requirements does not work well (it crashes the installation). Users should first install torch manually and then continue with the other commands.
This issue arises all the time when adding torch as a dependency, but because so many novice users seem to participate in adding datasets, it may be useful to add an explicit note for Windows users to ensure that they do not run into issues.
|
closed
|
https://github.com/huggingface/datasets/pull/1200
| 2020-12-06T13:31:32
| 2020-12-07T08:32:39
| 2020-12-07T08:32:39
|
{
"login": "BramVanroy",
"id": 2779410,
"type": "User"
}
|
[] | true
|
[] |
757,909,237
| 1,199
|
Turkish NER dataset, script works fine, couldn't generate dummy data
|
I've written the script (Turkish_NER.py) that includes the dataset. The dataset is a zip inside another zip, and it's extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After I ran the script with no error messages, I got the .arrow file of the dataset, the LICENSE, and dataset_info.json.
|
closed
|
https://github.com/huggingface/datasets/pull/1199
| 2020-12-06T12:00:03
| 2020-12-16T16:13:24
| 2020-12-16T16:13:24
|
{
"login": "merveenoyan",
"id": 53175384,
"type": "User"
}
|
[] | true
|
[] |
757,903,453
| 1,198
|
Add ALT
|
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
|
closed
|
https://github.com/huggingface/datasets/pull/1198
| 2020-12-06T11:25:30
| 2020-12-10T04:18:12
| 2020-12-10T04:18:12
|
{
"login": "chameleonTK",
"id": 6429850,
"type": "User"
}
|
[] | true
|
[] |
757,900,160
| 1,197
|
add taskmaster-2
|
Adding taskmaster-2 dataset.
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020
|
closed
|
https://github.com/huggingface/datasets/pull/1197
| 2020-12-06T11:05:18
| 2020-12-07T15:22:43
| 2020-12-07T15:22:43
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
757,894,920
| 1,196
|
Add IWSLT'15 English-Vietnamese machine translation Data
|
Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task,
from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
|
closed
|
https://github.com/huggingface/datasets/pull/1196
| 2020-12-06T10:36:31
| 2020-12-11T18:26:51
| 2020-12-11T18:26:51
|
{
"login": "Nilanshrajput",
"id": 28673745,
"type": "User"
}
|
[] | true
|
[] |
757,889,045
| 1,195
|
addition of py_ast
|
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool.
The Python programs were collected from GitHub repositories by removing duplicate files, removing project forks (copies of other existing repositories), keeping only programs that parse and have at most 30,000 nodes in the AST, and aiming to remove obfuscated files.
|
closed
|
https://github.com/huggingface/datasets/pull/1195
| 2020-12-06T10:00:52
| 2020-12-08T06:19:24
| 2020-12-08T06:19:24
|
{
"login": "reshinthadithyan",
"id": 36307201,
"type": "User"
}
|
[] | true
|
[] |
757,880,647
| 1,194
|
Add msr_text_compression
|
Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)
|
closed
|
https://github.com/huggingface/datasets/pull/1194
| 2020-12-06T09:06:11
| 2020-12-09T10:53:45
| 2020-12-09T10:53:45
|
{
"login": "jeromeku",
"id": 2455711,
"type": "User"
}
|
[] | true
|
[] |
757,840,830
| 1,193
|
add taskmaster-1
|
Adding Taskmaster-1 dataset
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019
|
closed
|
https://github.com/huggingface/datasets/pull/1193
| 2020-12-06T04:09:57
| 2020-12-07T15:23:24
| 2020-12-07T15:08:39
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
757,839,671
| 1,192
|
Add NewsPH_NLI dataset
|
This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language, constructed by exploiting the structure of news articles. It contains 600,000 premise-hypothesis pairs, in a 70-15-15 split for training, validation, and testing.
Link to the paper: https://arxiv.org/pdf/2010.11574.pdf
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
|
closed
|
https://github.com/huggingface/datasets/pull/1192
| 2020-12-06T04:00:31
| 2020-12-07T15:39:43
| 2020-12-07T15:39:43
|
{
"login": "anaerobeth",
"id": 3663322,
"type": "User"
}
|
[] | true
|
[] |
757,836,654
| 1,191
|
Added Translator Human Parity Data For a Chinese-English news transla…
|
…tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab.
|
closed
|
https://github.com/huggingface/datasets/pull/1191
| 2020-12-06T03:34:13
| 2020-12-09T13:22:45
| 2020-12-09T13:22:45
|
{
"login": "leoxzhao",
"id": 7915719,
"type": "User"
}
|
[] | true
|
[] |
757,833,698
| 1,190
|
Add Fake News Detection in Filipino dataset
|
This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpus in Filipino. It contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake.
Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html
Link to the dataset/repo: https://github.com/jcblaisecruz02/Tagalog-fake-news
|
closed
|
https://github.com/huggingface/datasets/pull/1190
| 2020-12-06T03:12:15
| 2020-12-07T15:39:27
| 2020-12-07T15:39:27
|
{
"login": "anaerobeth",
"id": 3663322,
"type": "User"
}
|
[] | true
|
[] |
757,831,035
| 1,189
|
Add Dengue dataset in Filipino
|
This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled with one or more of five classes (a sample can belong to multiple classes). Collected as tweets.
Link to the paper: https://ieeexplore.ieee.org/document/8459963
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
|
closed
|
https://github.com/huggingface/datasets/pull/1189
| 2020-12-06T02:50:47
| 2020-12-07T15:38:58
| 2020-12-07T15:38:58
|
{
"login": "anaerobeth",
"id": 3663322,
"type": "User"
}
|
[] | true
|
[] |
757,827,407
| 1,188
|
adding hind_encorp dataset
|
adding Hindi_Encorp05 dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1188
| 2020-12-06T02:18:45
| 2020-12-11T17:40:41
| 2020-12-11T17:40:41
|
{
"login": "rahul-art",
"id": 56379013,
"type": "User"
}
|
[] | true
|
[] |
757,826,707
| 1,187
|
Added AQUA-RAT (Algebra Question Answering with Rationales) Dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1187
| 2020-12-06T02:12:52
| 2020-12-07T15:37:12
| 2020-12-07T15:37:12
|
{
"login": "arkhalid",
"id": 14899066,
"type": "User"
}
|
[] | true
|
[] |
|
757,826,660
| 1,186
|
all test passed
|
need help creating dummy data
|
closed
|
https://github.com/huggingface/datasets/pull/1186
| 2020-12-06T02:12:32
| 2020-12-07T15:06:55
| 2020-12-07T15:06:55
|
{
"login": "rahul-art",
"id": 56379013,
"type": "User"
}
|
[] | true
|
[] |
757,825,413
| 1,185
|
Add Hate Speech Dataset in Filipino
|
This PR adds the Hate Speech Dataset, a text classification dataset in Filipino, consisting of 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine presidential elections.
Link to the paper: https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
|
closed
|
https://github.com/huggingface/datasets/pull/1185
| 2020-12-06T02:01:56
| 2020-12-07T15:35:33
| 2020-12-07T15:35:33
|
{
"login": "anaerobeth",
"id": 3663322,
"type": "User"
}
|
[] | true
|
[] |
757,807,583
| 1,184
|
Add Adversarial SQuAD dataset
|
# Adversarial SQuAD
Adding the Adversarial [SQuAD](https://github.com/robinjia/adversarial-squad) dataset as part of the sprint 🎉
This dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. How to get the original SQuAD example id is explained in readme -> Data Instances. The whole dataset is intended for use in evaluation (though it could of course also be used for training if one wants), so there is no classical train/val/test split, but a split based on the number of adversaries added.
There are 2 splits of this dataset:
- AddSent: Has up to five candidate adversarial sentences that don't answer the question but have a lot of words in common with the question. This adversary does not query the model in any way.
- AddOneSent: Similar to AddSent, but just one candidate sentence is picked at random. This adversary does not query the model in any way.
(The AddAny and AddCommon datasets mentioned in the paper are dynamically generated based on the model's output distribution and thus are not included here.)
The failing test looks like some unrelated timeout thing; it will probably clear on a rerun.
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
|
closed
|
https://github.com/huggingface/datasets/pull/1184
| 2020-12-05T23:51:57
| 2020-12-16T16:12:58
| 2020-12-16T16:12:58
|
{
"login": "cceyda",
"id": 15624271,
"type": "User"
}
|
[] | true
|
[] |
757,806,570
| 1,183
|
add mkb dataset
|
This PR will add Mann Ki Baat dataset (parallel data for Indian languages).
|
closed
|
https://github.com/huggingface/datasets/pull/1183
| 2020-12-05T23:44:33
| 2020-12-09T09:38:50
| 2020-12-09T09:38:50
|
{
"login": "thevasudevgupta",
"id": 53136577,
"type": "User"
}
|
[] | true
|
[] |
757,804,877
| 1,182
|
ADD COVID-QA dataset
|
This PR adds the COVID-QA dataset, a question answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19
Link to the paper: https://openreview.net/forum?id=JENSKEEzsoU
Link to the dataset/repo: https://github.com/deepset-ai/COVID-QA
|
closed
|
https://github.com/huggingface/datasets/pull/1182
| 2020-12-05T23:31:56
| 2020-12-28T13:23:14
| 2020-12-07T14:23:27
|
{
"login": "olinguyen",
"id": 4341867,
"type": "User"
}
|
[] | true
|
[] |
757,791,992
| 1,181
|
added emotions detection in arabic dataset
|
Dataset for Emotions detection in Arabic text
more info: https://github.com/AmrMehasseb/Emotional-Tone
|
closed
|
https://github.com/huggingface/datasets/pull/1181
| 2020-12-05T22:08:46
| 2020-12-21T09:53:51
| 2020-12-21T09:53:51
|
{
"login": "abdulelahsm",
"id": 28743265,
"type": "User"
}
|
[] | true
|
[] |
757,784,612
| 1,180
|
Add KorQuAD v2 Dataset
|
# The Korean Question Answering Dataset v2
Adding the [KorQuAD](https://korquad.github.io/) v2 dataset as part of the sprint 🎉
This dataset is very similar to SQuAD and is an extension of [squad_kor_v1](https://github.com/huggingface/datasets/pull/1178) which is why I added it as `squad_kor_v2`.
- Crowd-generated questions and answers (one answer per question) for Wikipedia articles. Unlike v1, it includes the HTML structure and markup, which makes it a different enough dataset. (It doesn't share ids between v1 and v2 either.)
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
Edit: 🤦 looks like squad_kor_v1 commit sneaked in here too
|
closed
|
https://github.com/huggingface/datasets/pull/1180
| 2020-12-05T21:33:34
| 2020-12-16T16:10:30
| 2020-12-16T16:10:30
|
{
"login": "cceyda",
"id": 15624271,
"type": "User"
}
|
[] | true
|
[] |
757,784,074
| 1,179
|
Small update to the doc: add flatten_indices in doc
|
Small update to the doc: add flatten_indices in doc
|
closed
|
https://github.com/huggingface/datasets/pull/1179
| 2020-12-05T21:30:10
| 2020-12-07T13:42:57
| 2020-12-07T13:42:56
|
{
"login": "thomwolf",
"id": 7353373,
"type": "User"
}
|
[] | true
|
[] |
757,783,435
| 1,178
|
Add KorQuAD v1 Dataset
|
# The Korean Question Answering Dataset
Adding the [KorQuAD](https://korquad.github.io/KorQuad%201.0/) v1 dataset as part of the sprint 🎉
This dataset is very similar to SQuAD which is why I added it as `squad_kor_v1`. There is also a v2 which I added [here](https://github.com/huggingface/datasets/pull/1180).
- Crowd-generated questions and answers (one answer per question) for Wikipedia articles.
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
|
closed
|
https://github.com/huggingface/datasets/pull/1178
| 2020-12-05T21:25:46
| 2020-12-07T13:41:37
| 2020-12-07T13:41:37
|
{
"login": "cceyda",
"id": 15624271,
"type": "User"
}
|
[] | true
|
[] |
757,778,684
| 1,177
|
Add Korean NER dataset
|
This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
|
closed
|
https://github.com/huggingface/datasets/pull/1177
| 2020-12-05T20:56:00
| 2020-12-06T20:19:48
| 2020-12-06T20:19:48
|
{
"login": "jaketae",
"id": 25360440,
"type": "User"
}
|
[] | true
|
[] |
757,778,365
| 1,176
|
Add OpenPI Dataset
|
Add the OpenPI Dataset by AI2 (AllenAI)
|
closed
|
https://github.com/huggingface/datasets/pull/1176
| 2020-12-05T20:54:06
| 2022-10-03T09:39:54
| 2022-10-03T09:39:54
|
{
"login": "bharatr21",
"id": 13381361,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
757,770,077
| 1,175
|
added ReDial dataset
|
Updating README
Dataset link: https://redialdata.github.io/website/datasheet
|
closed
|
https://github.com/huggingface/datasets/pull/1175
| 2020-12-05T20:04:18
| 2020-12-07T13:21:43
| 2020-12-07T13:21:43
|
{
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
}
|
[] | true
|
[] |
757,768,474
| 1,174
|
Add Universal Morphologies
|
Adding UniMorph universal morphology annotations for 110 languages, phew!!!
One lemma per row with all possible forms and annotations.
https://unimorph.github.io/
|
closed
|
https://github.com/huggingface/datasets/pull/1174
| 2020-12-05T19:54:43
| 2021-01-26T16:50:16
| 2021-01-26T16:41:48
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[] | true
|
[] |
757,761,967
| 1,173
|
add wikipedia biography dataset
|
My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). It passes all the tests.
|
closed
|
https://github.com/huggingface/datasets/pull/1173
| 2020-12-05T19:14:50
| 2020-12-07T11:13:14
| 2020-12-07T11:13:14
|
{
"login": "alejandrocros",
"id": 39712560,
"type": "User"
}
|
[] | true
|
[] |
757,758,532
| 1,172
|
Add proto_qa dataset
|
Added dataset tags as required.
|
closed
|
https://github.com/huggingface/datasets/pull/1172
| 2020-12-05T18:55:04
| 2020-12-07T11:12:24
| 2020-12-07T11:12:24
|
{
"login": "bpatidar",
"id": 12439573,
"type": "User"
}
|
[] | true
|
[] |
757,757,000
| 1,171
|
Add imdb Urdu Reviews dataset.
|
Added the imdb Urdu reviews dataset. More info about the dataset over <a href="https://github.com/mirfan899/Urdu">here</a>.
|
closed
|
https://github.com/huggingface/datasets/pull/1171
| 2020-12-05T18:46:05
| 2020-12-07T11:11:17
| 2020-12-07T11:11:17
|
{
"login": "chaitnayabasava",
"id": 44389205,
"type": "User"
}
|
[] | true
|
[] |
757,754,378
| 1,170
|
Fix path handling for Windows
|
closed
|
https://github.com/huggingface/datasets/pull/1170
| 2020-12-05T18:31:54
| 2020-12-07T10:47:23
| 2020-12-07T10:47:23
|
{
"login": "edugp",
"id": 17855740,
"type": "User"
}
|
[] | true
|
[] |
|
757,747,997
| 1,169
|
Add Opus fiskmo dataset for Finnish and Swedish for MT task
|
Adding fiskmo, a massive parallel corpus for Finnish and Swedish.
for more info : http://opus.nlpl.eu/fiskmo.php
|
closed
|
https://github.com/huggingface/datasets/pull/1169
| 2020-12-05T17:56:55
| 2020-12-07T11:04:11
| 2020-12-07T11:04:11
|
{
"login": "spatil6",
"id": 6419011,
"type": "User"
}
|
[] | true
|
[] |
757,740,780
| 1,168
|
Add Naver sentiment movie corpus
|
This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
|
closed
|
https://github.com/huggingface/datasets/pull/1168
| 2020-12-05T17:25:23
| 2020-12-07T13:34:09
| 2020-12-07T13:34:09
|
{
"login": "jaketae",
"id": 25360440,
"type": "User"
}
|
[] | true
|
[] |
757,722,921
| 1,167
|
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
|
Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern.
I guess the solution would entail wrapping a dataset into a Pytorch dataset.
As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html)
```python
import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        # instead of doing this beforehand, I'd like to do tokenization on the fly
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)

train_dataset = SquadDataset(train_encodings)
```
How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers?
----
Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant
```python
class CustomPytorchDataset(Dataset):
    def __init__(self):
        self.dataset = some_hf_dataset(...)
        self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    def __getitem__(self, batch_idx):
        instance = self.dataset[text_col][batch_idx]
        tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
        return tokenized_text

    def __len__(self):
        return len(self.dataset)

    @staticmethod
    def collate_fn(batch):
        # batch is a list; however, it will always contain 1 item, because we should not use
        # the batch_size argument, as batch_size is controlled by the sampler
        return {k: torch.tensor(v) for k, v in batch[0].items()}

torch_ds = CustomPytorchDataset()

# NOTE: batch_sampler returns lists of integers, and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check by calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)

# NOTE: no `batch_size`, as it is now controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
```
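For reference, later releases of `datasets` added `with_transform`, which applies a function lazily on `__getitem__` and makes on-the-fly tokenization considerably simpler; a minimal sketch (dataset and column names are illustrative):
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast, DataCollatorWithPadding

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
ds = load_dataset("imdb", split="train")

def tokenize(batch):
    # runs lazily on each accessed batch instead of as an upfront preprocessing step
    return tokenizer(batch["text"], truncation=True)

dl = DataLoader(
    ds.with_transform(tokenize),
    batch_size=8,
    collate_fn=DataCollatorWithPadding(tokenizer),  # pad dynamically per batch
)
```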
|
closed
|
https://github.com/huggingface/datasets/issues/1167
| 2020-12-05T17:02:56
| 2023-07-20T15:49:42
| 2023-07-20T15:49:42
|
{
"login": "pietrolesci",
"id": 61748653,
"type": "User"
}
|
[
{
"name": "question",
"color": "d876e3"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false
|
[] |
757,721,208
| 1,166
|
Opus montenegrinsubs
|
Opus montenegrinsubs - language pair en-me
more info : http://opus.nlpl.eu/MontenegrinSubs.php
|
closed
|
https://github.com/huggingface/datasets/pull/1166
| 2020-12-05T17:00:44
| 2020-12-07T11:02:49
| 2020-12-07T11:02:49
|
{
"login": "spatil6",
"id": 6419011,
"type": "User"
}
|
[] | true
|
[] |
757,720,226
| 1,165
|
Add ar rest reviews
|
added restaurants reviews in Arabic for sentiment analysis tasks
|
closed
|
https://github.com/huggingface/datasets/pull/1165
| 2020-12-05T16:56:42
| 2020-12-21T17:06:23
| 2020-12-21T17:06:23
|
{
"login": "abdulelahsm",
"id": 28743265,
"type": "User"
}
|
[] | true
|
[] |
757,716,575
| 1,164
|
Add DaNe dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1164
| 2020-12-05T16:36:50
| 2020-12-08T12:50:18
| 2020-12-08T12:49:55
|
{
"login": "ophelielacroix",
"id": 28562991,
"type": "User"
}
|
[] | true
|
[] |
|
757,711,340
| 1,163
|
Added memat : Xhosa-English parallel corpora
|
Added memat : Xhosa-English parallel corpora
for more info : http://opus.nlpl.eu/memat.php
|
closed
|
https://github.com/huggingface/datasets/pull/1163
| 2020-12-05T16:08:50
| 2020-12-07T10:40:24
| 2020-12-07T10:40:24
|
{
"login": "spatil6",
"id": 6419011,
"type": "User"
}
|
[] | true
|
[] |
757,707,085
| 1,162
|
Add Mocha dataset
|
More information: https://allennlp.org/mocha
|
closed
|
https://github.com/huggingface/datasets/pull/1162
| 2020-12-05T15:45:14
| 2020-12-07T10:09:39
| 2020-12-07T10:09:39
|
{
"login": "mattbui",
"id": 46804938,
"type": "User"
}
|
[] | true
|
[] |
757,705,286
| 1,161
|
Linguisticprobing
|
Adding Linguistic probing datasets from
What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties
https://www.aclweb.org/anthology/P18-1198/
|
closed
|
https://github.com/huggingface/datasets/pull/1161
| 2020-12-05T15:35:18
| 2022-10-03T09:40:04
| 2022-10-03T09:40:04
|
{
"login": "sileod",
"id": 9168444,
"type": "User"
}
|
[
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true
|
[] |
757,677,188
| 1,160
|
adding TabFact dataset
|
Adding TabFact: A Large-scale Dataset for Table-based Fact Verification.
https://github.com/wenhuchen/Table-Fact-Checking
- The tables are stored as individual csv files, so 16,573 🤯 csv files need to be downloaded. As a result, the `datasets_infos.json` file is huge (6.62 MB).
- The original dataset has a nested structure where a table is one example and each table has multiple statements; the structure is flattened here so that each statement is one example, as sketched below.
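The flattening could look roughly like this (field names are hypothetical, not taken from this PR):
```python
def flatten(tables):
    # one nested example per table -> one flat example per statement
    for table in tables:
        for statement, label in zip(table["statements"], table["labels"]):
            yield {
                "table_id": table["id"],
                "table_text": table["text"],
                "statement": statement,
                "label": label,
            }
```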
|
closed
|
https://github.com/huggingface/datasets/pull/1160
| 2020-12-05T13:05:52
| 2020-12-09T11:41:39
| 2020-12-09T09:12:41
|
{
"login": "patil-suraj",
"id": 27137566,
"type": "User"
}
|
[] | true
|
[] |
757,661,128
| 1,159
|
Add Roman Urdu dataset
|
This PR adds the [Roman Urdu dataset](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set#).
|
closed
|
https://github.com/huggingface/datasets/pull/1159
| 2020-12-05T11:36:43
| 2020-12-07T13:41:21
| 2020-12-07T09:59:03
|
{
"login": "jaketae",
"id": 25360440,
"type": "User"
}
|
[] | true
|
[] |
757,658,926
| 1,158
|
Add BBC Hindi NLI Dataset
|
# Dataset Card for BBC Hindi NLI Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- HomePage : https://github.com/midas-research/hindi-nli-data
- Paper : "https://www.aclweb.org/anthology/2020.aacl-main.71"
- Point of Contact : https://github.com/midas-research/hindi-nli-data
### Dataset Summary
- Dataset for Natural Language Inference in the Hindi language. The BBC Hindi dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic.
- The Premise and Hypothesis are written in Hindi, while the Entailment_Label is in English.
- The Entailment_label is of 2 types - entailed and not-entailed.
- The dataset can be used to train models for Natural Language Inference tasks in the Hindi language.
[More Information Needed]
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
Dataset is in Hindi
## Dataset Structure
- Data is structured in TSV format.
- Train and test data are in separate files.
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'}
```
### Data Fields
- Each row contains 4 columns - Premise, Hypothesis, Label and Topic.
### Data Splits
- Train : 15553
- Valid : 2581
- Test : 2593
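A quick usage sketch (the loader name `bbc_hindi_nli` is an assumption based on this PR's title, and the field names follow the instance above):
```python
from datasets import load_dataset

ds = load_dataset("bbc_hindi_nli")
print(ds["train"][0])
# {'premise': ..., 'hypothesis': ..., 'label': 'entailed', 'topic': ...}
```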
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems
- In this recasting process, we build template hypotheses for each class in the label taxonomy
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.
- For more information on the recasting process, refer to paper "https://www.aclweb.org/anthology/2020.aacl-main.71"
### Source Data
Source Dataset for the recasting process is the BBC Hindi Headlines Dataset(https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1)
#### Initial Data Collection and Normalization
- The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, international, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia
- We processed this dataset to combine two sets of relevant but low prevalence classes.
- Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international.
- Likewise, we also merged samples from news, business, social, learning english, and institutional as news.
- Lastly, we also removed the class multimedia because there were very few samples.
#### Who are the source language producers?
Pls refer to this paper: "https://www.aclweb.org/anthology/2020.aacl-main.71"
### Annotations
#### Annotation process
Annotation process has been described in Dataset Creation Section.
#### Who are the annotators?
Annotation is done automatically.
### Personal and Sensitive Information
No Personal and Sensitive Information is mentioned in the Datasets.
## Considerations for Using the Data
Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations
## Additional Information
Pls refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
The repo (https://github.com/avinsit123/hindi-nli-data) states that:
- This corpus can be used freely for research purposes.
- The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Pls contact authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
|
closed
|
https://github.com/huggingface/datasets/pull/1158
| 2020-12-05T11:25:34
| 2021-02-05T09:48:31
| 2021-02-05T09:48:31
|
{
"login": "avinsit123",
"id": 33565881,
"type": "User"
}
|
[] | true
|
[] |
757,657,888
| 1,157
|
Add dataset XhosaNavy English -Xhosa
|
Add dataset XhosaNavy English-Xhosa
More info : http://opus.nlpl.eu/XhosaNavy.php
|
closed
|
https://github.com/huggingface/datasets/pull/1157
| 2020-12-05T11:19:54
| 2020-12-07T09:11:33
| 2020-12-07T09:11:33
|
{
"login": "spatil6",
"id": 6419011,
"type": "User"
}
|
[] | true
|
[] |
757,656,094
| 1,156
|
add telugu-news corpus
|
Adding Telugu News Corpus to datasets.
|
closed
|
https://github.com/huggingface/datasets/pull/1156
| 2020-12-05T11:07:56
| 2020-12-07T09:08:48
| 2020-12-07T09:08:48
|
{
"login": "oostopitre",
"id": 3135345,
"type": "User"
}
|
[] | true
|
[] |
757,652,517
| 1,155
|
Add BSD
|
This PR adds BSD, the Japanese-English business dialogue corpus by
[Rikters et al., 2020](https://www.aclweb.org/anthology/D19-5204.pdf).
|
closed
|
https://github.com/huggingface/datasets/pull/1155
| 2020-12-05T10:43:48
| 2020-12-07T09:27:46
| 2020-12-07T09:27:46
|
{
"login": "j-chim",
"id": 22435209,
"type": "User"
}
|
[] | true
|
[] |
757,651,669
| 1,154
|
Opus sardware
|
Added Opus sardware dataset for machine translation English to Sardinian.
for more info : http://opus.nlpl.eu/sardware.php
|
closed
|
https://github.com/huggingface/datasets/pull/1154
| 2020-12-05T10:38:02
| 2020-12-05T17:05:45
| 2020-12-05T17:05:45
|
{
"login": "spatil6",
"id": 6419011,
"type": "User"
}
|
[] | true
|
[] |
757,643,302
| 1,153
|
Adding dataset for proto_qa in huggingface datasets library
|
Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning
Followed all steps for adding a new dataset.
|
closed
|
https://github.com/huggingface/datasets/pull/1153
| 2020-12-05T09:43:28
| 2020-12-05T18:53:10
| 2020-12-05T18:53:10
|
{
"login": "bpatidar",
"id": 12439573,
"type": "User"
}
|
[] | true
|
[] |
757,640,506
| 1,152
|
hindi discourse analysis dataset commit
|
closed
|
https://github.com/huggingface/datasets/pull/1152
| 2020-12-05T09:24:01
| 2020-12-14T19:44:48
| 2020-12-14T19:44:48
|
{
"login": "duttahritwik",
"id": 31453142,
"type": "User"
}
|
[] | true
|
[] |
|
757,517,092
| 1,151
|
adding psc dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1151
| 2020-12-05T02:40:01
| 2020-12-09T11:38:41
| 2020-12-09T11:38:41
|
{
"login": "abecadel",
"id": 1654113,
"type": "User"
}
|
[] | true
|
[] |
|
757,512,441
| 1,150
|
adding dyk dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1150
| 2020-12-05T02:11:42
| 2020-12-05T16:52:19
| 2020-12-05T16:52:19
|
{
"login": "abecadel",
"id": 1654113,
"type": "User"
}
|
[] | true
|
[] |
|
757,504,068
| 1,149
|
Fix typo in the comment in _info function
|
closed
|
https://github.com/huggingface/datasets/pull/1149
| 2020-12-05T01:26:20
| 2020-12-05T16:19:26
| 2020-12-05T16:19:26
|
{
"login": "vinaykudari",
"id": 34424769,
"type": "User"
}
|
[] | true
|
[] |
|
757,503,918
| 1,148
|
adding polemo2 dataset
|
closed
|
https://github.com/huggingface/datasets/pull/1148
| 2020-12-05T01:25:29
| 2020-12-05T16:51:39
| 2020-12-05T16:51:39
|
{
"login": "abecadel",
"id": 1654113,
"type": "User"
}
|
[] | true
|
[] |
|
757,502,199
| 1,147
|
Vinay/add/telugu books
|
Real data tests are failing as this dataset needs to be manually downloaded
|
closed
|
https://github.com/huggingface/datasets/pull/1147
| 2020-12-05T01:17:02
| 2020-12-05T16:36:04
| 2020-12-05T16:36:04
|
{
"login": "vinaykudari",
"id": 34424769,
"type": "User"
}
|
[] | true
|
[] |
757,498,565
| 1,146
|
Add LINNAEUS
|
closed
|
https://github.com/huggingface/datasets/pull/1146
| 2020-12-05T01:01:09
| 2020-12-05T16:35:53
| 2020-12-05T16:35:53
|
{
"login": "edugp",
"id": 17855740,
"type": "User"
}
|
[] | true
|
[] |
|
757,477,349
| 1,145
|
Add Species-800
|
closed
|
https://github.com/huggingface/datasets/pull/1145
| 2020-12-04T23:44:51
| 2022-01-13T03:09:20
| 2020-12-05T16:35:01
|
{
"login": "edugp",
"id": 17855740,
"type": "User"
}
|
[] | true
|
[] |
|
757,452,831
| 1,144
|
Add JFLEG
|
This PR adds [JFLEG ](https://www.aclweb.org/anthology/E17-2037/), an English grammatical error correction benchmark.
The tests were successful on real data, although it would be great if I could get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target sentences**. The original dataset comprises files in a flat structure, labelled by split and then by source/target (e.g., dev.src, dev.ref0, ..., dev.ref3). Not sure what the best way of adding this is.
I imagine I could treat each distinct source-target pair as its own split? But having so many copies of the source sentence feels redundant, and it would make it less convenient for end-users who might want to access multiple gold standard targets simultaneously. (One alternative is sketched below.)
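One alternative, sketched here with hypothetical field names, is to keep a single split and attach all four references to each example as a `Sequence` feature:
```python
import datasets

features = datasets.Features({
    "sentence": datasets.Value("string"),
    # all four gold-standard references live on the one example,
    # instead of one near-duplicate split per source/target pair
    "corrections": datasets.Sequence(datasets.Value("string")),
})
```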
|
closed
|
https://github.com/huggingface/datasets/pull/1144
| 2020-12-04T22:36:38
| 2020-12-06T18:16:04
| 2020-12-06T18:16:04
|
{
"login": "j-chim",
"id": 22435209,
"type": "User"
}
|
[] | true
|
[] |
757,448,920
| 1,143
|
Add the Winograd Schema Challenge
|
Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples.
- https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html
The data format was a bit of a nightmare but I think I got it to a workable format.
|
closed
|
https://github.com/huggingface/datasets/pull/1143
| 2020-12-04T22:26:59
| 2020-12-09T15:11:31
| 2020-12-09T09:32:34
|
{
"login": "joeddav",
"id": 9353833,
"type": "User"
}
|
[] | true
|
[] |
757,413,920
| 1,142
|
Fix PerSenT
|
New PR for dataset PerSenT
|
closed
|
https://github.com/huggingface/datasets/pull/1142
| 2020-12-04T21:21:02
| 2020-12-14T13:39:34
| 2020-12-14T13:39:34
|
{
"login": "jeromeku",
"id": 2455711,
"type": "User"
}
|
[] | true
|
[] |
757,411,057
| 1,141
|
Add GitHub version of ETH Py150 Corpus
|
Add the redistributable version of **ETH Py150 Corpus**
|
closed
|
https://github.com/huggingface/datasets/pull/1141
| 2020-12-04T21:16:08
| 2020-12-09T18:32:44
| 2020-12-07T10:00:24
|
{
"login": "bharatr21",
"id": 13381361,
"type": "User"
}
|
[] | true
|
[] |
757,399,142
| 1,140
|
Add Urdu Sentiment Corpus (USC).
|
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
|
closed
|
https://github.com/huggingface/datasets/pull/1140
| 2020-12-04T20:55:27
| 2020-12-07T03:27:23
| 2020-12-07T03:27:23
|
{
"login": "chaitnayabasava",
"id": 44389205,
"type": "User"
}
|
[] | true
|
[] |
757,393,158
| 1,139
|
Add ReFreSD dataset
|
This PR adds the **ReFreSD dataset**.
The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.
Need feedback on:
- I couldn't generate the dummy data. The file we download is a tsv file, but without an extension; I suppose this is the problem. I'm sure there is a simple trick to make this work.
- The feature names.
- I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to use `sentence_en`, `sentence_fr` to be more explicit.
- There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels`, but I'm sure there is something better.
- The rationales are lists of integers, extracted as strings at first. I wonder what's the best way to treat them, any idea? Also, I couldn't manage to make a `Sequence` of `int8`, but I'm sure I've missed something simple (see the sketch below).
Thanks in advance
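On the rationales question: a `Sequence` of 8-bit integers can be declared directly in the features, with the raw string parsed before yielding; a small sketch (the raw format shown is an assumption):
```python
import datasets

# declare the rationale column as a sequence of 8-bit integers
rationale_feature = datasets.Sequence(datasets.Value("int8"))

def parse_rationale(raw: str):
    # e.g. "0 1 1 0 0" -> [0, 1, 1, 0, 0]; assumes whitespace-separated integers
    return [int(tok) for tok in raw.split()]
```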
|
closed
|
https://github.com/huggingface/datasets/pull/1139
| 2020-12-04T20:45:11
| 2020-12-16T16:01:18
| 2020-12-16T16:01:18
|
{
"login": "mpariente",
"id": 18496796,
"type": "User"
}
|
[] | true
|
[] |
757,378,406
| 1,138
|
updated after the class name update
|
@lhoestq <---
|
closed
|
https://github.com/huggingface/datasets/pull/1138
| 2020-12-04T20:19:43
| 2020-12-05T15:43:32
| 2020-12-05T15:43:32
|
{
"login": "timpal0l",
"id": 6556710,
"type": "User"
}
|
[] | true
|
[] |
757,358,145
| 1,137
|
add wmt mlqe 2020 shared task
|
First commit for shared task 1 (`wmt_mlqe_task1`) of WMT20 MLQE (quality estimation of machine translation).
Note that I copied the tags in the README for only one (of the 7 configurations): `en-de`.
There is one configuration for each pair of languages.
|
closed
|
https://github.com/huggingface/datasets/pull/1137
| 2020-12-04T19:45:34
| 2020-12-06T19:59:44
| 2020-12-06T19:53:46
|
{
"login": "VictorSanh",
"id": 16107619,
"type": "User"
}
|
[] | true
|
[] |
757,341,607
| 1,136
|
minor change in description in paws-x.py and updated dataset_infos
|
closed
|
https://github.com/huggingface/datasets/pull/1136
| 2020-12-04T19:17:49
| 2020-12-06T18:02:57
| 2020-12-06T18:02:57
|
{
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
}
|
[] | true
|
[] |
|
757,325,741
| 1,135
|
added paws
|
Updating README and tags for dataset card in a while
|
closed
|
https://github.com/huggingface/datasets/pull/1135
| 2020-12-04T18:52:38
| 2020-12-09T17:17:13
| 2020-12-09T17:17:13
|
{
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
}
|
[] | true
|
[] |