Dataset schema (column types and observed value ranges):

- id: int64 (599M to 3.26B)
- number: int64 (1 to 7.7k)
- title: string (lengths 1 to 290)
- body: string (lengths 0 to 228k)
- state: string (2 classes)
- html_url: string (lengths 46 to 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
- user: dict
- labels: list (lengths 0 to 4)
- is_pull_request: bool (2 classes)
- comments: list (lengths 0 to 0)
755,936,327
1,034
add scb_mt_enth_2020
## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus

The primary objective of our work is to build a large-scale English-Thai dataset for machine translation. We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources, namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents. Our methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner. We train machine translation models on this dataset. Our models' performance is comparable to that of the Google Translation API (as of May 2020) for Thai-English, and they outperform Google when the Open Parallel Corpus (OPUS) is included in the training data for both Thai-English and English-Thai translation. The dataset, pre-trained models, and source code to reproduce our work are available for public use.
closed
https://github.com/huggingface/datasets/pull/1034
2020-12-03T07:13:49
2020-12-03T16:57:23
2020-12-03T16:57:23
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
755,921,927
1,033
Add support for ".txm" format
In dummy data generation, add support for XML-like ".txm" file format. Also support filenames with additional compression extension: ".txm.gz".
closed
https://github.com/huggingface/datasets/pull/1033
2020-12-03T06:52:08
2021-02-21T19:47:11
2021-02-21T19:47:11
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
755,858,785
1,032
IIT B English to Hindi machine translation dataset
Adding IIT Bombay English-Hindi Corpus dataset more info : http://www.cfilt.iitb.ac.in/iitb_parallel/
closed
https://github.com/huggingface/datasets/pull/1032
2020-12-03T05:18:45
2021-01-10T08:44:51
2021-01-10T08:44:15
{ "login": "spatil6", "id": 6419011, "type": "User" }
[]
true
[]
755,844,004
1,031
add crows_pairs
This PR adds CrowS-Pairs datasets. More info: https://github.com/nyu-mll/crows-pairs/ https://arxiv.org/pdf/2010.00133.pdf
closed
https://github.com/huggingface/datasets/pull/1031
2020-12-03T05:05:11
2020-12-03T18:29:52
2020-12-03T18:29:39
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
755,777,438
1,030
allegro_reviews dataset
- **Name:** *allegro_reviews*
- **Description:** *Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review).*
- **Data:** *https://github.com/allegro/klejbenchmark-allegroreviews*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
closed
https://github.com/huggingface/datasets/pull/1030
2020-12-03T03:11:39
2020-12-04T10:56:29
2020-12-03T16:34:47
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
755,767,616
1,029
Add PEC
A persona-based empathetic conversation dataset.
closed
https://github.com/huggingface/datasets/pull/1029
2020-12-03T02:46:08
2020-12-04T10:58:19
2020-12-03T16:15:06
{ "login": "zhongpeixiang", "id": 11826803, "type": "User" }
[]
true
[]
755,712,854
1,028
Add ASSET dataset for text simplification evaluation
Adding the ASSET dataset from https://github.com/facebookresearch/asset One config for the simplification data, one for the human ratings of quality. The README.md borrows from that written by @juand-r
closed
https://github.com/huggingface/datasets/pull/1028
2020-12-03T00:28:29
2020-12-17T10:03:06
2020-12-03T16:34:37
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
755,695,420
1,027
Hi
## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1027
2020-12-02T23:47:14
2020-12-03T16:42:41
2020-12-03T16:42:41
{ "login": "suemori87", "id": 75398394, "type": "User" }
[]
false
[]
755,689,195
1,026
Lío o
````l````````` ``` O ``` ````` Ño ``` ```` ```
closed
https://github.com/huggingface/datasets/issues/1026
2020-12-02T23:32:25
2020-12-03T16:42:47
2020-12-03T16:42:47
{ "login": "ghost", "id": 10137, "type": "User" }
[]
false
[]
755,673,371
1,025
Add Sesotho Ner
closed
https://github.com/huggingface/datasets/pull/1025
2020-12-02T23:00:15
2020-12-16T16:27:03
2020-12-16T16:27:02
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
755,664,113
1,024
Add ZEST: ZEroShot learning from Task descriptions
Adds the ZEST dataset on zero-shot learning from task descriptions from AI2. - Webpage: https://allenai.org/data/zest - Paper: https://arxiv.org/abs/2011.08115 The nature of this dataset made the supported task tags tricky if you wouldn't mind giving any feedback @yjernite. Also let me know if you think we should have a `other-task-generalization` or something like that...
closed
https://github.com/huggingface/datasets/pull/1024
2020-12-02T22:41:20
2020-12-03T19:21:00
2020-12-03T16:09:15
{ "login": "joeddav", "id": 9353833, "type": "User" }
[]
true
[]
755,655,752
1,023
Add Schema Guided Dialogue dataset
This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge - https://github.com/google-research-datasets/dstc8-schema-guided-dialogue A bit simpler than MultiWOZ, the only tricky thing is the sequence of dictionaries that had to be linearized. There is a config for the data proper, and a config for the schemas.
closed
https://github.com/huggingface/datasets/pull/1023
2020-12-02T22:26:01
2020-12-03T01:18:01
2020-12-03T01:18:01
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
755,651,377
1,022
add MRQA
MRQA (shared task 2019): out-of-distribution generalization, framed as extractive question answering. The dataset is the concatenation of (subsets of) existing QA datasets processed to match the SQuAD format.
closed
https://github.com/huggingface/datasets/pull/1022
2020-12-02T22:17:56
2020-12-04T00:34:26
2020-12-04T00:34:25
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
755,644,559
1,021
Add Gutenberg time references dataset
This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124
closed
https://github.com/huggingface/datasets/pull/1021
2020-12-02T22:05:26
2020-12-03T10:33:39
2020-12-03T10:33:38
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
755,601,450
1,020
Add Setswana NER
closed
https://github.com/huggingface/datasets/pull/1020
2020-12-02T20:52:07
2020-12-03T14:56:14
2020-12-03T14:56:14
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
755,582,090
1,019
Add caWaC dataset
Add dataset.
closed
https://github.com/huggingface/datasets/pull/1019
2020-12-02T20:18:55
2020-12-03T14:47:09
2020-12-03T14:47:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
755,570,882
1,018
Add Sepedi NER
This is a new branch created for this dataset
closed
https://github.com/huggingface/datasets/pull/1018
2020-12-02T20:01:05
2020-12-03T21:47:03
2020-12-03T21:46:38
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
755,558,175
1,017
Specify file encoding
If not specified, Python uses the system default encoding, which on Windows is not "utf-8".
closed
https://github.com/huggingface/datasets/pull/1017
2020-12-02T19:40:45
2020-12-03T00:44:25
2020-12-03T00:44:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
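The encoding fix above (issue 1017) can be illustrated with a minimal sketch; the file name and sample text here are invented for the example:

```python
import tempfile
from pathlib import Path

# Non-ASCII text: round-trips differently under a legacy Windows default
# codec (e.g. cp1252) than under UTF-8.
text = "naïve café"

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "sample.txt"
    # Pass encoding explicitly: open()/read_text() without it fall back to
    # the platform default (locale.getpreferredencoding()), which is not
    # guaranteed to be "utf-8" on Windows.
    path.write_text(text, encoding="utf-8")
    restored = path.read_text(encoding="utf-8")

print(restored == text)  # → True
```

Omitting `encoding=` makes the same script succeed on Linux/macOS but potentially corrupt or fail on Windows, which is why the loading scripts pin it explicitly.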
755,521,862
1,016
Add CLINC150 dataset
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)

- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
closed
https://github.com/huggingface/datasets/pull/1016
2020-12-02T18:44:30
2020-12-03T10:32:04
2020-12-03T10:32:04
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
755,508,841
1,015
add hard dataset
Hotel reviews in the Arabic language.
closed
https://github.com/huggingface/datasets/pull/1015
2020-12-02T18:27:36
2020-12-03T15:03:54
2020-12-03T15:03:54
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
755,505,851
1,014
Add SciTLDR Dataset (Take 2)
Adds the SciTLDR dataset by AI2. Added the `README.md` card with tags to the best of my knowledge. Multi-target summaries or TLDRs of scientific documents. Continued from #986.
closed
https://github.com/huggingface/datasets/pull/1014
2020-12-02T18:22:50
2020-12-02T18:55:10
2020-12-02T18:37:58
{ "login": "bharatr21", "id": 13381361, "type": "User" }
[]
true
[]
755,493,075
1,013
Adding CS restaurants dataset
This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history.
closed
https://github.com/huggingface/datasets/pull/1013
2020-12-02T18:02:30
2020-12-02T18:25:20
2020-12-02T18:25:19
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
755,485,658
1,012
Adding Evidence Inference Data:
http://evidence-inference.ebm-nlp.com/download/ https://arxiv.org/pdf/2005.04177.pdf
closed
https://github.com/huggingface/datasets/pull/1012
2020-12-02T17:51:35
2020-12-03T15:04:46
2020-12-03T15:04:46
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,463,726
1,011
Add Bilingual Corpus of Arabic-English Parallel Tweets
Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf)

- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
closed
https://github.com/huggingface/datasets/pull/1011
2020-12-02T17:20:02
2020-12-04T14:45:10
2020-12-04T14:44:33
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
755,432,143
1,010
Add NoReC: Norwegian Review Corpus
closed
https://github.com/huggingface/datasets/pull/1010
2020-12-02T16:38:29
2021-02-18T14:47:29
2021-02-18T14:47:28
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
755,384,433
1,009
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset.
https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
closed
https://github.com/huggingface/datasets/pull/1009
2020-12-02T15:40:36
2020-12-03T13:16:30
2020-12-03T13:16:29
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,372,798
1,008
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
null
closed
https://github.com/huggingface/datasets/pull/1008
2020-12-02T15:28:05
2020-12-02T15:40:55
2020-12-02T15:40:55
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,364,078
1,007
Include license file in source distribution
It would be helpful to include the license file in the source distribution.
closed
https://github.com/huggingface/datasets/pull/1007
2020-12-02T15:17:43
2020-12-02T17:58:05
2020-12-02T17:58:05
{ "login": "synapticarbors", "id": 589279, "type": "User" }
[]
true
[]
755,362,766
1,006
add yahoo_answers_topics
This PR adds yahoo answers topic classification dataset. More info: https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset cc @joeddav, @yjernite
closed
https://github.com/huggingface/datasets/pull/1006
2020-12-02T15:16:13
2020-12-03T16:44:38
2020-12-02T18:01:32
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
755,337,255
1,005
Adding Autshumato South African languages:
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned
closed
https://github.com/huggingface/datasets/pull/1005
2020-12-02T14:47:33
2020-12-03T13:13:30
2020-12-03T13:13:30
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,325,368
1,004
how large datasets are handled under the hood
Hi, I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring them all into memory in the map-style case, or is there some sharding under the hood so data is brought into memory only when necessary? Thanks
closed
https://github.com/huggingface/datasets/issues/1004
2020-12-02T14:32:40
2022-10-05T12:13:29
2022-10-05T12:13:29
{ "login": "rabeehkarimimahabadi", "id": 73364383, "type": "User" }
[]
false
[]
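For background on the question in issue 1004: `datasets` stores each dataset as Apache Arrow files on disk and memory-maps them, so rows are paged in by the OS on access rather than held in RAM. A rough stdlib sketch of the memory-mapping idea (the file contents here are illustrative, not the Arrow format):

```python
import mmap
import os
import tempfile

# Write a file standing in for an on-disk table of rows.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"row-0\nrow-1\nrow-2\n" * 1000)

    # Memory-map it: the file stays on disk, and only the byte ranges
    # actually touched are paged into memory.
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        first = mm[:5]     # random access to a slice
        size = mm.size()   # full size known without reading it all

    print(first, size)  # → b'row-0' 18000
finally:
    os.remove(path)
```

The real library adds an indexed columnar layout on top of this, but the key point for the question is the same: accessing one row does not require loading the whole dataset into memory.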
755,310,318
1,003
Add multi_x_science_sum
Add Multi-XScience Dataset. github repo: https://github.com/yaolu/Multi-XScience paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
closed
https://github.com/huggingface/datasets/pull/1003
2020-12-02T14:14:01
2020-12-02T17:39:05
2020-12-02T17:39:05
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
755,309,758
1,002
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
null
closed
https://github.com/huggingface/datasets/pull/1002
2020-12-02T14:13:17
2020-12-07T16:58:03
2020-12-03T13:14:33
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,309,071
1,001
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
null
closed
https://github.com/huggingface/datasets/pull/1001
2020-12-02T14:12:30
2020-12-02T14:13:12
2020-12-02T14:13:12
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,292,066
1,000
UM005: Urdu <> English Translation Dataset
Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/
closed
https://github.com/huggingface/datasets/pull/1000
2020-12-02T13:51:35
2020-12-04T15:34:30
2020-12-04T15:34:29
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
755,246,786
999
add generated_reviews_enth
`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for the machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by the Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.
closed
https://github.com/huggingface/datasets/pull/999
2020-12-02T12:50:43
2020-12-03T11:17:28
2020-12-03T11:17:28
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
755,235,356
998
adding yahoo_answers_qa
Adding Yahoo Answers QA dataset. More info: https://ciir.cs.umass.edu/downloads/nfL6/
closed
https://github.com/huggingface/datasets/pull/998
2020-12-02T12:33:54
2020-12-02T13:45:40
2020-12-02T13:26:06
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
755,185,517
997
Microsoft CodeXGlue
Datasets from https://github.com/microsoft/CodeXGLUE

This contains 13 datasets:
- code_x_glue_cc_clone_detection_big_clone_bench
- code_x_glue_cc_clone_detection_poj_104
- code_x_glue_cc_cloze_testing_all
- code_x_glue_cc_cloze_testing_maxmin
- code_x_glue_cc_code_completion_line
- code_x_glue_cc_code_completion_token
- code_x_glue_cc_code_refinement
- code_x_glue_cc_code_to_code_trans
- code_x_glue_cc_defect_detection
- code_x_glue_ct_code_to_text
- code_x_glue_tc_nl_code_search_adv
- code_x_glue_tc_text_to_code
- code_x_glue_tt_text_to_text
closed
https://github.com/huggingface/datasets/pull/997
2020-12-02T11:21:18
2021-06-08T13:42:25
2021-06-08T13:42:24
{ "login": "madlag", "id": 272253, "type": "User" }
[]
true
[]
755,176,084
996
NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...

---------------------------------------------------------------------------
NotADirectoryError                        Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
     22
     23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
     25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
     26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')

5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
    132     else:
    133         logging.fatal("Unsupported publisher: %s", publisher)
--> 134     files = sorted(os.listdir(top_dir))
    135
    136     ret_files = []

NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
closed
https://github.com/huggingface/datasets/issues/996
2020-12-02T11:07:56
2022-02-17T14:13:39
2022-02-17T14:13:39
{ "login": "arc-bu", "id": 75367920, "type": "User" }
[]
false
[]
755,175,199
995
added dataset circa
Dataset Circa added. Only README.md and dataset card left
closed
https://github.com/huggingface/datasets/pull/995
2020-12-02T11:06:39
2020-12-04T10:58:16
2020-12-03T09:39:37
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
755,146,834
994
Add Sepedi ner corpus
closed
https://github.com/huggingface/datasets/pull/994
2020-12-02T10:30:07
2020-12-03T10:19:14
2020-12-02T18:20:08
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
755,135,768
993
Problem downloading amazon_reviews_multi
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error:

`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`

I used the following code to load the dataset:

`load_dataset(dataset_name, "all_languages", cache_dir=".data")`

I am using version 1.1.3 of `datasets`. Note that I can perform a successful `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`.
closed
https://github.com/huggingface/datasets/issues/993
2020-12-02T10:15:57
2022-10-05T12:21:34
2022-10-05T12:21:34
{ "login": "hfawaz", "id": 29229602, "type": "User" }
[]
false
[]
755,124,963
992
Add CAIL 2018 dataset
closed
https://github.com/huggingface/datasets/pull/992
2020-12-02T10:01:40
2020-12-02T16:49:02
2020-12-02T16:49:01
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
755,117,902
991
Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets)
null
closed
https://github.com/huggingface/datasets/pull/991
2020-12-02T09:52:19
2020-12-03T11:01:26
2020-12-03T11:01:26
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
755,097,798
990
Add E2E NLG
Adding the E2E NLG dataset. More info here: http://www.macs.hw.ac.uk/InteractionLab/E2E/

### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
closed
https://github.com/huggingface/datasets/pull/990
2020-12-02T09:25:12
2020-12-03T13:08:05
2020-12-03T13:08:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
755,079,394
989
Fix SV -> NO
This PR fixes the small typo as seen in #956
closed
https://github.com/huggingface/datasets/pull/989
2020-12-02T08:59:59
2020-12-02T09:18:21
2020-12-02T09:18:14
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
755,069,159
988
making sure datasets are not loaded in memory and distributed training of them
Hi, I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, but without any sampler this does not work for distributed training and is no faster than a single TPU core. 1) How can I make sure the data is not loaded into memory? 2) In the case of distributed training with iterable datasets, which measures need to be taken? Is it all just sharding the data? I was wondering if there is a possibility for me to discuss distributed training with iterable datasets using the datasets library with someone. Thanks
closed
https://github.com/huggingface/datasets/issues/988
2020-12-02T08:45:15
2022-10-05T13:00:42
2022-10-05T13:00:42
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[]
false
[]
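The shard-per-worker approach asked about in issue 988 can be sketched in pure Python. This mirrors the style of partitioning `Dataset.shard(num_shards, index, contiguous=True)` performs; the exact arithmetic below is illustrative, not copied from the library:

```python
# Contiguous sharding: n rows split into num_shards pieces whose sizes
# differ by at most one row. Each distributed worker passes its own
# index and touches only its slice of the data.
def shard_indices(n: int, num_shards: int, index: int) -> list[int]:
    div, mod = divmod(n, num_shards)
    start = div * index + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return list(range(start, end))

# Example: 10 rows across 3 workers.
shards = [shard_indices(10, 3, i) for i in range(3)]
print(shards)  # → [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Because each shard is a contiguous index range, a worker combined with memory-mapped storage never needs the other workers' rows in memory at all.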
755,059,469
987
Add OPUS DOGC dataset
closed
https://github.com/huggingface/datasets/pull/987
2020-12-02T08:30:32
2020-12-04T13:27:41
2020-12-04T13:27:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
755,047,470
986
Add SciTLDR Dataset
Adds the SciTLDR dataset by AI2. Added README card with tags to the best of my knowledge. Multi-target summaries or TLDRs of scientific documents.
closed
https://github.com/huggingface/datasets/pull/986
2020-12-02T08:11:16
2020-12-02T18:37:22
2020-12-02T18:02:59
{ "login": "bharatr21", "id": 13381361, "type": "User" }
[]
true
[]
755,020,564
985
Add GAP dataset
GAP dataset: gender-bias coreference resolution.
closed
https://github.com/huggingface/datasets/pull/985
2020-12-02T07:25:11
2022-10-06T14:11:52
2020-12-02T16:16:32
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
755,009,916
984
committing Whoa file
closed
https://github.com/huggingface/datasets/pull/984
2020-12-02T07:07:46
2020-12-02T16:15:29
2020-12-02T15:40:58
{ "login": "StulosDunamos", "id": 75356780, "type": "User" }
[]
true
[]
754,966,620
983
add mc taco
MC-TACO: temporal commonsense knowledge.
closed
https://github.com/huggingface/datasets/pull/983
2020-12-02T05:54:55
2020-12-02T15:37:47
2020-12-02T15:37:46
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
754,946,337
982
add prachathai67k take2
I decided it would be faster to create a new pull request than to fix the rebase issues, continuing from https://github.com/huggingface/datasets/pull/954
closed
https://github.com/huggingface/datasets/pull/982
2020-12-02T05:12:01
2020-12-02T10:18:11
2020-12-02T10:18:11
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
754,937,612
981
add wisesight_sentiment take2
Take 2, since last time the rebase issues were taking me too much time to fix as opposed to just opening a new PR.
closed
https://github.com/huggingface/datasets/pull/981
2020-12-02T04:50:59
2020-12-02T10:37:13
2020-12-02T10:37:13
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
754,899,301
980
Wongnai - Thai reviews dataset
40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ )
closed
https://github.com/huggingface/datasets/pull/980
2020-12-02T03:20:08
2020-12-02T15:34:41
2020-12-02T15:30:05
{ "login": "mapmeld", "id": 643918, "type": "User" }
[]
true
[]
754,893,337
979
[WIP] Add multi woz
This PR adds version 2.2 of the Multi-domain Wizard of Oz dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2 It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md. On the plus side, the structure is broadly similar to that of the Google Schema Guided Dialogue [dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue), so I will take care of that one next.
closed
https://github.com/huggingface/datasets/pull/979
2020-12-02T03:05:42
2020-12-02T16:07:16
2020-12-02T16:07:16
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
754,854,478
978
Add code refinement
### OVERVIEW

Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. Code refinement aims to automatically fix bugs in the code, which can contribute to reducing the cost of bug fixes for developers. Given a piece of Java code with bugs, the task is to remove the bugs and output the refined code.
closed
https://github.com/huggingface/datasets/pull/978
2020-12-02T01:29:58
2020-12-07T01:52:58
2020-12-07T01:52:58
{ "login": "reshinthadithyan", "id": 36307201, "type": "User" }
[]
true
[]
754,839,594
977
Add ROPES dataset
ROPES dataset: reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed as a reading comprehension task following SQuAD-style extractive QA. One thing to note: labels of the test set are hidden (leaderboard submission), so I encoded them as an empty list (ropes.py:L125).
closed
https://github.com/huggingface/datasets/pull/977
2020-12-02T00:52:10
2020-12-02T10:58:36
2020-12-02T10:58:35
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
754,826,146
976
Arabic pos dialect
A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP.
closed
https://github.com/huggingface/datasets/pull/976
2020-12-02T00:21:13
2020-12-09T17:30:32
2020-12-09T17:30:32
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[]
true
[]
754,823,701
975
add MeTooMA dataset
This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. Paper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292 Dataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU --- annotations_creators: - expert-generated language_creators: - found languages: - en multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification --- # Dataset Card for #MeTooMA dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU - **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292 - **Point of 
Contact:** https://github.com/midas-research/MeTooMA ### Dataset Summary - The dataset consists of tweets belonging to #MeToo movement on Twitter, labeled into different categories. - This dataset includes more data points and has more labels than any of the previous datasets that contain social media posts about sexual abuse disclosures. Please refer to the Related Datasets of the publication for detailed information about this. - Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels, other data can be fetched via Twitter API. - The data has been labeled by experts, with the majority taken into the account for deciding the final label. - The authors provide these labels for each of the tweets. - Relevance - Directed Hate - Generalized Hate - Sarcasm - Allegation - Justification - Refutation - Support - Oppose - The definitions for each task/label are in the main publication. - Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data extracted from this dataset. - The language of all the tweets in this dataset is English - Time period: October 2018 - December 2018 - Suggested Use Cases of this dataset: - Evaluating usage of linguistic acts such as hate-speech and sarcasm in the context of public sexual abuse disclosures. - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations. - Identifying how influential people were portrayed on the public platform in the events of mass social movements. - Polarization analysis based on graph simulations of social nodes of users involved in the #MeToo movement. ### Supported Tasks and Leaderboards Multi-Label and Multi-Class Classification ### Languages English ## Dataset Structure - The dataset is structured into CSV format with TweetID and accompanying labels. - Train and Test sets are split into respective files. 
### Data Instances Tweet ID and the appropriate labels ### Data Fields Tweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID ### Data Splits - Train: 7979 - Test: 1996 ## Dataset Creation ### Curation Rationale - Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement. - People expressed their opinions over issues that were previously missing from the social media space. - This provides an option to study the linguistic behaviors of social media users in an informal setting, therefore the authors decide to curate this annotated dataset. - The authors expect this dataset would be of great interest and use to both computational and socio-linguists. - For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media. ### Source Data - Source of all the data points in this dataset is a Twitter social media platform. #### Initial Data Collection and Normalization - All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement. - Redundant keywords were removed based on manual inspection. - Public streaming APIs of Twitter was used for querying with the selected keywords. - Based on text de-duplication and cosine similarity score, the set of tweets were pruned. - Non-English tweets were removed. - The final set was labeled by experts with the majority label taken into the account for deciding the final label. - Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292 #### Who are the source language producers? 
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292 ### Annotations #### Annotation process - The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature. - The annotators are domain experts having degrees in advanced clinical psychology and gender studies. - They were provided a guidelines document with instructions about each task and its definitions, labels, and examples. - They studied the document and worked on a few examples to get used to this annotation task. - They also provided feedback for improving the class definitions. - The annotation process is not mutually exclusive, implying that the presence of one label does not mean the absence of the other one. #### Who are the annotators? - The annotators are domain experts having a degree in clinical psychology and gender studies. - Please refer to the accompanying paper for a detailed annotation process. ### Personal and Sensitive Information - Considering Twitter's policy for distribution of data, only Tweet IDs and applicable labels are shared for public use. - It is highly encouraged to use this dataset for scientific purposes only. - This dataset collection completely follows the Twitter mandated guidelines for distribution and usage. ## Considerations for Using the Data ### Social Impact of Dataset - The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter. - The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention; these should be used to assist already existing human intervention tools and therapies. - Enough care has been taken to ensure that this work does not come off as trying to target a specific person for their personal stance on issues pertaining to the #MeToo movement. - The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner. 
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset and the social impact of this work. ### Discussion of Biases - The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of the community affected by sexual abuse. - Any work undertaken on this dataset should aim to minimize the bias against minority groups, which might be amplified in cases of a sudden outburst of public reactions over sensitive social media discussions. ### Other Known Limitations - Considering privacy concerns, social media practitioners should be careful when making automated interventions to aid the victims of sexual abuse, as some people might prefer not to disclose their experiences. - Concerned social media users might also withdraw their social information if they find out that their information is being used for computational purposes; hence it is important to seek subtle individual consent before trying to profile authors involved in online discussions, to uphold personal privacy. ## Additional Information Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU ### Dataset Curators - If you use the corpus in a product or application, then please credit the authors and the [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - If interested in the commercial use of the corpus, send an email to midas@iiitd.ac.in. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. 
However, the contact listed above will be happy to respond to queries and clarifications. - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your social media data. - if interested in a collaborative research project. ### Licensing Information [More Information Needed] ### Citation Information Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292 ``` @article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} } ```
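Since the release ships only tweet IDs plus binary label columns, a downstream multi-label view can be assembled with standard-library Python alone. The sketch below is illustrative: the column names (`TweetId` and one binary column per category) are assumptions for illustration, not the actual field names of the release.

```python
import csv
import io

# Hypothetical CSV layout: one row per tweet ID, one binary column per label.
# The real column names in the release may differ.
SAMPLE = """TweetId,Relevance,Directed_Hate,Sarcasm,Support
1050000000000000001,1,0,1,0
1050000000000000002,1,1,0,1
"""

def rows_to_multilabel(csv_text):
    """Collect, for each tweet ID, the list of labels whose binary flag is 1."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = {}
    for row in reader:
        tweet_id = row.pop("TweetId")
        out[tweet_id] = [label for label, flag in row.items() if flag == "1"]
    return out

labels = rows_to_multilabel(SAMPLE)
```

The tweet text itself would still need to be hydrated separately through the Twitter API, keyed by these IDs.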
closed
https://github.com/huggingface/datasets/pull/975
2020-12-02T00:15:55
2020-12-02T10:58:56
2020-12-02T10:58:55
{ "login": "akash418", "id": 23264033, "type": "User" }
[]
true
[]
754,811,185
974
Add MeTooMA Dataset
closed
https://github.com/huggingface/datasets/pull/974
2020-12-01T23:44:01
2020-12-01T23:57:58
2020-12-01T23:57:58
{ "login": "akash418", "id": 23264033, "type": "User" }
[]
true
[]
754,807,963
973
Adding The Microsoft Terminology Collection dataset.
closed
https://github.com/huggingface/datasets/pull/973
2020-12-01T23:36:23
2020-12-04T15:25:44
2020-12-04T15:12:46
{ "login": "leoxzhao", "id": 7915719, "type": "User" }
[]
true
[]
754,787,314
972
Add Children's Book Test (CBT) dataset
Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016). Sentence completion given a few sentences as context from a children's book.
closed
https://github.com/huggingface/datasets/pull/972
2020-12-01T22:53:26
2021-03-19T11:30:03
2021-03-19T11:30:03
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
754,784,041
971
add piqa
Physical Interaction: Question Answering (commonsense) https://yonatanbisk.com/piqa/
closed
https://github.com/huggingface/datasets/pull/971
2020-12-01T22:47:04
2020-12-02T09:58:02
2020-12-02T09:58:01
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
754,697,489
970
Add SWAG
Commonsense NLI -> https://rowanzellers.com/swag/
closed
https://github.com/huggingface/datasets/pull/970
2020-12-01T20:21:05
2020-12-02T09:55:16
2020-12-02T09:55:15
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
754,681,940
969
Add wiki auto dataset
This PR adds the WikiAuto sentence simplification dataset https://github.com/chaojiang06/wiki-auto This is also a prospective GEM task, hence the README.md
closed
https://github.com/huggingface/datasets/pull/969
2020-12-01T19:58:11
2020-12-02T16:19:14
2020-12-02T16:19:14
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
754,659,015
968
ADD Afrikaans NER
Afrikaans NER corpus
closed
https://github.com/huggingface/datasets/pull/968
2020-12-01T19:23:03
2020-12-02T09:41:28
2020-12-02T09:41:28
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
754,578,988
967
Add CS Restaurants dataset
This PR adds the Czech restaurants dataset for Czech NLG.
closed
https://github.com/huggingface/datasets/pull/967
2020-12-01T17:17:37
2020-12-02T17:57:44
2020-12-02T17:57:25
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
754,558,686
966
Add CLINC150 Dataset
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
closed
https://github.com/huggingface/datasets/pull/966
2020-12-01T16:50:13
2020-12-02T18:45:43
2020-12-02T18:45:30
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
754,553,169
965
Add CLINC150 Dataset
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
closed
https://github.com/huggingface/datasets/pull/965
2020-12-01T16:43:00
2020-12-01T16:51:16
2020-12-01T16:49:15
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
754,474,660
964
Adding the WebNLG dataset
This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration. More information can be found [here](https://webnlg-challenge.loria.fr/) Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file).
closed
https://github.com/huggingface/datasets/pull/964
2020-12-01T15:05:23
2020-12-02T17:34:05
2020-12-02T17:34:05
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
754,451,234
963
add CODAH dataset
Adding CODAH dataset. More info: https://github.com/Websail-NU/CODAH
closed
https://github.com/huggingface/datasets/pull/963
2020-12-01T14:37:05
2020-12-02T13:45:58
2020-12-02T13:21:25
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
754,441,428
962
Add Danish Political Comments Dataset
closed
https://github.com/huggingface/datasets/pull/962
2020-12-01T14:28:32
2020-12-03T10:31:55
2020-12-03T10:31:54
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
754,434,398
961
sample multiple datasets
Hi, I am dealing with multiple datasets and need a dataloader over them with the condition that, in each batch, the samples come from only one of the datasets. My main question is: - I need a way to sample the datasets with some weights, let's say 2x dataset1 and 1x dataset2; could you point me to how I can do it? Sub-questions: - I want to concat the sampled datasets and define one dataloader on them, then I need a way to make sure batches come from a single dataset in each iteration. Could you advise how I can do this? - I use iterable-style datasets, but I still need a method of shuffling, since skipping it causes accuracy issues. Thanks for the help.
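The first question above can be sketched in pure Python: pick a dataset per batch with the given weights, then fill the whole batch from that one dataset. This is not a `datasets` library API; `batch_sampler` and its arguments are made-up names for illustration.

```python
import random

def batch_sampler(datasets, weights, batch_size, num_batches, seed=0):
    """Yield batches where every sample in a batch comes from a single dataset,
    and datasets are picked with the given weights (e.g. 2:1)."""
    rng = random.Random(seed)
    names = list(datasets)
    for _ in range(num_batches):
        name = rng.choices(names, weights=weights, k=1)[0]  # weighted pick
        pool = datasets[name]
        yield name, [rng.choice(pool) for _ in range(batch_size)]

# Toy example: dataset1 is sampled roughly twice as often as dataset2.
data = {"dataset1": list(range(100)), "dataset2": list(range(100, 200))}
batches = list(batch_sampler(data, weights=[2, 1], batch_size=4, num_batches=6))
```

The same per-batch selection idea can be wrapped in a PyTorch `Sampler` or applied on top of concatenated `datasets` objects.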
closed
https://github.com/huggingface/datasets/issues/961
2020-12-01T14:20:02
2024-06-17T08:23:20
2023-07-20T14:08:57
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[]
false
[]
754,422,710
960
Add code to automate parts of the dataset card
Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so.
closed
https://github.com/huggingface/datasets/pull/960
2020-12-01T14:04:51
2023-09-24T09:50:38
2021-04-26T07:56:01
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
754,418,610
959
Add Tunizi Dataset
closed
https://github.com/huggingface/datasets/pull/959
2020-12-01T13:59:39
2020-12-03T14:21:41
2020-12-03T14:21:40
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
754,404,095
958
dataset(ncslgr): add initial loading script
clean #789
closed
https://github.com/huggingface/datasets/pull/958
2020-12-01T13:41:17
2020-12-07T16:35:39
2020-12-07T16:35:39
{ "login": "AmitMY", "id": 5757359, "type": "User" }
[]
true
[]
754,380,073
957
Isixhosa ner corpus
closed
https://github.com/huggingface/datasets/pull/957
2020-12-01T13:08:36
2020-12-01T18:14:58
2020-12-01T18:14:58
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
754,368,378
956
Add Norwegian NER
This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset. I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files.
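As a reference for what the `conllu` dependency handles: a CoNLL-U token line consists of 10 tab-separated fields, which a minimal, illustrative parser can split as below. The sample line is invented, and real files also contain comment lines and multiword-token ranges that this sketch ignores.

```python
# The CoNLL-U format stores one token per line with 10 tab-separated fields.
CONLLU_FIELDS = [
    "id", "form", "lemma", "upos", "xpos",
    "feats", "head", "deprel", "deps", "misc",
]

def parse_token_line(line):
    """Split one CoNLL-U token line into a field dict ('_' marks empty fields)."""
    values = line.rstrip("\n").split("\t")
    assert len(values) == len(CONLLU_FIELDS), "malformed CoNLL-U token line"
    return dict(zip(CONLLU_FIELDS, values))

token = parse_token_line("1\tOslo\tOslo\tPROPN\t_\t_\t2\tnsubj\t_\t_")
```

In practice the `conllu` package also resolves multiword tokens, comments, and metadata, which is why the loading script depends on it rather than on ad-hoc splitting.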
closed
https://github.com/huggingface/datasets/pull/956
2020-12-01T12:51:02
2020-12-02T08:53:11
2020-12-01T18:09:21
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
754,367,291
955
Added PragmEval benchmark
closed
https://github.com/huggingface/datasets/pull/955
2020-12-01T12:49:15
2020-12-04T10:43:32
2020-12-03T09:36:47
{ "login": "sileod", "id": 9168444, "type": "User" }
[]
true
[]
754,362,012
954
add prachathai67k
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with fewer than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125. You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb
closed
https://github.com/huggingface/datasets/pull/954
2020-12-01T12:40:55
2020-12-02T05:12:11
2020-12-02T04:43:52
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
754,359,942
953
added health_fact dataset
Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact)
closed
https://github.com/huggingface/datasets/pull/953
2020-12-01T12:37:44
2020-12-01T23:11:33
2020-12-01T23:11:33
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
754,357,270
952
Add orange sum
Add OrangeSum a french abstractive summarization dataset. Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
closed
https://github.com/huggingface/datasets/pull/952
2020-12-01T12:33:34
2020-12-01T15:44:00
2020-12-01T15:44:00
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
754,349,979
951
Prachathai67k
Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com The `prachathai-67k` dataset was scraped from the news site [Prachathai](https://prachathai.com). We filtered out those articles with fewer than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb). This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**: * `การเมือง` - politics * `สิทธิมนุษยชน` - human_rights * `คุณภาพชีวิต` - quality_of_life * `ต่างประเทศ` - international * `สังคม` - social * `สิ่งแวดล้อม` - environment * `เศรษฐกิจ` - economics * `วัฒนธรรม` - culture * `แรงงาน` - labor * `ความมั่นคง` - national_security * `ไอซีที` - ict * `การศึกษา` - education
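The 12 curated tags above pair each Thai name with an English slug; a small helper (the names here are illustrative, not part of the dataset loader) for mapping between them:

```python
# Mapping from the 12 curated Thai tags to their English slugs,
# copied from the dataset description above.
TAG_SLUGS = {
    "การเมือง": "politics",
    "สิทธิมนุษยชน": "human_rights",
    "คุณภาพชีวิต": "quality_of_life",
    "ต่างประเทศ": "international",
    "สังคม": "social",
    "สิ่งแวดล้อม": "environment",
    "เศรษฐกิจ": "economics",
    "วัฒนธรรม": "culture",
    "แรงงาน": "labor",
    "ความมั่นคง": "national_security",
    "ไอซีที": "ict",
    "การศึกษา": "education",
}

def to_slugs(thai_tags):
    """Convert a list of Thai tag strings to their English slugs."""
    return [TAG_SLUGS[t] for t in thai_tags]
```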
closed
https://github.com/huggingface/datasets/pull/951
2020-12-01T12:21:52
2020-12-01T12:29:53
2020-12-01T12:28:26
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
754,318,686
950
Support .xz file format
Add support to extract/uncompress files in .xz format.
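The PR's actual implementation isn't shown here, but for reference, .xz streams can be read and written with Python's standard-library `lzma` module:

```python
import lzma

payload = "hello .xz".encode("utf-8")
compressed = lzma.compress(payload)    # produces an .xz container by default
restored = lzma.decompress(compressed)

# For files on disk, lzma.open("archive.txt.xz", "rt") reads transparently,
# which is the kind of primitive an extraction layer builds on.
```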
closed
https://github.com/huggingface/datasets/pull/950
2020-12-01T11:34:48
2020-12-01T13:39:18
2020-12-01T13:39:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
754,317,777
949
Add GermaNER Dataset
closed
https://github.com/huggingface/datasets/pull/949
2020-12-01T11:33:31
2020-12-03T14:06:41
2020-12-03T14:06:40
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
754,306,260
948
docs(ADD_NEW_DATASET): correct indentation for script
closed
https://github.com/huggingface/datasets/pull/948
2020-12-01T11:17:38
2020-12-01T11:25:18
2020-12-01T11:25:18
{ "login": "AmitMY", "id": 5757359, "type": "User" }
[]
true
[]
754,286,658
947
Add europeana newspapers
This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset.
closed
https://github.com/huggingface/datasets/pull/947
2020-12-01T10:52:18
2020-12-02T09:42:35
2020-12-02T09:42:09
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
754,278,632
946
add PEC dataset
A persona-based empathetic conversation dataset published at EMNLP 2020.
closed
https://github.com/huggingface/datasets/pull/946
2020-12-01T10:41:41
2020-12-03T02:47:14
2020-12-03T02:47:14
{ "login": "zhongpeixiang", "id": 11826803, "type": "User" }
[]
true
[]
754,273,920
945
Adding Babi dataset - English version
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment.
closed
https://github.com/huggingface/datasets/pull/945
2020-12-01T10:35:36
2020-12-04T15:43:05
2020-12-04T15:42:54
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
754,228,947
944
Add German Legal Entity Recognition Dataset
closed
https://github.com/huggingface/datasets/pull/944
2020-12-01T09:38:22
2020-12-03T13:06:56
2020-12-03T13:06:55
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
754,192,491
943
The FLUE Benchmark
This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark, which is a set of different datasets to evaluate models for French content. Two datasets are missing: the French Treebank, which we can use only for research purposes and are not allowed to distribute, and the Word Sense Disambiguation for Nouns dataset, which will be added later.
closed
https://github.com/huggingface/datasets/pull/943
2020-12-01T09:00:50
2020-12-01T15:24:38
2020-12-01T15:24:30
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
754,162,318
942
D
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/942
2020-12-01T08:17:10
2020-12-03T16:42:53
2020-12-03T16:42:53
{ "login": "CryptoMiKKi", "id": 74238514, "type": "User" }
[]
false
[]
754,141,321
941
Add People's Daily NER dataset
closed
https://github.com/huggingface/datasets/pull/941
2020-12-01T07:48:53
2020-12-02T18:42:43
2020-12-02T18:42:41
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
754,010,753
940
Add MSRA NER dataset
closed
https://github.com/huggingface/datasets/pull/940
2020-12-01T05:02:11
2020-12-04T09:29:40
2020-12-01T07:25:53
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
753,965,405
939
add wisesight_sentiment
Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question) Model Card: --- YAML tags: annotations_creators: - expert-generated language_creators: - found languages: - th licenses: - cc0-1.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for wisesight_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment - **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment - **Paper:** - **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/ - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question) - Released to public domain under Creative Commons Zero v1.0 Universal license. 
- Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3} - Size: 26,737 messages - Language: Central Thai - Style: Informal and conversational. With some news headlines and advertisements. - Time period: Around 2016 to early 2019. With a small amount from other periods. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, cars, hotels), with some current affairs. - Privacy: - Only messages that were made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - A large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated messages (exact match) are removed. 
- More characteristics of the data can be explored in [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb) ### Supported Tasks and Leaderboards Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/) ### Languages Thai ## Dataset Structure ### Data Instances ``` {'category': 'pos', 'texts': 'น่าสนนน'} {'category': 'neu', 'texts': 'ครับ #phithanbkk'} {'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'} {'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'} ``` ### Data Fields - `texts`: texts - `category`: sentiment of texts, one of `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3) ### Data Splits | | train | valid | test | |-----------|-------|-------|-------| | # samples | 21628 | 2404 | 2671 | | # neu | 11795 | 1291 | 1453 | | # neg | 5491 | 637 | 683 | | # pos | 3866 | 434 | 478 | | # q | 476 | 42 | 57 | | avg words | 27.21 | 27.18 | 27.12 | | avg chars | 89.82 | 89.50 | 90.36 | ## Dataset Creation ### Curation Rationale Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn University by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai. ### Source Data #### Initial Data Collection and Normalization - Style: Informal and conversational. With some news headlines and advertisements. - Time period: Around 2016 to early 2019. With a small amount from other periods. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, cars, hotels), with some current affairs. - Privacy: - Only messages that were made available to the public on the internet (websites, blogs, social network sites). 
- For Facebook, this means the public comments (everyone can see) made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remaining in the set, please tell us - so we can remove it. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - A large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. - (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated messages (exact match) are removed. #### Who are the source language producers? Social media users in Thailand ### Annotations #### Annotation process - Sentiment values are assigned by human annotators. - A human annotator put his/her best effort to assign just one label, out of four, to a message. - Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative. - Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could have a positive sentiment value, if it shows interest in the product. - Saying that another product or service is better is counted as negative. - General information or news titles tend to be counted as neutral. #### Who are the annotators? Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) ### Personal and Sensitive Information - We tried to exclude any known personally identifiable information from this data set. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 
088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remaining in the set, please tell us - so we can remove it. ## Considerations for Using the Data ### Social Impact of Dataset - `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai - There is a risk of personal information escaping the anonymization process ### Discussion of Biases - A message can be ambiguous. When possible, the judgement will be based solely on the text itself. - In some situations, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess. - In some cases, the human annotator may have access to the message's context, like an image. This additional information is not included as part of this corpus. ### Other Known Limitations - The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question). - Misspellings in social media texts make the word tokenization process for Thai difficult, thus impacting model performance ## Additional Information ### Dataset Curators Thanks to the [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/ ### Licensing Information - If applicable, copyright of each message content belongs to the original poster. - **Annotation data (labels) are released to public domain.** - [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree with the labels made by the human annotators. 
This annotation is for research purposes and does not reflect the professional work that Wisesight has done for its customers. - The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she assigned to the message does not necessarily reflect his/her personal view of the message. ### Citation Information Please cite the following if you make use of the dataset: Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September. BibTeX: ``` @software{bact_2019_3457447, author = {Suriyawongkul, Arthit and Chuangsuwanich, Ekapol and Chormai, Pattarawat and Polpanumas, Charin}, title = {PyThaiNLP/wisesight-sentiment: First release}, month = sep, year = 2019, publisher = {Zenodo}, version = {v1.0}, doi = {10.5281/zenodo.3457447}, url = {https://doi.org/10.5281/zenodo.3457447} } ```
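Given the label mapping stated in the card, encoding and decoding categories is a two-dict affair; this sketch uses illustrative helper names, not the dataset's own API:

```python
# Label mapping as given in the dataset card: {"pos": 0, "neu": 1, "neg": 2, "q": 3}
LABEL2ID = {"pos": 0, "neu": 1, "neg": 2, "q": 3}
ID2LABEL = {i: name for name, i in LABEL2ID.items()}

def encode(category):
    """Map a category string to its integer class id."""
    return LABEL2ID[category]

def decode(label_id):
    """Map an integer class id back to its category string."""
    return ID2LABEL[label_id]

pred = decode(encode("neg"))
```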
closed
https://github.com/huggingface/datasets/pull/939
2020-12-01T03:06:39
2020-12-02T04:52:38
2020-12-02T04:35:51
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
753,940,979
938
V-1.0.0 of isizulu_ner_corpus
closed
https://github.com/huggingface/datasets/pull/938
2020-12-01T02:04:32
2020-12-01T23:34:36
2020-12-01T23:34:36
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
753,921,078
937
Local machine/cluster Beam Datasets example/tutorial
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner to correctly produce the desired output. Thanks! Shang
closed
https://github.com/huggingface/datasets/issues/937
2020-12-01T01:11:43
2024-03-15T16:05:14
2024-03-15T16:05:14
{ "login": "shangw-nvidia", "id": 66387198, "type": "User" }
[]
false
[]
753,915,603
936
Added HANS parses and categories
This pull request adds HANS missing information: the sentence parses, as well as the heuristic category.
closed
https://github.com/huggingface/datasets/pull/936
2020-12-01T00:58:16
2020-12-01T13:19:41
2020-12-01T13:19:40
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
753,863,055
935
add PIB dataset
This pull request will add the PIB dataset.
closed
https://github.com/huggingface/datasets/pull/935
2020-11-30T22:55:43
2020-12-01T23:17:11
2020-12-01T23:17:11
{ "login": "thevasudevgupta", "id": 53136577, "type": "User" }
[]
true
[]