| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
756,349,001 | https://api.github.com/repos/huggingface/datasets/issues/1060 | https://github.com/huggingface/datasets/pull/1060 | 1,060 | Fix squad V2 metric script | closed | 2 | 2020-12-03T16:23:32 | 2020-12-22T15:02:20 | 2020-12-22T15:02:19 | sgugger | [] | The current squad v2 metric doesn't work with the squad (v1 or v2) datasets. The script is copied from `squad_evaluate` in transformers that requires the labels (with multiple answers) to be like this:
```
references = [{'id': 'a', 'answers': [
{'text': 'Denver Broncos', 'answer_start': 177},
{'text': 'Denv... | true |
756,348,623 | https://api.github.com/repos/huggingface/datasets/issues/1059 | https://github.com/huggingface/datasets/pull/1059 | 1,059 | Add TLC | closed | 3 | 2020-12-03T16:23:06 | 2020-12-04T11:15:33 | 2020-12-04T11:15:33 | chameleonTK | [] | Added TLC dataset | true |
756,332,704 | https://api.github.com/repos/huggingface/datasets/issues/1058 | https://github.com/huggingface/datasets/pull/1058 | 1,058 | added paws-x dataset | closed | 0 | 2020-12-03T16:06:01 | 2020-12-04T13:46:05 | 2020-12-04T13:46:05 | bhavitvyamalik | [] | Added paws-x dataset. Updating README and tags in the dataset card in a while | true |
756,331,419 | https://api.github.com/repos/huggingface/datasets/issues/1057 | https://github.com/huggingface/datasets/pull/1057 | 1,057 | Adding TamilMixSentiment | closed | 1 | 2020-12-03T16:04:25 | 2020-12-04T10:09:34 | 2020-12-04T10:09:12 | jamespaultg | [] | | true |
756,309,828 | https://api.github.com/repos/huggingface/datasets/issues/1056 | https://github.com/huggingface/datasets/pull/1056 | 1,056 | Add deal_or_no_dialog | closed | 0 | 2020-12-03T15:38:07 | 2020-12-03T18:13:45 | 2020-12-03T18:13:45 | moussaKam | [] | Add deal_or_no_dialog Dataset
github: https://github.com/facebookresearch/end-to-end-negotiator
Paper: [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https://arxiv.org/abs/1706.05125) | true |
756,298,372 | https://api.github.com/repos/huggingface/datasets/issues/1055 | https://github.com/huggingface/datasets/pull/1055 | 1,055 | Add hebrew-sentiment | closed | 4 | 2020-12-03T15:24:31 | 2022-02-21T15:26:05 | 2020-12-04T11:24:16 | elronbandel | [] | hebrew-sentiment dataset is ready! (including tests, tags etc) | true |
756,265,688 | https://api.github.com/repos/huggingface/datasets/issues/1054 | https://github.com/huggingface/datasets/pull/1054 | 1,054 | Add dataset - SemEval 2014 - Task 1 | closed | 1 | 2020-12-03T14:52:59 | 2020-12-04T00:52:44 | 2020-12-04T00:52:44 | ashmeet13 | [] | Adding the dataset of SemEval 2014 Task 1
Found the dataset under the shared Google Sheet > Recurring Task Datasets
Task Homepage - https://alt.qcri.org/semeval2014/task1
Thank you! | true |
756,176,061 | https://api.github.com/repos/huggingface/datasets/issues/1053 | https://github.com/huggingface/datasets/pull/1053 | 1,053 | Fix dataset URL and file names, and add column name in "Social Bias Frames" dataset | closed | 1 | 2020-12-03T13:03:05 | 2020-12-03T13:42:26 | 2020-12-03T13:42:26 | otakumesi | [] | # Why I did
When I used the "social_bias_frames" dataset in this library, I got 404 errors.
So I fixed this error and some other problems that I faced while using the dataset.
# What I did
* Modified the dataset URL
* Modified the dataset file names
* Added a "dataSource" column
Thank you! | true |
756,171,798 | https://api.github.com/repos/huggingface/datasets/issues/1052 | https://github.com/huggingface/datasets/pull/1052 | 1,052 | add sharc dataset | closed | 0 | 2020-12-03T12:57:23 | 2020-12-03T16:44:21 | 2020-12-03T14:09:54 | patil-suraj | [] | This PR adds the ShARC dataset.
More info:
https://sharc-data.github.io/index.html | true |
756,169,049 | https://api.github.com/repos/huggingface/datasets/issues/1051 | https://github.com/huggingface/datasets/pull/1051 | 1,051 | Add Facebook SimpleQuestionV2 | closed | 1 | 2020-12-03T12:53:20 | 2020-12-03T17:31:59 | 2020-12-03T17:31:58 | abhishekkrthakur | [] | Add simple questions v2: https://research.fb.com/downloads/babi/ | true |
756,166,728 | https://api.github.com/repos/huggingface/datasets/issues/1050 | https://github.com/huggingface/datasets/pull/1050 | 1,050 | Add GoEmotions | closed | 1 | 2020-12-03T12:49:53 | 2020-12-03T17:37:45 | 2020-12-03T17:30:08 | joeddav | [] | Adds the GoEmotions dataset, a nice emotion classification dataset with 27 (multi-)label annotations on reddit comments. Includes both a large raw version and a narrowed version with predefined train/test/val splits, which I've included as separate configs with the latter as a default.
- Webpage/repo: https://github... | true |
756,157,602 | https://api.github.com/repos/huggingface/datasets/issues/1049 | https://github.com/huggingface/datasets/pull/1049 | 1,049 | Add siswati ner corpus | closed | 0 | 2020-12-03T12:36:00 | 2020-12-03T17:27:02 | 2020-12-03T17:26:55 | yvonnegitau | [] | | true |
756,133,072 | https://api.github.com/repos/huggingface/datasets/issues/1048 | https://github.com/huggingface/datasets/pull/1048 | 1,048 | Adding NCHLT dataset | closed | 1 | 2020-12-03T11:59:25 | 2020-12-04T13:29:57 | 2020-12-04T13:29:57 | Narsil | [] | https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II | true |
756,127,490 | https://api.github.com/repos/huggingface/datasets/issues/1047 | https://github.com/huggingface/datasets/pull/1047 | 1,047 | Add KorNLU | closed | 5 | 2020-12-03T11:50:54 | 2020-12-03T17:17:07 | 2020-12-03T17:16:09 | sumanthd17 | [] | Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289)
**Note**: The MNLI tsv file is broken, so this code currently excludes the file. Please suggest other alternative if any @lhoestq
... | true |
756,122,709 | https://api.github.com/repos/huggingface/datasets/issues/1046 | https://github.com/huggingface/datasets/issues/1046 | 1,046 | Dataset.map() turns tensors into lists? | closed | 2 | 2020-12-03T11:43:46 | 2022-10-05T12:12:41 | 2022-10-05T12:12:41 | tombosc | [] | I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```
import datasets
import torch
from datasets import load_dataset ... | false |
756,120,760 | https://api.github.com/repos/huggingface/datasets/issues/1045 | https://github.com/huggingface/datasets/pull/1045 | 1,045 | Add xitsonga ner corpus | closed | 1 | 2020-12-03T11:40:48 | 2020-12-03T17:20:03 | 2020-12-03T17:19:32 | yvonnegitau | [] | | true |
756,111,647 | https://api.github.com/repos/huggingface/datasets/issues/1044 | https://github.com/huggingface/datasets/pull/1044 | 1,044 | Add AMTTL Chinese Word Segmentation Dataset | closed | 0 | 2020-12-03T11:27:52 | 2020-12-03T17:13:14 | 2020-12-03T17:13:13 | JetRunner | [] | | true |
756,100,717 | https://api.github.com/repos/huggingface/datasets/issues/1043 | https://github.com/huggingface/datasets/pull/1043 | 1,043 | Add TSAC: Tunisian Sentiment Analysis Corpus | closed | 0 | 2020-12-03T11:12:35 | 2020-12-03T13:35:05 | 2020-12-03T13:32:24 | abhishekkrthakur | [] | github: https://github.com/fbougares/TSAC
paper: https://www.aclweb.org/anthology/W17-1307/ | true |
756,097,583 | https://api.github.com/repos/huggingface/datasets/issues/1042 | https://github.com/huggingface/datasets/pull/1042 | 1,042 | Add Big Patent dataset | closed | 2 | 2020-12-03T11:07:59 | 2020-12-04T04:38:26 | 2020-12-04T04:38:26 | mattbui | [] | - More info on the dataset: https://evasharma.github.io/bigpatent/
- There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.
- ~Currently, there are no dummy data for this ... | true |
756,055,102 | https://api.github.com/repos/huggingface/datasets/issues/1041 | https://github.com/huggingface/datasets/pull/1041 | 1,041 | Add SuperGLUE metric | closed | 0 | 2020-12-03T10:11:34 | 2021-02-23T19:02:59 | 2021-02-23T18:02:12 | calpt | [] | Adds a new metric for the SuperGLUE benchmark (similar to the GLUE benchmark metric). | true |
756,050,387 | https://api.github.com/repos/huggingface/datasets/issues/1040 | https://github.com/huggingface/datasets/pull/1040 | 1,040 | Add UN Universal Declaration of Human Rights (UDHR) | closed | 0 | 2020-12-03T10:04:58 | 2020-12-03T19:20:15 | 2020-12-03T19:20:11 | joeddav | [] | Universal declaration of human rights with translations in 464 languages and dialects.
- UN page: https://www.ohchr.org/EN/UDHR/Pages/UDHRIndex.aspx
- Raw data source: https://unicode.org/udhr/index.html
Each instance of the dataset corresponds to one translation of the document. Since there's only one instance ... | true |
756,000,478 | https://api.github.com/repos/huggingface/datasets/issues/1039 | https://github.com/huggingface/datasets/pull/1039 | 1,039 | Update ADD NEW DATASET | closed | 0 | 2020-12-03T08:58:32 | 2020-12-03T09:18:28 | 2020-12-03T09:18:10 | jplu | [] | This PR adds a couple of detail on cloning/rebasing the repo. | true |
755,987,997 | https://api.github.com/repos/huggingface/datasets/issues/1038 | https://github.com/huggingface/datasets/pull/1038 | 1,038 | add med_hop | closed | 0 | 2020-12-03T08:40:27 | 2020-12-03T16:53:13 | 2020-12-03T16:52:23 | patil-suraj | [] | This PR adds the MedHop dataset from the QAngaroo multi hop reading comprehension datasets
More info:
http://qangaroo.cs.ucl.ac.uk/index.html | true |
755,975,586 | https://api.github.com/repos/huggingface/datasets/issues/1037 | https://github.com/huggingface/datasets/pull/1037 | 1,037 | Fix docs indentation issues | closed | 2 | 2020-12-03T08:21:34 | 2020-12-22T16:01:15 | 2020-12-22T16:01:15 | albertvillanova | [] | Replace tabs with spaces. | true |
755,953,294 | https://api.github.com/repos/huggingface/datasets/issues/1036 | https://github.com/huggingface/datasets/pull/1036 | 1,036 | Add PerSenT | closed | 2 | 2020-12-03T07:43:58 | 2020-12-14T13:40:43 | 2020-12-14T13:40:43 | jeromeku | [] | Added [Person's SentimenT](https://stonybrooknlp.github.io/PerSenT/) dataset. | true |
755,947,097 | https://api.github.com/repos/huggingface/datasets/issues/1035 | https://github.com/huggingface/datasets/pull/1035 | 1,035 | add wiki_hop | closed | 1 | 2020-12-03T07:32:26 | 2020-12-03T16:43:40 | 2020-12-03T16:41:12 | patil-suraj | [] | This PR adds the WikiHop dataset from the QAngaroo multi hop reading comprehension datasets
More info:
http://qangaroo.cs.ucl.ac.uk/index.html
| true |
755,936,327 | https://api.github.com/repos/huggingface/datasets/issues/1034 | https://github.com/huggingface/datasets/pull/1034 | 1,034 | add scb_mt_enth_2020 | closed | 0 | 2020-12-03T07:13:49 | 2020-12-03T16:57:23 | 2020-12-03T16:57:23 | cstorm125 | [] | ## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia artic... | true |
755,921,927 | https://api.github.com/repos/huggingface/datasets/issues/1033 | https://github.com/huggingface/datasets/pull/1033 | 1,033 | Add support for ".txm" format | closed | 5 | 2020-12-03T06:52:08 | 2021-02-21T19:47:11 | 2021-02-21T19:47:11 | albertvillanova | [] | In dummy data generation, add support for XML-like ".txm" file format.
Also support filenames with additional compression extension: ".txm.gz". | true |
755,858,785 | https://api.github.com/repos/huggingface/datasets/issues/1032 | https://github.com/huggingface/datasets/pull/1032 | 1,032 | IIT B English to Hindi machine translation dataset | closed | 5 | 2020-12-03T05:18:45 | 2021-01-10T08:44:51 | 2021-01-10T08:44:15 | spatil6 | [] | Adding IIT Bombay English-Hindi Corpus dataset
more info : http://www.cfilt.iitb.ac.in/iitb_parallel/ | true |
755,844,004 | https://api.github.com/repos/huggingface/datasets/issues/1031 | https://github.com/huggingface/datasets/pull/1031 | 1,031 | add crows_pairs | closed | 2 | 2020-12-03T05:05:11 | 2020-12-03T18:29:52 | 2020-12-03T18:29:39 | patil-suraj | [] | This PR adds CrowS-Pairs datasets.
More info:
https://github.com/nyu-mll/crows-pairs/
https://arxiv.org/pdf/2010.00133.pdf | true |
755,777,438 | https://api.github.com/repos/huggingface/datasets/issues/1030 | https://github.com/huggingface/datasets/pull/1030 | 1,030 | allegro_reviews dataset | closed | 0 | 2020-12-03T03:11:39 | 2020-12-04T10:56:29 | 2020-12-03T16:34:47 | abecadel | [] | - **Name:** *allegro_reviews*
- **Description:** *Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to fi... | true |
755,767,616 | https://api.github.com/repos/huggingface/datasets/issues/1029 | https://github.com/huggingface/datasets/pull/1029 | 1,029 | Add PEC | closed | 5 | 2020-12-03T02:46:08 | 2020-12-04T10:58:19 | 2020-12-03T16:15:06 | zhongpeixiang | [] | A persona-based empathetic conversation dataset. | true |
755,712,854 | https://api.github.com/repos/huggingface/datasets/issues/1028 | https://github.com/huggingface/datasets/pull/1028 | 1,028 | Add ASSET dataset for text simplification evaluation | closed | 1 | 2020-12-03T00:28:29 | 2020-12-17T10:03:06 | 2020-12-03T16:34:37 | yjernite | [] | Adding the ASSET dataset from https://github.com/facebookresearch/asset
One config for the simplification data, one for the human ratings of quality.
The README.md borrows from that written by @juand-r | true |
755,695,420 | https://api.github.com/repos/huggingface/datasets/issues/1027 | https://github.com/huggingface/datasets/issues/1027 | 1,027 | Hi | closed | 0 | 2020-12-02T23:47:14 | 2020-12-03T16:42:41 | 2020-12-03T16:42:41 | suemori87 | [] | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | false |
755,689,195 | https://api.github.com/repos/huggingface/datasets/issues/1026 | https://github.com/huggingface/datasets/issues/1026 | 1,026 | Lío o | closed | 0 | 2020-12-02T23:32:25 | 2020-12-03T16:42:47 | 2020-12-03T16:42:47 | ghost | [] | ````l`````````
```
O
```
`````
Ño
```
````
``` | false |
755,673,371 | https://api.github.com/repos/huggingface/datasets/issues/1025 | https://github.com/huggingface/datasets/pull/1025 | 1,025 | Add Sesotho Ner | closed | 4 | 2020-12-02T23:00:15 | 2020-12-16T16:27:03 | 2020-12-16T16:27:02 | yvonnegitau | [] | | true |
755,664,113 | https://api.github.com/repos/huggingface/datasets/issues/1024 | https://github.com/huggingface/datasets/pull/1024 | 1,024 | Add ZEST: ZEroShot learning from Task descriptions | closed | 1 | 2020-12-02T22:41:20 | 2020-12-03T19:21:00 | 2020-12-03T16:09:15 | joeddav | [] | Adds the ZEST dataset on zero-shot learning from task descriptions from AI2.
- Webpage: https://allenai.org/data/zest
- Paper: https://arxiv.org/abs/2011.08115
The nature of this dataset made the supported task tags tricky if you wouldn't mind giving any feedback @yjernite. Also let me know if you think we shoul... | true |
755,655,752 | https://api.github.com/repos/huggingface/datasets/issues/1023 | https://github.com/huggingface/datasets/pull/1023 | 1,023 | Add Schema Guided Dialogue dataset | closed | 0 | 2020-12-02T22:26:01 | 2020-12-03T01:18:01 | 2020-12-03T01:18:01 | yjernite | [] | This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge
- https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
A bit simpler than MultiWOZ, the only tricky thing is the sequence of dictionaries that had to be linearized. There is a config for the data proper, and a config... | true |
755,651,377 | https://api.github.com/repos/huggingface/datasets/issues/1022 | https://github.com/huggingface/datasets/pull/1022 | 1,022 | add MRQA | closed | 1 | 2020-12-02T22:17:56 | 2020-12-04T00:34:26 | 2020-12-04T00:34:25 | VictorSanh | [] | MRQA (shared task 2019)
out of distribution generalization
Framed as extractive question answering
Dataset is the concatenation (of subsets) of existing QA datasets processed to match the SQuAD format | true |
755,644,559 | https://api.github.com/repos/huggingface/datasets/issues/1021 | https://github.com/huggingface/datasets/pull/1021 | 1,021 | Add Gutenberg time references dataset | closed | 1 | 2020-12-02T22:05:26 | 2020-12-03T10:33:39 | 2020-12-03T10:33:38 | TevenLeScao | [] | This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124 | true |
755,601,450 | https://api.github.com/repos/huggingface/datasets/issues/1020 | https://github.com/huggingface/datasets/pull/1020 | 1,020 | Add Setswana NER | closed | 0 | 2020-12-02T20:52:07 | 2020-12-03T14:56:14 | 2020-12-03T14:56:14 | yvonnegitau | [] | | true |
755,582,090 | https://api.github.com/repos/huggingface/datasets/issues/1019 | https://github.com/huggingface/datasets/pull/1019 | 1,019 | Add caWaC dataset | closed | 0 | 2020-12-02T20:18:55 | 2020-12-03T14:47:09 | 2020-12-03T14:47:09 | albertvillanova | [] | Add dataset. | true |
755,570,882 | https://api.github.com/repos/huggingface/datasets/issues/1018 | https://github.com/huggingface/datasets/pull/1018 | 1,018 | Add Sepedi NER | closed | 1 | 2020-12-02T20:01:05 | 2020-12-03T21:47:03 | 2020-12-03T21:46:38 | yvonnegitau | [] | This is a new branch created for this dataset | true |
755,558,175 | https://api.github.com/repos/huggingface/datasets/issues/1017 | https://github.com/huggingface/datasets/pull/1017 | 1,017 | Specify file encoding | closed | 1 | 2020-12-02T19:40:45 | 2020-12-03T00:44:25 | 2020-12-03T00:44:25 | albertvillanova | [] | If not specified, Python uses system default, which for Windows is not "utf-8". | true |
755,521,862 | https://api.github.com/repos/huggingface/datasets/issues/1016 | https://github.com/huggingface/datasets/pull/1016 | 1,016 | Add CLINC150 dataset | closed | 0 | 2020-12-02T18:44:30 | 2020-12-03T10:32:04 | 2020-12-03T10:32:04 | sumanthd17 | [] | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | true |
755,508,841 | https://api.github.com/repos/huggingface/datasets/issues/1015 | https://github.com/huggingface/datasets/pull/1015 | 1,015 | add hard dataset | closed | 1 | 2020-12-02T18:27:36 | 2020-12-03T15:03:54 | 2020-12-03T15:03:54 | zaidalyafeai | [] | Hotel Reviews in Arabic language. | true |
755,505,851 | https://api.github.com/repos/huggingface/datasets/issues/1014 | https://github.com/huggingface/datasets/pull/1014 | 1,014 | Add SciTLDR Dataset (Take 2) | closed | 6 | 2020-12-02T18:22:50 | 2020-12-02T18:55:10 | 2020-12-02T18:37:58 | bharatr21 | [] | Adds the SciTLDR Dataset by AI2
Added the `README.md` card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents
Continued from #986 | true |
755,493,075 | https://api.github.com/repos/huggingface/datasets/issues/1013 | https://github.com/huggingface/datasets/pull/1013 | 1,013 | Adding CS restaurants dataset | closed | 0 | 2020-12-02T18:02:30 | 2020-12-02T18:25:20 | 2020-12-02T18:25:19 | TevenLeScao | [] | This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history. | true |
755,485,658 | https://api.github.com/repos/huggingface/datasets/issues/1012 | https://github.com/huggingface/datasets/pull/1012 | 1,012 | Adding Evidence Inference Data: | closed | 0 | 2020-12-02T17:51:35 | 2020-12-03T15:04:46 | 2020-12-03T15:04:46 | Narsil | [] | http://evidence-inference.ebm-nlp.com/download/
https://arxiv.org/pdf/2005.04177.pdf | true |
755,463,726 | https://api.github.com/repos/huggingface/datasets/issues/1011 | https://github.com/huggingface/datasets/pull/1011 | 1,011 | Add Bilingual Corpus of Arabic-English Parallel Tweets | closed | 6 | 2020-12-02T17:20:02 | 2020-12-04T14:45:10 | 2020-12-04T14:44:33 | sumanthd17 | [] | Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf)
- [x] Followed the instru... | true |
755,432,143 | https://api.github.com/repos/huggingface/datasets/issues/1010 | https://github.com/huggingface/datasets/pull/1010 | 1,010 | Add NoReC: Norwegian Review Corpus | closed | 0 | 2020-12-02T16:38:29 | 2021-02-18T14:47:29 | 2021-02-18T14:47:28 | abhishekkrthakur | [] | | true |
755,384,433 | https://api.github.com/repos/huggingface/datasets/issues/1009 | https://github.com/huggingface/datasets/pull/1009 | 1,009 | Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. | closed | 0 | 2020-12-02T15:40:36 | 2020-12-03T13:16:30 | 2020-12-03T13:16:29 | Narsil | [] | https://github.com/nlpdata/c3
https://arxiv.org/abs/1904.09679 | true |
755,372,798 | https://api.github.com/repos/huggingface/datasets/issues/1008 | https://github.com/huggingface/datasets/pull/1008 | 1,008 | Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679 | closed | 1 | 2020-12-02T15:28:05 | 2020-12-02T15:40:55 | 2020-12-02T15:40:55 | Narsil | [] | null | true |
755,364,078 | https://api.github.com/repos/huggingface/datasets/issues/1007 | https://github.com/huggingface/datasets/pull/1007 | 1,007 | Include license file in source distribution | closed | 0 | 2020-12-02T15:17:43 | 2020-12-02T17:58:05 | 2020-12-02T17:58:05 | synapticarbors | [] | It would be helpful to include the license file in the source distribution. | true |
755,362,766 | https://api.github.com/repos/huggingface/datasets/issues/1006 | https://github.com/huggingface/datasets/pull/1006 | 1,006 | add yahoo_answers_topics | closed | 1 | 2020-12-02T15:16:13 | 2020-12-03T16:44:38 | 2020-12-02T18:01:32 | patil-suraj | [] | This PR adds yahoo answers topic classification dataset.
More info:
https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset
cc @joeddav, @yjernite | true |
755,337,255 | https://api.github.com/repos/huggingface/datasets/issues/1005 | https://github.com/huggingface/datasets/pull/1005 | 1,005 | Adding Autshumato South african langages: | closed | 0 | 2020-12-02T14:47:33 | 2020-12-03T13:13:30 | 2020-12-03T13:13:30 | Narsil | [] | https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned | true |
755,325,368 | https://api.github.com/repos/huggingface/datasets/issues/1004 | https://github.com/huggingface/datasets/issues/1004 | 1,004 | how large datasets are handled under the hood | closed | 3 | 2020-12-02T14:32:40 | 2022-10-05T12:13:29 | 2022-10-05T12:13:29 | rabeehkarimimahabadi | [] | Hi
I want to use multiple large datasets with a mapping-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the case of mapping-style ones? Or is there some sharding under the hood so that data is brought into memory only when necessary, than... | false |
755,310,318 | https://api.github.com/repos/huggingface/datasets/issues/1003 | https://github.com/huggingface/datasets/pull/1003 | 1,003 | Add multi_x_science_sum | closed | 0 | 2020-12-02T14:14:01 | 2020-12-02T17:39:05 | 2020-12-02T17:39:05 | moussaKam | [] | Add Multi-XScience Dataset.
github repo: https://github.com/yaolu/Multi-XScience
paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235) | true |
755,309,758 | https://api.github.com/repos/huggingface/datasets/issues/1002 | https://github.com/huggingface/datasets/pull/1002 | 1,002 | Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | closed | 2 | 2020-12-02T14:13:17 | 2020-12-07T16:58:03 | 2020-12-03T13:14:33 | Narsil | [] | null | true |
755,309,071 | https://api.github.com/repos/huggingface/datasets/issues/1001 | https://github.com/huggingface/datasets/pull/1001 | 1,001 | Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | closed | 1 | 2020-12-02T14:12:30 | 2020-12-02T14:13:12 | 2020-12-02T14:13:12 | Narsil | [] | null | true |
755,292,066 | https://api.github.com/repos/huggingface/datasets/issues/1000 | https://github.com/huggingface/datasets/pull/1000 | 1,000 | UM005: Urdu <> English Translation Dataset | closed | 0 | 2020-12-02T13:51:35 | 2020-12-04T15:34:30 | 2020-12-04T15:34:29 | abhishekkrthakur | [] | Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/ | true |
755,246,786 | https://api.github.com/repos/huggingface/datasets/issues/999 | https://github.com/huggingface/datasets/pull/999 | 999 | add generated_reviews_enth | closed | 0 | 2020-12-02T12:50:43 | 2020-12-03T11:17:28 | 2020-12-03T11:17:28 | cstorm125 | [] | `generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) are English product reviews generated by [CTRL](https://arxiv.org/abs/1... | true |
755,235,356 | https://api.github.com/repos/huggingface/datasets/issues/998 | https://github.com/huggingface/datasets/pull/998 | 998 | adding yahoo_answers_qa | closed | 0 | 2020-12-02T12:33:54 | 2020-12-02T13:45:40 | 2020-12-02T13:26:06 | patil-suraj | [] | Adding Yahoo Answers QA dataset.
More info:
https://ciir.cs.umass.edu/downloads/nfL6/ | true |
755,185,517 | https://api.github.com/repos/huggingface/datasets/issues/997 | https://github.com/huggingface/datasets/pull/997 | 997 | Microsoft CodeXGlue | closed | 4 | 2020-12-02T11:21:18 | 2021-06-08T13:42:25 | 2021-06-08T13:42:24 | madlag | [] | Datasets from https://github.com/microsoft/CodeXGLUE
This contains 13 datasets:
code_x_glue_cc_clone_detection_big_clone_bench
code_x_glue_cc_clone_detection_poj_104
code_x_glue_cc_cloze_testing_all
code_x_glue_cc_cloze_testing_maxmin
code_x_glue_cc_code_completion_line
code_x_glue_cc_code_completion_token
... | true |
755,176,084 | https://api.github.com/repos/huggingface/datasets/issues/996 | https://github.com/huggingface/datasets/issues/996 | 996 | NotADirectoryError while loading the CNN/Dailymail dataset | closed | 12 | 2020-12-02T11:07:56 | 2022-02-17T14:13:39 | 2022-02-17T14:13:39 | arc-bu | [] |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | false |
755,175,199 | https://api.github.com/repos/huggingface/datasets/issues/995 | https://github.com/huggingface/datasets/pull/995 | 995 | added dataset circa | closed | 1 | 2020-12-02T11:06:39 | 2020-12-04T10:58:16 | 2020-12-03T09:39:37 | bhavitvyamalik | [] | Dataset Circa added. Only README.md and dataset card left | true |
755,146,834 | https://api.github.com/repos/huggingface/datasets/issues/994 | https://github.com/huggingface/datasets/pull/994 | 994 | Add Sepedi ner corpus | closed | 2 | 2020-12-02T10:30:07 | 2020-12-03T10:19:14 | 2020-12-02T18:20:08 | yvonnegitau | [] | true | |
755,135,768 | https://api.github.com/repos/huggingface/datasets/issues/993 | https://github.com/huggingface/datasets/issues/993 | 993 | Problem downloading amazon_reviews_multi | closed | 2 | 2020-12-02T10:15:57 | 2022-10-05T12:21:34 | 2022-10-05T12:21:34 | hfawaz | [] | Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json
`
I used the following code to load the dataset:
`load_dataset(
dataset_name,
... | false |
755,124,963 | https://api.github.com/repos/huggingface/datasets/issues/992 | https://github.com/huggingface/datasets/pull/992 | 992 | Add CAIL 2018 dataset | closed | 0 | 2020-12-02T10:01:40 | 2020-12-02T16:49:02 | 2020-12-02T16:49:01 | JetRunner | [] | | true |
755,117,902 | https://api.github.com/repos/huggingface/datasets/issues/991 | https://github.com/huggingface/datasets/pull/991 | 991 | Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets) | closed | 0 | 2020-12-02T09:52:19 | 2020-12-03T11:01:26 | 2020-12-03T11:01:26 | Narsil | [] | null | true |
755,097,798 | https://api.github.com/repos/huggingface/datasets/issues/990 | https://github.com/huggingface/datasets/pull/990 | 990 | Add E2E NLG | closed | 0 | 2020-12-02T09:25:12 | 2020-12-03T13:08:05 | 2020-12-03T13:08:04 | lhoestq | [] | Adding the E2E NLG dataset.
More info here : http://www.macs.hw.ac.uk/InteractionLab/E2E/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_genera... | true |
755,079,394 | https://api.github.com/repos/huggingface/datasets/issues/989 | https://github.com/huggingface/datasets/pull/989 | 989 | Fix SV -> NO | closed | 0 | 2020-12-02T08:59:59 | 2020-12-02T09:18:21 | 2020-12-02T09:18:14 | jplu | [] | This PR fixes the small typo as seen in #956 | true |
755,069,159 | https://api.github.com/repos/huggingface/datasets/issues/988 | https://github.com/huggingface/datasets/issues/988 | 988 | making sure datasets are not loaded in memory and distributed training of them | closed | 2 | 2020-12-02T08:45:15 | 2022-10-05T13:00:42 | 2022-10-05T13:00:42 | rabeehk | [] | Hi
I am dealing with large-scale datasets which I need to train on in a distributed way. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and does not become any faster than 1 TPU core. 1) How can I make sure data is not loaded into memory? 2) in cas... | false |
755,059,469 | https://api.github.com/repos/huggingface/datasets/issues/987 | https://github.com/huggingface/datasets/pull/987 | 987 | Add OPUS DOGC dataset | closed | 1 | 2020-12-02T08:30:32 | 2020-12-04T13:27:41 | 2020-12-04T13:27:41 | albertvillanova | [] | | true |
755,047,470 | https://api.github.com/repos/huggingface/datasets/issues/986 | https://github.com/huggingface/datasets/pull/986 | 986 | Add SciTLDR Dataset | closed | 5 | 2020-12-02T08:11:16 | 2020-12-02T18:37:22 | 2020-12-02T18:02:59 | bharatr21 | [] | Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents | true |
755,020,564 | https://api.github.com/repos/huggingface/datasets/issues/985 | https://github.com/huggingface/datasets/pull/985 | 985 | Add GAP dataset | closed | 3 | 2020-12-02T07:25:11 | 2022-10-06T14:11:52 | 2020-12-02T16:16:32 | VictorSanh | [] | GAP dataset
Gender bias coreference resolution | true |
755,009,916 | https://api.github.com/repos/huggingface/datasets/issues/984 | https://github.com/huggingface/datasets/pull/984 | 984 | committing Whoa file | closed | 2 | 2020-12-02T07:07:46 | 2020-12-02T16:15:29 | 2020-12-02T15:40:58 | StulosDunamos | [] | true | |
754,966,620 | https://api.github.com/repos/huggingface/datasets/issues/983 | https://github.com/huggingface/datasets/pull/983 | 983 | add mc taco | closed | 0 | 2020-12-02T05:54:55 | 2020-12-02T15:37:47 | 2020-12-02T15:37:46 | VictorSanh | [] | MC-TACO
Temporal commonsense knowledge | true |
754,946,337 | https://api.github.com/repos/huggingface/datasets/issues/982 | https://github.com/huggingface/datasets/pull/982 | 982 | add prachathai67k take2 | closed | 0 | 2020-12-02T05:12:01 | 2020-12-02T10:18:11 | 2020-12-02T10:18:11 | cstorm125 | [] | I decided it would be faster to create a new pull request instead of fixing the rebase issues.
continuing from https://github.com/huggingface/datasets/pull/954
| true |
754,937,612 | https://api.github.com/repos/huggingface/datasets/issues/981 | https://github.com/huggingface/datasets/pull/981 | 981 | add wisesight_sentiment take2 | closed | 0 | 2020-12-02T04:50:59 | 2020-12-02T10:37:13 | 2020-12-02T10:37:13 | cstorm125 | [] | Take 2, since last time the rebase issues were taking me too much time to fix as opposed to just opening a new one. | true |
754,899,301 | https://api.github.com/repos/huggingface/datasets/issues/980 | https://github.com/huggingface/datasets/pull/980 | 980 | Wongnai - Thai reviews dataset | closed | 2 | 2020-12-02T03:20:08 | 2020-12-02T15:34:41 | 2020-12-02T15:30:05 | mapmeld | [] | 40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ ) | true |
754,893,337 | https://api.github.com/repos/huggingface/datasets/issues/979 | https://github.com/huggingface/datasets/pull/979 | 979 | [WIP] Add multi woz | closed | 0 | 2020-12-02T03:05:42 | 2020-12-02T16:07:16 | 2020-12-02T16:07:16 | yjernite | [] | This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2
It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md
On the plus side the structure is broadly similar to that... | true |
754,854,478 | https://api.github.com/repos/huggingface/datasets/issues/978 | https://github.com/huggingface/datasets/pull/978 | 978 | Add code refinement | closed | 5 | 2020-12-02T01:29:58 | 2020-12-07T01:52:58 | 2020-12-07T01:52:58 | reshinthadithyan | [] | ### OVERVIEW
Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. Code refinement aims to automatically fix bugs in the code, which can contribute to reducing t... | true |
754,839,594 | https://api.github.com/repos/huggingface/datasets/issues/977 | https://github.com/huggingface/datasets/pull/977 | 977 | Add ROPES dataset | closed | 0 | 2020-12-02T00:52:10 | 2020-12-02T10:58:36 | 2020-12-02T10:58:35 | VictorSanh | [] | ROPES dataset
Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed as a reading comprehension task following squad-style extractive qa.
One thing to note: labels of the test set are hidden (leaderboard submiss... | true |
754,826,146 | https://api.github.com/repos/huggingface/datasets/issues/976 | https://github.com/huggingface/datasets/pull/976 | 976 | Arabic pos dialect | closed | 2 | 2020-12-02T00:21:13 | 2020-12-09T17:30:32 | 2020-12-09T17:30:32 | mcmillanmajora | [] | A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP. | true |
754,823,701 | https://api.github.com/repos/huggingface/datasets/issues/975 | https://github.com/huggingface/datasets/pull/975 | 975 | add MeTooMA dataset | closed | 0 | 2020-12-02T00:15:55 | 2020-12-02T10:58:56 | 2020-12-02T10:58:55 | akash418 | [] | This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guideli... | true |
754,811,185 | https://api.github.com/repos/huggingface/datasets/issues/974 | https://github.com/huggingface/datasets/pull/974 | 974 | Add MeTooMA Dataset | closed | 0 | 2020-12-01T23:44:01 | 2020-12-01T23:57:58 | 2020-12-01T23:57:58 | akash418 | [] | true | |
754,807,963 | https://api.github.com/repos/huggingface/datasets/issues/973 | https://github.com/huggingface/datasets/pull/973 | 973 | Adding The Microsoft Terminology Collection dataset. | closed | 9 | 2020-12-01T23:36:23 | 2020-12-04T15:25:44 | 2020-12-04T15:12:46 | leoxzhao | [] | true | |
754,787,314 | https://api.github.com/repos/huggingface/datasets/issues/972 | https://github.com/huggingface/datasets/pull/972 | 972 | Add Children's Book Test (CBT) dataset | closed | 2 | 2020-12-01T22:53:26 | 2021-03-19T11:30:03 | 2021-03-19T11:30:03 | thomwolf | [] | Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016).
Sentence completion given a few sentences as context from a children's book. | true |
754,784,041 | https://api.github.com/repos/huggingface/datasets/issues/971 | https://github.com/huggingface/datasets/pull/971 | 971 | add piqa | closed | 0 | 2020-12-01T22:47:04 | 2020-12-02T09:58:02 | 2020-12-02T09:58:01 | VictorSanh | [] | Physical Interaction: Question Answering (commonsense)
https://yonatanbisk.com/piqa/ | true |
754,697,489 | https://api.github.com/repos/huggingface/datasets/issues/970 | https://github.com/huggingface/datasets/pull/970 | 970 | Add SWAG | closed | 0 | 2020-12-01T20:21:05 | 2020-12-02T09:55:16 | 2020-12-02T09:55:15 | VictorSanh | [] | Commonsense NLI -> https://rowanzellers.com/swag/ | true |
754,681,940 | https://api.github.com/repos/huggingface/datasets/issues/969 | https://github.com/huggingface/datasets/pull/969 | 969 | Add wiki auto dataset | closed | 0 | 2020-12-01T19:58:11 | 2020-12-02T16:19:14 | 2020-12-02T16:19:14 | yjernite | [] | This PR adds the WikiAuto sentence simplification dataset
https://github.com/chaojiang06/wiki-auto
This is also a prospective GEM task, hence the README.md | true |
754,659,015 | https://api.github.com/repos/huggingface/datasets/issues/968 | https://github.com/huggingface/datasets/pull/968 | 968 | ADD Afrikaans NER | closed | 1 | 2020-12-01T19:23:03 | 2020-12-02T09:41:28 | 2020-12-02T09:41:28 | yvonnegitau | [] | Afrikaans NER corpus | true |
754,578,988 | https://api.github.com/repos/huggingface/datasets/issues/967 | https://github.com/huggingface/datasets/pull/967 | 967 | Add CS Restaurants dataset | closed | 4 | 2020-12-01T17:17:37 | 2020-12-02T17:57:44 | 2020-12-02T17:57:25 | TevenLeScao | [] | This PR adds the Czech restaurants dataset for Czech NLG. | true |
754,558,686 | https://api.github.com/repos/huggingface/datasets/issues/966 | https://github.com/huggingface/datasets/pull/966 | 966 | Add CLINC150 Dataset | closed | 2 | 2020-12-01T16:50:13 | 2020-12-02T18:45:43 | 2020-12-02T18:45:30 | sumanthd17 | [] | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | true |
754,553,169 | https://api.github.com/repos/huggingface/datasets/issues/965 | https://github.com/huggingface/datasets/pull/965 | 965 | Add CLINC150 Dataset | closed | 0 | 2020-12-01T16:43:00 | 2020-12-01T16:51:16 | 2020-12-01T16:49:15 | sumanthd17 | [] | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | true |
754,474,660 | https://api.github.com/repos/huggingface/datasets/issues/964 | https://github.com/huggingface/datasets/pull/964 | 964 | Adding the WebNLG dataset | closed | 1 | 2020-12-01T15:05:23 | 2020-12-02T17:34:05 | 2020-12-02T17:34:05 | yjernite | [] | This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.
More information can be found [here](https://webnlg-challenge.loria.fr/)
Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB ... | true |
754,451,234 | https://api.github.com/repos/huggingface/datasets/issues/963 | https://github.com/huggingface/datasets/pull/963 | 963 | add CODAH dataset | closed | 0 | 2020-12-01T14:37:05 | 2020-12-02T13:45:58 | 2020-12-02T13:21:25 | patil-suraj | [] | Adding CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH | true |
754,441,428 | https://api.github.com/repos/huggingface/datasets/issues/962 | https://github.com/huggingface/datasets/pull/962 | 962 | Add Danish Political Comments Dataset | closed | 0 | 2020-12-01T14:28:32 | 2020-12-03T10:31:55 | 2020-12-03T10:31:54 | abhishekkrthakur | [] | true | |
754,434,398 | https://api.github.com/repos/huggingface/datasets/issues/961 | https://github.com/huggingface/datasets/issues/961 | 961 | sample multiple datasets | closed | 6 | 2020-12-01T14:20:02 | 2024-06-17T08:23:20 | 2023-07-20T14:08:57 | rabeehk | [] | Hi
I am dealing with multiple datasets, and I need a dataloader over them with the condition that in each batch the samples come from only one of the datasets. My main question is:
- I need a way to sample the datasets with some weights, let's say 2x dataset1 and 1x dataset2; could you point me how I c... | false |
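The per-batch weighted selection the issue asks about can be sketched with only the standard library. This is a hypothetical helper (the function name and arguments are made up for illustration, not a `datasets` API): each batch is drawn from a single dataset, and which dataset a batch comes from is chosen with the requested weights (e.g. 2x dataset1, 1x dataset2).

```python
import random

def weighted_batch_sampler(datasets, weights, batch_size, num_batches, seed=0):
    """Yield batches where every batch comes from exactly one dataset.

    The source dataset for each batch is picked at random with the
    given weights; examples within the batch are then sampled from it.
    """
    rng = random.Random(seed)
    for _ in range(num_batches):
        # Pick one dataset for this whole batch, weighted 2:1, 3:1, etc.
        ds = rng.choices(datasets, weights=weights, k=1)[0]
        yield [rng.choice(ds) for _ in range(batch_size)]

# Two toy "datasets", tagged so batch homogeneity is visible:
ds1 = [f"a{i}" for i in range(100)]
ds2 = [f"b{i}" for i in range(100)]
batches = list(weighted_batch_sampler([ds1, ds2], [2, 1],
                                      batch_size=4, num_batches=6))
```

Under these assumptions roughly two thirds of the batches come from ds1, and no batch ever mixes the two datasets, which is the constraint described above.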