| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
761,415,420 | https://api.github.com/repos/huggingface/datasets/issues/1461 | https://github.com/huggingface/datasets/pull/1461 | 1,461 | Adding NewsQA dataset | closed | 6 | 2020-12-10T17:01:10 | 2020-12-17T18:29:03 | 2020-12-17T18:27:36 | rsanjaykamath | [] | Since the dataset has legal restrictions on circulating the original data, it has to be manually downloaded by the user and loaded into the library. | true |
761,349,149 | https://api.github.com/repos/huggingface/datasets/issues/1460 | https://github.com/huggingface/datasets/pull/1460 | 1,460 | add Bengali Hate Speech dataset | closed | 7 | 2020-12-10T15:40:55 | 2021-09-17T16:54:53 | 2021-01-04T14:08:29 | stevhliu | [] | | true |
761,258,395 | https://api.github.com/repos/huggingface/datasets/issues/1459 | https://github.com/huggingface/datasets/pull/1459 | 1,459 | Add Google Conceptual Captions Dataset | closed | 1 | 2020-12-10T13:50:33 | 2022-04-14T13:14:19 | 2022-04-14T13:07:49 | abhishekkrthakur | [] | | true |
761,235,962 | https://api.github.com/repos/huggingface/datasets/issues/1458 | https://github.com/huggingface/datasets/pull/1458 | 1,458 | Add id_nergrit_corpus | closed | 1 | 2020-12-10T13:20:34 | 2020-12-17T10:45:15 | 2020-12-17T10:45:15 | cahya-wirawan | [] | Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis.<br>Recently my PR for id_nergrit_ner has been accepted and merged to the main branch. The id_nergrit_ner has only one dataset (NER), and this new PR renamed the dataset from id_nergrit_ner to id_n... | true |
761,232,610 | https://api.github.com/repos/huggingface/datasets/issues/1457 | https://github.com/huggingface/datasets/pull/1457 | 1,457 | add hrenwac_para | closed | 1 | 2020-12-10T13:16:20 | 2020-12-10T13:35:54 | 2020-12-10T13:35:10 | IvanZidov | [] | | true |
761,231,296 | https://api.github.com/repos/huggingface/datasets/issues/1456 | https://github.com/huggingface/datasets/pull/1456 | 1,456 | Add CC100 Dataset | closed | 0 | 2020-12-10T13:14:37 | 2020-12-14T10:20:09 | 2020-12-14T10:20:08 | abhishekkrthakur | [] | Closes #773 | true |
761,205,073 | https://api.github.com/repos/huggingface/datasets/issues/1455 | https://github.com/huggingface/datasets/pull/1455 | 1,455 | Add HEAD-QA: A Healthcare Dataset for Complex Reasoning | closed | 1 | 2020-12-10T12:36:56 | 2020-12-17T17:03:32 | 2020-12-17T16:58:11 | mariagrandury | [] | HEAD-QA is a multi-choice HEAlthcare Dataset; the questions come from exams to access a specialized position in the Spanish healthcare system. | true |
761,199,862 | https://api.github.com/repos/huggingface/datasets/issues/1454 | https://github.com/huggingface/datasets/pull/1454 | 1,454 | Add kinnews_kirnews | closed | 1 | 2020-12-10T12:29:08 | 2020-12-17T18:34:16 | 2020-12-17T18:34:16 | saradhix | [] | Add kinnews and kirnews | true |
761,188,657 | https://api.github.com/repos/huggingface/datasets/issues/1453 | https://github.com/huggingface/datasets/pull/1453 | 1,453 | Adding ethos dataset clean | closed | 2 | 2020-12-10T12:13:21 | 2020-12-14T15:00:46 | 2020-12-14T10:31:24 | iamollas | [] | I addressed the comments on the PR1318 | true |
761,104,924 | https://api.github.com/repos/huggingface/datasets/issues/1452 | https://github.com/huggingface/datasets/issues/1452 | 1,452 | SNLI dataset contains labels with value -1 | closed | 2 | 2020-12-10T10:16:55 | 2020-12-10T17:49:55 | 2020-12-10T17:49:55 | aarnetalman | [] | ```<br>import datasets<br>nli_data = datasets.load_dataset("snli")<br>train_data = nli_data['train']<br>train_labels = train_data['label']<br>label_set = set(train_labels)<br>print(label_set)<br>```<br>**Output:**<br>`{0, 1, 2, -1}` | false |
761,102,770 | https://api.github.com/repos/huggingface/datasets/issues/1451 | https://github.com/huggingface/datasets/pull/1451 | 1,451 | Add European Center for Disease Control and Preventions's (ECDC) Translation Memory dataset | closed | 0 | 2020-12-10T10:14:20 | 2020-12-11T16:50:09 | 2020-12-11T16:50:09 | SBrandeis | [] | ECDC-TM homepage: https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory | true |
761,102,429 | https://api.github.com/repos/huggingface/datasets/issues/1450 | https://github.com/huggingface/datasets/pull/1450 | 1,450 | Fix version in bible_para | closed | 0 | 2020-12-10T10:13:55 | 2020-12-11T16:40:41 | 2020-12-11T16:40:40 | abhishekkrthakur | [] | | true |
761,083,210 | https://api.github.com/repos/huggingface/datasets/issues/1449 | https://github.com/huggingface/datasets/pull/1449 | 1,449 | add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC) [PROPER] | closed | 2 | 2020-12-10T09:51:08 | 2020-12-11T17:07:46 | 2020-12-11T17:07:46 | aseifert | [] | - **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)<br>- **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data<br>- **Paper:** https://www.aclweb.org/anthology/W19-4406/<br>- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is us... | true |
761,080,776 | https://api.github.com/repos/huggingface/datasets/issues/1448 | https://github.com/huggingface/datasets/pull/1448 | 1,448 | add thai_toxicity_tweet | closed | 0 | 2020-12-10T09:48:02 | 2020-12-11T16:21:27 | 2020-12-11T16:21:27 | cstorm125 | [] | Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary. The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus analysis indicates that tweets t... | true |
761,067,955 | https://api.github.com/repos/huggingface/datasets/issues/1447 | https://github.com/huggingface/datasets/pull/1447 | 1,447 | Update step-by-step guide for windows | closed | 1 | 2020-12-10T09:30:59 | 2020-12-10T12:18:47 | 2020-12-10T09:31:14 | thomwolf | [] | Update step-by-step guide for windows to give an alternative to `make style`. | true |
761,060,323 | https://api.github.com/repos/huggingface/datasets/issues/1446 | https://github.com/huggingface/datasets/pull/1446 | 1,446 | Add Bing Coronavirus Query Set | closed | 0 | 2020-12-10T09:20:46 | 2020-12-11T17:03:08 | 2020-12-11T17:03:07 | abhishekkrthakur | [] | | true |
761,057,851 | https://api.github.com/repos/huggingface/datasets/issues/1445 | https://github.com/huggingface/datasets/pull/1445 | 1,445 | Added dataset clickbait_news_bg | closed | 2 | 2020-12-10T09:17:28 | 2020-12-15T07:45:19 | 2020-12-15T07:45:19 | tsvm | [] | | true |
761,055,651 | https://api.github.com/repos/huggingface/datasets/issues/1444 | https://github.com/huggingface/datasets/issues/1444 | 1,444 | FileNotFound remotly, can't load a dataset | closed | 2 | 2020-12-10T09:14:47 | 2020-12-15T17:41:14 | 2020-12-15T17:41:14 | sadakmed | [] | ```py<br>!pip install datasets<br>import datasets as ds<br>corpus = ds.load_dataset('large_spanish_corpus')<br>```<br>gives the error<br>> FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/large_spa... | false |
761,033,061 | https://api.github.com/repos/huggingface/datasets/issues/1443 | https://github.com/huggingface/datasets/pull/1443 | 1,443 | Add OPUS Wikimedia Translations Dataset | closed | 1 | 2020-12-10T08:43:02 | 2023-09-24T09:40:41 | 2022-10-03T09:38:48 | abhishekkrthakur | ["dataset contribution"] | null | true |
761,026,069 | https://api.github.com/repos/huggingface/datasets/issues/1442 | https://github.com/huggingface/datasets/pull/1442 | 1,442 | Create XML dummy data without loading all dataset in memory | closed | 0 | 2020-12-10T08:32:07 | 2020-12-17T09:59:43 | 2020-12-17T09:59:43 | albertvillanova | [] | While I was adding one XML dataset, I noticed that the whole dataset was loaded in memory during the dummy data generation process (using nearly all my laptop RAM).<br>Looking at the code, I found that the origin is the use of `ET.parse()`. This method loads **all the file content in memory**.<br>In order to fix thi... | true |
761,021,823 | https://api.github.com/repos/huggingface/datasets/issues/1441 | https://github.com/huggingface/datasets/pull/1441 | 1,441 | Add Igbo-English Machine Translation Dataset | closed | 0 | 2020-12-10T08:25:34 | 2020-12-11T15:54:53 | 2020-12-11T15:54:52 | abhishekkrthakur | [] | | true |
760,973,057 | https://api.github.com/repos/huggingface/datasets/issues/1440 | https://github.com/huggingface/datasets/pull/1440 | 1,440 | Adding english plaintext jokes dataset | closed | 2 | 2020-12-10T07:04:17 | 2020-12-13T05:22:00 | 2020-12-12T05:55:43 | purvimisal | [] | This PR adds a dataset of 200k English plaintext Jokes from three sources: Reddit, Stupidstuff, and Wocka.<br>Link: https://github.com/taivop/joke-dataset<br>This is my second PR.<br>First was: [#1269](https://github.com/huggingface/datasets/pull/1269) | true |
760,968,410 | https://api.github.com/repos/huggingface/datasets/issues/1439 | https://github.com/huggingface/datasets/pull/1439 | 1,439 | Update README.md | closed | 0 | 2020-12-10T06:57:01 | 2020-12-11T15:22:53 | 2020-12-11T15:22:53 | tuner007 | [] | 1k-10k -> 1k-1M<br>3 separate configs are available with min. 1K and max. 211.3k examples | true |
760,962,193 | https://api.github.com/repos/huggingface/datasets/issues/1438 | https://github.com/huggingface/datasets/pull/1438 | 1,438 | A descriptive name for my changes | closed | 3 | 2020-12-10T06:47:24 | 2020-12-15T10:36:27 | 2020-12-15T10:36:26 | rahul-art | [] | hind encorp resubmitted | true |
760,891,879 | https://api.github.com/repos/huggingface/datasets/issues/1437 | https://github.com/huggingface/datasets/pull/1437 | 1,437 | Add Indosum dataset | closed | 2 | 2020-12-10T05:02:00 | 2022-10-03T09:38:54 | 2022-10-03T09:38:54 | prasastoadi | ["dataset contribution"] | null | true |
760,873,132 | https://api.github.com/repos/huggingface/datasets/issues/1436 | https://github.com/huggingface/datasets/pull/1436 | 1,436 | add ALT | closed | 1 | 2020-12-10T04:17:21 | 2020-12-13T16:14:18 | 2020-12-11T15:52:41 | chameleonTK | [] | ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ | true |
760,867,325 | https://api.github.com/repos/huggingface/datasets/issues/1435 | https://github.com/huggingface/datasets/pull/1435 | 1,435 | Add FreebaseQA dataset | closed | 12 | 2020-12-10T04:03:27 | 2021-02-05T09:47:30 | 2021-02-05T09:47:30 | anaerobeth | [] | This PR adds the FreebaseQA dataset: A Trivia-type QA Data Set over the Freebase Knowledge Graph<br>Repo: https://github.com/kelvin-jiang/FreebaseQA<br>Paper: https://www.aclweb.org/anthology/N19-1028.pdf<br>## TODO: create dummy data<br>Error encountered when running `python datasets-cli dummy_data datasets/freebase... | true |
760,821,474 | https://api.github.com/repos/huggingface/datasets/issues/1434 | https://github.com/huggingface/datasets/pull/1434 | 1,434 | add_sofc_materials_articles | closed | 1 | 2020-12-10T02:15:02 | 2020-12-17T09:59:54 | 2020-12-17T09:59:54 | ZacharySBrown | [] | adding [SOFC-Exp Corpus](https://arxiv.org/abs/2006.03039) | true |
760,813,539 | https://api.github.com/repos/huggingface/datasets/issues/1433 | https://github.com/huggingface/datasets/pull/1433 | 1,433 | Adding the ASSIN 2 dataset | closed | 0 | 2020-12-10T01:57:02 | 2020-12-11T14:32:56 | 2020-12-11T14:32:56 | jonatasgrosman | [] | Adding the ASSIN 2 dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring | true |
760,808,449 | https://api.github.com/repos/huggingface/datasets/issues/1432 | https://github.com/huggingface/datasets/pull/1432 | 1,432 | Adding journalists questions dataset | closed | 2 | 2020-12-10T01:44:47 | 2020-12-14T13:51:05 | 2020-12-14T13:51:04 | MaramHasanain | [] | This is my first dataset to be added to HF. | true |
760,791,019 | https://api.github.com/repos/huggingface/datasets/issues/1431 | https://github.com/huggingface/datasets/pull/1431 | 1,431 | Ar cov19 | closed | 1 | 2020-12-10T00:59:34 | 2020-12-11T15:01:23 | 2020-12-11T15:01:23 | Fatima-Haouari | [] | Adding ArCOV-19 dataset. ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is the first publicly-available Arabic Twitter dataset covering COVID-19 pandemic that includes over 1M tweets alongside the propagation networks of the most-popular subs... | true |
760,779,666 | https://api.github.com/repos/huggingface/datasets/issues/1430 | https://github.com/huggingface/datasets/pull/1430 | 1,430 | Add 1.5 billion words Arabic corpus | closed | 10 | 2020-12-10T00:32:18 | 2020-12-22T10:03:59 | 2020-12-22T10:03:59 | zaidalyafeai | [] | Needs https://github.com/huggingface/datasets/pull/1429 to work. | true |
760,737,818 | https://api.github.com/repos/huggingface/datasets/issues/1429 | https://github.com/huggingface/datasets/pull/1429 | 1,429 | extract rar files | closed | 0 | 2020-12-09T23:01:10 | 2020-12-18T15:03:37 | 2020-12-18T15:03:37 | zaidalyafeai | [] | Unfortunately, I didn't find any native python libraries for extracting rar files. The user has to manually install `sudo apt-get install unrar`. Discussion with @yjernite is in the slack channel. | true |
760,736,726 | https://api.github.com/repos/huggingface/datasets/issues/1428 | https://github.com/huggingface/datasets/pull/1428 | 1,428 | Add twi wordsim353 | closed | 0 | 2020-12-09T22:59:19 | 2020-12-11T13:57:32 | 2020-12-11T13:57:32 | dadelani | [] | Add twi WordSim 353 | true |
760,736,703 | https://api.github.com/repos/huggingface/datasets/issues/1427 | https://github.com/huggingface/datasets/pull/1427 | 1,427 | Hebrew project BenYehuda | closed | 1 | 2020-12-09T22:59:17 | 2020-12-11T17:39:23 | 2020-12-11T17:39:23 | imvladikon | [] | Added Hebrew corpus from https://github.com/projectbenyehuda/public_domain_dump | true |
760,735,763 | https://api.github.com/repos/huggingface/datasets/issues/1426 | https://github.com/huggingface/datasets/pull/1426 | 1,426 | init commit for MultiReQA for third PR with all issues fixed | closed | 2 | 2020-12-09T22:57:41 | 2020-12-11T13:37:08 | 2020-12-11T13:37:08 | Karthik-Bhaskar | [] | 3rd PR w.r.t. PR #1349 with all the issues fixed, as #1349 had uploaded other files along with the multi_re_qa dataset | true |
760,733,638 | https://api.github.com/repos/huggingface/datasets/issues/1425 | https://github.com/huggingface/datasets/pull/1425 | 1,425 | Add german common crawl dataset | closed | 4 | 2020-12-09T22:54:12 | 2022-10-03T09:39:02 | 2022-10-03T09:39:02 | Phil1108 | ["dataset contribution"] | Adding a subpart of the Common Crawl which was extracted with this repo https://github.com/facebookresearch/cc_net and additionally filtered for duplicates | true |
760,724,914 | https://api.github.com/repos/huggingface/datasets/issues/1424 | https://github.com/huggingface/datasets/pull/1424 | 1,424 | Add yoruba wordsim353 | closed | 0 | 2020-12-09T22:37:42 | 2020-12-09T22:39:45 | 2020-12-09T22:39:45 | dadelani | [] | Added WordSim-353 evaluation dataset for Yoruba | true |
760,712,421 | https://api.github.com/repos/huggingface/datasets/issues/1423 | https://github.com/huggingface/datasets/pull/1423 | 1,423 | Imppres | closed | 11 | 2020-12-09T22:14:12 | 2020-12-17T18:27:14 | 2020-12-17T18:27:14 | aclifton314 | [] | 2nd PR ever! Hopefully I'm starting to get the hang of this. This is for the IMPPRES dataset. Please let me know of any corrections or changes that need to be made. | true |
760,707,113 | https://api.github.com/repos/huggingface/datasets/issues/1422 | https://github.com/huggingface/datasets/issues/1422 | 1,422 | Can't map dataset (loaded from csv) | closed | 2 | 2020-12-09T22:05:42 | 2020-12-17T18:13:40 | 2020-12-17T18:13:40 | SolomidHero | [] | Hello! I am trying to load a single csv file with two columns: ('label': str, 'text': str), where label is a str of two possible classes.<br>Below steps are similar to [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert model and tokenizer are used to class... | false |
760,706,851 | https://api.github.com/repos/huggingface/datasets/issues/1421 | https://github.com/huggingface/datasets/pull/1421 | 1,421 | adding fake-news-english-2 | closed | 0 | 2020-12-09T22:05:13 | 2020-12-13T00:48:49 | 2020-12-13T00:48:49 | MisbahKhan789 | [] | | true |
760,700,388 | https://api.github.com/repos/huggingface/datasets/issues/1420 | https://github.com/huggingface/datasets/pull/1420 | 1,420 | Add dataset yoruba_wordsim353 | closed | 1 | 2020-12-09T21:54:29 | 2020-12-11T13:34:04 | 2020-12-11T13:34:04 | michael-aloys | [] | Contains loading script as well as dataset card including YAML tags. | true |
760,673,716 | https://api.github.com/repos/huggingface/datasets/issues/1419 | https://github.com/huggingface/datasets/pull/1419 | 1,419 | Add Turkish News Category Dataset (270K) | closed | 3 | 2020-12-09T21:08:33 | 2020-12-11T14:02:31 | 2020-12-11T14:02:31 | basakbuluz | [] | This PR adds the Turkish News Categories Dataset (270K) dataset which is a text classification dataset by me and @yavuzKomecoglu. Turkish news dataset consisting of **273601 news** in **17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https://www.interpress.com/)... | true |
760,672,320 | https://api.github.com/repos/huggingface/datasets/issues/1418 | https://github.com/huggingface/datasets/pull/1418 | 1,418 | Add arabic dialects | closed | 1 | 2020-12-09T21:06:07 | 2020-12-17T09:40:56 | 2020-12-17T09:40:56 | mcmillanmajora | [] | Data loading script and dataset card for Dialectal Arabic Resources dataset.<br>Fixed git issues from PR #976 | true |
760,660,918 | https://api.github.com/repos/huggingface/datasets/issues/1417 | https://github.com/huggingface/datasets/pull/1417 | 1,417 | WIP: Vinay/add peer read dataset | closed | 0 | 2020-12-09T20:49:52 | 2020-12-11T18:43:31 | 2020-12-11T18:43:31 | vinaykudari | [] | | true |
760,653,971 | https://api.github.com/repos/huggingface/datasets/issues/1416 | https://github.com/huggingface/datasets/pull/1416 | 1,416 | Add Shrinked Turkish NER from Kaggle. | closed | 0 | 2020-12-09T20:38:35 | 2020-12-11T11:23:31 | 2020-12-11T11:23:31 | bhctsntrk | [] | Add Shrinked Turkish NER from [Kaggle](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar). | true |
760,642,786 | https://api.github.com/repos/huggingface/datasets/issues/1415 | https://github.com/huggingface/datasets/pull/1415 | 1,415 | Add Hate Speech and Offensive Language Detection dataset | closed | 3 | 2020-12-09T20:22:12 | 2020-12-14T18:06:44 | 2020-12-14T16:25:31 | hugoabonizio | [] | Add [Hate Speech and Offensive Language Detection dataset](https://github.com/t-davidson/hate-speech-and-offensive-language) from [this paper](https://arxiv.org/abs/1703.04009). | true |
760,622,133 | https://api.github.com/repos/huggingface/datasets/issues/1414 | https://github.com/huggingface/datasets/pull/1414 | 1,414 | Adding BioCreative II Gene Mention corpus | closed | 0 | 2020-12-09T19:49:28 | 2020-12-11T11:17:40 | 2020-12-11T11:17:40 | mahajandiwakar | [] | Adding BioCreative II Gene Mention corpus | true |
760,615,090 | https://api.github.com/repos/huggingface/datasets/issues/1413 | https://github.com/huggingface/datasets/pull/1413 | 1,413 | Add OffComBR | closed | 3 | 2020-12-09T19:38:08 | 2020-12-14T18:06:45 | 2020-12-14T16:51:10 | hugoabonizio | [] | Add [OffComBR](https://github.com/rogersdepelle/OffComBR) from [Offensive Comments in the Brazilian Web: a dataset and baseline results](https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222) paper.<br>But I'm having a hard time generating dummy data since the original dataset extension is `.arff` and the [_crea... | true |
760,607,959 | https://api.github.com/repos/huggingface/datasets/issues/1412 | https://github.com/huggingface/datasets/pull/1412 | 1,412 | Adding the ASSIN dataset | closed | 0 | 2020-12-09T19:27:06 | 2020-12-11T10:41:10 | 2020-12-11T10:41:10 | jonatasgrosman | [] | Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring | true |
760,606,290 | https://api.github.com/repos/huggingface/datasets/issues/1411 | https://github.com/huggingface/datasets/pull/1411 | 1,411 | 2 typos | closed | 0 | 2020-12-09T19:24:34 | 2020-12-11T10:39:05 | 2020-12-11T10:39:05 | dezow | [] | Corrected 2 typos | true |
760,597,092 | https://api.github.com/repos/huggingface/datasets/issues/1410 | https://github.com/huggingface/datasets/pull/1410 | 1,410 | Add penn treebank dataset | closed | 2 | 2020-12-09T19:11:33 | 2020-12-16T09:38:23 | 2020-12-16T09:38:23 | harshalmittal4 | [] | | true |
760,593,932 | https://api.github.com/repos/huggingface/datasets/issues/1409 | https://github.com/huggingface/datasets/pull/1409 | 1,409 | Adding the ASSIN dataset | closed | 1 | 2020-12-09T19:07:00 | 2020-12-09T19:18:12 | 2020-12-09T19:15:52 | jonatasgrosman | [] | Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring | true |
760,590,589 | https://api.github.com/repos/huggingface/datasets/issues/1408 | https://github.com/huggingface/datasets/pull/1408 | 1,408 | adding fake-news-english | closed | 1 | 2020-12-09T19:02:07 | 2020-12-13T00:49:19 | 2020-12-13T00:49:19 | MisbahKhan789 | [] | | true |
760,581,756 | https://api.github.com/repos/huggingface/datasets/issues/1407 | https://github.com/huggingface/datasets/pull/1407 | 1,407 | Add Tweet Eval Dataset | closed | 4 | 2020-12-09T18:48:57 | 2023-09-24T09:52:03 | 2021-02-26T08:54:04 | abhishekkrthakur | [] | | true |
760,581,330 | https://api.github.com/repos/huggingface/datasets/issues/1406 | https://github.com/huggingface/datasets/pull/1406 | 1,406 | Add Portuguese Hate Speech dataset | closed | 2 | 2020-12-09T18:48:16 | 2020-12-14T18:06:42 | 2020-12-14T16:22:20 | hugoabonizio | [] | Binary Portuguese Hate Speech dataset from [this paper](https://www.aclweb.org/anthology/W19-3510/). | true |
760,578,035 | https://api.github.com/repos/huggingface/datasets/issues/1405 | https://github.com/huggingface/datasets/pull/1405 | 1,405 | Adding TaPaCo Dataset with README.md | closed | 2 | 2020-12-09T18:42:58 | 2020-12-13T19:11:18 | 2020-12-13T19:11:18 | pacman100 | [] | | true |
760,575,473 | https://api.github.com/repos/huggingface/datasets/issues/1404 | https://github.com/huggingface/datasets/pull/1404 | 1,404 | Add Acronym Identification Dataset | closed | 1 | 2020-12-09T18:38:54 | 2020-12-14T13:12:01 | 2020-12-14T13:12:00 | abhishekkrthakur | [] | | true |
760,571,419 | https://api.github.com/repos/huggingface/datasets/issues/1403 | https://github.com/huggingface/datasets/pull/1403 | 1,403 | Add dataset clickbait_news_bg | closed | 1 | 2020-12-09T18:32:12 | 2020-12-10T09:16:44 | 2020-12-10T09:16:43 | tsvm | [] | Adding a new dataset - clickbait_news_bg | true |
760,538,325 | https://api.github.com/repos/huggingface/datasets/issues/1402 | https://github.com/huggingface/datasets/pull/1402 | 1,402 | adding covid-tweets-japanese (again) | closed | 4 | 2020-12-09T17:46:46 | 2020-12-13T17:54:14 | 2020-12-13T17:47:36 | forest1988 | [] | I mistakenly used git rebase and was in a hurry to fix it. However, I didn't fully consider the use of git reset, so I unintentionally closed PR (#1367) altogether. Sorry about that.<br>I'll make a new PR. | true |
760,525,949 | https://api.github.com/repos/huggingface/datasets/issues/1401 | https://github.com/huggingface/datasets/pull/1401 | 1,401 | Add reasoning_bg | closed | 4 | 2020-12-09T17:30:49 | 2020-12-17T16:50:43 | 2020-12-17T16:50:42 | saradhix | [] | Adding reading comprehension dataset for Bulgarian language | true |
760,514,215 | https://api.github.com/repos/huggingface/datasets/issues/1400 | https://github.com/huggingface/datasets/pull/1400 | 1,400 | Add European Union Education and Culture Translation Memory (EAC-TM) dataset | closed | 0 | 2020-12-09T17:14:52 | 2020-12-14T13:06:48 | 2020-12-14T13:06:47 | SBrandeis | [] | Adding the EAC Translation Memory dataset : https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory | true |
760,499,576 | https://api.github.com/repos/huggingface/datasets/issues/1399 | https://github.com/huggingface/datasets/pull/1399 | 1,399 | Add HoVer Dataset | closed | 2 | 2020-12-09T16:55:39 | 2020-12-14T10:57:23 | 2020-12-14T10:57:22 | abhishekkrthakur | [] | HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification<br>https://arxiv.org/abs/2011.03088 | true |
760,497,024 | https://api.github.com/repos/huggingface/datasets/issues/1398 | https://github.com/huggingface/datasets/pull/1398 | 1,398 | Add Neural Code Search Dataset | closed | 3 | 2020-12-09T16:52:16 | 2020-12-09T18:02:27 | 2020-12-09T18:02:27 | vinaykudari | [] | | true |
760,467,501 | https://api.github.com/repos/huggingface/datasets/issues/1397 | https://github.com/huggingface/datasets/pull/1397 | 1,397 | datasets card-creator link added | closed | 0 | 2020-12-09T16:15:18 | 2020-12-09T16:47:48 | 2020-12-09T16:47:48 | tanmoyio | [] | dataset card creator link has been added<br>link: https://huggingface.co/datasets/card-creator/ | true |
760,455,295 | https://api.github.com/repos/huggingface/datasets/issues/1396 | https://github.com/huggingface/datasets/pull/1396 | 1,396 | initial commit for MultiReQA for second PR | closed | 2 | 2020-12-09T16:00:35 | 2020-12-10T18:20:12 | 2020-12-10T18:20:11 | Karthik-Bhaskar | [] | Since the last PR #1349 had some issues passing the tests, a new PR was generated. | true |
760,448,255 | https://api.github.com/repos/huggingface/datasets/issues/1395 | https://github.com/huggingface/datasets/pull/1395 | 1,395 | Add WikiSource Dataset | closed | 1 | 2020-12-09T15:52:06 | 2020-12-14T10:24:14 | 2020-12-14T10:24:13 | abhishekkrthakur | [] | | true |
760,436,365 | https://api.github.com/repos/huggingface/datasets/issues/1394 | https://github.com/huggingface/datasets/pull/1394 | 1,394 | Add OfisPublik Dataset | closed | 1 | 2020-12-09T15:37:45 | 2020-12-14T10:23:30 | 2020-12-14T10:23:29 | abhishekkrthakur | [] | | true |
760,436,267 | https://api.github.com/repos/huggingface/datasets/issues/1393 | https://github.com/huggingface/datasets/pull/1393 | 1,393 | Add script_version suggestion when dataset/metric not found | closed | 0 | 2020-12-09T15:37:38 | 2020-12-10T18:17:05 | 2020-12-10T18:17:05 | joeddav | [] | Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like:<br>> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/dat... | true |
760,432,261 | https://api.github.com/repos/huggingface/datasets/issues/1392 | https://github.com/huggingface/datasets/pull/1392 | 1,392 | Add KDE4 Dataset | closed | 1 | 2020-12-09T15:32:58 | 2020-12-14T10:22:33 | 2020-12-14T10:22:32 | abhishekkrthakur | [] | | true |
760,432,041 | https://api.github.com/repos/huggingface/datasets/issues/1391 | https://github.com/huggingface/datasets/pull/1391 | 1,391 | Add MultiParaCrawl Dataset | closed | 0 | 2020-12-09T15:32:46 | 2020-12-10T18:39:45 | 2020-12-10T18:39:44 | abhishekkrthakur | [] | | true |
760,431,051 | https://api.github.com/repos/huggingface/datasets/issues/1390 | https://github.com/huggingface/datasets/pull/1390 | 1,390 | Add SPC Dataset | closed | 0 | 2020-12-09T15:31:51 | 2020-12-14T11:13:53 | 2020-12-14T11:13:52 | abhishekkrthakur | [] | | true |
760,402,224 | https://api.github.com/repos/huggingface/datasets/issues/1389 | https://github.com/huggingface/datasets/pull/1389 | 1,389 | add amazon polarity dataset | closed | 5 | 2020-12-09T14:58:21 | 2020-12-11T11:45:39 | 2020-12-11T11:41:01 | hfawaz | [] | This corresponds to the amazon (binary dataset) requested in https://github.com/huggingface/datasets/issues/353 | true |
760,373,136 | https://api.github.com/repos/huggingface/datasets/issues/1388 | https://github.com/huggingface/datasets/pull/1388 | 1,388 | hind_encorp | closed | 0 | 2020-12-09T14:22:59 | 2020-12-09T14:46:51 | 2020-12-09T14:46:37 | rahul-art | [] | resubmit of hind_encorp file changes | true |
760,368,355 | https://api.github.com/repos/huggingface/datasets/issues/1387 | https://github.com/huggingface/datasets/pull/1387 | 1,387 | Add LIAR dataset | closed | 2 | 2020-12-09T14:16:55 | 2020-12-14T18:06:43 | 2020-12-14T16:23:59 | hugoabonizio | [] | Add LIAR dataset from [“Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection](https://www.aclweb.org/anthology/P17-2067/). | true |
760,365,505 | https://api.github.com/repos/huggingface/datasets/issues/1386 | https://github.com/huggingface/datasets/pull/1386 | 1,386 | Add RecipeNLG Dataset (manual download) | closed | 1 | 2020-12-09T14:13:19 | 2020-12-10T16:58:22 | 2020-12-10T16:58:21 | abhishekkrthakur | [] | | true |
760,351,405 | https://api.github.com/repos/huggingface/datasets/issues/1385 | https://github.com/huggingface/datasets/pull/1385 | 1,385 | add best2009 | closed | 0 | 2020-12-09T13:56:09 | 2020-12-14T10:59:08 | 2020-12-14T10:59:08 | cstorm125 | [] | `best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are ... | true |
760,331,767 | https://api.github.com/repos/huggingface/datasets/issues/1384 | https://github.com/huggingface/datasets/pull/1384 | 1,384 | Add News Commentary Dataset | closed | 0 | 2020-12-09T13:30:36 | 2020-12-10T16:54:08 | 2020-12-10T16:54:07 | abhishekkrthakur | [] | | true |
760,331,480 | https://api.github.com/repos/huggingface/datasets/issues/1383 | https://github.com/huggingface/datasets/pull/1383 | 1,383 | added conv ai 2 | closed | 2 | 2020-12-09T13:30:12 | 2020-12-13T18:54:42 | 2020-12-13T18:54:41 | rkc007 | [] | Dataset : https://github.com/DeepPavlov/convai/tree/master/2018 | true |
760,325,077 | https://api.github.com/repos/huggingface/datasets/issues/1382 | https://github.com/huggingface/datasets/pull/1382 | 1,382 | adding UNPC | closed | 1 | 2020-12-09T13:21:41 | 2020-12-09T17:53:06 | 2020-12-09T17:53:06 | patil-suraj | [] | Adding United Nations Parallel Corpus<br>http://opus.nlpl.eu/UNPC.php | true |
760,320,960 | https://api.github.com/repos/huggingface/datasets/issues/1381 | https://github.com/huggingface/datasets/pull/1381 | 1,381 | Add twi text c3 | closed | 6 | 2020-12-09T13:16:38 | 2020-12-13T18:39:27 | 2020-12-13T18:39:27 | dadelani | [] | Added Twi texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | true |
760,320,494 | https://api.github.com/repos/huggingface/datasets/issues/1380 | https://github.com/huggingface/datasets/pull/1380 | 1,380 | Add Tatoeba Dataset | closed | 0 | 2020-12-09T13:16:04 | 2020-12-10T16:54:28 | 2020-12-10T16:54:27 | abhishekkrthakur | [] | | true |
760,320,487 | https://api.github.com/repos/huggingface/datasets/issues/1379 | https://github.com/huggingface/datasets/pull/1379 | 1,379 | Add yoruba text c3 | closed | 12 | 2020-12-09T13:16:03 | 2020-12-13T18:45:12 | 2020-12-13T18:37:33 | dadelani | [] | Added Yoruba texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | true |
760,313,108 | https://api.github.com/repos/huggingface/datasets/issues/1378 | https://github.com/huggingface/datasets/pull/1378 | 1,378 | Add FACTCK.BR dataset | closed | 2 | 2020-12-09T13:06:22 | 2020-12-17T12:38:45 | 2020-12-15T15:34:11 | hugoabonizio | [] | This PR adds [FACTCK.BR](https://github.com/jghm-f/FACTCK.BR) dataset from [FACTCK.BR: a new dataset to study fake news](https://dl.acm.org/doi/10.1145/3323503.3361698). | true |
760,309,435 | https://api.github.com/repos/huggingface/datasets/issues/1377 | https://github.com/huggingface/datasets/pull/1377 | 1,377 | adding marathi-wiki dataset | closed | 3 | 2020-12-09T13:01:20 | 2022-10-03T09:39:09 | 2022-10-03T09:39:09 | ekdnam | ["dataset contribution"] | Adding marathi-wiki-articles dataset. | true |
760,309,300 | https://api.github.com/repos/huggingface/datasets/issues/1376 | https://github.com/huggingface/datasets/pull/1376 | 1,376 | Add SETimes Dataset | closed | 1 | 2020-12-09T13:01:08 | 2020-12-10T16:11:57 | 2020-12-10T16:11:56 | abhishekkrthakur | [] | true | |
760,294,931 | https://api.github.com/repos/huggingface/datasets/issues/1375 | https://github.com/huggingface/datasets/pull/1375 | 1,375 | Add OPUS EMEA Dataset | closed | 0 | 2020-12-09T12:39:44 | 2020-12-10T16:11:09 | 2020-12-10T16:11:08 | abhishekkrthakur | [] | true | |
760,288,291 | https://api.github.com/repos/huggingface/datasets/issues/1374 | https://github.com/huggingface/datasets/pull/1374 | 1,374 | Add OPUS Tilde Model Dataset | closed | 1 | 2020-12-09T12:29:23 | 2020-12-10T16:11:29 | 2020-12-10T16:11:28 | abhishekkrthakur | [] | true | |
760,280,869 | https://api.github.com/repos/huggingface/datasets/issues/1373 | https://github.com/huggingface/datasets/pull/1373 | 1,373 | Add OPUS ECB Dataset | closed | 0 | 2020-12-09T12:18:22 | 2020-12-10T15:25:55 | 2020-12-10T15:25:54 | abhishekkrthakur | [] | true | |
760,274,046 | https://api.github.com/repos/huggingface/datasets/issues/1372 | https://github.com/huggingface/datasets/pull/1372 | 1,372 | Add OPUS Books Dataset | closed | 1 | 2020-12-09T12:08:49 | 2020-12-14T09:56:28 | 2020-12-14T09:56:27 | abhishekkrthakur | [] | true | |
760,270,116 | https://api.github.com/repos/huggingface/datasets/issues/1371 | https://github.com/huggingface/datasets/pull/1371 | 1,371 | Adding Scielo | closed | 0 | 2020-12-09T12:02:48 | 2020-12-09T17:53:37 | 2020-12-09T17:53:37 | patil-suraj | [] | Adding Scielo: Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO
https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB | true |
760,264,132 | https://api.github.com/repos/huggingface/datasets/issues/1370 | https://github.com/huggingface/datasets/pull/1370 | 1,370 | Add OPUS PHP Dataset | closed | 0 | 2020-12-09T11:53:30 | 2020-12-10T15:37:25 | 2020-12-10T15:37:24 | abhishekkrthakur | [] | true | |
760,227,776 | https://api.github.com/repos/huggingface/datasets/issues/1369 | https://github.com/huggingface/datasets/pull/1369 | 1,369 | Use passed --cache_dir for modules cache | open | 7 | 2020-12-09T10:59:59 | 2022-07-06T15:19:47 | null | albertvillanova | [] | When the `--cache_dir` arg is passed:
```shell
python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>
```
it is not used for caching the modules, which are cached in the default location at `.cache/huggingface/modules`.
With this fix, the modules will be cached at `<... | true |
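The row above (PR #1369) describes how the data cache follows `--cache_dir` while the modules cache falls back to a fixed default. A minimal sketch of that resolution logic — the helper name `resolve_modules_cache` and the `<cache_dir>/modules` fallback layout are illustrative assumptions, not the library's actual implementation:

```python
import os

# Default modules cache location mentioned in the PR body
# (~/.cache/huggingface/modules), which is used regardless of --cache_dir
# before the fix.
DEFAULT_MODULES_CACHE = os.path.join(
    os.path.expanduser("~"), ".cache", "huggingface", "modules"
)

def resolve_modules_cache(cache_dir=None):
    """Hypothetical resolver: prefer a modules directory under the passed
    cache_dir, and fall back to the default location when none is given."""
    if cache_dir:
        return os.path.join(cache_dir, "modules")
    return DEFAULT_MODULES_CACHE
```

With this behaviour, `resolve_modules_cache("/tmp/my-cache")` yields a path under the passed cache directory instead of the global default.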
760,222,616 | https://api.github.com/repos/huggingface/datasets/issues/1368 | https://github.com/huggingface/datasets/pull/1368 | 1,368 | Re-adding narrativeqa dataset | closed | 4 | 2020-12-09T10:53:09 | 2020-12-11T13:30:59 | 2020-12-11T13:30:59 | ghomasHudson | [] | An update of #309. | true |
760,208,191 | https://api.github.com/repos/huggingface/datasets/issues/1367 | https://github.com/huggingface/datasets/pull/1367 | 1,367 | adding covid-tweets-japanese | closed | 2 | 2020-12-09T10:34:01 | 2020-12-09T17:25:14 | 2020-12-09T17:25:14 | forest1988 | [] | Adding COVID-19 Japanese Tweets Dataset as part of the sprint.
Testing with dummy data is currently failing (the file is reported as not existing). Sorry for the incomplete PR. | true |
760,205,506 | https://api.github.com/repos/huggingface/datasets/issues/1366 | https://github.com/huggingface/datasets/pull/1366 | 1,366 | Adding Hope EDI dataset | closed | 1 | 2020-12-09T10:30:23 | 2020-12-14T14:27:57 | 2020-12-14T14:27:57 | jamespaultg | [] | true | |
760,188,457 | https://api.github.com/repos/huggingface/datasets/issues/1365 | https://github.com/huggingface/datasets/pull/1365 | 1,365 | Add Mkqa dataset | closed | 2 | 2020-12-09T10:06:33 | 2020-12-10T15:37:56 | 2020-12-10T15:37:56 | cceyda | [] | # MKQA: Multilingual Knowledge Questions & Answers Dataset
Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉
There are no official data splits, so I added just a `train` split.
Differences from the original:
- answer:type field is a ClassLabel (I thought it might be possible to... | true |
760,164,558 | https://api.github.com/repos/huggingface/datasets/issues/1364 | https://github.com/huggingface/datasets/pull/1364 | 1,364 | Narrative QA (Manual Download Stories) Dataset | closed | 3 | 2020-12-09T09:33:59 | 2021-01-25T15:31:51 | 2021-01-25T15:31:31 | rsanjaykamath | [] | Narrative QA with manual download for stories. | true |
760,160,944 | https://api.github.com/repos/huggingface/datasets/issues/1363 | https://github.com/huggingface/datasets/pull/1363 | 1,363 | Adding OPUS MultiUN | closed | 0 | 2020-12-09T09:29:01 | 2020-12-09T17:54:20 | 2020-12-09T17:54:20 | patil-suraj | [] | Adding MultiUN
http://www.euromatrixplus.net/multi-un/ | true |
760,138,233 | https://api.github.com/repos/huggingface/datasets/issues/1362 | https://github.com/huggingface/datasets/pull/1362 | 1,362 | adding opus_infopankki | closed | 1 | 2020-12-09T08:57:10 | 2020-12-09T18:16:20 | 2020-12-09T18:13:48 | patil-suraj | [] | Adding opus_infopankki
http://opus.nlpl.eu/infopankki-v1.php | true |