| Column | Type | Range / values |
| --- | --- | --- |
| id | int64 | 599M to 3.26B |
| number | int64 | 1 to 7.7k |
| title | string | lengths 1 to 290 |
| body | string | lengths 0 to 228k |
| state | string | 2 values |
| html_url | string | lengths 46 to 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-07-23 16:44:42 |
| user | dict | |
| labels | list | lengths 0 to 4 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 to 0 |
774,710,014
1,637
Added `pn_summary` dataset
#1635 You did a great job with the smooth procedure for adding a dataset. I took the chance to add the dataset on my own. Thank you for your awesome work, and I hope this dataset makes researchers happy, specifically those interested in the Persian language (Farsi)!
closed
https://github.com/huggingface/datasets/pull/1637
2020-12-25T11:01:24
2021-01-04T13:43:19
2021-01-04T13:43:19
{ "login": "m3hrdadfi", "id": 2601833, "type": "User" }
[]
true
[]
774,574,378
1,636
winogrande cannot be downloaded
Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", line 148, in <listcomp> for task in data_args.tasks] File "/workdir/seq2seq/data/tasks.py", line 65, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 466, in load_dataset return datasets.load_dataset('winogrande', 'winogrande_l', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/winogrande/winogrande.py yo/0 I1224 14:17:46.419031 31226 main shadow.py:122 > Traceback (most recent call last): File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 260, in <module> main() File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 256, in main cmd=cmd) ```
closed
https://github.com/huggingface/datasets/issues/1636
2020-12-24T22:28:22
2022-10-05T12:35:44
2022-10-05T12:35:44
{ "login": "ghost", "id": 10137, "type": "User" }
[]
false
[]
774,524,492
1,635
Persian Abstractive/Extractive Text Summarization
Assembling datasets tailored to different tasks and languages is a valuable goal. It would be great to have this dataset included. ## Adding a Dataset - **Name:** *pn-summary* - **Description:** *A well-structured summarization dataset for the Persian language consisting of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.* - **Paper:** *https://arxiv.org/abs/2012.11204* - **Data:** *https://github.com/hooshvare/pn-summary/#download* - **Motivation:** *It is the first Persian abstractive/extractive text summarization dataset (like cnn_dailymail for English)!* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1635
2020-12-24T17:47:12
2021-01-04T15:11:04
2021-01-04T15:11:04
{ "login": "m3hrdadfi", "id": 2601833, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
774,487,934
1,634
Inspecting datasets per category
Hi, is there a way I could get all NLI datasets / all QA datasets, to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq
closed
https://github.com/huggingface/datasets/issues/1634
2020-12-24T15:26:34
2022-10-04T14:57:33
2022-10-04T14:57:33
{ "login": "ghost", "id": 10137, "type": "User" }
[]
false
[]
774,422,603
1,633
social_i_qa wrong format of labels
Hi, there is an extra "\n" in the labels of the social_i_qa dataset. No big deal, but I was wondering if you could remove it to make it consistent, so the label is 'label': '1', not '1\n'. Thanks. ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /julia/cache/datasets Downloading: 4.72kB [00:00, 3.52MB/s] cahce dir /julia/cache/datasets Downloading: 2.19kB [00:00, 1.81MB/s] Using custom data configuration default Reusing dataset social_i_qa (/julia/datasets/social_i_qa/default/0.1.0/4a4190cc2d2482d43416c2167c0c5dccdd769d4482e84893614bd069e5c3ba06) >>> dataset['train'][0] {'answerA': 'like attending', 'answerB': 'like staying home', 'answerC': 'a good friend to have', 'context': 'Cameron decided to have a barbecue and gathered her friends together.', 'label': '1\n', 'question': 'How would Others feel as a result?'} ```
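Until the loading script is fixed, a minimal workaround sketch (not an official fix) is to strip the stray newline with `Dataset.map` after loading:

```python
from datasets import load_dataset

# Workaround sketch: strip the trailing "\n" from every label after loading.
dataset = load_dataset("social_i_qa")
dataset = dataset.map(lambda example: {"label": example["label"].strip()})

print(dataset["train"][0]["label"])  # expected '1' instead of '1\n'
```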
closed
https://github.com/huggingface/datasets/issues/1633
2020-12-24T13:11:54
2020-12-30T17:18:49
2020-12-30T17:18:49
{ "login": "ghost", "id": 10137, "type": "User" }
[]
false
[]
774,388,625
1,632
SICK dataset
Hi, it would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. ## Adding a Dataset - **Name:** SICK - **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of lexical, syntactic, and semantic phenomena. - **Paper:** https://www.aclweb.org/anthology/L14-1314/ - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** This dataset is well known in the NLP community and is used for recognizing entailment between sentences. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1632
2020-12-24T12:40:14
2021-02-05T15:49:25
2021-02-05T15:49:25
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
774,349,222
1,631
Update README.md
I made a small change to the citation
closed
https://github.com/huggingface/datasets/pull/1631
2020-12-24T11:45:52
2020-12-28T17:35:41
2020-12-28T17:16:04
{ "login": "savasy", "id": 6584825, "type": "User" }
[]
true
[]
774,332,129
1,630
Adding UKP Argument Aspect Similarity Corpus
Hi, it would be great to have this dataset included. ## Adding a Dataset - **Name:** UKP Argument Aspect Similarity Corpus - **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as either “high similarity”, “some similarity”, “no similarity” or “not related” with respect to the topic. - **Paper:** https://www.aclweb.org/anthology/P19-1054/ - **Data:** https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998 - **Motivation:** this is one of the datasets frequently used in recent adapter papers, like https://arxiv.org/pdf/2005.00247.pdf Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Thank you
closed
https://github.com/huggingface/datasets/issues/1630
2020-12-24T11:01:31
2022-10-05T12:36:12
2022-10-05T12:36:12
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
774,255,716
1,629
add wongnai_reviews test set labels
- add test set labels provided by @ekapolc - refactor `star_rating` to a `datasets.features.ClassLabel` field
closed
https://github.com/huggingface/datasets/pull/1629
2020-12-24T08:02:31
2020-12-28T17:23:39
2020-12-28T17:23:39
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
774,091,411
1,628
made suggested changes to hate-speech-and-offensive-language
closed
https://github.com/huggingface/datasets/pull/1628
2020-12-23T23:25:32
2020-12-28T10:11:20
2020-12-28T10:11:20
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
773,960,255
1,627
`Dataset.map` disable progress bar
I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function with `Dataset.map`. I want to do something akin to `disable_tqdm=True` in `transformers`. Is there something like that?
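For reference, a minimal sketch assuming a recent `datasets` release that exposes the progress-bar helpers at the top level (older releases kept them under `datasets.utils.logging`):

```python
import datasets

# Hide the tqdm bars emitted by map/filter/etc., then restore them afterwards.
datasets.disable_progress_bar()
# ... dataset = dataset.map(preprocess_function) ...
datasets.enable_progress_bar()
```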
closed
https://github.com/huggingface/datasets/issues/1627
2020-12-23T17:53:42
2025-05-16T16:36:24
2020-12-26T19:57:17
{ "login": "Nickil21", "id": 8767964, "type": "User" }
[]
false
[]
773,840,368
1,626
Fix dataset_dict.shuffle with single seed
Fix #1610 I added support for a single integer seed in `DatasetDict.shuffle`. Previously only a dictionary of seeds was allowed. Moreover, I added the missing `seed` parameter; previously only `seeds` was allowed.
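A quick usage sketch of the behaviour this change enables (illustrative only):

```python
from datasets import load_dataset

dsets = load_dataset("glue", "cola")  # a DatasetDict with train/validation/test splits

# With this change a single integer is broadcast to every split...
shuffled = dsets.shuffle(seed=42)

# ...while the older per-split form keeps working:
shuffled = dsets.shuffle(seeds={"train": 1, "validation": 2, "test": 3})
```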
closed
https://github.com/huggingface/datasets/pull/1626
2020-12-23T14:33:36
2021-01-04T10:00:04
2021-01-04T10:00:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
773,771,596
1,625
Fixed bug in the shape property
Fix to the bug reported in issue #1622. Just replaced `return tuple(self._indices.num_rows, self._data.num_columns)` by `return (self._indices.num_rows, self._data.num_columns)`.
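For context, a small snippet showing why the original line raises and what the fix evaluates to (plain Python, independent of the library):

```python
num_rows, num_columns = 10, 3

# tuple() takes a single iterable, so two positional ints raise:
# TypeError: tuple expected at most 1 argument, got 2
try:
    shape = tuple(num_rows, num_columns)
except TypeError as err:
    print(err)

# The intended shape is simply a tuple literal:
shape = (num_rows, num_columns)
print(shape)  # (10, 3)
```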
closed
https://github.com/huggingface/datasets/pull/1625
2020-12-23T13:33:21
2021-01-02T23:22:52
2020-12-23T14:13:13
{ "login": "noaonoszko", "id": 47183162, "type": "User" }
[]
true
[]
773,669,700
1,624
Cannot download ade_corpus_v2
I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2 but received this error : `Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module combined_path, github_file_path, file_path FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py`
closed
https://github.com/huggingface/datasets/issues/1624
2020-12-23T10:58:14
2021-08-03T05:08:54
2021-08-03T05:08:54
{ "login": "him1411", "id": 20259310, "type": "User" }
[]
false
[]
772,950,710
1,623
Add CLIMATE-FEVER dataset
As suggested by @SBrandeis, fresh PR that adds CLIMATE-FEVER. Replaces PR #1579. --- A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate to multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at: * Homepage: http://climatefever.ai * Paper: https://arxiv.org/abs/2012.00614
closed
https://github.com/huggingface/datasets/pull/1623
2020-12-22T13:34:05
2020-12-22T17:53:53
2020-12-22T17:53:53
{ "login": "tdiggelm", "id": 1658969, "type": "User" }
[]
true
[]
772,940,768
1,622
Can't call shape on the output of select()
I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`. It's line 531, in `shape` in arrow_dataset.py, that causes the problem: ``return tuple(self._indices.num_rows, self._data.num_columns)`` This makes sense, since `tuple(num1, num2)` is not a valid call. Full code to reproduce: ```python from datasets import load_dataset dataset = load_dataset("cnn_dailymail", "3.0.0") train_set = dataset["train"] t = train_set.select(range(10)) print(t.shape) ```
closed
https://github.com/huggingface/datasets/issues/1622
2020-12-22T13:18:40
2020-12-23T13:37:13
2020-12-23T13:37:12
{ "login": "noaonoszko", "id": 47183162, "type": "User" }
[]
false
[]
772,940,417
1,621
updated dutch_social.py for loading jsonl (lines instead of list) files
The data_loader is modified to load files on the fly. Earlier it was reading the entire file and then processing the records. Please refer to the previous PR #1321.
closed
https://github.com/huggingface/datasets/pull/1621
2020-12-22T13:18:11
2020-12-23T11:51:51
2020-12-23T11:51:51
{ "login": "skyprince999", "id": 9033954, "type": "User" }
[]
true
[]
772,620,056
1,620
Adding myPOS2017 dataset
myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar language NLP Research and Developments
closed
https://github.com/huggingface/datasets/pull/1620
2020-12-22T04:04:55
2022-10-03T09:38:23
2022-10-03T09:38:23
{ "login": "hungluumfc", "id": 69781878, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
772,508,558
1,619
data loader for reading comprehension task
Added the doc2dial data loader and dummy data for the reading comprehension task.
closed
https://github.com/huggingface/datasets/pull/1619
2020-12-21T22:40:34
2020-12-28T10:32:53
2020-12-28T10:32:53
{ "login": "songfeng", "id": 2062185, "type": "User" }
[]
true
[]
772,248,730
1,618
Can't filter language:EN on https://huggingface.co/datasets
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. This problem reproduced on Mozilla Firefox and MS Edge: ![screenshot](https://user-images.githubusercontent.com/4547987/102792244-892e1f00-43a8-11eb-9e89-4826ca201a87.png)
closed
https://github.com/huggingface/datasets/issues/1618
2020-12-21T15:23:23
2020-12-22T17:17:00
2020-12-22T17:16:09
{ "login": "davidefiocco", "id": 4547987, "type": "User" }
[]
false
[]
772,084,764
1,617
cifar10 initial commit
CIFAR-10 dataset. Didn't add the tagging since there are no vision related tags.
closed
https://github.com/huggingface/datasets/pull/1617
2020-12-21T11:18:50
2020-12-22T10:18:05
2020-12-22T10:11:28
{ "login": "czabo", "id": 75574105, "type": "User" }
[]
true
[]
772,074,229
1,616
added TurkishMovieSentiment dataset
This PR adds **TurkishMovieSentiment**, a dataset of Turkish movie reviews. - **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks) - **Point of Contact:** [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/)
closed
https://github.com/huggingface/datasets/pull/1616
2020-12-21T11:03:16
2020-12-24T07:08:41
2020-12-23T16:50:06
{ "login": "yavuzKomecoglu", "id": 5150963, "type": "User" }
[]
true
[]
771,641,088
1,615
Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello, I'm having issue downloading TriviaQA dataset with `load_dataset`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets dataset = datasets.load_dataset("trivia_qa", "rc", cache_dir = "./datasets") ``` ## The output: 1. Download begins: ``` Downloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /cs/labs/gabis/sapirweissbuch/tr ivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... Downloading: 17%|███████████████████▉ | 446M/2.67G [00:37<04:45, 7.77MB/s] ``` 2. 100% is reached 3. It got stuck here for about an hour, and added additional 30G of data to "./datasets" directory. I killed the process eventually. A similar issue can be observed in Google Colab: https://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing ## Expected behaviour: The dataset "TriviaQA" should be successfully downloaded.
open
https://github.com/huggingface/datasets/issues/1615
2020-12-20T17:27:38
2021-06-25T13:11:33
null
{ "login": "SapirWeissbuch", "id": 44585792, "type": "User" }
[]
false
[]
771,577,050
1,613
Add id_clickbait
This is the CLICK-ID dataset, a collection of annotated clickbait Indonesian news headlines that was collected from 12 local online news
closed
https://github.com/huggingface/datasets/pull/1613
2020-12-20T12:24:49
2020-12-22T17:45:27
2020-12-22T17:45:27
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
771,558,160
1,612
Adding wiki asp dataset as new PR
Hi @lhoestq, Adding wiki asp as new branch because #1539 has other commits. This version has dummy data for each domain <20/30KB.
closed
https://github.com/huggingface/datasets/pull/1612
2020-12-20T10:25:08
2020-12-21T14:13:33
2020-12-21T14:13:33
{ "login": "katnoria", "id": 7674948, "type": "User" }
[]
true
[]
771,486,456
1,611
shuffle with torch generator
Hi, I need to shuffle multiple large datasets with `generator = torch.Generator()` for a distributed sampler, which needs to make sure the datasets are consistent across different cores. For this, it is really necessary for me to use a torch generator, but based on the documentation this generator is not supported with datasets. I really need to make shuffle work with this generator, and I was wondering what I can do about this issue. Thanks for your help @lhoestq
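One possible workaround sketch, assuming a `datasets` version whose `Dataset.shuffle` accepts a NumPy `Generator` via its `generator` argument: derive the NumPy generator from the same epoch/seed that seeds the torch generator, so every worker shuffles identically.

```python
import numpy as np
from datasets import load_dataset

# Hypothetical values shared by all workers (e.g. broadcast by the trainer).
base_seed, epoch = 1234, 3

# Seed a NumPy generator the same way the torch.Generator would be seeded,
# so the shuffle order is reproducible and identical across cores.
rng = np.random.default_rng(base_seed + epoch)

dataset = load_dataset("glue", "cola", split="train")
shuffled = dataset.shuffle(generator=rng)
```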
closed
https://github.com/huggingface/datasets/issues/1611
2020-12-20T00:57:14
2022-06-01T15:30:13
2022-06-01T15:30:13
{ "login": "rabeehkarimimahabadi", "id": 73364383, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
771,453,599
1,610
shuffle does not accept seed
Hi, I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores. When I pass a seed to shuffle, it is not accepted. Could you assist me with this? Thanks @lhoestq
closed
https://github.com/huggingface/datasets/issues/1610
2020-12-19T20:59:39
2021-01-04T10:00:03
2021-01-04T10:00:03
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
771,421,881
1,609
Not able to use 'jigsaw_toxicity_pred' dataset
When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing): ``` from datasets import list_datasets, list_metrics, load_dataset, load_metric ds = load_dataset("jigsaw_toxicity_pred") ``` I see below error: > FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 280 raise FileNotFoundError( 281 "Couldn't find file locally at {}, or remotely at {} or {}".format( --> 282 combined_path, github_file_path, file_path 283 ) 284 ) FileNotFoundError: Couldn't find file locally at jigsaw_toxicity_pred/jigsaw_toxicity_pred.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py
closed
https://github.com/huggingface/datasets/issues/1609
2020-12-19T17:35:48
2020-12-22T16:42:24
2020-12-22T16:42:23
{ "login": "jassimran", "id": 7424133, "type": "User" }
[]
false
[]
771,329,434
1,608
adding ted_talks_iwslt
UPDATE2: (2nd Jan) Wrote a long writeup on the Slack channel. I don't think this approach is correct. Basically this created language pairs (109*108). Running `pytest` went on for more than 40 hours and it was still running! So I am working on a different approach, such that the number of configs = number of languages. Will make a new pull request with that. UPDATE: This dataset requires a manual download. This is a draft version.
closed
https://github.com/huggingface/datasets/pull/1608
2020-12-19T07:36:41
2021-01-02T15:44:12
2021-01-02T15:44:11
{ "login": "skyprince999", "id": 9033954, "type": "User" }
[]
true
[]
771,325,852
1,607
modified tweets hate speech detection
closed
https://github.com/huggingface/datasets/pull/1607
2020-12-19T07:13:40
2020-12-21T16:08:48
2020-12-21T16:08:48
{ "login": "darshan-gandhi", "id": 44197177, "type": "User" }
[]
true
[]
771,116,455
1,606
added Semantic Scholar Open Research Corpus
I picked up this dataset [Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc), but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB of space). For 6000 files it would occupy ~900GB of space, which I don’t have. Can someone from the HF team with that much disk space help me generate dataset_infos and dummy_data?
closed
https://github.com/huggingface/datasets/pull/1606
2020-12-18T19:21:24
2021-02-03T09:30:59
2021-02-03T09:30:59
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
770,979,620
1,605
Navigation version breaking
Hi, when navigating docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png) **Edit:** this actually happens _only_ if you open a link to a concrete subsection. IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to: ``` let label = (version in versionMapping) ? version : stableVersion ``` which delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case. I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping` which might be more robust. I'd add a PR myself but I'm by no means competent in JS :) I also have a side question wrt. docs versioning: I'm trying to make docs for a project which are versioned alike to your dropdown versioning. I was wondering how do you handle storage of multiple doc versions on your server? Do you update what `https://huggingface.co/docs/datasets` points to for every stable release & manually create new folders for each released version? So far I'm building & publishing (scping) the docs to the server with a github action which works well for a single version, but would ideally need to reorder the public files triggered on a new release.
closed
https://github.com/huggingface/datasets/issues/1605
2020-12-18T15:36:24
2022-10-05T12:35:11
2022-10-05T12:35:11
{ "login": "mttk", "id": 3007947, "type": "User" }
[]
false
[]
770,862,112
1,604
Add tests for the download functions ?
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some to ensure behavior is as expected.
closed
https://github.com/huggingface/datasets/issues/1604
2020-12-18T12:49:25
2022-10-05T13:04:24
2022-10-05T13:04:24
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
770,857,221
1,603
Add retries to HTTP requests
## What does this PR do ? Adding retries to HTTP GET & HEAD requests, when they fail with a `ConnectTimeout` exception. The "canonical" way to do this is to use [urllib's Retry class](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry) and wrap it in a [HttpAdapter](https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter). Seems a bit overkill to me, plus it forces us to use the `requests.Session` object. I prefer this simpler implementation. I'm open to remarks and suggestions @lhoestq @yjernite Fixes #1102
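A rough sketch of the simpler approach described above, i.e. a plain retry loop around the request rather than urllib3's `Retry`/`HTTPAdapter` machinery (illustrative only, not the code in this PR):

```python
import time
import requests

def get_with_retries(url, max_retries=3, timeout=10.0, **kwargs):
    """Retry a GET when it fails with ConnectTimeout, pausing briefly between attempts."""
    for attempt in range(max_retries):
        try:
            return requests.get(url, timeout=timeout, **kwargs)
        except requests.exceptions.ConnectTimeout:
            if attempt == max_retries - 1:
                raise
            time.sleep(attempt + 1)  # simple linear backoff
```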
closed
https://github.com/huggingface/datasets/pull/1603
2020-12-18T12:41:31
2020-12-22T15:34:07
2020-12-22T15:34:07
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
770,841,810
1,602
second update of id_newspapers_2018
The feature "url" is currently set wrongly to data["date"], this PR fix it to data["url"]. I added also an additional POC.
closed
https://github.com/huggingface/datasets/pull/1602
2020-12-18T12:16:37
2020-12-22T10:41:15
2020-12-22T10:41:14
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
770,758,914
1,601
second update of the id_newspapers_2018
The feature "url" is currently set wrongly to data["date"], this PR fix it to data["url"]. I added also an additional POC.
closed
https://github.com/huggingface/datasets/pull/1601
2020-12-18T10:10:20
2020-12-18T12:15:31
2020-12-18T12:15:31
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
770,582,960
1,600
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong? ``` from datasets import load_dataset dataset = load_dataset('csv', data_files='data.txt') dataset = dataset.train_test_split(test_size=0.1) ``` > AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
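For reference, a minimal sketch of the usual resolution: `load_dataset('csv', ...)` returns a `DatasetDict`, while `train_test_split` is defined on `Dataset`, so selecting the split first typically works.

```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="data.txt")

# load_dataset returns a DatasetDict; train_test_split lives on Dataset,
# so index into the split before splitting it.
splits = dataset["train"].train_test_split(test_size=0.1)
train_set, test_set = splits["train"], splits["test"]
```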
closed
https://github.com/huggingface/datasets/issues/1600
2020-12-18T05:37:10
2023-05-03T04:22:55
2020-12-21T07:38:58
{ "login": "david-waterworth", "id": 5028974, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
770,431,389
1,599
add Korean Sarcasm Dataset
closed
https://github.com/huggingface/datasets/pull/1599
2020-12-17T22:49:56
2021-09-17T16:54:32
2020-12-23T17:25:59
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
770,332,440
1,598
made suggested changes in fake-news-english
closed
https://github.com/huggingface/datasets/pull/1598
2020-12-17T20:06:29
2020-12-18T09:43:58
2020-12-18T09:43:57
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
770,276,140
1,597
adding hate-speech-and-offensive-language
closed
https://github.com/huggingface/datasets/pull/1597
2020-12-17T18:35:15
2020-12-23T23:27:17
2020-12-23T23:27:16
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
770,260,531
1,596
made suggested changes to hate-speech-and-offensive-language
closed
https://github.com/huggingface/datasets/pull/1596
2020-12-17T18:09:26
2020-12-17T18:36:02
2020-12-17T18:35:53
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
770,153,693
1,595
Logiqa en
LogiQA in English.
closed
https://github.com/huggingface/datasets/pull/1595
2020-12-17T15:42:00
2022-10-03T09:38:30
2022-10-03T09:38:30
{ "login": "aclifton314", "id": 53267795, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
769,747,767
1,594
connection error
Hi I am hitting to this error, thanks ``` > Traceback (most recent call last): File "finetune_t5_trainer.py", line 379, in <module> main() File "finetune_t5_trainer.py", line 208, in main if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO File "finetune_t5_trainer.py", line 207, in <dictcomp> for task in data_args.eval_tasks} File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset return datasets.load_dataset(self.task.name, split=split, script_version="master") File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED ```
closed
https://github.com/huggingface/datasets/issues/1594
2020-12-17T09:18:34
2022-06-01T15:33:42
2022-06-01T15:33:41
{ "login": "rabeehkarimimahabadi", "id": 73364383, "type": "User" }
[]
false
[]
769,611,386
1,593
Access to key in DatasetDict map
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be nice if there can be a flag, similar to `with_indices`, that allows the callable to know the key inside `DatasetDict`.
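A minimal sketch of the client-side workaround mentioned above, passing the split key into the mapped function via `fn_kwargs` (the added `split` column is hypothetical):

```python
from datasets import DatasetDict, load_dataset

def tag_with_split(example, split_name=None):
    # The DatasetDict key arrives through fn_kwargs, standing in for the requested flag.
    return {"split": split_name}

dsets = load_dataset("glue", "cola")
dsets = DatasetDict({
    key: ds.map(tag_with_split, fn_kwargs={"split_name": key})
    for key, ds in dsets.items()
})
```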
closed
https://github.com/huggingface/datasets/issues/1593
2020-12-17T07:02:20
2022-10-05T13:47:28
2022-10-05T12:33:06
{ "login": "ZhaofengWu", "id": 11954789, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
769,383,714
1,591
IWSLT-17 Link Broken
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
closed
https://github.com/huggingface/datasets/issues/1591
2020-12-17T00:46:42
2020-12-18T08:06:36
2020-12-18T08:05:28
{ "login": "ZhaofengWu", "id": 11954789, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
769,242,858
1,590
Add helper to resolve namespace collision
Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there was some helper or similar function to resolve such a common conflict.
closed
https://github.com/huggingface/datasets/issues/1590
2020-12-16T20:17:24
2022-06-01T15:32:04
2022-06-01T15:32:04
{ "login": "jramapuram", "id": 8204807, "type": "User" }
[]
false
[]
769,187,141
1,589
Update doc2dial.py
Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper.
closed
https://github.com/huggingface/datasets/pull/1589
2020-12-16T18:50:56
2022-07-06T15:19:57
2022-07-06T15:19:57
{ "login": "songfeng", "id": 2062185, "type": "User" }
[]
true
[]
769,068,227
1,588
Modified hind encorp
Description added, unnecessary comments removed from the .py file, and README.md reformatted. @lhoestq, for #1584
closed
https://github.com/huggingface/datasets/pull/1588
2020-12-16T16:28:14
2020-12-16T22:41:53
2020-12-16T17:20:28
{ "login": "rahul-art", "id": 56379013, "type": "User" }
[]
true
[]
768,929,877
1,587
Add nq_open question answering dataset
This PR is a copy of #1506, due to the messed-up git history in that PR.
closed
https://github.com/huggingface/datasets/pull/1587
2020-12-16T14:22:08
2020-12-17T16:07:10
2020-12-17T16:07:10
{ "login": "Nilanshrajput", "id": 28673745, "type": "User" }
[]
true
[]
768,864,502
1,586
added irc disentangle dataset
added irc disentanglement dataset
closed
https://github.com/huggingface/datasets/pull/1586
2020-12-16T13:25:58
2021-01-29T10:28:53
2021-01-29T10:28:53
{ "login": "dhruvjoshi1998", "id": 32560035, "type": "User" }
[]
true
[]
768,831,171
1,585
FileNotFoundError for `amazon_polarity`
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ```
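A workaround sketch, assuming the `amazon_polarity` loading script exists on the repository's master branch but not on the 1.1.3 tag that this `datasets` version resolves: pin `script_version` (a parameter visible in the tracebacks elsewhere in this dataset), or simply upgrade `datasets`.

```python
from datasets import load_dataset

# Assumption: the loading script is present on the master branch of huggingface/datasets.
data = load_dataset("amazon_polarity", script_version="master")
```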
closed
https://github.com/huggingface/datasets/issues/1585
2020-12-16T12:51:05
2020-12-16T16:02:56
2020-12-16T16:02:56
{ "login": "phtephanx", "id": 24647404, "type": "User" }
[]
false
[]
768,820,406
1,584
Load hind encorp
Reformatted and well-documented code; YAML tags added.
closed
https://github.com/huggingface/datasets/pull/1584
2020-12-16T12:38:38
2020-12-18T02:27:24
2020-12-18T02:27:24
{ "login": "rahul-art", "id": 56379013, "type": "User" }
[]
true
[]
768,795,986
1,583
Update metrics docstrings.
#1478 Correcting the argument descriptions for metrics. Let me know if there are any issues.
closed
https://github.com/huggingface/datasets/pull/1583
2020-12-16T12:14:18
2020-12-18T18:39:06
2020-12-18T18:39:06
{ "login": "Fraser-Greenlee", "id": 8402500, "type": "User" }
[]
true
[]
768,776,617
1,582
Adding wiki lingua dataset as new branch
Adding the dataset as new branch as advised here: #1470
closed
https://github.com/huggingface/datasets/pull/1582
2020-12-16T11:53:07
2020-12-17T18:06:46
2020-12-17T18:06:45
{ "login": "katnoria", "id": 7674948, "type": "User" }
[]
true
[]
768,320,594
1,581
Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`: ``` $ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash ________ _______________ ___ __/__________________________________ ____/__ /________ __ __ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / / _ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ / /_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/ You are running this container as user with ID 1000 and group 1000, which should map to the ID and group for your user on the Docker host. Great! tf-docker /root > python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers 2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module> from .trainer_utils import EvaluationStrategy File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module> import datasets # noqa: F401 File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module> from .arrow_reader import ArrowReader File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module> from .utils import cached_path, logging File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module> from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module> os.makedirs(HF_MODULES_CACHE, exist_ok=True) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/.cache' ``` I've pinned the problem to `RUN pip install datasets`, and by commenting it you can actually import transformers correctly. Another workaround I've found is creating the directory and giving permissions to it directly on the Dockerfile. 
``` FROM tensorflow/tensorflow:latest-gpu-jupyter WORKDIR /root EXPOSE 80 EXPOSE 8888 EXPOSE 6006 ENV SHELL /bin/bash ENV PATH="/root/.local/bin:${PATH}" ENV CUDA_CACHE_PATH="/root/cache/cuda" ENV CUDA_CACHE_MAXSIZE="4294967296" ENV TFHUB_CACHE_DIR="/root/cache/tfhub" RUN pip install --upgrade pip RUN apt update -y && apt upgrade -y RUN pip install transformers #Installing datasets will throw the error, try commenting and rebuilding RUN pip install datasets #Another workaround is creating the directory and give permissions explicitly #RUN mkdir /.cache #RUN chmod 777 /.cache ```
closed
https://github.com/huggingface/datasets/issues/1581
2020-12-16T00:02:21
2021-06-17T15:40:45
2021-06-17T15:40:45
{ "login": "eduardofv", "id": 702586, "type": "User" }
[]
false
[]
768,111,377
1,580
made suggested changes in diplomacy_detection.py
closed
https://github.com/huggingface/datasets/pull/1580
2020-12-15T19:52:00
2020-12-16T10:27:52
2020-12-16T10:27:52
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
767,808,465
1,579
Adding CLIMATE-FEVER dataset
This PR requests the addition of the CLIMATE-FEVER dataset: A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate to multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at: - Homepage: <http://climatefever.ai> - Paper: <https://arxiv.org/abs/2012.00614>
closed
https://github.com/huggingface/datasets/pull/1579
2020-12-15T16:49:22
2020-12-22T13:43:16
2020-12-22T13:43:15
{ "login": "tdiggelm", "id": 1658969, "type": "User" }
[]
true
[]
767,760,513
1,578
update multiwozv22 checksums
a file was updated on the GitHub repo for the dataset
closed
https://github.com/huggingface/datasets/pull/1578
2020-12-15T16:13:52
2020-12-15T17:06:29
2020-12-15T17:06:29
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
767,342,432
1,577
Add comet metric
Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of available metrics. COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is the highest-performing metric, so far, on the WMT19 benchmark. We also participated in the [WMT20 Metrics shared task](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf), where once again COMET was validated as a top-performing metric. I hope that this metric will help researchers and industry practitioners to better validate their MT systems in the future 🤗! Cheers, Ricardo
closed
https://github.com/huggingface/datasets/pull/1577
2020-12-15T08:56:00
2021-01-14T13:33:10
2021-01-14T13:33:10
{ "login": "ricardorei", "id": 17256847, "type": "User" }
[]
true
[]
767,080,645
1,576
Remove the contributors section
sourcerer is down
closed
https://github.com/huggingface/datasets/pull/1576
2020-12-15T01:47:15
2020-12-15T12:53:47
2020-12-15T12:53:46
{ "login": "clmnt", "id": 821155, "type": "User" }
[]
true
[]
767,076,374
1,575
Hind_Encorp all done
closed
https://github.com/huggingface/datasets/pull/1575
2020-12-15T01:36:02
2020-12-16T15:15:17
2020-12-16T15:15:17
{ "login": "rahul-art", "id": 56379013, "type": "User" }
[]
true
[]
767,015,317
1,574
Diplomacy detection 3
closed
https://github.com/huggingface/datasets/pull/1574
2020-12-14T23:28:51
2020-12-14T23:29:32
2020-12-14T23:29:32
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
767,011,938
1,573
adding dataset for diplomacy detection-2
closed
https://github.com/huggingface/datasets/pull/1573
2020-12-14T23:21:37
2020-12-14T23:36:57
2020-12-14T23:36:57
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
767,008,470
1,572
add Gnad10 dataset
reference [PR#1317](https://github.com/huggingface/datasets/pull/1317)
closed
https://github.com/huggingface/datasets/pull/1572
2020-12-14T23:15:02
2021-09-17T16:54:37
2020-12-16T16:52:30
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
766,981,721
1,571
Fixing the KILT tasks to match our current standards
This introduces a few changes to the Knowledge Intensive Learning task benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task
closed
https://github.com/huggingface/datasets/pull/1571
2020-12-14T22:26:12
2020-12-14T23:07:41
2020-12-14T23:07:41
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
766,830,545
1,570
Documentation for loading CSV datasets misleads the user
Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting. There are two problems here: i) `quote_char` is misspelled and must be `quotechar`, and ii) the documentation should mention `quoting`.
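For illustration only (not taken from the documentation being fixed), a hedged example of passing the csv-module options through `load_dataset`; the exact pass-through behaviour depends on the `datasets` version:

```python
import csv
from datasets import load_dataset

# "data.csv" is a hypothetical file; quotechar/quoting are forwarded to the CSV reader.
dataset = load_dataset(
    "csv",
    data_files="data.csv",
    quotechar='"',
    quoting=csv.QUOTE_MINIMAL,
)
```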
closed
https://github.com/huggingface/datasets/pull/1570
2020-12-14T19:04:37
2020-12-22T19:30:12
2020-12-21T13:47:09
{ "login": "onurgu", "id": 56893, "type": "User" }
[]
true
[]
766,758,895
1,569
added un_ga dataset
Hi :hugs:, this is a PR for the [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset, with the suggested changes from #1330.
closed
https://github.com/huggingface/datasets/pull/1569
2020-12-14T17:42:04
2020-12-15T15:28:58
2020-12-15T15:28:58
{ "login": "param087", "id": 26374564, "type": "User" }
[]
true
[]
766,722,994
1,568
Added the dataset clickbait_news_bg
There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.
closed
https://github.com/huggingface/datasets/pull/1568
2020-12-14T17:03:00
2020-12-15T18:28:56
2020-12-15T18:28:56
{ "login": "tsvm", "id": 1083319, "type": "User" }
[]
true
[]
766,382,609
1,567
[wording] Update Readme.md
Make the features of the library clearer.
closed
https://github.com/huggingface/datasets/pull/1567
2020-12-14T12:34:52
2020-12-15T12:54:07
2020-12-15T12:54:06
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
766,354,236
1,566
Add Microsoft Research Sequential Question Answering (SQA) Dataset
For more information: https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2
closed
https://github.com/huggingface/datasets/pull/1566
2020-12-14T12:02:30
2020-12-15T15:24:22
2020-12-15T15:24:22
{ "login": "mattbui", "id": 46804938, "type": "User" }
[]
true
[]
766,333,940
1,565
Create README.md
closed
https://github.com/huggingface/datasets/pull/1565
2020-12-14T11:40:23
2021-03-25T14:01:49
2021-03-25T14:01:49
{ "login": "ManuelFay", "id": 43467008, "type": "User" }
[]
true
[]
766,266,609
1,564
added saudinewsnet
I'm having issues creating the dummy data. I'm still investigating how to fix it. I'll close the PR if I can't find a solution.
closed
https://github.com/huggingface/datasets/pull/1564
2020-12-14T10:35:09
2020-12-22T09:51:04
2020-12-22T09:51:04
{ "login": "abdulelahsm", "id": 28743265, "type": "User" }
[]
true
[]
766,211,931
1,563
adding tmu-gfm-dataset
Adding TMU-GFM-Dataset for Grammatical Error Correction. https://github.com/tmu-nlp/TMU-GFM-Dataset A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](https://www.aclweb.org/anthology/2020.coling-main.573.pdf).
closed
https://github.com/huggingface/datasets/pull/1563
2020-12-14T09:45:30
2020-12-21T10:21:04
2020-12-21T10:07:13
{ "login": "forest1988", "id": 2755894, "type": "User" }
[]
true
[]
765,981,749
1,562
Add dataset COrpus of Urdu News TExt Reuse (COUNTER).
closed
https://github.com/huggingface/datasets/pull/1562
2020-12-14T06:32:48
2020-12-21T13:14:46
2020-12-21T13:14:46
{ "login": "arkhalid", "id": 14899066, "type": "User" }
[]
true
[]
765,831,436
1,561
Lama
This is the LAMA dataset for probing facts and common sense from language models. See https://github.com/facebookresearch/LAMA for more details.
closed
https://github.com/huggingface/datasets/pull/1561
2020-12-14T03:27:10
2020-12-28T09:51:47
2020-12-28T09:51:47
{ "login": "huu4ontocord", "id": 8900094, "type": "User" }
[]
true
[]
765,814,964
1,560
Adding the BrWaC dataset
Adding the BrWaC dataset, a large corpus of Portuguese language texts
closed
https://github.com/huggingface/datasets/pull/1560
2020-12-14T03:03:56
2020-12-18T15:56:56
2020-12-18T15:56:55
{ "login": "jonatasgrosman", "id": 5097052, "type": "User" }
[]
true
[]
765,714,183
1,559
adding dataset card information to CONTRIBUTING.md
Added a documentation line and link to the full sprint guide in the "How to add a dataset" section, and a section on how to contribute to the dataset card of an existing dataset. And a thank you note at the end :hugs:
closed
https://github.com/huggingface/datasets/pull/1559
2020-12-14T00:08:43
2020-12-14T17:55:03
2020-12-14T17:55:03
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
765,707,907
1,558
Adding Igbo NER data
This PR adds the Igbo NER dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner
closed
https://github.com/huggingface/datasets/pull/1558
2020-12-13T23:52:11
2020-12-21T14:38:20
2020-12-21T14:38:20
{ "login": "purvimisal", "id": 22298787, "type": "User" }
[]
true
[]
765,693,927
1,557
HindEncorp again commited
closed
https://github.com/huggingface/datasets/pull/1557
2020-12-13T23:09:02
2020-12-15T10:37:05
2020-12-15T10:37:04
{ "login": "rahul-art", "id": 56379013, "type": "User" }
[]
true
[]
765,689,730
1,556
add bswac
closed
https://github.com/huggingface/datasets/pull/1556
2020-12-13T22:55:35
2020-12-18T15:14:28
2020-12-18T15:14:27
{ "login": "IvanZidov", "id": 11391118, "type": "User" }
[]
true
[]
765,681,607
1,555
Added Opus TedTalks
Dataset : http://opus.nlpl.eu/TedTalks.php
closed
https://github.com/huggingface/datasets/pull/1555
2020-12-13T22:29:33
2020-12-18T09:44:43
2020-12-18T09:44:43
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
765,675,148
1,554
Opus CAPES added
Dataset : http://opus.nlpl.eu/CAPES.php
closed
https://github.com/huggingface/datasets/pull/1554
2020-12-13T22:11:34
2020-12-18T09:54:57
2020-12-18T08:46:59
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
765,670,083
1,553
added air_dialogue
UPDATE2 (3797ce5): Updated for multiple configs. UPDATE (7018082): Manually created the dummy datasets. All tests were cleared locally. Pushed it to origin/master. DRAFT VERSION (57fdb20): (_no longer draft_) Uploaded the air_dialogue dataset. dummy_data creation was failing locally, since the original downloaded file has some nested folders. Pushing it since the tests with real data were cleared. Will re-check & update by manually creating some dummy_data.
closed
https://github.com/huggingface/datasets/pull/1553
2020-12-13T21:59:02
2020-12-23T11:20:40
2020-12-23T11:20:39
{ "login": "skyprince999", "id": 9033954, "type": "User" }
[]
true
[]
765,664,411
1,552
Added OPUS ParaCrawl
Dataset : http://opus.nlpl.eu/ParaCrawl.php
closed
https://github.com/huggingface/datasets/pull/1552
2020-12-13T21:44:29
2020-12-21T09:50:26
2020-12-21T09:50:25
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
765,621,879
1,551
Monero
Biomedical Romanian dataset :)
closed
https://github.com/huggingface/datasets/pull/1551
2020-12-13T19:56:48
2022-10-03T09:38:35
2022-10-03T09:38:35
{ "login": "iliemihai", "id": 2815308, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
765,620,925
1,550
Add offensive langauge dravidian dataset
closed
https://github.com/huggingface/datasets/pull/1550
2020-12-13T19:54:19
2020-12-18T15:52:49
2020-12-18T14:25:30
{ "login": "jamespaultg", "id": 7421838, "type": "User" }
[]
true
[]
765,612,905
1,549
Generics kb new branch
The dataset needs a manual download, so I have created dummy data as well, but pytest on both real and dummy data is failing. I have completed the readme, tags and other required things. I need to create the metadata JSON once the tests pass. Opening a PR while working with Yacine Jernite to resolve my pytest issues.
closed
https://github.com/huggingface/datasets/pull/1549
2020-12-13T19:33:10
2020-12-21T13:55:09
2020-12-21T13:55:09
{ "login": "bpatidar", "id": 12439573, "type": "User" }
[]
true
[]
765,592,336
1,548
Fix `🤗Datasets` - `tfds` differences link + a few aesthetics
closed
https://github.com/huggingface/datasets/pull/1548
2020-12-13T18:48:21
2020-12-15T12:55:27
2020-12-15T12:55:27
{ "login": "VIVelev", "id": 22171622, "type": "User" }
[]
true
[]
765,562,792
1,547
Adding PolEval2019 Machine Translation Task dataset
Facing an error with pytest in training. Dummy data is passing. README has to be updated.
closed
https://github.com/huggingface/datasets/pull/1547
2020-12-13T17:50:03
2023-04-03T09:20:23
2020-12-21T16:13:21
{ "login": "vrindaprabhu", "id": 16264631, "type": "User" }
[]
true
[]
765,559,923
1,546
Add persian ner dataset
Adding the following dataset: https://github.com/HaniehP/PersianNER
closed
https://github.com/huggingface/datasets/pull/1546
2020-12-13T17:45:48
2020-12-23T09:53:03
2020-12-23T09:53:03
{ "login": "KMFODA", "id": 35491698, "type": "User" }
[]
true
[]
765,550,283
1,545
add hrwac
closed
https://github.com/huggingface/datasets/pull/1545
2020-12-13T17:31:54
2020-12-18T13:35:17
2020-12-18T13:35:17
{ "login": "IvanZidov", "id": 11391118, "type": "User" }
[]
true
[]
765,514,828
1,544
Added Wiki Summary Dataset
Wiki Summary: Dataset extracted from Persian Wikipedia into the form of articles and highlights. Link: https://github.com/m3hrdadfi/wiki-summary
closed
https://github.com/huggingface/datasets/pull/1544
2020-12-13T16:33:46
2020-12-18T16:20:06
2020-12-18T16:17:18
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
765,476,196
1,543
adding HindEncorp
adding Hindi Wikipedia corpus
closed
https://github.com/huggingface/datasets/pull/1543
2020-12-13T15:39:07
2020-12-13T23:35:53
2020-12-13T23:35:53
{ "login": "rahul-art", "id": 56379013, "type": "User" }
[]
true
[]
765,439,746
1,542
fix typo readme
closed
https://github.com/huggingface/datasets/pull/1542
2020-12-13T14:41:22
2020-12-13T17:16:41
2020-12-13T17:16:40
{ "login": "clmnt", "id": 821155, "type": "User" }
[]
true
[]
765,430,586
1,541
connection issue while downloading data
Hi I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. thanks ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) ```
closed
https://github.com/huggingface/datasets/issues/1541
2020-12-13T14:27:00
2022-10-05T12:33:29
2022-10-05T12:33:29
{ "login": "rabeehkarimimahabadi", "id": 73364383, "type": "User" }
[]
false
[]
765,357,702
1,540
added TTC4900: A Benchmark Data for Turkish Text Categorization dataset
This PR adds the TTC4900 dataset which is a Turkish Text Categorization dataset by me and @basakbuluz. Homepage: [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900) Point of Contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
closed
https://github.com/huggingface/datasets/pull/1540
2020-12-13T12:43:33
2020-12-18T10:09:01
2020-12-18T10:09:01
{ "login": "yavuzKomecoglu", "id": 5150963, "type": "User" }
[]
true
[]
765,338,910
1,539
Added Wiki Asp dataset
Hello, I have added Wiki Asp dataset. Please review the PR.
closed
https://github.com/huggingface/datasets/pull/1539
2020-12-13T12:18:34
2020-12-22T10:16:01
2020-12-22T10:16:01
{ "login": "katnoria", "id": 7674948, "type": "User" }
[]
true
[]
765,139,739
1,538
tweets_hate_speech_detection
closed
https://github.com/huggingface/datasets/pull/1538
2020-12-13T07:37:53
2020-12-21T15:54:28
2020-12-21T15:54:27
{ "login": "darshan-gandhi", "id": 44197177, "type": "User" }
[]
true
[]
765,095,210
1,537
added ohsumed
UPDATE2: PR passed all tests. Now waiting for review. UPDATE: pushed a new version. cross fingers that it should complete all the tests! :) If it passes all tests then it's not a draft version. This is a draft version
closed
https://github.com/huggingface/datasets/pull/1537
2020-12-13T06:58:23
2020-12-17T18:28:16
2020-12-17T18:28:16
{ "login": "skyprince999", "id": 9033954, "type": "User" }
[]
true
[]
765,043,121
1,536
Add Hippocorpus Dataset
closed
https://github.com/huggingface/datasets/pull/1536
2020-12-13T06:13:02
2020-12-15T13:41:17
2020-12-15T13:40:11
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]