Column schema:

| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | stringlengths | 58 | 61 |
| html_url | stringlengths | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | stringlengths | 1 | 290 |
| state | stringclasses | 2 values | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | stringlengths | 3 | 26 |
| labels | listlengths | 0 | 4 |
| body | stringlengths | 0 | 228k |
| is_pull_request | bool | 2 classes | |

Rows (issues #151 down to #51):
**#151 · Fix JSON tests.** (pull request, closed) · jplu · 0 comments · labels: []
id 619,968,480 · created 2020-05-18T07:17:38 · updated 2020-05-18T07:21:52 · closed 2020-05-18T07:21:51
https://api.github.com/repos/huggingface/datasets/issues/151 · https://github.com/huggingface/datasets/pull/151

**#150 · Add WNUT 17 NER dataset** (pull request, closed) · stefan-it · 4 comments · labels: []
id 619,809,645 · created 2020-05-17T22:19:04 · updated 2020-05-26T20:37:59 · closed 2020-05-26T20:37:59
https://api.github.com/repos/huggingface/datasets/issues/150 · https://github.com/huggingface/datasets/pull/150

Hi, this PR adds the WNUT 17 dataset to `nlp`.
> Emerging and Rare entity recognition
> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisati...

**#149 · [Feature request] Add Ubuntu Dialogue Corpus dataset** (issue, closed) · danth · 1 comment · labels: ["dataset request"]
id 619,735,739 · created 2020-05-17T15:42:39 · updated 2020-05-18T17:01:46 · closed 2020-05-18T17:01:46
https://api.github.com/repos/huggingface/datasets/issues/149 · https://github.com/huggingface/datasets/issues/149

https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/

**#148 · _download_and_prepare() got an unexpected keyword argument 'verify_infos'** (issue, closed) · richarddwang · 2 comments · labels: ["dataset bug"]
id 619,590,555 · created 2020-05-17T01:48:53 · updated 2020-05-18T07:38:33 · closed 2020-05-18T07:38:33
https://api.github.com/repos/huggingface/datasets/issues/148 · https://github.com/huggingface/datasets/issues/148

# Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
dataset = nlp.load_dataset('wikipedia')
```
get
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/w...
```

**#147 · Error with sklearn train_test_split** (issue, closed) · ClonedOne · 2 comments · labels: ["enhancement"]
id 619,581,907 · created 2020-05-17T00:28:24 · updated 2020-06-18T16:23:23 · closed 2020-06-18T16:23:23
https://api.github.com/repos/huggingface/datasets/issues/147 · https://github.com/huggingface/datasets/issues/147

It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)...
```
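A minimal sketch of a workaround, assuming a library version that provides `Dataset.select`; splitting sklearn-style over row indices is an illustration, not the resolution recorded in this thread:

```python
# Split by row indices with sklearn, then materialize each half with
# Dataset.select (assumed available; newer releases also offer a
# dataset.train_test_split method directly).
import nlp
from sklearn.model_selection import train_test_split

data = nlp.load_dataset('imdb')
train = data['train']

first_idx, second_idx = train_test_split(
    list(range(len(train))), test_size=0.5, random_state=42
)
f_half = train.select(first_idx)
s_half = train.select(second_idx)
```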
**#146 · Add BERTScore to metrics** (pull request, closed) · felixgwu · 0 comments · labels: []
id 619,564,653 · created 2020-05-16T22:09:39 · updated 2020-05-17T22:22:10 · closed 2020-05-17T22:22:09
https://api.github.com/repos/huggingface/datasets/issues/146 · https://github.com/huggingface/datasets/pull/146

This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it.
```sh
import nlp
bertscore = nlp.load_metric('metrics/bertscore')  # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [[...
```
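A hedged completion of the truncated snippet above; the nested reference lists and the `lang` argument follow the upstream `bert_score` package's conventions and are assumptions, not text from this PR:

```python
# Completing the example: one list of acceptable references per prediction.
import nlp

bertscore = nlp.load_metric('bertscore')
predictions = ['example', 'fruit']
references = [['example'], ['fruit']]

# lang selects the underlying BERT model in bert_score (assumed kwarg).
results = bertscore.compute(predictions, references, lang='en')
print(results)
```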
**#145 · [AWS Tests] Follow-up PR from #144** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 619,480,549 · created 2020-05-16T13:53:46 · updated 2020-05-16T13:54:23 · closed 2020-05-16T13:54:22
https://api.github.com/repos/huggingface/datasets/issues/145 · https://github.com/huggingface/datasets/pull/145

I forgot to add this line in PR #145.

**#144 · [AWS tests] AWS test should not run for canonical datasets** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 619,477,367 · created 2020-05-16T13:39:30 · updated 2020-05-16T13:44:34 · closed 2020-05-16T13:44:33
https://api.github.com/repos/huggingface/datasets/issues/144 · https://github.com/huggingface/datasets/pull/144

AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following:
1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical da...

**#143 · ArrowTypeError in squad metrics** (issue, closed) · patil-suraj · 1 comment · labels: ["metric bug"]
id 619,457,641 · created 2020-05-16T12:06:37 · updated 2020-05-22T13:38:52 · closed 2020-05-22T13:36:48
https://api.github.com/repos/huggingface/datasets/issues/143 · https://github.com/huggingface/datasets/issues/143

`squad_metric.compute` is giving the following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is how my predictions and references lo...

**#142 · [WMT] Add all wmt** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 619,450,068 · created 2020-05-16T11:28:46 · updated 2020-05-17T12:18:21 · closed 2020-05-17T12:18:20
https://api.github.com/repos/huggingface/datasets/issues/142 · https://github.com/huggingface/datasets/pull/142

This PR adds all WMT dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" languag...

**#141 · [Clean up] remove bogus folder** (pull request, closed) · patrickvonplaten · 2 comments · labels: []
id 619,447,090 · created 2020-05-16T11:13:42 · updated 2020-05-16T13:24:27 · closed 2020-05-16T13:24:26
https://api.github.com/repos/huggingface/datasets/issues/141 · https://github.com/huggingface/datasets/pull/141

@mariamabarham - I think you accidentally placed it there.

**#140 · [Tests] run local tests as default** (pull request, closed) · patrickvonplaten · 2 comments · labels: []
id 619,443,613 · created 2020-05-16T10:56:06 · updated 2020-05-16T13:21:44 · closed 2020-05-16T13:21:43
https://api.github.com/repos/huggingface/datasets/issues/140 · https://github.com/huggingface/datasets/pull/140

This PR also enables local tests by default. I think it's safer for now to enable both local and aws tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are...

**#139 · Add GermEval 2014 NER dataset** (pull request, closed) · stefan-it · 4 comments · labels: []
id 619,327,409 · created 2020-05-15T23:42:09 · updated 2020-05-16T13:56:37 · closed 2020-05-16T13:56:22
https://api.github.com/repos/huggingface/datasets/issues/139 · https://github.com/huggingface/datasets/pull/139

Hi, this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000...

**#138 · Consider renaming to nld** (issue, closed) · honnibal · 13 comments · labels: ["generic discussion"]
id 619,225,191 · created 2020-05-15T20:23:27 · updated 2022-09-16T05:18:22 · closed 2020-09-28T00:08:10
https://api.github.com/repos/huggingface/datasets/issues/138 · https://github.com/huggingface/datasets/issues/138

Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This...

**#136 · Update README.md** (pull request, closed) · renaud · 1 comment · labels: []
id 619,211,018 · created 2020-05-15T20:01:07 · updated 2020-05-17T12:17:28 · closed 2020-05-17T12:17:28
https://api.github.com/repos/huggingface/datasets/issues/136 · https://github.com/huggingface/datasets/pull/136

small typo

**#135 · Fix print statement in READ.md** (pull request, closed) · codehunk628 · 1 comment · labels: []
id 619,206,708 · created 2020-05-15T19:52:23 · updated 2020-05-17T12:14:06 · closed 2020-05-17T12:14:05
https://api.github.com/repos/huggingface/datasets/issues/135 · https://github.com/huggingface/datasets/pull/135

The print statement was printing a generator object instead of the names of the available datasets/metrics.

**#134 · Update README.md** (pull request, closed) · pranv · 1 comment · labels: []
id 619,112,641 · created 2020-05-15T16:56:14 · updated 2020-05-28T08:21:49 · closed 2020-05-28T08:21:49
https://api.github.com/repos/huggingface/datasets/issues/134 · https://github.com/huggingface/datasets/pull/134

**#133 · [Question] Using/adding a local dataset** (issue, closed) · zphang · 5 comments · labels: []
id 619,094,954 · created 2020-05-15T16:26:06 · updated 2020-07-23T16:44:09 · closed 2020-07-23T16:44:09
https://api.github.com/repos/huggingface/datasets/issues/133 · https://github.com/huggingface/datasets/issues/133

Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. ...
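A minimal sketch of the two usual approaches, assuming a library version that can load a processing script from a local path and that ships a generic `csv` script; the paths and file names are illustrative:

```python
import nlp

# 1) Point load_dataset at a local copy of a dataset processing script.
local = nlp.load_dataset('./datasets/my_dataset/my_dataset.py')

# 2) Reuse the generic csv script with your own files.
custom = nlp.load_dataset('csv', data_files={'train': 'train.csv',
                                             'test': 'test.csv'})
```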
**#132 · [Feature Request] Add the OpenWebText dataset** (issue, closed) · LysandreJik · 2 comments · labels: ["dataset request"]
id 619,077,851 · created 2020-05-15T15:57:29 · updated 2020-10-07T14:22:48 · closed 2020-10-07T14:22:48
https://api.github.com/repos/huggingface/datasets/issues/132 · https://github.com/huggingface/datasets/issues/132

The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).

**#131 · [Feature request] Add Toronto BookCorpus dataset** (issue, closed) · jarednielsen · 2 comments · labels: ["dataset request"]
id 619,073,731 · created 2020-05-15T15:50:44 · updated 2020-06-28T21:27:31 · closed 2020-06-28T21:27:31
https://api.github.com/repos/huggingface/datasets/issues/131 · https://github.com/huggingface/datasets/issues/131

I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.

**#130 · Loading GLUE dataset loads CoLA by default** (issue, closed) · zphang · 3 comments · labels: ["dataset bug"]
id 619,035,440 · created 2020-05-15T14:55:50 · updated 2020-05-27T22:08:15 · closed 2020-05-27T22:08:15
https://api.github.com/repos/huggingface/datasets/issues/130 · https://github.com/huggingface/datasets/issues/130

If I run:
```python
dataset = nlp.load_dataset('glue')
```
the resulting dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the...
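A minimal sketch of the explicit form, assuming the task name is passed as the second (config) argument for both datasets and metrics, as the `load_metric` error message suggests:

```python
import nlp

# Name the GLUE task instead of relying on an implicit default.
cola = nlp.load_dataset('glue', 'cola')
sst2 = nlp.load_dataset('glue', 'sst2')

# Metrics take the task name too (assumed second positional argument).
metric = nlp.load_metric('glue', 'sst2')
```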
**#129 · [Feature request] Add Google Natural Question dataset** (issue, closed) · elyase · 7 comments · labels: ["dataset request"]
id 618,997,725 · created 2020-05-15T14:14:20 · updated 2020-07-23T13:21:29 · closed 2020-07-23T13:21:29
https://api.github.com/repos/huggingface/datasets/issues/129 · https://github.com/huggingface/datasets/issues/129

Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.

**#128 · Some error inside nlp.load_dataset()** (issue, closed) · polkaYK · 2 comments · labels: []
id 618,951,117 · created 2020-05-15T13:01:29 · updated 2020-05-15T13:10:40 · closed 2020-05-15T13:10:40
https://api.github.com/repos/huggingface/datasets/issues/128 · https://github.com/huggingface/datasets/issues/128

First of all, nice work! I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb). In the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think: `...

**#127 · Update Overview.ipynb** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,909,042 · created 2020-05-15T11:46:48 · updated 2020-05-15T11:47:27 · closed 2020-05-15T11:47:25
https://api.github.com/repos/huggingface/datasets/issues/127 · https://github.com/huggingface/datasets/pull/127

update notebook

**#126 · remove webis** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,897,499 · created 2020-05-15T11:25:20 · updated 2020-05-15T11:31:24 · closed 2020-05-15T11:30:26
https://api.github.com/repos/huggingface/datasets/issues/126 · https://github.com/huggingface/datasets/pull/126

Remove webis from the dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu

**#125 · [Newsroom] add newsroom** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,869,048 · created 2020-05-15T10:34:34 · updated 2020-05-15T10:37:07 · closed 2020-05-15T10:37:02
https://api.github.com/repos/huggingface/datasets/issues/125 · https://github.com/huggingface/datasets/pull/125

I checked it with the data link of the mail you forwarded @thomwolf => works well!

**#124 · Xsum, require manual download of some files** (pull request, closed) · mariamabarham · 0 comments · labels: []
id 618,864,284 · created 2020-05-15T10:26:13 · updated 2020-05-15T11:04:48 · closed 2020-05-15T11:04:46
https://api.github.com/repos/huggingface/datasets/issues/124 · https://github.com/huggingface/datasets/pull/124

**#123 · [Tests] Local => aws** (pull request, closed) · patrickvonplaten · 3 comments · labels: []
id 618,820,140 · created 2020-05-15T09:12:25 · updated 2020-05-15T10:06:12 · closed 2020-05-15T10:03:26
https://api.github.com/repos/huggingface/datasets/issues/123 · https://github.com/huggingface/datasets/pull/123

## Change default Test from local => aws
As a default we set `aws=True`, `Local=False`, `slow=False`
### 1. RUN_AWS=1 (default)
This runs 4 tests per dataset script.
a) Does the dataset script have a valid etag / Can it be reached on AWS?
b) Can we load its `builder_class`?
c) Can we load **all** dataset c...

**#122 · Final cleanup of readme and metrics** (pull request, closed) · thomwolf · 0 comments · labels: []
id 618,813,182 · created 2020-05-15T09:00:52 · updated 2021-09-03T19:40:09 · closed 2020-05-15T09:02:22
https://api.github.com/repos/huggingface/datasets/issues/122 · https://github.com/huggingface/datasets/pull/122

**#121 · make style** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,790,040 · created 2020-05-15T08:23:36 · updated 2020-05-15T08:25:39 · closed 2020-05-15T08:25:38
https://api.github.com/repos/huggingface/datasets/issues/121 · https://github.com/huggingface/datasets/pull/121

**#120 · 🐛 `map` not working** (issue, closed) · astariul · 1 comment · labels: []
id 618,737,783 · created 2020-05-15T06:43:08 · updated 2020-05-15T07:02:38 · closed 2020-05-15T07:02:38
https://api.github.com/repos/huggingface/datasets/issues/120 · https://github.com/huggingface/datasets/issues/120

I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')

def test(sample):
    samp...
```
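For context, a minimal working sketch of `map`: the mapped function should return a dict of columns, which gets merged back into each example (the prefix logic is illustrative):

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')

def add_prefix(sample):
    # Return a dict; map merges these columns into the example.
    return {'question': 'My sentence: ' + sample['question']}

dataset = dataset.map(add_prefix)
print(dataset[0]['question'])
```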
**#119 · 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'** (issue, closed) · astariul · 2 comments · labels: []
id 618,652,145 · created 2020-05-15T02:27:26 · updated 2020-05-15T05:11:22 · closed 2020-05-15T02:45:28
https://api.github.com/repos/huggingface/datasets/issues/119 · https://github.com/huggingface/datasets/issues/119

I'm trying to load the CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'

**#118 · ❓ How to apply a map to all subsets ?** (issue, closed) · astariul · 1 comment · labels: []
id 618,643,088 · created 2020-05-15T01:58:52 · updated 2020-05-15T07:05:49 · closed 2020-05-15T07:04:25
https://api.github.com/repos/huggingface/datasets/issues/118 · https://github.com/huggingface/datasets/issues/118

I'm working with the CNN/DM dataset, where I have 3 subsets: `train`, `test`, `validation`. Should I apply my map function on the subsets one by one?
```python
import nlp
cnn_dm = nlp.load_dataset('cnn_dailymail')
for corpus in ['train', 'test', 'validation']:
    cnn_dm[corpus] = cnn_dm[corpus].map(my_f...
```
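A minimal sketch of the same thing without hard-coding the split names, assuming `load_dataset` without a `split` argument returns a dict-like mapping of split names to `Dataset` objects:

```python
import nlp

cnn_dm = nlp.load_dataset('cnn_dailymail')

def my_func(example):
    # Illustrative transformation; any dict-returning function works.
    return {'highlights': example['highlights'].lower()}

cnn_dm = {name: split.map(my_func) for name, split in cnn_dm.items()}
```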
**#117 · ❓ How to remove specific rows of a dataset ?** (issue, closed) · astariul · 4 comments · labels: []
id 618,632,573 · created 2020-05-15T01:25:06 · updated 2022-07-15T08:36:44 · closed 2020-05-15T07:04:32
https://api.github.com/repos/huggingface/datasets/issues/117 · https://github.com/huggingface/datasets/issues/117

I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column:
```python
dataset.drop('id')
```
But I didn't find how to remove a specific row. **For example, how can I remove all sample w...
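A minimal sketch of row removal via a predicate, assuming a library version that provides `Dataset.filter`; the condition is illustrative:

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')

# Keep only the rows for which the predicate returns True.
dataset = dataset.filter(lambda sample: sample['title'] != 'Super_Bowl_50')
```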
**#116 · 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323** (issue, closed) · astariul · 5 comments · labels: ["metric bug"]
id 618,628,264 · created 2020-05-15T01:12:06 · updated 2020-05-28T23:43:07 · closed 2020-05-28T23:43:07
https://api.github.com/repos/huggingface/datasets/issues/116 · https://github.com/huggingface/datasets/issues/116

I'm trying to use the rouge metric. I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence. I tried:
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    for lp, lg in zip(p, g):
        ...
```
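A minimal sketch of the batched pattern that keeps the predictions and references columns the same length, assuming the metric exposes `add_batch` and a final `compute()` as in later releases of the library:

```python
import nlp

rouge = nlp.load_metric('rouge')
with open('test.pred.tokenized') as p, open('test.gold.tokenized') as g:
    for lp, lg in zip(p, g):
        # One prediction paired with one reference per line.
        rouge.add_batch(predictions=[lp.strip()], references=[lg.strip()])
score = rouge.compute()
```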
**#115 · AttributeError: 'dict' object has no attribute 'info'** (issue, closed) · astariul · 2 comments · labels: []
id 618,615,855 · created 2020-05-15T00:29:47 · updated 2020-05-17T13:11:00 · closed 2020-05-17T13:11:00
https://api.github.com/repos/huggingface/datasets/issues/115 · https://github.com/huggingface/datasets/issues/115

I'm trying to access the information of the CNN/DM dataset:
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns:
> AttributeError: 'dict' object has no attribute 'info'
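A minimal sketch of the usual fix: without a `split` argument, `load_dataset` returns a dict of splits, so `info` lives on each split object rather than on the dict:

```python
import nlp

cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm['train'].info)  # info hangs off a split, not the dict

# Or load a single split directly and query it.
validation = nlp.load_dataset('cnn_dailymail', split='validation')
print(validation.info)
```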
**#114 · Couldn't reach CNN/DM dataset** (issue, closed) · astariul · 1 comment · labels: []
id 618,611,310 · created 2020-05-15T00:16:17 · updated 2020-05-15T00:19:52 · closed 2020-05-15T00:19:51
https://api.github.com/repos/huggingface/datasets/issues/114 · https://github.com/huggingface/datasets/issues/114

I can't get the CNN / DailyMail dataset.
```python
import nlp

assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()]
cnn_dm = nlp.load_dataset('cnn_dailymail')
```
[Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives the following error ...

**#113 · Adding docstrings and some doc** (pull request, closed) · thomwolf · 0 comments · labels: []
id 618,590,562 · created 2020-05-14T23:14:41 · updated 2020-05-14T23:22:45 · closed 2020-05-14T23:22:44
https://api.github.com/repos/huggingface/datasets/issues/113 · https://github.com/huggingface/datasets/pull/113

Some doc

**#112 · Qa4mre - add dataset** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,569,195 · created 2020-05-14T22:17:51 · updated 2020-05-15T09:16:43 · closed 2020-05-15T09:16:42
https://api.github.com/repos/huggingface/datasets/issues/112 · https://github.com/huggingface/datasets/pull/112

Added dummy data test only for the first config. Will do the rest later. I had to add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look @thomwolf ?

**#111 · [Clean-up] remove under construction datastes** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,528,060 · created 2020-05-14T20:52:13 · updated 2020-05-14T20:52:23 · closed 2020-05-14T20:52:22
https://api.github.com/repos/huggingface/datasets/issues/111 · https://github.com/huggingface/datasets/pull/111

**#110 · fix reddit tifu dummy data** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,520,325 · created 2020-05-14T20:37:37 · updated 2020-05-14T20:40:14 · closed 2020-05-14T20:40:13
https://api.github.com/repos/huggingface/datasets/issues/110 · https://github.com/huggingface/datasets/pull/110

**#109 · [Reclor] fix reclor** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,508,359 · created 2020-05-14T20:16:26 · updated 2020-05-14T20:19:09 · closed 2020-05-14T20:19:08
https://api.github.com/repos/huggingface/datasets/issues/109 · https://github.com/huggingface/datasets/pull/109

- That's probably on me. Could have made the manual data test more flexible. @mariamabarham

**#108 · convert can use manual dir as second argument** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,386,394 · created 2020-05-14T16:52:32 · updated 2020-05-14T16:52:43 · closed 2020-05-14T16:52:42
https://api.github.com/repos/huggingface/datasets/issues/108 · https://github.com/huggingface/datasets/pull/108

@mariamabarham

**#107 · add writer_batch_size to GeneratorBasedBuilder** (pull request, closed) · lhoestq · 1 comment · labels: []
id 618,373,045 · created 2020-05-14T16:35:39 · updated 2020-05-14T16:50:30 · closed 2020-05-14T16:50:29
https://api.github.com/repos/huggingface/datasets/issues/107 · https://github.com/huggingface/datasets/pull/107

You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`.
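A minimal sketch of the `load_dataset` call site described above; the value 1000 is an arbitrary illustration:

```python
import nlp

# Forwarded to the GeneratorBasedBuilder via load_dataset's kwargs.
dataset = nlp.load_dataset('squad', writer_batch_size=1000)
```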
**#106 · Add data dir test command** (pull request, closed) · lhoestq · 1 comment · labels: []
id 618,361,418 · created 2020-05-14T16:18:39 · updated 2020-05-14T16:49:11 · closed 2020-05-14T16:49:10
https://api.github.com/repos/huggingface/datasets/issues/106 · https://github.com/huggingface/datasets/pull/106

**#105 · [New structure on AWS] Adapt paths** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,345,191 · created 2020-05-14T15:55:57 · updated 2020-05-14T15:56:28 · closed 2020-05-14T15:56:27
https://api.github.com/repos/huggingface/datasets/issues/105 · https://github.com/huggingface/datasets/pull/105

Some small changes so that we have the correct paths. @julien-c

**#104 · Add trivia_q** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,277,081 · created 2020-05-14T14:27:19 · updated 2020-07-12T05:34:20 · closed 2020-05-14T20:23:32
https://api.github.com/repos/huggingface/datasets/issues/104 · https://github.com/huggingface/datasets/pull/104

Currently tested only for one config to pass tests. Needs to add more dummy data later.

**#103 · [Manual downloads] add logic proposal for manual downloads and add wikihow** (pull request, closed) · patrickvonplaten · 3 comments · labels: []
id 618,233,637 · created 2020-05-14T13:30:36 · updated 2020-05-14T14:27:41 · closed 2020-05-14T14:27:40
https://api.github.com/repos/huggingface/datasets/issues/103 · https://github.com/huggingface/datasets/pull/103

Wikihow is an example that needs to manually download two files, as stated in https://github.com/mahnazkoupaee/WikiHow-Dataset. The user can then store these files under hard-coded names: `wikihowAll.csv` and `wikihowSep.csv` in this case, in a directory of their choice, e.g. `~/wikihow/manual_dir`. The dataset ca...
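A minimal sketch of loading such a manual-download dataset, assuming the directory keyword (`data_dir` here) and the config name follow the convention later releases settled on:

```python
import nlp

# The directory must already contain wikihowAll.csv and wikihowSep.csv.
dataset = nlp.load_dataset('wikihow', 'all', data_dir='~/wikihow/manual_dir')
```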
**#102 · Run save infos** (pull request, closed) · lhoestq · 2 comments · labels: []
id 618,231,216 · created 2020-05-14T13:27:26 · updated 2020-05-14T15:43:04 · closed 2020-05-14T15:43:03
https://api.github.com/repos/huggingface/datasets/issues/102 · https://github.com/huggingface/datasets/pull/102

I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog.

**#101 · [Reddit] add reddit** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,111,651 · created 2020-05-14T10:25:02 · updated 2020-05-14T10:27:25 · closed 2020-05-14T10:27:24
https://api.github.com/repos/huggingface/datasets/issues/101 · https://github.com/huggingface/datasets/pull/101

- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)

**#100 · Add per type scores in seqeval metric** (pull request, closed) · jplu · 4 comments · labels: []
id 618,081,602 · created 2020-05-14T09:37:52 · updated 2020-05-14T23:21:35 · closed 2020-05-14T23:21:34
https://api.github.com/repos/huggingface/datasets/issues/100 · https://github.com/huggingface/datasets/pull/100

This PR adds a bit more detail in the seqeval metric. Now the usage and output are:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-...
```
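A hedged completion of the truncated usage above; the second prediction list and the `compute` call are assumptions, not text from the PR:

```python
import nlp

met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'],
              ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'],
               ['B-PER', 'I-PER', 'O']]

# Returns overall scores plus the per-entity-type detail this PR adds.
results = met.compute(predictions, references)
print(results)
```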
**#99 · [Cmrc 2018] fix cmrc2018** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 618,026,700 · created 2020-05-14T08:22:03 · updated 2020-05-14T08:49:42 · closed 2020-05-14T08:49:41
https://api.github.com/repos/huggingface/datasets/issues/99 · https://github.com/huggingface/datasets/pull/99

**#98 · Webis tl-dr** (pull request, closed) · jplu · 12 comments · labels: []
id 617,957,739 · created 2020-05-14T06:22:18 · updated 2020-09-03T10:00:21 · closed 2020-05-14T20:54:16
https://api.github.com/repos/huggingface/datasets/issues/98 · https://github.com/huggingface/datasets/pull/98

Add the Webis TL;DR dataset.

**#97 · [Csv] add tests for csv dataset script** (pull request, closed) · patrickvonplaten · 1 comment · labels: []
id 617,809,431 · created 2020-05-13T23:06:11 · updated 2020-05-13T23:23:16 · closed 2020-05-13T23:23:15
https://api.github.com/repos/huggingface/datasets/issues/97 · https://github.com/huggingface/datasets/pull/97

Adds dummy data tests for csv.

**#96 · lm1b** (pull request, closed) · jplu · 1 comment · labels: []
id 617,739,521 · created 2020-05-13T20:38:44 · updated 2020-05-14T14:13:30 · closed 2020-05-14T14:13:29
https://api.github.com/repos/huggingface/datasets/issues/96 · https://github.com/huggingface/datasets/pull/96

Add lm1b dataset.

**#95 · Replace checksums files by Dataset infos json** (pull request, closed) · lhoestq · 2 comments · labels: []
id 617,703,037 · created 2020-05-13T19:36:16 · updated 2020-05-14T08:58:43 · closed 2020-05-14T08:58:42
https://api.github.com/repos/huggingface/datasets/issues/95 · https://github.com/huggingface/datasets/pull/95

### Better verifications when loading a dataset
I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt` by a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`. It simplifies and improves how verifications of checksums and splits sizes ...
**#94 · Librispeech** (pull request, closed) · jplu · 1 comment · labels: []
id 617,571,340 · created 2020-05-13T16:04:14 · updated 2020-05-13T21:29:03 · closed 2020-05-13T21:29:02
https://api.github.com/repos/huggingface/datasets/issues/94 · https://github.com/huggingface/datasets/pull/94

Add librispeech dataset and remove some useless content.

**#93 · Cleanup notebooks and various fixes** (pull request, closed) · thomwolf · 0 comments · labels: []
id 617,522,029 · created 2020-05-13T14:58:58 · updated 2020-05-13T15:01:48 · closed 2020-05-13T15:01:47
https://api.github.com/repos/huggingface/datasets/issues/93 · https://github.com/huggingface/datasets/pull/93

Fixes on datasets (more flexible), metrics (fix) and general clean-ups.

**#92 · [WIP] add wmt14** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 617,341,505 · created 2020-05-13T10:42:03 · updated 2020-05-16T11:17:38 · closed 2020-05-16T11:17:37
https://api.github.com/repos/huggingface/datasets/issues/92 · https://github.com/huggingface/datasets/pull/92

WMT14 takes forever to download :-/ - WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.

**#91 · [Paracrawl] add paracrawl** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 617,339,484 · created 2020-05-13T10:39:00 · updated 2020-05-13T10:40:15 · closed 2020-05-13T10:40:14
https://api.github.com/repos/huggingface/datasets/issues/91 · https://github.com/huggingface/datasets/pull/91

- Huge dataset - took ~1h to download
- Also this PR reformats all dataset scripts and adds `datasets` to `make style`

**#90 · Add download gg drive** (pull request, closed) · lhoestq · 2 comments · labels: []
id 617,311,877 · created 2020-05-13T09:56:02 · updated 2020-05-13T12:46:28 · closed 2020-05-13T10:05:31
https://api.github.com/repos/huggingface/datasets/issues/90 · https://github.com/huggingface/datasets/pull/90

We can now add datasets that download from google drive.

**#89 · Add list and inspect methods - cleanup hf_api** (pull request, closed) · thomwolf · 0 comments · labels: []
id 617,295,069 · created 2020-05-13T09:30:15 · updated 2020-05-13T14:05:00 · closed 2020-05-13T09:33:10
https://api.github.com/repos/huggingface/datasets/issues/89 · https://github.com/huggingface/datasets/pull/89

Add a bunch of methods to easily list and inspect the processing scripts uploaded on S3:
```python
nlp.list_datasets()
nlp.list_metrics()
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)
# Copy and prepare the scripts at `local_path` for easy...
```
**#88 · Add wiki40b** (pull request, closed) · lhoestq · 1 comment · labels: []
id 617,284,664 · created 2020-05-13T09:16:01 · updated 2020-05-13T12:31:55 · closed 2020-05-13T12:31:54
https://api.github.com/repos/huggingface/datasets/issues/88 · https://github.com/huggingface/datasets/pull/88

This one is a beam dataset that downloads files using tensorflow. I tested it on a small config and it works fine.

**#87 · Add Flores** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 617,267,118 · created 2020-05-13T08:51:29 · updated 2020-05-13T09:23:34 · closed 2020-05-13T09:23:33
https://api.github.com/repos/huggingface/datasets/issues/87 · https://github.com/huggingface/datasets/pull/87

Beautiful language for sure!

**#86 · [Load => load_dataset] change naming** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 617,260,972 · created 2020-05-13T08:43:00 · updated 2020-05-13T08:50:58 · closed 2020-05-13T08:50:57
https://api.github.com/repos/huggingface/datasets/issues/86 · https://github.com/huggingface/datasets/pull/86

Rename leftovers @thomwolf

**#85 · Add boolq** (pull request, closed) · lhoestq · 1 comment · labels: []
id 617,253,428 · created 2020-05-13T08:32:27 · updated 2020-05-13T09:09:39 · closed 2020-05-13T09:09:38
https://api.github.com/repos/huggingface/datasets/issues/85 · https://github.com/huggingface/datasets/pull/85

I just added the dummy data for this dataset. This one uses `tf.io.gfile.copy` to download the data, but I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for tests.

**#84 · [TedHrLr] add left dummy data** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 617,249,815 · created 2020-05-13T08:27:20 · updated 2020-05-13T08:29:22 · closed 2020-05-13T08:29:21
https://api.github.com/repos/huggingface/datasets/issues/84 · https://github.com/huggingface/datasets/pull/84

**#83 · New datasets** (pull request, closed) · mariamabarham · 0 comments · labels: []
id 616,863,601 · created 2020-05-12T18:22:27 · updated 2020-05-12T18:22:47 · closed 2020-05-12T18:22:45
https://api.github.com/repos/huggingface/datasets/issues/83 · https://github.com/huggingface/datasets/pull/83

**#82 · [Datasets] add ted_hrlr** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 616,805,194 · created 2020-05-12T16:46:50 · updated 2020-05-13T07:52:54 · closed 2020-05-13T07:52:53
https://api.github.com/repos/huggingface/datasets/issues/82 · https://github.com/huggingface/datasets/pull/82

@thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework. The result looks like this: ![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0c...
**#81 · add tests** (pull request, closed) · lhoestq · 0 comments · labels: []
id 616,793,010 · created 2020-05-12T16:28:19 · updated 2020-05-13T07:43:57 · closed 2020-05-13T07:43:56
https://api.github.com/repos/huggingface/datasets/issues/81 · https://github.com/huggingface/datasets/pull/81

Tests for py_utils functions and for the BaseReader used to read from arrow and parquet. I also removed unused utils functions.

**#80 · Add nbytes + nexamples check** (pull request, closed) · lhoestq · 1 comment · labels: []
id 616,786,803 · created 2020-05-12T16:18:43 · updated 2020-05-13T07:52:34 · closed 2020-05-13T07:52:33
https://api.github.com/repos/huggingface/datasets/issues/80 · https://github.com/huggingface/datasets/pull/80

### Save size and number of examples
Now when you do `save_checksums`, it also creates `cached_sizes.txt` right next to the checksum file. This new file stores the byte sizes and the number of examples of each split that has been prepared and stored in the cache. Example:
```
# Cached sizes: <full_config_name> <n...
```

**#79 · [Convert] add new pattern** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 616,785,613 · created 2020-05-12T16:16:51 · updated 2020-05-12T16:17:10 · closed 2020-05-12T16:17:09
https://api.github.com/repos/huggingface/datasets/issues/79 · https://github.com/huggingface/datasets/pull/79

**#78 · [Tests] skip beam dataset tests for now** (pull request, closed) · patrickvonplaten · 2 comments · labels: []
id 616,774,275 · created 2020-05-12T16:00:58 · updated 2020-05-12T16:16:24 · closed 2020-05-12T16:16:22
https://api.github.com/repos/huggingface/datasets/issues/78 · https://github.com/huggingface/datasets/pull/78

For now we will skip tests for Beam Datasets.

**#77 · New datasets** (pull request, closed) · mariamabarham · 0 comments · labels: []
id 616,674,601 · created 2020-05-12T13:51:59 · updated 2020-05-12T14:02:16 · closed 2020-05-12T14:02:15
https://api.github.com/repos/huggingface/datasets/issues/77 · https://github.com/huggingface/datasets/pull/77

**#76 · pin flake 8** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 616,579,228 · created 2020-05-12T11:25:29 · updated 2020-05-12T11:27:35 · closed 2020-05-12T11:27:34
https://api.github.com/repos/huggingface/datasets/issues/76 · https://github.com/huggingface/datasets/pull/76

Flake 8's new version does not like our format. Pinning the version for now.

**#75 · WIP adding metrics** (pull request, closed) · thomwolf · 1 comment · labels: []
id 616,520,163 · created 2020-05-12T09:52:00 · updated 2020-05-13T07:44:12 · closed 2020-05-13T07:44:10
https://api.github.com/repos/huggingface/datasets/issues/75 · https://github.com/huggingface/datasets/pull/75

Adding the following metrics as identified by @mariamabarham:
1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual)
2. GLEU: Google-BLEU: https://github.com/cnap/gec-...
**#74 · fix overflow check** (pull request, closed) · lhoestq · 0 comments · labels: []
id 616,511,101 · created 2020-05-12T09:38:01 · updated 2020-05-12T10:04:39 · closed 2020-05-12T10:04:38
https://api.github.com/repos/huggingface/datasets/issues/74 · https://github.com/huggingface/datasets/pull/74

I did some tests and unfortunately the test
```
pa_array.nbytes > MAX_BATCH_BYTES
```
doesn't work. Indeed for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (it loops...). I don't think we can do a proper overflow test for the limit of 2GB... For now I replaced it with a sanity check on...

**#73 · JSON script** (pull request, closed) · jplu · 5 comments · labels: []
id 616,417,845 · created 2020-05-12T07:11:22 · updated 2020-05-18T06:50:37 · closed 2020-05-18T06:50:36
https://api.github.com/repos/huggingface/datasets/issues/73 · https://github.com/huggingface/datasets/pull/73

Add a JSON script to read JSON datasets from files.

**#72 · [README dummy data tests] README to better understand how the dummy data structure works** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 616,225,010 · created 2020-05-11T22:19:03 · updated 2020-05-11T22:26:03 · closed 2020-05-11T22:26:01
https://api.github.com/repos/huggingface/datasets/issues/72 · https://github.com/huggingface/datasets/pull/72

In this PR a README.md is added to tests to shine more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to check out the dummy data structure of the different datasets I mention in the README.md since those are the "edge cases". @...

**#71 · Fix arrow writer for big datasets using writer_batch_size** (pull request, closed) · lhoestq · 1 comment · labels: []
id 615,942,180 · created 2020-05-11T14:45:36 · updated 2020-05-11T20:09:47 · closed 2020-05-11T20:00:38
https://api.github.com/repos/huggingface/datasets/issues/71 · https://github.com/huggingface/datasets/pull/71

This PR fixes Yacine's bug. According to [this](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2GB. Therefore I set a default batch size of 100,000 examples per batch. In general it shouldn't exce...
**#70 · adding RACE, QASC, Super_glue and Tiny_shakespear datasets** (pull request, closed) · mariamabarham · 1 comment · labels: []
id 615,679,102 · created 2020-05-11T08:07:49 · updated 2020-05-12T13:21:52 · closed 2020-05-12T13:21:51
https://api.github.com/repos/huggingface/datasets/issues/70 · https://github.com/huggingface/datasets/pull/70

**#69 · fix cache dir in builder tests** (pull request, closed) · lhoestq · 2 comments · labels: []
id 615,450,534 · created 2020-05-10T18:39:21 · updated 2020-05-11T07:19:30 · closed 2020-05-11T07:19:28
https://api.github.com/repos/huggingface/datasets/issues/69 · https://github.com/huggingface/datasets/pull/69

minor fix

**#68 · [CSV] re-add csv** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,882,655 · created 2020-05-08T17:38:29 · updated 2020-05-08T17:40:48 · closed 2020-05-08T17:40:46
https://api.github.com/repos/huggingface/datasets/issues/68 · https://github.com/huggingface/datasets/pull/68

Re-adding csv under the datasets under construction to keep circle ci happy - will have to see how to include it in the tests. @lhoestq noticed that I accidentally deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729.

**#67 · [Tests] Test files locally** (pull request, closed) · patrickvonplaten · 1 comment · labels: []
id 614,798,483 · created 2020-05-08T15:02:43 · updated 2020-05-08T19:50:47 · closed 2020-05-08T15:17:00
https://api.github.com/repos/huggingface/datasets/issues/67 · https://github.com/huggingface/datasets/pull/67

This PR adds an `aws` and a `local` decorator to the tests so that tests now run on the local datasets. By default, `aws` is deactivated, `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on circle ci. **When local is activated all folders in `./datasets` are tested.** ...

**#66 · [Datasets] ReadME** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,748,552 · created 2020-05-08T13:37:43 · updated 2020-05-08T13:39:23 · closed 2020-05-08T13:39:22
https://api.github.com/repos/huggingface/datasets/issues/66 · https://github.com/huggingface/datasets/pull/66

**#65 · fix math dataset and xcopa** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,746,516 · created 2020-05-08T13:33:55 · updated 2020-05-08T13:35:41 · closed 2020-05-08T13:35:40
https://api.github.com/repos/huggingface/datasets/issues/65 · https://github.com/huggingface/datasets/pull/65

- fixes math dataset and xcopa, uploaded both of them to S3

**#64 · [Datasets] Make master ready for datasets adding** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,737,057 · created 2020-05-08T13:17:00 · updated 2020-05-08T13:17:31 · closed 2020-05-08T13:17:30
https://api.github.com/repos/huggingface/datasets/issues/64 · https://github.com/huggingface/datasets/pull/64

Add all relevant files so that datasets can now be added on master.
**#63 · [Dataset scripts] add all datasets scripts** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,666,365 · created 2020-05-08T10:50:15 · updated 2020-05-08T17:39:22 · closed 2020-05-08T11:34:00
https://api.github.com/repos/huggingface/datasets/issues/63 · https://github.com/huggingface/datasets/pull/63

As mentioned, we can have the canonical datasets in the master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datasets. @mariamabarham @lhoestq @thomwolf - what do you think? If this is ok for you, I can sync up the master with the `add_datase...

**#62 · [Cached Path] Better error message** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,630,830 · created 2020-05-08T09:39:47 · updated 2020-05-08T09:45:47 · closed 2020-05-08T09:45:47
https://api.github.com/repos/huggingface/datasets/issues/62 · https://github.com/huggingface/datasets/pull/62

IMO returning `None` in this function only leads to confusion and is never helpful.

**#61 · [Load] rename setup_module to prepare_module** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,607,474 · created 2020-05-08T08:54:22 · updated 2020-05-08T08:56:32 · closed 2020-05-08T08:56:16
https://api.github.com/repos/huggingface/datasets/issues/61 · https://github.com/huggingface/datasets/pull/61

Rename setup_module to prepare_module due to issues with pytest's `setup_module` function. See: PR #59.

**#60 · Update to simplify some datasets conversion** (pull request, closed) · thomwolf · 6 comments · labels: []
id 614,372,553 · created 2020-05-07T22:02:24 · updated 2020-05-08T10:38:32 · closed 2020-05-08T10:18:24
https://api.github.com/repos/huggingface/datasets/issues/60 · https://github.com/huggingface/datasets/pull/60

This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626
We could also change (not included in this PR yet):
- `supervized_keys` to make t...
**#59 · Fix tests** (pull request, closed) · thomwolf · 5 comments · labels: []
id 614,366,045 · created 2020-05-07T21:48:09 · updated 2020-05-08T10:57:57 · closed 2020-05-08T10:46:51
https://api.github.com/repos/huggingface/datasets/issues/59 · https://github.com/huggingface/datasets/pull/59

@patrickvonplaten I've broken the tests a bit with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./test...
```

**#58 · Aborted PR - Fix tests** (pull request, closed) · thomwolf · 1 comment · labels: []
id 614,362,308 · created 2020-05-07T21:40:19 · updated 2020-05-07T21:48:01 · closed 2020-05-07T21:41:27
https://api.github.com/repos/huggingface/datasets/issues/58 · https://github.com/huggingface/datasets/pull/58

@patrickvonplaten I've broken the tests a bit with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./test...
```

**#57 · Better cached path** (pull request, closed) · lhoestq · 2 comments · labels: []
id 614,261,638 · created 2020-05-07T18:36:00 · updated 2020-05-08T13:20:30 · closed 2020-05-08T13:20:28
https://api.github.com/repos/huggingface/datasets/issues/57 · https://github.com/huggingface/datasets/pull/57

### Changes:
- The `cached_path` no longer returns None if the file is missing/the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error)
- Fix requests to the firebase API that doesn't handle HEAD requests...
- Allow c...

**#56 · [Dataset] Tester add mock function** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 614,236,869 · created 2020-05-07T17:51:37 · updated 2020-05-07T17:52:51 · closed 2020-05-07T17:52:50
https://api.github.com/repos/huggingface/datasets/issues/56 · https://github.com/huggingface/datasets/pull/56

Need to add an empty `extract()` function to make the `hansard` dataset test work.

**#55 · Beam datasets** (pull request, closed) · lhoestq · 4 comments · labels: []
id 613,968,072 · created 2020-05-07T11:04:32 · updated 2020-05-11T07:20:02 · closed 2020-05-11T07:20:00
https://api.github.com/repos/huggingface/datasets/issues/55 · https://github.com/huggingface/datasets/pull/55

# Beam datasets
## Intro
Beam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections). The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are:
- the `DirectRunner` to run the p...
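A minimal sketch of choosing a runner when loading a Beam-prepared dataset; the `beam_runner` keyword matches what later releases document and is an assumption for the version discussed in this PR:

```python
import nlp

# DirectRunner preprocesses locally; fine for small configs only.
wiki = nlp.load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')
```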
**#54 · [Tests] Improved Error message for dummy folder structure** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 613,513,348 · created 2020-05-06T18:11:48 · updated 2020-05-06T18:13:00 · closed 2020-05-06T18:12:59
https://api.github.com/repos/huggingface/datasets/issues/54 · https://github.com/huggingface/datasets/pull/54

Improved error message

**#53 · [Features] Typo in generate_from_dict** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 613,436,158 · created 2020-05-06T16:05:23 · updated 2020-05-07T15:28:46 · closed 2020-05-07T15:28:45
https://api.github.com/repos/huggingface/datasets/issues/53 · https://github.com/huggingface/datasets/pull/53

Change the `isinstance` test in features when generating features from a dict.

**#52 · allow dummy folder structure to handle dict of lists** (pull request, closed) · patrickvonplaten · 0 comments · labels: []
id 613,339,071 · created 2020-05-06T13:54:35 · updated 2020-05-06T13:55:19 · closed 2020-05-06T13:55:18
https://api.github.com/repos/huggingface/datasets/issues/52 · https://github.com/huggingface/datasets/pull/52

`esnli.py` needs that extension of the dummy data testing.

**#51 · [Testing] Improved testing structure** (pull request, closed) · patrickvonplaten · 1 comment · labels: []
id 613,266,668 · created 2020-05-06T12:03:07 · updated 2020-05-07T22:07:19 · closed 2020-05-06T13:20:18
https://api.github.com/repos/huggingface/datasets/issues/51 · https://github.com/huggingface/datasets/pull/51

This PR refactors the test design a bit and puts the mock download manager in the `utils` files as it is just a test helper class. As @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp. This PR tries to change that to some extent. It follows the following logic for the `dumm...