Each record in this dataset is one GitHub issue or pull request from the `huggingface/datasets` repository. The columns, their types, and the ranges observed in the data are:

| Column | Type | Observed range / values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
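A dataset with this schema can be loaded and inspected with the `datasets` library. The snippet below is a minimal sketch only; the repository id is a hypothetical placeholder, not the actual location of this dump, and only standard `datasets` calls are used.

```python
from datasets import load_dataset

# Hypothetical dataset path -- substitute wherever this issues dump is actually hosted.
issues = load_dataset("user/github-issues", split="train")

# The feature dict mirrors the schema table above.
print(issues.features)

# Each record is one GitHub issue or pull request from huggingface/datasets.
first = issues[0]
print(first["number"], first["title"], first["state"], first["is_pull_request"])
```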
The records below are individual issues and pull requests from `huggingface/datasets`. Every record's `url` field follows the pattern `https://api.github.com/repos/huggingface/datasets/issues/{number}`, and the number column links to each record's `html_url`:

| id | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | is_pull_request | body |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 652,424,048 | [351](https://github.com/huggingface/datasets/pull/351) | add pandas dataset | closed | 0 | 2020-07-07T15:38:07 | 2020-07-08T14:15:16 | 2020-07-08T14:15:15 | lhoestq | | true | Create a dataset from serialized pandas dataframes. Usage: ```python from nlp import load_dataset dset = load_dataset("pandas", data_files="df.pkl")["train"] ``` |
| 652,398,691 | [350](https://github.com/huggingface/datasets/pull/350) | add from_pandas and from_dict | closed | 0 | 2020-07-07T15:03:53 | 2020-07-08T14:14:33 | 2020-07-08T14:14:32 | lhoestq | | true | I added two new methods to the `Dataset` class: - `from_pandas()` to create a dataset from a pandas dataframe - `from_dict()` to create a dataset from a dictionary (keys = columns) It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` funcitons to do so. It is also possible to specify the features types v... |
| 652,231,571 | [349](https://github.com/huggingface/datasets/pull/349) | Hyperpartisan news detection | closed | 2 | 2020-07-07T11:06:37 | 2020-07-07T20:47:27 | 2020-07-07T14:57:11 | ghomasHudson | | true | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and why kinds of biases they display. Implementation notes: - As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before... |
| 652,158,308 | [348](https://github.com/huggingface/datasets/pull/348) | Add OSCAR dataset | closed | 20 | 2020-07-07T09:22:07 | 2021-05-03T22:07:08 | 2021-02-09T10:19:19 | pjox | | true | I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it 😅 Thanks! |
| 652,106,567 | [347](https://github.com/huggingface/datasets/issues/347) | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | closed | 10 | 2020-07-07T08:14:23 | 2020-09-07T14:51:45 | 2020-09-07T14:51:45 | cosmeowpawlitan | dataset bug | false | ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues... |
| 652,044,151 | [346](https://github.com/huggingface/datasets/pull/346) | Add emotion dataset | closed | 9 | 2020-07-07T06:35:41 | 2022-05-30T15:16:44 | 2020-07-13T14:39:38 | lewtun | | true | Hello 🤗 team! I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/me... |
| 651,761,201 | [345](https://github.com/huggingface/datasets/issues/345) | Supporting documents in ELI5 | closed | 2 | 2020-07-06T19:14:13 | 2020-10-27T15:38:45 | 2020-10-27T15:38:45 | saverymax | | false | I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to ... |
| 651,495,246 | [344](https://github.com/huggingface/datasets/pull/344) | Search qa | closed | 1 | 2020-07-06T12:23:16 | 2020-07-16T08:58:16 | 2020-07-16T08:58:16 | mariamabarham | | true | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config name: - raw_jeopardy: raw data - train_test_val: which is the splitted version #336 |
| 651,419,630 | [343](https://github.com/huggingface/datasets/pull/343) | Fix nested tensorflow format | closed | 0 | 2020-07-06T10:13:45 | 2020-07-06T13:11:52 | 2020-07-06T13:11:51 | lhoestq | | true | In #339 and #337 we are thinking about adding a way to export datasets to tfrecords. However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using a nested map operations to convert features to `tf.ragged.constant`. I also added ... |
| 651,333,194 | [342](https://github.com/huggingface/datasets/issues/342) | Features should be updated when `map()` changes schema | closed | 1 | 2020-07-06T08:03:23 | 2020-07-23T10:15:16 | 2020-07-23T10:15:16 | thomwolf | | false | `dataset.map()` can change the schema and column names. We should update the features in this case (with what is possible to infer). |
| 650,611,969 | [341](https://github.com/huggingface/datasets/pull/341) | add fever dataset | closed | 0 | 2020-07-03T13:53:07 | 2020-07-06T13:03:48 | 2020-07-06T13:03:47 | mariamabarham | | true | This PR add the FEVER dataset https://fever.ai/ used in with the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336 |
| 650,533,920 | [340](https://github.com/huggingface/datasets/pull/340) | Update cfq.py | closed | 1 | 2020-07-03T11:23:19 | 2020-07-03T12:33:50 | 2020-07-03T12:33:50 | brainshawn | | true | Make the dataset name consistent with in the paper: Compositional Freebase Question => Compositional Freebase Questions. |
| 650,156,468 | [339](https://github.com/huggingface/datasets/pull/339) | Add dataset.export() to TFRecords | closed | 18 | 2020-07-02T19:26:27 | 2020-07-22T09:16:12 | 2020-07-22T09:16:12 | jarednielsen | | true | Fixes https://github.com/huggingface/nlp/issues/337 Some design decisions: - Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitt... |
| 650,057,253 | [338](https://github.com/huggingface/datasets/pull/338) | Run `make style` | closed | 0 | 2020-07-02T16:19:47 | 2020-07-02T18:03:10 | 2020-07-02T18:03:10 | jarednielsen | | true | These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier. |
| 650,035,887 | [337](https://github.com/huggingface/datasets/issues/337) | [Feature request] Export Arrow dataset to TFRecords | closed | 0 | 2020-07-02T15:47:12 | 2020-07-22T09:16:12 | 2020-07-22T09:16:12 | jarednielsen | | false | The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API: ```python # use these existing methods ds = load_dataset("wikitext", "wik... |
| 649,914,203 | [336](https://github.com/huggingface/datasets/issues/336) | [Dataset requests] New datasets for Open Question Answering | closed | 0 | 2020-07-02T13:03:03 | 2020-07-16T09:04:22 | 2020-07-16T09:04:22 | thomwolf | help wanted, dataset request | false | We are still a few datasets missing for Open-Question Answering which is currently a field in strong development. Namely, it would be really nice to add: - WebQuestions (Berant et al., 2013) [done] - CuratedTrec (Baudis et al. 2015) [not open-source] - MS-MARCO (NGuyen et al. 2016) [done] - SearchQA (Dunn et al.... |
| 649,765,179 | [335](https://github.com/huggingface/datasets/pull/335) | BioMRC Dataset presented in BioNLP 2020 ACL Workshop | closed | 2 | 2020-07-02T09:03:41 | 2020-07-15T08:02:07 | 2020-07-15T08:02:07 | PetrosStav | | true | |
| 649,661,791 | [334](https://github.com/huggingface/datasets/pull/334) | Add dataset.shard() method | closed | 1 | 2020-07-02T06:05:19 | 2020-07-06T12:35:36 | 2020-07-06T12:35:36 | jarednielsen | | true | Fixes https://github.com/huggingface/nlp/issues/312 |
| 649,236,516 | [333](https://github.com/huggingface/datasets/pull/333) | fix variable name typo | closed | 2 | 2020-07-01T19:13:50 | 2020-07-24T15:43:31 | 2020-07-24T08:32:16 | stas00 | | true | |
| 649,140,135 | [332](https://github.com/huggingface/datasets/pull/332) | Add wiki_dpr | closed | 2 | 2020-07-01T17:12:00 | 2020-07-06T12:21:17 | 2020-07-06T12:21:16 | lhoestq | | true | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists in 21M passages from the english wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Note on the implementation: - There are two configs: with and without the embeddings (73G... |
| 648,533,199 | [331](https://github.com/huggingface/datasets/issues/331) | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | closed | 5 | 2020-06-30T22:21:33 | 2020-07-09T13:03:40 | 2020-07-09T13:03:40 | jxmorris12 | dataset bug | false | ``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in... |
| 648,525,720 | [330](https://github.com/huggingface/datasets/pull/330) | Doc red | closed | 0 | 2020-06-30T22:05:31 | 2020-07-06T12:10:39 | 2020-07-05T12:27:29 | ghomasHudson | | true | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes: - There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to ... |
| 648,446,979 | [329](https://github.com/huggingface/datasets/issues/329) | [Bug] FileLock dependency incompatible with filesystem | closed | 11 | 2020-06-30T19:45:31 | 2024-12-26T15:13:39 | 2020-06-30T21:33:06 | jarednielsen | | false | I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like thi... |
| 648,326,841 | [328](https://github.com/huggingface/datasets/issues/328) | Fork dataset | closed | 5 | 2020-06-30T16:42:53 | 2020-07-06T21:43:59 | 2020-07-06T21:43:59 | timothyjlaurent | | false | We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and... |
| 648,312,858 | [327](https://github.com/huggingface/datasets/pull/327) | set seed for suffling tests | closed | 0 | 2020-06-30T16:21:34 | 2020-07-02T08:34:05 | 2020-07-02T08:34:04 | lhoestq | | true | Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)` |
| 648,126,103 | [326](https://github.com/huggingface/datasets/issues/326) | Large dataset in Squad2-format | closed | 8 | 2020-06-30T12:18:59 | 2020-07-09T09:01:50 | 2020-07-09T09:01:50 | flozi00 | | false | At the moment we are building an large question answering dataset and think about sharing it with the huggingface community. Caused the computing power we splitted it into multiple tiles, but they are all in the same format. Right now the most important facts about are this: - Contexts: 1.047.671 - questions: 1.677... |
| 647,601,592 | [325](https://github.com/huggingface/datasets/pull/325) | Add SQuADShifts dataset | closed | 1 | 2020-06-29T19:11:16 | 2020-06-30T17:07:31 | 2020-06-30T17:07:31 | millerjohnp | | true | This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift. |
| 647,525,725 | [324](https://github.com/huggingface/datasets/issues/324) | Error when calculating glue score | closed | 4 | 2020-06-29T16:53:48 | 2020-07-09T09:13:34 | 2020-07-09T09:13:34 | D-i-l-r-u-k-s-h-i | | false | I was trying glue score along with other metrics here. But glue gives me this error; ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------... |
| 647,521,308 | [323](https://github.com/huggingface/datasets/pull/323) | Add package path to sys when downloading package as github archive | closed | 2 | 2020-06-29T16:46:01 | 2020-07-30T14:00:23 | 2020-07-30T14:00:23 | yjernite | | true | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh) @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importli... |
| 647,483,850 | [322](https://github.com/huggingface/datasets/pull/322) | output nested dict in get_nearest_examples | closed | 0 | 2020-06-29T15:47:47 | 2020-07-02T08:33:33 | 2020-07-02T08:33:32 | lhoestq | | true | As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example: ```python my_examples = dataset[0:10] print(type(my_examples)) # >>> dict print(my_examples["my_column"][0] # >>> this is the first element of the colum... |
| 647,271,526 | [321](https://github.com/huggingface/datasets/issues/321) | ERROR:root:mwparserfromhell | closed | 10 | 2020-06-29T11:10:43 | 2022-02-14T15:21:46 | 2022-02-14T15:21:46 | Shiro-LK | dataset bug | false | Hi, I am trying to download some wikipedia data but I got this error for spanish "es" (but there are maybe some others languages which have the same error I haven't tried all of them ). `ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token sta... |
| 647,188,167 | [320](https://github.com/huggingface/datasets/issues/320) | Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | closed | 2 | 2020-06-29T07:36:35 | 2020-06-29T14:44:42 | 2020-06-29T14:44:42 | mariamabarham | nlp-viewer | false | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dat... |
| 646,792,487 | [319](https://github.com/huggingface/datasets/issues/319) | Nested sequences with dicts | closed | 1 | 2020-06-27T23:45:17 | 2020-07-03T10:22:00 | 2020-07-03T10:22:00 | ghomasHudson | | false | Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`. The original data is in this form... |
| 646,682,840 | [318](https://github.com/huggingface/datasets/pull/318) | Multitask | closed | 18 | 2020-06-27T13:27:29 | 2022-07-06T15:19:57 | 2022-07-06T15:19:57 | ghomasHudson | | true | Following our discussion in #217, I've implemented a first working version of `MultiDataset`. There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage. I've implemented many of the `nlp.Datas... |
| 646,555,384 | [317](https://github.com/huggingface/datasets/issues/317) | Adding a dataset with multiple subtasks | closed | 1 | 2020-06-26T23:14:19 | 2020-10-27T15:36:52 | 2020-10-27T15:36:52 | erickrf | | false | I intent to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation, each of which having different language pairs, and some of the data reused in different subtasks. For example, in [QE 201... |
| 646,366,450 | [316](https://github.com/huggingface/datasets/pull/316) | add AG News dataset | closed | 1 | 2020-06-26T16:11:58 | 2020-06-30T09:58:08 | 2020-06-30T08:31:55 | jxmorris12 | | true | adds support for the AG-News topic classification dataset |
| 645,888,943 | [315](https://github.com/huggingface/datasets/issues/315) | [Question] Best way to batch a large dataset? | open | 11 | 2020-06-25T22:30:20 | 2020-10-27T15:38:17 | null | jarednielsen | generic discussion | false | I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow: ```python train_tf_dataset = train_tf_dataset.filter(... |
| 645,461,174 | [314](https://github.com/huggingface/datasets/pull/314) | Fixed singlular very minor spelling error | closed | 1 | 2020-06-25T10:45:59 | 2020-06-26T08:46:41 | 2020-06-25T12:43:59 | SchizoidBat | | true | An instance of "independantly" was changed to "independently". That's all. |
| 645,390,088 | [313](https://github.com/huggingface/datasets/pull/313) | Add MWSC | closed | 1 | 2020-06-25T09:22:02 | 2020-06-30T08:28:11 | 2020-06-30T08:28:11 | ghomasHudson | | true | Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it it outside of the benchmark, but it is general purpose. Code is heavily bo... |
| 645,025,561 | [312](https://github.com/huggingface/datasets/issues/312) | [Feature request] Add `shard()` method to dataset | closed | 2 | 2020-06-24T22:48:33 | 2020-07-06T12:35:36 | 2020-07-06T12:35:36 | jarednielsen | | false | Currently, to shard a dataset into 10 pieces on different ranks, you can run ```python rank = 3 # for example size = 10 dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]") ``` However, this breaks down if you have a number of ranks that doesn't divide cleanly... |
| 645,013,131 | [311](https://github.com/huggingface/datasets/pull/311) | Add qa_zre | closed | 0 | 2020-06-24T22:17:22 | 2020-06-29T16:37:38 | 2020-06-29T16:37:38 | ghomasHudson | | true | Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/). A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`. |
| 644,806,720 | [310](https://github.com/huggingface/datasets/pull/310) | add wikisql | closed | 1 | 2020-06-24T18:00:35 | 2020-06-25T12:32:25 | 2020-06-25T12:32:25 | ghomasHudson | | true | Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications. - ... |
| 644,783,822 | [309](https://github.com/huggingface/datasets/pull/309) | Add narrative qa | closed | 11 | 2020-06-24T17:26:18 | 2020-09-03T09:02:10 | 2020-09-03T09:02:09 | Varal7 | | true | Test cases for dummy data don't pass Only contains data for summaries (not whole story) |
| 644,195,251 | [308](https://github.com/huggingface/datasets/pull/308) | Specify utf-8 encoding for MRPC files | closed | 0 | 2020-06-23T22:44:36 | 2020-06-25T12:52:21 | 2020-06-25T12:16:10 | patpizio | | true | Fixes #307, again probably a Windows-related issue. |
| 644,187,262 | [307](https://github.com/huggingface/datasets/issues/307) | Specify encoding for MRPC | closed | 0 | 2020-06-23T22:24:49 | 2020-06-25T12:16:09 | 2020-06-25T12:16:09 | patpizio | | false | Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset: ```python dataset = nlp.load_dataset('glue', 'mrpc') ``` ```python Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache... |
| 644,176,078 | [306](https://github.com/huggingface/datasets/pull/306) | add pg19 dataset | closed | 12 | 2020-06-23T22:03:52 | 2020-07-06T07:55:59 | 2020-07-06T07:55:59 | lucidrains | | true | https://github.com/huggingface/nlp/issues/274 Add functioning PG19 dataset with dummy data `cos_e.py` was just auto-linted by `make style` |
| 644,148,149 | [305](https://github.com/huggingface/datasets/issues/305) | Importing downloaded package repository fails | closed | 0 | 2020-06-23T21:09:05 | 2020-07-30T16:44:23 | 2020-07-30T16:44:23 | yjernite | metric bug | false | The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh). Currently however, the code seems to... |
| 644,091,970 | [304](https://github.com/huggingface/datasets/issues/304) | Problem while printing doc string when instantiating multiple metrics. | closed | 0 | 2020-06-23T19:32:05 | 2020-07-22T09:50:58 | 2020-07-22T09:50:58 | codehunk628 | metric bug | false | When I load more than one metric and try to print doc string of a particular metric,. It shows the doc strings of all imported metric one after the other which looks quite confusing and clumsy. Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) Notebook for problem ... |
| 643,912,464 | [303](https://github.com/huggingface/datasets/pull/303) | allow to move files across file systems | closed | 0 | 2020-06-23T14:56:08 | 2020-06-23T15:08:44 | 2020-06-23T15:08:43 | lhoestq | | true | Users are allowed to use the `cache_dir` that they want. Therefore it can happen that we try to move files across filesystems. We were using `os.rename` that doesn't allow that, so I changed some of them to `shutil.move`. This should fix #301 |
| 643,910,418 | [302](https://github.com/huggingface/datasets/issues/302) | Question - Sign Language Datasets | closed | 3 | 2020-06-23T14:53:40 | 2020-11-25T11:25:33 | 2020-11-25T11:25:33 | AmitMY | enhancement, generic discussion | false | An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable. The metrics for sign language to text translation are the same. So, what do you think about (me, or others) adding datasets here? An exa... |
| 643,763,525 | [301](https://github.com/huggingface/datasets/issues/301) | Setting cache_dir gives error on wikipedia download | closed | 2 | 2020-06-23T11:31:44 | 2020-06-24T07:05:07 | 2020-06-24T07:05:07 | hallvagi | | false | First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError ... |
| 643,688,304 | [300](https://github.com/huggingface/datasets/pull/300) | Fix bertscore references | closed | 0 | 2020-06-23T09:38:59 | 2020-06-23T14:47:38 | 2020-06-23T14:47:37 | lhoestq | | true | I added some type checking for metrics. There was an issue where a metric could interpret a string a a list. A `ValueError` is raised if a string is given instead of a list. Moreover I added support for both strings and lists of strings for `references` in `bertscore`, as it is the case in the original code. Both... |
| 643,611,557 | [299](https://github.com/huggingface/datasets/pull/299) | remove some print in snli file | closed | 1 | 2020-06-23T07:46:06 | 2020-06-23T08:10:46 | 2020-06-23T08:10:44 | mariamabarham | | true | This PR removes unwanted `print` statements in some files such as `snli.py` |
| 643,603,804 | [298](https://github.com/huggingface/datasets/pull/298) | Add searchable datasets | closed | 8 | 2020-06-23T07:33:03 | 2020-06-26T07:50:44 | 2020-06-26T07:50:43 | lhoestq | | true | # Better support for Numpy format + Add Indexed Datasets I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib. ## Better support for Numpy format New features: - New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up... |
| 643,444,625 | [297](https://github.com/huggingface/datasets/issues/297) | Error in Demo for Specific Datasets | closed | 3 | 2020-06-23T00:38:42 | 2020-07-17T17:43:06 | 2020-07-17T17:43:06 | s-jse | nlp-viewer | false | Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following. ![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png) |
| 643,423,717 | [296](https://github.com/huggingface/datasets/issues/296) | snli -1 labels | closed | 4 | 2020-06-22T23:33:30 | 2020-06-23T14:41:59 | 2020-06-23T14:41:58 | jxmorris12 | | false | I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels? ``` import nlp from collections import Counter data = nlp.load_dataset('snli')['train'] print(Counter(data['label'])) Counter({0: 183416, 2: 183187, 1: 182764, -1: 785}) ``` |
| 643,245,412 | [295](https://github.com/huggingface/datasets/issues/295) | Improve input warning for evaluation metrics | closed | 0 | 2020-06-22T17:28:57 | 2020-06-23T14:47:37 | 2020-06-23T14:47:37 | Tiiiger | | false | Hi, I am the author of `bert_score`. Recently, we received [ an issue ](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes inpu... |
| 643,181,179 | [294](https://github.com/huggingface/datasets/issues/294) | Cannot load arxiv dataset on MacOS? | closed | 4 | 2020-06-22T15:46:55 | 2020-06-30T15:25:10 | 2020-06-30T15:25:10 | JohnGiorgi | dataset bug | false | I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with: ```python arxiv = nlp.load_dataset("scientific_papers", "arxiv") ``` I get the following stack trace: ```bash JSONDecodeError Traceback (most recen... |
| 642,942,182 | [293](https://github.com/huggingface/datasets/pull/293) | Don't test community datasets | closed | 0 | 2020-06-22T10:15:33 | 2020-06-22T11:07:00 | 2020-06-22T11:06:59 | lhoestq | | true | This PR disables testing for community datasets on aws. It should fix the CI that is currently failing. |
| 642,897,797 | [292](https://github.com/huggingface/datasets/pull/292) | Update metadata for x_stance dataset | closed | 3 | 2020-06-22T09:13:26 | 2020-06-23T08:07:24 | 2020-06-23T08:07:24 | jvamvas | | true | Thank you for featuring the x_stance dataset in your library. This PR updates some metadata: - Citation: Replace preprint with proceedings - URL: Use a URL with long-term availability |
| 642,688,450 | [291](https://github.com/huggingface/datasets/pull/291) | break statement not required | closed | 3 | 2020-06-22T01:40:55 | 2020-06-23T17:57:58 | 2020-06-23T09:37:02 | mayurnewase | | true | |
| 641,978,286 | [290](https://github.com/huggingface/datasets/issues/290) | ConnectionError - Eli5 dataset download | closed | 2 | 2020-06-19T13:40:33 | 2020-06-20T13:22:24 | 2020-06-20T13:22:24 | JovanNj | | false | Hi, I have a problem with downloading Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow I would appreciate if you could help me with this issue. |
| 641,934,194 | [289](https://github.com/huggingface/datasets/pull/289) | update xsum | closed | 3 | 2020-06-19T12:28:32 | 2020-06-22T13:27:26 | 2020-06-22T07:20:07 | mariamabarham | | true | This PR makes the following update to the xsum dataset: - Manual download is not required anymore - dataset can be loaded as follow: `nlp.load_dataset('xsum')` **Important** Instead of using on outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum... |
| 641,888,610 | [288](https://github.com/huggingface/datasets/issues/288) | Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill' | closed | 5 | 2020-06-19T11:01:22 | 2020-06-21T09:05:11 | 2020-06-21T09:05:11 | wutong8023 | | false | /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /Users/... |
| 641,800,227 | [287](https://github.com/huggingface/datasets/pull/287) | fix squad_v2 metric | closed | 0 | 2020-06-19T08:24:46 | 2020-06-19T08:33:43 | 2020-06-19T08:33:41 | lhoestq | | true | Fix #280 The imports were wrong |
| 641,585,758 | [286](https://github.com/huggingface/datasets/pull/286) | Add ANLI dataset. | closed | 1 | 2020-06-18T22:27:30 | 2020-06-22T12:23:27 | 2020-06-22T12:23:27 | easonnie | | true | I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and push the code for ANLI. Please let me know if there are any errors. |
| 641,360,702 | [285](https://github.com/huggingface/datasets/pull/285) | Consistent formatting of citations | closed | 1 | 2020-06-18T16:25:23 | 2020-06-22T08:09:25 | 2020-06-22T08:09:24 | mariamabarham | | true | #283 |
| 641,337,217 | [284](https://github.com/huggingface/datasets/pull/284) | Fix manual download instructions | closed | 5 | 2020-06-18T15:59:57 | 2020-06-19T08:24:21 | 2020-06-19T08:24:19 | patrickvonplaten | | true | This PR replaces the static `DatasetBulider` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`. Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs. After some brainstorming with @mariamabarham and @lhoestq... |
| 641,270,439 | [283](https://github.com/huggingface/datasets/issues/283) | Consistent formatting of citations | closed | 0 | 2020-06-18T14:48:45 | 2020-06-22T17:30:46 | 2020-06-22T17:30:46 | srush | | false | The citations are all of a different format, some have "```" and have text inside, others are proper bibtex. Can we make it so that they all are proper citations, i.e. parse by the bibtex spec: https://bibtexparser.readthedocs.io/en/master/ |
| 641,217,759 | [282](https://github.com/huggingface/datasets/pull/282) | Update dataset_info from gcs | closed | 0 | 2020-06-18T13:41:15 | 2020-06-18T16:24:52 | 2020-06-18T16:24:51 | lhoestq | | true | Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contain the info for each config). Indeed local fi... |
| 641,067,856 | [281](https://github.com/huggingface/datasets/issues/281) | Private/sensitive data | closed | 3 | 2020-06-18T09:47:27 | 2020-06-20T13:15:12 | 2020-06-20T13:15:12 | MFreidank | | false | Hi all, Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch. Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information. Is there support/a plan to support such data with NLP, e.g. by readin... |
| 640,677,615 | [280](https://github.com/huggingface/datasets/issues/280) | Error with SquadV2 Metrics | closed | 0 | 2020-06-17T19:10:54 | 2020-06-19T08:33:41 | 2020-06-19T08:33:41 | avinregmi | | false | I can't seem to import squad v2 metrics. **squad_metric = nlp.load_metric('squad_v2')** **This throws me an error.:** ``` ImportError Traceback (most recent call last) <ipython-input-8-170b6a170555> in <module> ----> 1 squad_metric = nlp.load_metric('squad_v2') ~/env/lib6... |
| 640,611,692 | [279](https://github.com/huggingface/datasets/issues/279) | Dataset Preprocessing Cache with .map() function not working as expected | closed | 5 | 2020-06-17T17:17:21 | 2021-07-06T21:43:28 | 2021-04-18T23:43:49 | sarahwie | | false | I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system. Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I ... |
| 640,518,917 | [278](https://github.com/huggingface/datasets/issues/278) | MemoryError when loading German Wikipedia | closed | 7 | 2020-06-17T15:06:21 | 2020-06-19T12:53:02 | 2020-06-19T12:53:02 | gregburman | | false | Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :) I'm trying to download the German Wikipedia dataset as follows: ``` wiki = nlp.load_dataset("wikip... |
| 640,163,053 | [277](https://github.com/huggingface/datasets/issues/277) | Empty samples in glue/qqp | closed | 2 | 2020-06-17T05:54:52 | 2020-06-21T00:21:45 | 2020-06-21T00:21:45 | richarddwang | | false | ``` qqp = nlp.load_dataset('glue', 'qqp') print(qqp['train'][310121]) print(qqp['train'][362225]) ``` ``` {'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137} {'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246} ``` Notice that que... |
| 639,490,858 | [276](https://github.com/huggingface/datasets/pull/276) | Fix metric compute (original_instructions missing) | closed | 2 | 2020-06-16T08:52:01 | 2020-06-18T07:41:45 | 2020-06-18T07:41:44 | lhoestq | | true | When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset. However metrics load data the same way but don't need instructions (we use one single file). In this PR I just make `original_instructions` optional when reading files to load a `Datas... |
| 639,439,052 | [275](https://github.com/huggingface/datasets/issues/275) | NonMatchingChecksumError when loading pubmed dataset | closed | 1 | 2020-06-16T07:31:51 | 2020-06-19T07:37:07 | 2020-06-19T07:37:07 | DavideStenner | dataset bug | false | I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`. The error is: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-7742dea167d0> in <module... |
| 639,156,625 | [274](https://github.com/huggingface/datasets/issues/274) | PG-19 | closed | 4 | 2020-06-15T21:02:26 | 2020-07-06T15:35:02 | 2020-07-06T15:35:02 | lucidrains | dataset request | false | Hi, and thanks for all your open-sourced work, as always! I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling. |
| 638,968,054 | [273](https://github.com/huggingface/datasets/pull/273) | update cos_e to add cos_e v1.0 | closed | 0 | 2020-06-15T16:03:22 | 2020-06-16T08:25:54 | 2020-06-16T08:25:52 | mariamabarham | | true | This PR updates the cos_e dataset to add v1.0 as requested here #163 @nazneenrajani |
| 638,307,313 | [272](https://github.com/huggingface/datasets/pull/272) | asd | closed | 0 | 2020-06-14T08:20:38 | 2020-06-14T09:16:41 | 2020-06-14T09:16:41 | sn696 | | true | |
| 638,135,754 | [271](https://github.com/huggingface/datasets/pull/271) | Fix allociné dataset configuration | closed | 6 | 2020-06-13T10:12:10 | 2020-06-18T07:41:21 | 2020-06-18T07:41:20 | TheophileBlard | | true | This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with : ```python dataset = load_dataset('allocine', 'allocine') ``` This is redundant, as there is only one "dataset configuration", and should only be: ```python dataset = load_dataset('allocine') ``` This ... |
| 638,121,617 | [270](https://github.com/huggingface/datasets/issues/270) | c4 dataset is not viewable in nlpviewer demo | closed | 1 | 2020-06-13T08:26:16 | 2020-10-27T15:35:29 | 2020-10-27T15:35:13 | rajarsheem | nlp-viewer | false | I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/) ```python ModuleNotFoundError: No module named 'langdetect' Traceback: File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__d... |
| 638,106,774 | [269](https://github.com/huggingface/datasets/issues/269) | Error in metric.compute: missing `original_instructions` argument | closed | 0 | 2020-06-13T06:26:54 | 2020-06-18T07:41:44 | 2020-06-18T07:41:44 | zphang | metric bug | false | I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example: ```python import nlp rte_metric = nlp.load_metric('glue', name="rte") rte_metric.compute( [0, 0, 1, 1], [0, 1, 0, 1], ) ``` ``` 181 # Read the predictio... |
| 637,848,056 | [268](https://github.com/huggingface/datasets/pull/268) | add Rotten Tomatoes Movie Review sentences sentiment dataset | closed | 1 | 2020-06-12T15:53:59 | 2020-06-18T07:46:24 | 2020-06-18T07:46:23 | jxmorris12 | | true | Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/ |
| 637,415,545 | [267](https://github.com/huggingface/datasets/issues/267) | How can I load/find WMT en-romanian? | closed | 1 | 2020-06-12T01:09:37 | 2020-06-19T08:24:19 | 2020-06-19T08:24:19 | sshleifer | | false | I believe it is from `wmt16` When I run ```python wmt = nlp.load_dataset('wmt16') ``` I get: ```python AssertionError: The dataset wmt16 with config cs-en requires manual data. Please follow the manual download instructions: Some of the wmt configs here, require a manual download. Please look into wm... |
| 637,156,392 | [266](https://github.com/huggingface/datasets/pull/266) | Add sort, shuffle, test_train_split and select methods | closed | 4 | 2020-06-11T16:22:20 | 2020-06-18T16:23:25 | 2020-06-18T16:23:24 | thomwolf | | true | Add a bunch of methods to reorder/split/select rows in a dataset: - `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constrain is that all the integers in the list must be sm... |
| 637,139,220 | [265](https://github.com/huggingface/datasets/pull/265) | Add pyarrow warning colab | closed | 0 | 2020-06-11T15:57:51 | 2020-08-02T18:14:36 | 2020-06-12T08:14:16 | lhoestq | | true | When a user installs `nlp` on google colab, then google colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow. This is an issue because `nlp` requires the updated version to work correctly. In this PR I added en error that is shown to the user in google colab if... |
| 637,106,170 | [264](https://github.com/huggingface/datasets/pull/264) | Fix small issues creating dataset | closed | 0 | 2020-06-11T15:20:16 | 2020-06-12T08:15:57 | 2020-06-12T08:15:56 | lhoestq | | true | Fix many small issues mentioned in #249: - don't force to install apache beam for commands - fix None cache dir when using `dl_manager.download_custom` - added new extras in `setup.py` named `dev` that contains tests and quality dependencies - mock dataset sizes when running tests with dummy data - add a note abou... |
| 637,028,015 | [263](https://github.com/huggingface/datasets/issues/263) | [Feature request] Support for external modality for language datasets | closed | 5 | 2020-06-11T13:42:18 | 2022-02-10T13:26:35 | 2022-02-10T13:26:35 | aleSuglia | enhancement, generic discussion | false | # Background In recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller,2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https://arxiv.org/abs/2004.10... |
| 636,702,849 | [262](https://github.com/huggingface/datasets/pull/262) | Add new dataset ANLI Round 1 | closed | 1 | 2020-06-11T04:14:57 | 2020-06-12T22:03:03 | 2020-06-12T22:03:03 | easonnie | | true | Adding new dataset [ANLI](https://github.com/facebookresearch/anli/). I'm not familiar with how to add new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future with potentially different format. I think it will be better to separate them. |
| 636,372,380 | [261](https://github.com/huggingface/datasets/issues/261) | Downloading dataset error with pyarrow.lib.RecordBatch | closed | 2 | 2020-06-10T16:04:19 | 2020-06-11T14:35:12 | 2020-06-11T14:35:12 | cuent | | false | I am trying to download `sentiment140` and I have the following error ``` /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=... |
| 636,261,118 | [260](https://github.com/huggingface/datasets/pull/260) | Consistency fixes | closed | 0 | 2020-06-10T13:44:42 | 2020-06-11T10:34:37 | 2020-06-11T10:34:36 | julien-c | | true | A few bugs I've found while hacking |
| 636,239,529 | [259](https://github.com/huggingface/datasets/issues/259) | documentation missing how to split a dataset | closed | 7 | 2020-06-10T13:18:13 | 2023-03-14T13:56:07 | 2020-06-18T22:20:24 | fotisj | | false | I am trying to understand how to split a dataset ( as arrow_dataset). I know I can do something like this to access a split which is already in the original dataset : `ds_test = nlp.load_dataset('imdb, split='test') ` But how can I split ds_test into a test and a validation set (without reading the data into m... |
| 635,859,525 | [258](https://github.com/huggingface/datasets/issues/258) | Why is dataset after tokenization far more larger than the orginal one ? | closed | 4 | 2020-06-10T01:27:07 | 2020-06-10T12:46:34 | 2020-06-10T12:46:34 | richarddwang | | false | I tokenize wiki dataset by `map` and cache the results. ``` def tokenize_tfm(example): example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text'])) return example wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train'] wiki.map(token... |
| 635,620,979 | [257](https://github.com/huggingface/datasets/issues/257) | Tokenizer pickling issue fix not landed in `nlp` yet? | closed | 2 | 2020-06-09T17:12:34 | 2020-06-10T21:45:32 | 2020-06-09T17:26:53 | sarahwie | | false | Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function: ``` dataset = nlp.load_dataset('cos_e') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir) for split in datase... |
| 635,596,295 | [256](https://github.com/huggingface/datasets/issues/256) | [Feature request] Add a feature to dataset | closed | 5 | 2020-06-09T16:38:12 | 2020-06-09T16:51:42 | 2020-06-09T16:51:42 | sarahwie | | false | Is there a straightforward way to add a field to the arrow_dataset, prior to performing map? |
| 635,300,822 | [255](https://github.com/huggingface/datasets/pull/255) | Add dataset/piaf | closed | 1 | 2020-06-09T10:16:01 | 2020-06-12T08:31:27 | 2020-06-12T08:31:27 | RachelKer | | true | Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf) |
| 635,057,568 | [254](https://github.com/huggingface/datasets/issues/254) | [Feature request] Be able to remove a specific sample of the dataset | closed | 1 | 2020-06-09T02:22:13 | 2020-06-09T08:41:38 | 2020-06-09T08:41:38 | astariul | | false | As mentioned in #117, it's currently not possible to remove a sample of the dataset. But it is a important use case : After applying some preprocessing, some samples might be empty for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so when iterating the datase... |
| 634,791,939 | [253](https://github.com/huggingface/datasets/pull/253) | add flue dataset | closed | 10 | 2020-06-08T17:11:09 | 2023-09-24T09:46:03 | 2020-07-16T07:50:59 | mariamabarham | | true | This PR add the Flue dataset as requested in this issue #223 . @lbourdois made a detailed description in that issue. |
| 634,563,239 | [252](https://github.com/huggingface/datasets/issues/252) | NonMatchingSplitsSizesError error when reading the IMDB dataset | closed | 4 | 2020-06-08T12:26:24 | 2021-08-27T15:20:58 | 2020-06-08T14:01:26 | antmarakis | | false | Hi! I am trying to load the `imdb` dataset with this line: `dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')` but I am getting the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mounts/Users/cisintern/antmarakis/anaconda3/... |
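To give a sense of how records like those above are typically consumed, here is a small, hedged sketch that separates issues from pull requests and tallies their states. The dataset path is an assumed placeholder; the column names (`is_pull_request`, `state`) follow the schema at the top of this section.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder path; point this at wherever the issues dump is actually hosted.
ds = load_dataset("user/github-issues", split="train")

# is_pull_request distinguishes pull requests from plain issues.
pulls = ds.filter(lambda row: row["is_pull_request"])
issues = ds.filter(lambda row: not row["is_pull_request"])

# state is one of two classes (open / closed).
print("pull requests:", Counter(pulls["state"]))
print("issues:", Counter(issues["state"]))
```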