Dataset schema (observed value ranges, string lengths, and class counts across the rows below):

| Column          | Type         | Min / shortest       | Max / longest        |
|-----------------|--------------|----------------------|----------------------|
| id              | int64        | 599M                 | 3.29B                |
| url             | string       | 58 chars             | 61 chars             |
| html_url        | string       | 46 chars             | 51 chars             |
| number          | int64        | 1                    | 7.72k                |
| title           | string       | 1 char               | 290 chars            |
| state           | string       | 2 classes            |                      |
| comments        | int64        | 0                    | 70                   |
| created_at      | timestamp[s] | 2020-04-14 10:18:02  | 2025-08-05 09:28:51  |
| updated_at      | timestamp[s] | 2020-04-27 16:04:17  | 2025-08-05 11:39:56  |
| closed_at       | timestamp[s] | 2020-04-14 12:01:40  | 2025-08-01 05:15:45  |
| user_login      | string       | 3 chars              | 26 chars             |
| labels          | list         | 0 items              | 4 items              |
| body            | string       | 0 chars              | 228k chars           |
| is_pull_request | bool         | 2 classes            |                      |
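For orientation, a minimal sketch of how rows with this schema could be loaded and inspected with the `datasets` library; the Hub path `user/github-issues` is a placeholder, not a real dataset name:

```python
from datasets import load_dataset

# Placeholder Hub path; substitute the actual dataset repository
issues = load_dataset("user/github-issues", split="train")

print(issues.features)  # should match the schema table above
only_issues = issues.filter(lambda row: not row["is_pull_request"])
print(only_issues[0]["title"])
```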
#655 added Winogrande debiased subset (pull request, closed, 2 comments, TevenLeScao, labels: [])
  id 705,672,208 · created 2020-09-21T14:51:08 · updated 2020-09-21T16:20:40 · closed 2020-09-21T16:16:04
  https://github.com/huggingface/datasets/pull/655 · api https://api.github.com/repos/huggingface/datasets/issues/655
  body: The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.
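Once merged, the subset should be loadable by config name; a hedged sketch, assuming the config is called `winogrande_debiased` (check the dataset card for the exact spelling):

```python
from datasets import load_dataset

# Config name assumed from the PR description
wino = load_dataset("winogrande", "winogrande_debiased", split="train")
print(wino[0])
```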
#654 Allow empty inputs in metrics (pull request, closed, 0 comments, lhoestq, labels: [])
  id 705,511,058 · created 2020-09-21T11:26:36 · updated 2020-10-06T03:51:48 · closed 2020-09-21T16:13:38
  https://github.com/huggingface/datasets/pull/654 · api https://api.github.com/repos/huggingface/datasets/issues/654
  body: There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute.
#653 handle data alteration when trying type (pull request, closed, 0 comments, lhoestq, labels: [])
  id 705,482,391 · created 2020-09-21T10:41:49 · updated 2020-09-21T16:13:06 · closed 2020-09-21T16:13:05
  https://github.com/huggingface/datasets/pull/653 · api https://api.github.com/repos/huggingface/datasets/issues/653
  body: Fix #649. The bug came from the type inference that didn't handle a weird case in PyArrow. Indeed this code runs without error but alters the data in arrow: ```python import pyarrow as pa type = pa.struct({"a": pa.struct({"b": pa.string()})}) array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}...
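A minimal sketch of the PyArrow behavior the PR describes; the silent drop of the extra field is per the PR's report, not a guarantee about current PyArrow versions:

```python
import pyarrow as pa

# The declared type omits the nested field "c"
typ = pa.struct({"a": pa.struct({"b": pa.string()})})

# Per the PR description, this conversion succeeds but silently drops "c"
arr = pa.array([{"a": {"b": "foo", "c": "bar"}}], type=typ)
print(arr.to_pylist())  # expected per the report: [{'a': {'b': 'foo'}}]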
#652 handle connection error in download_prepared_from_hf_gcs (pull request, closed, 0 comments, lhoestq, labels: [])
  id 705,390,850 · created 2020-09-21T08:21:11 · updated 2020-09-21T08:28:43 · closed 2020-09-21T08:28:42
  https://github.com/huggingface/datasets/pull/652 · api https://api.github.com/repos/huggingface/datasets/issues/652
  body: Fix #647
#651 Problem with JSON dataset format (issue, open, 2 comments, vikigenius, labels: [])
  id 705,212,034 · created 2020-09-20T23:57:14 · updated 2020-09-21T12:14:24 · closed null
  https://github.com/huggingface/datasets/issues/651 · api https://api.github.com/repos/huggingface/datasets/issues/651
  body: I have a local json dataset with the following form. { 'id01234': {'key1': value1, 'key2': value2, 'key3': value3}, 'id01235': {'key1': value1, 'key2': value2, 'key3': value3}, . . . 'id09999': {'key1': value1, 'key2': value2, 'key3': value3} } Note that instead of a list of records i...
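A possible workaround sketch for this shape of data: flatten the mapping into one record per line before loading. The file names `data.json` and `data.jsonl` are placeholders:

```python
import json
from datasets import load_dataset

# Flatten a mapping of id -> record into one JSON object per line,
# the shape the "json" loader handles out of the box
with open("data.json") as f:
    data = json.load(f)

with open("data.jsonl", "w") as f:
    for key, fields in data.items():
        f.write(json.dumps({"id": key, **fields}) + "\n")

dataset = load_dataset("json", data_files="data.jsonl")
```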
#650 dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` (issue, closed, 4 comments, richarddwang, labels: [])
  id 704,861,844 · created 2020-09-19T11:07:03 · updated 2020-09-22T11:54:10 · closed 2020-09-22T11:54:09
  https://github.com/huggingface/datasets/issues/650 · api https://api.github.com/repos/huggingface/datasets/issues/650
  body: Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` d...
#649 Inconsistent behavior in map (issue, closed, 1 comment, krandiash, labels: ["bug"])
  id 704,838,415 · created 2020-09-19T08:41:12 · updated 2020-09-21T16:13:05 · closed 2020-09-21T16:13:05
  https://github.com/huggingface/datasets/issues/649 · api https://api.github.com/repos/huggingface/datasets/issues/649
  body: I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field' consisting of two examples d...
#648 Offset overflow when multiprocessing batched map on large datasets (issue, closed, 6 comments, richarddwang, labels: ["bug"])
  id 704,753,123 · created 2020-09-19T02:15:11 · updated 2025-06-17T12:56:07 · closed 2020-09-19T16:46:31
  https://github.com/huggingface/datasets/issues/648 · api https://api.github.com/repos/huggingface/datasets/issues/648
  body: It only happened when "multiprocessing" + "batched" + "large dataset" at the same time. ``` def bprocess(examples): examples['len'] = [] for text in examples['text']: examples['len'].append(len(text)) return examples wiki.map(bprocess, batched=True, num_proc=8) ``` ``` ----------------------------...
#647 Cannot download dataset_info.json (issue, closed, 4 comments, chiyuzhang94, labels: [])
  id 704,734,764 · created 2020-09-19T01:35:15 · updated 2020-09-21T08:28:42 · closed 2020-09-21T08:28:42
  https://github.com/huggingface/datasets/issues/647 · api https://api.github.com/repos/huggingface/datasets/issues/647
  body: I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text...
#646 Fix docs typos (pull request, closed, 0 comments, mariosasko, labels: [])
  id 704,607,371 · created 2020-09-18T19:32:27 · updated 2020-09-21T16:30:54 · closed 2020-09-21T16:14:12
  https://github.com/huggingface/datasets/pull/646 · api https://api.github.com/repos/huggingface/datasets/issues/646
  body: This PR fixes a few typos in the docs and the error in the code snippet in the set_format section in docs/source/torch_tensorflow.rst. `torch.utils.data.Dataloader` expects padded batches so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add th...
#645 Don't use take on dataset table in pyarrow 1.0.x (pull request, closed, 4 comments, lhoestq, labels: [])
  id 704,542,234 · created 2020-09-18T17:31:34 · updated 2023-09-19T07:59:19 · closed 2020-09-19T16:46:31
  https://github.com/huggingface/datasets/pull/645 · api https://api.github.com/repos/huggingface/datasets/issues/645
  body: Fix #615
#644 Better windows support (pull request, closed, 1 comment, lhoestq, labels: [])
  id 704,534,501 · created 2020-09-18T17:17:36 · updated 2020-09-25T14:02:30 · closed 2020-09-25T14:02:28
  https://github.com/huggingface/datasets/pull/644 · api https://api.github.com/repos/huggingface/datasets/issues/644
  body: There are a few differences in the behavior of Python and PyArrow on Windows. For example, there are restrictions when accessing/deleting files that are open. Fix #590
#643 Caching processed dataset at wrong folder (issue, closed, 13 comments, mrm8488, labels: ["bug"])
  id 704,477,164 · created 2020-09-18T15:41:26 · updated 2022-02-16T14:53:29 · closed 2022-02-16T14:53:29
  https://github.com/huggingface/datasets/issues/643 · api https://api.github.com/repos/huggingface/datasets/issues/643
  body: Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
#642 Rename wnut fields (pull request, closed, 0 comments, lhoestq, labels: [])
  id 704,397,499 · created 2020-09-18T13:51:31 · updated 2020-09-18T17:18:31 · closed 2020-09-18T17:18:30
  https://github.com/huggingface/datasets/pull/642 · api https://api.github.com/repos/huggingface/datasets/issues/642
  body: As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets.
#641 Add Polyglot-NER Dataset (pull request, closed, 7 comments, joeddav, labels: [])
  id 704,373,940 · created 2020-09-18T13:21:44 · updated 2020-09-20T03:04:43 · closed 2020-09-20T03:04:43
  https://github.com/huggingface/datasets/pull/641 · api https://api.github.com/repos/huggingface/datasets/issues/641
  body: Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.
#640 Make shuffle compatible with temp_seed (pull request, closed, 0 comments, lhoestq, labels: [])
  id 704,311,758 · created 2020-09-18T11:38:58 · updated 2020-09-18T11:47:51 · closed 2020-09-18T11:47:50
  https://github.com/huggingface/datasets/pull/640 · api https://api.github.com/repos/huggingface/datasets/issues/640
  body: This code used to return a different dataset at each run ```python import datasets as ds dataset = ... with ds.temp_seed(42): shuffled = dataset.shuffle() ``` Now it returns the same one since the seed is set.
#639 Update glue QQP checksum (pull request, closed, 0 comments, lhoestq, labels: [])
  id 704,217,963 · created 2020-09-18T09:08:15 · updated 2020-09-18T11:37:08 · closed 2020-09-18T11:37:07
  https://github.com/huggingface/datasets/pull/639 · api https://api.github.com/repos/huggingface/datasets/issues/639
  body: Fix #638
#638 GLUE/QQP dataset: NonMatchingChecksumError (issue, closed, 1 comment, richarddwang, labels: [])
  id 704,146,956 · created 2020-09-18T07:09:10 · updated 2020-09-18T11:37:07 · closed 2020-09-18T11:37:07
  https://github.com/huggingface/datasets/issues/638 · api https://api.github.com/repos/huggingface/datasets/issues/638
  body: Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to be fixed, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart with my developing cycle asap. 😚 datasets version: editable install of master at 9/17 `datasets.load_data...
#637 Add MATINF (pull request, closed, 0 comments, JetRunner, labels: [])
  id 703,539,909 · created 2020-09-17T12:24:53 · updated 2020-09-17T13:23:18 · closed 2020-09-17T13:23:17
  https://github.com/huggingface/datasets/pull/637 · api https://api.github.com/repos/huggingface/datasets/issues/637
  body: (empty)
#636 Consistent ner features (pull request, closed, 0 comments, lhoestq, labels: [])
  id 702,883,989 · created 2020-09-16T15:56:25 · updated 2020-09-17T09:52:59 · closed 2020-09-17T09:52:58
  https://github.com/huggingface/datasets/pull/636 · api https://api.github.com/repos/huggingface/datasets/issues/636
  body: As discussed in #613, this PR aims at making NER feature names consistent across datasets. I changed the feature names of LinCE and XTREME/PAN-X.
#635 Loglevel (pull request, closed, 2 comments, lhoestq, labels: [])
  id 702,822,439 · created 2020-09-16T14:37:53 · updated 2020-09-17T09:52:19 · closed 2020-09-17T09:52:18
  https://github.com/huggingface/datasets/pull/635 · api https://api.github.com/repos/huggingface/datasets/issues/635
  body: Continuation of #618
#634 Add CoNLL-2000 dataset (pull request, closed, 0 comments, vblagoje, labels: [])
  id 702,676,041 · created 2020-09-16T11:14:11 · updated 2020-09-17T10:38:10 · closed 2020-09-17T10:38:10
  https://github.com/huggingface/datasets/pull/634 · api https://api.github.com/repos/huggingface/datasets/issues/634
  body: Adds the CoNLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR
#633 Load large text file for LM pre-training resulting in OOM (issue, open, 27 comments, leethu2012, labels: [])
  id 702,440,484 · created 2020-09-16T04:33:15 · updated 2021-02-16T12:02:01 · closed null
  https://github.com/huggingface/datasets/issues/633 · api https://api.github.com/repos/huggingface/datasets/issues/633
  body: I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
#632 Fix typos in the loading datasets docs (pull request, closed, 1 comment, mariosasko, labels: [])
  id 702,358,124 · created 2020-09-16T00:27:41 · updated 2020-09-21T16:31:11 · closed 2020-09-16T06:52:44
  https://github.com/huggingface/datasets/pull/632 · api https://api.github.com/repos/huggingface/datasets/issues/632
  body: This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
#631 Fix text delimiter (pull request, closed, 5 comments, lhoestq, labels: [])
  id 701,711,255 · created 2020-09-15T08:08:42 · updated 2020-09-22T15:03:06 · closed 2020-09-15T08:26:25
  https://github.com/huggingface/datasets/pull/631 · api https://api.github.com/repos/huggingface/datasets/issues/631
  body: I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622. I changed the delimiter to an unused ASCII character that is not present in text files: `\b`
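For context, a sketch of the parsing trick this PR relies on, using `pyarrow.csv` directly; the file name is a placeholder:

```python
import pyarrow.csv as pac

# With a delimiter that never occurs in the data, every physical line
# ends up as a single "text" cell, commas and tabs included
table = pac.read_csv(
    "data.txt",
    read_options=pac.ReadOptions(column_names=["text"]),
    parse_options=pac.ParseOptions(delimiter="\b"),
)
```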
#630 Text dataset not working with large files (issue, closed, 11 comments, ksjae, labels: [])
  id 701,636,350 · created 2020-09-15T06:02:36 · updated 2020-09-25T22:21:43 · closed 2020-09-25T22:21:43
  https://github.com/huggingface/datasets/issues/630 · api https://api.github.com/repos/huggingface/datasets/issues/630
  body: ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
#629 straddling object straddles two block boundaries (issue, closed, 1 comment, bharaniabhishek123, labels: [])
  id 701,517,550 · created 2020-09-15T00:30:46 · updated 2020-09-15T00:36:17 · closed 2020-09-15T00:32:17
  https://github.com/huggingface/datasets/issues/629 · api https://api.github.com/repos/huggingface/datasets/issues/629
  body: I am trying to read json data (it's an array with lots of dictionaries) and getting a block boundaries issue as below. I tried calling read_json with ReadOptions but no luck. ``` table = json.read_json(fn) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pyarrow/_json.pyx", li...
#628 Update docs links in the contribution guideline (pull request, closed, 1 comment, M-Salti, labels: [])
  id 701,496,053 · created 2020-09-14T23:27:19 · updated 2020-11-02T21:03:23 · closed 2020-09-15T06:19:35
  https://github.com/huggingface/datasets/pull/628 · api https://api.github.com/repos/huggingface/datasets/issues/628
  body: Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.
#627 fix (#619) MLQA features names (pull request, closed, 0 comments, M-Salti, labels: [])
  id 701,411,661 · created 2020-09-14T20:41:59 · updated 2020-11-02T21:04:32 · closed 2020-09-16T06:54:11
  https://github.com/huggingface/datasets/pull/627 · api https://api.github.com/repos/huggingface/datasets/issues/627
  body: Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file.
#626 Update GLUE URLs (now hosted on FB) (pull request, closed, 0 comments, jeswan, labels: [])
  id 701,352,605 · created 2020-09-14T19:05:39 · updated 2020-09-16T06:53:18 · closed 2020-09-16T06:53:18
  https://github.com/huggingface/datasets/pull/626 · api https://api.github.com/repos/huggingface/datasets/issues/626
  body: NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/dat...
#625 dtype of tensors should be preserved (issue, closed, 9 comments, BramVanroy, labels: [])
  id 701,057,799 · created 2020-09-14T12:38:05 · updated 2021-08-17T08:30:04 · closed 2021-08-17T08:30:04
  https://github.com/huggingface/datasets/issues/625 · api https://api.github.com/repos/huggingface/datasets/issues/625
  body: After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
#624 Add learningq dataset (issue, open, 0 comments, krrishdholakia, labels: ["dataset request"])
  id 700,541,628 · created 2020-09-13T10:20:27 · updated 2020-09-14T09:50:02 · closed null
  https://github.com/huggingface/datasets/issues/624 · api https://api.github.com/repos/huggingface/datasets/issues/624
  body: Hi, Thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
#623 Custom feature types in `load_dataset` from CSV (issue, closed, 7 comments, lvwerra, labels: ["enhancement"])
  id 700,235,308 · created 2020-09-12T13:21:34 · updated 2020-09-30T19:51:43 · closed 2020-09-30T08:39:54
  https://github.com/huggingface/datasets/issues/623 · api https://api.github.com/repos/huggingface/datasets/issues/623
  body: I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
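The intended usage being requested here, as a hedged sketch; the column and label names below are illustrative, not taken from the issue:

```python
from datasets import load_dataset, Features, Value, ClassLabel

# Illustrative schema for an emotion-style CSV with "text" and "label" columns
features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["sadness", "joy", "love", "anger", "fear", "surprise"]),
    }
)
dataset = load_dataset("csv", data_files="train.csv", features=features)
```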
#622 load_dataset for text files not working (issue, closed, 41 comments, BramVanroy, labels: ["dataset bug"])
  id 700,225,826 · created 2020-09-12T12:49:28 · updated 2020-10-28T11:07:31 · closed 2020-10-28T11:07:30
  https://github.com/huggingface/datasets/issues/622 · api https://api.github.com/repos/huggingface/datasets/issues/622
  body: Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
#621 [docs] Index: The native emoji looks kinda ugly in large size (pull request, closed, 0 comments, julien-c, labels: [])
  id 700,171,097 · created 2020-09-12T09:48:40 · updated 2020-09-15T06:20:03 · closed 2020-09-15T06:20:02
  https://github.com/huggingface/datasets/pull/621 · api https://api.github.com/repos/huggingface/datasets/issues/621
  body: (empty)
#620 map/filter multiprocessing raises errors and corrupts datasets (issue, closed, 22 comments, timothyjlaurent, labels: ["bug"])
  id 699,815,135 · created 2020-09-11T22:30:06 · updated 2020-10-08T16:31:47 · closed 2020-10-08T16:31:46
  https://github.com/huggingface/datasets/issues/620 · api https://api.github.com/repos/huggingface/datasets/issues/620
  body: After upgrading to 1.0 I started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
#619 Mistakes in MLQA features names (issue, closed, 1 comment, M-Salti, labels: [])
  id 699,733,612 · created 2020-09-11T20:46:23 · updated 2020-09-16T06:59:19 · closed 2020-09-16T06:59:19
  https://github.com/huggingface/datasets/issues/619 · api https://api.github.com/repos/huggingface/datasets/issues/619
  body: I think the following features in MLQA shouldn't be named the way they are: 1. `questions` (should be `question`) 2. `ids` (should be `id`) 3. `start` (should be `answer_start`) The reasons I'm suggesting these features be renamed are: * To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et...
#618 sync logging utils with transformers (pull request, closed, 12 comments, stas00, labels: [])
  id 699,684,831 · created 2020-09-11T19:46:13 · updated 2020-09-17T15:40:59 · closed 2020-09-17T09:53:47
  https://github.com/huggingface/datasets/pull/618 · api https://api.github.com/repos/huggingface/datasets/issues/618
  body: sync the docs/code with the recent changes in transformers' `logging` utils: 1. change the default level to `WARNING` 2. add `DATASETS_VERBOSITY` env var 3. expand docs
#617 Compare different Rouge implementations (issue, closed, 7 comments, ibeltagy, labels: [])
  id 699,472,596 · created 2020-09-11T15:49:32 · updated 2023-03-22T12:08:44 · closed 2020-10-02T09:52:18
  https://github.com/huggingface/datasets/issues/617 · api https://api.github.com/repos/huggingface/datasets/issues/617
  body: I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
#616 UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors (issue, open, 14 comments, BramVanroy, labels: [])
  id 699,462,293 · created 2020-09-11T15:39:16 · updated 2021-07-22T21:12:21 · closed null
  https://github.com/huggingface/datasets/issues/616 · api https://api.github.com/repos/huggingface/datasets/issues/616
  body: I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
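A common cause and fix for that warning, as a hedged sketch (not necessarily the poster's exact situation): arrays backed by immutable buffers are read-only, and copying before conversion silences the warning:

```python
import numpy as np
import torch

arr = np.frombuffer(b"\x00" * 16, dtype=np.int64)  # frombuffer yields a read-only array
tensor = torch.from_numpy(arr.copy())  # copying gives torch a writeable buffer, no warning
```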
#615 Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 (issue, closed, 16 comments, lhoestq, labels: [])
  id 699,410,773 · created 2020-09-11T14:50:38 · updated 2024-05-02T06:53:15 · closed 2020-09-19T16:46:31
  https://github.com/huggingface/datasets/issues/615 · api https://api.github.com/repos/huggingface/datasets/issues/615
  body: How to reproduce: ```python from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") wiki[[0]] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-38...
#614 [doc] Update deploy.sh (pull request, closed, 0 comments, thomwolf, labels: [])
  id 699,177,110 · created 2020-09-11T11:06:13 · updated 2020-09-14T08:49:19 · closed 2020-09-14T08:49:17
  https://github.com/huggingface/datasets/pull/614 · api https://api.github.com/repos/huggingface/datasets/issues/614
  body: (empty)
#613 Add CoNLL-2003 shared task dataset (pull request, closed, 7 comments, vblagoje, labels: [])
  id 699,117,070 · created 2020-09-11T10:02:30 · updated 2020-10-05T10:43:05 · closed 2020-09-17T10:36:38
  https://github.com/huggingface/datasets/pull/613 · api https://api.github.com/repos/huggingface/datasets/issues/613
  body: Please consider adding CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be not only useful for the usual run-of-the-mill NER tasks but also fo...
#612 add multi-proc to dataset dict (pull request, closed, 0 comments, thomwolf, labels: [])
  id 699,008,644 · created 2020-09-11T08:18:13 · updated 2020-09-11T10:20:13 · closed 2020-09-11T10:20:11
  https://github.com/huggingface/datasets/pull/612 · api https://api.github.com/repos/huggingface/datasets/issues/612
  body: Add multi-proc to `DatasetDict`
#611 ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 (issue, closed, 6 comments, sangyx, labels: [])
  id 698,863,988 · created 2020-09-11T05:29:12 · updated 2022-06-01T15:11:43 · closed 2022-06-01T15:11:43
  https://github.com/huggingface/datasets/issues/611 · api https://api.github.com/repos/huggingface/datasets/issues/611
  body: Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
#610 Load text file for RoBERTa pre-training. (issue, closed, 43 comments, chiyuzhang94, labels: [])
  id 698,349,388 · created 2020-09-10T18:41:38 · updated 2022-11-22T13:51:24 · closed 2022-11-22T13:51:23
  https://github.com/huggingface/datasets/issues/610 · api https://api.github.com/repos/huggingface/datasets/issues/610
  body: I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
#609 Update GLUE URLs (now hosted on FB) (pull request, closed, 2 comments, jeswan, labels: [])
  id 698,323,989 · created 2020-09-10T18:16:32 · updated 2020-09-14T19:06:02 · closed 2020-09-14T19:06:01
  https://github.com/huggingface/datasets/pull/609 · api https://api.github.com/repos/huggingface/datasets/issues/609
  body: NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
#608 Don't use the old NYU GLUE dataset URLs (issue, closed, 1 comment, jeswan, labels: [])
  id 698,291,156 · created 2020-09-10T17:47:02 · updated 2020-09-16T06:53:18 · closed 2020-09-16T06:53:18
  https://github.com/huggingface/datasets/issues/608 · api https://api.github.com/repos/huggingface/datasets/issues/608
  body: NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR? See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/111...
#607 Add transmit_format wrapper and tests (pull request, closed, 0 comments, lhoestq, labels: [])
  id 698,094,442 · created 2020-09-10T15:03:50 · updated 2020-09-10T15:21:48 · closed 2020-09-10T15:21:47
  https://github.com/huggingface/datasets/pull/607 · api https://api.github.com/repos/huggingface/datasets/issues/607
  body: Same as #605 but using a decorator on top of dataset transforms that are not in place
#606 Quick fix :) (pull request, closed, 1 comment, thomwolf, labels: [])
  id 698,050,442 · created 2020-09-10T14:32:06 · updated 2020-09-10T16:18:32 · closed 2020-09-10T16:18:30
  https://github.com/huggingface/datasets/pull/606 · api https://api.github.com/repos/huggingface/datasets/issues/606
  body: `nlp` => `datasets`
#605 [Datasets] Transmit format to children (pull request, closed, 1 comment, thomwolf, labels: [])
  id 697,887,401 · created 2020-09-10T12:30:18 · updated 2023-09-24T09:49:47 · closed 2020-09-10T16:15:21
  https://github.com/huggingface/datasets/pull/605 · api https://api.github.com/repos/huggingface/datasets/issues/605
  body: Transmit format to children obtained when processing a dataset. Added a test. When concatenating datasets, if the formats are disparate, the concatenated dataset has a format reset to defaults.
#604 Update bucket prefix (pull request, closed, 0 comments, lhoestq, labels: [])
  id 697,774,581 · created 2020-09-10T11:01:13 · updated 2020-09-10T12:45:33 · closed 2020-09-10T12:45:32
  https://github.com/huggingface/datasets/pull/604 · api https://api.github.com/repos/huggingface/datasets/issues/604
  body: cc @julien-c
#603 Set scripts version to master (pull request, closed, 0 comments, lhoestq, labels: [])
  id 697,758,750 · created 2020-09-10T10:47:44 · updated 2020-09-10T11:02:05 · closed 2020-09-10T11:02:04
  https://github.com/huggingface/datasets/pull/603 · api https://api.github.com/repos/huggingface/datasets/issues/603
  body: By default the scripts version is master, so that installing the library with ``` pip install git+http://github.com/huggingface/nlp.git ``` or ``` git clone http://github.com/huggingface/nlp.git pip install -e ./nlp ``` will use the latest scripts, and not the ones from the previous version.
#602 apply offset to indices in multiprocessed map (pull request, closed, 0 comments, lhoestq, labels: [])
  id 697,636,605 · created 2020-09-10T08:54:30 · updated 2020-09-10T11:03:39 · closed 2020-09-10T11:03:37
  https://github.com/huggingface/datasets/pull/602 · api https://api.github.com/repos/huggingface/datasets/issues/602
  body: Fix #597 I fixed the indices by applying an offset. I added the case to our tests to make sure it doesn't happen again. I also added the message proposed by @thomwolf in #597 ```python >>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False) Done writing 10 ...
#601 check if transformers has PreTrainedTokenizerBase (pull request, closed, 0 comments, lhoestq, labels: [])
  id 697,574,848 · created 2020-09-10T07:54:56 · updated 2020-09-10T11:01:37 · closed 2020-09-10T11:01:36
  https://github.com/huggingface/datasets/pull/601 · api https://api.github.com/repos/huggingface/datasets/issues/601
  body: Fix #598
#600 Pickling error when loading dataset (issue, closed, 5 comments, kandorm, labels: [])
  id 697,496,913 · created 2020-09-10T06:28:08 · updated 2020-09-25T14:31:54 · closed 2020-09-25T14:31:54
  https://github.com/huggingface/datasets/issues/600 · api https://api.github.com/repos/huggingface/datasets/issues/600
  body: Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_da...
#599 Add MATINF dataset (pull request, closed, 2 comments, JetRunner, labels: [])
  id 697,377,786 · created 2020-09-10T03:31:09 · updated 2023-09-24T09:50:08 · closed 2020-09-17T12:17:25
  https://github.com/huggingface/datasets/pull/599 · api https://api.github.com/repos/huggingface/datasets/issues/599
  body: @lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :(
#598 The current version of the package on github has an error when loading dataset (issue, closed, 3 comments, zeyuyun1, labels: [])
  id 697,156,501 · created 2020-09-09T21:03:23 · updated 2020-09-10T06:25:21 · closed 2020-09-09T22:57:28
  https://github.com/huggingface/datasets/issues/598 · api https://api.github.com/repos/huggingface/datasets/issues/598
  body: Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``...
#597 Indices incorrect with multiprocessing (issue, closed, 2 comments, joeddav, labels: [])
  id 697,112,029 · created 2020-09-09T19:50:56 · updated 2020-09-10T11:03:37 · closed 2020-09-10T11:03:37
  https://github.com/huggingface/datasets/issues/597 · api https://api.github.com/repos/huggingface/datasets/issues/597
  body: When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] d.select(range(10...
#596 [style/quality] Moving to isort 5.0.0 + style/quality on datasets and metrics (pull request, closed, 1 comment, thomwolf, labels: [])
  id 696,928,139 · created 2020-09-09T15:47:21 · updated 2020-09-10T10:05:04 · closed 2020-09-10T10:05:03
  https://github.com/huggingface/datasets/pull/596 · api https://api.github.com/repos/huggingface/datasets/issues/596
  body: Move the repo to isort 5.0.0. Also start testing style/quality on datasets and metrics. Specific rule: we allow F401 (unused imports) in metrics to be able to add imports to detect early on missing dependencies. Maybe we could add this in datasets but while cleaning this I've seen many examples of really unused i...
#595 `Dataset`/`DatasetDict` has no attribute 'save_to_disk' (issue, closed, 2 comments, sudarshan85, labels: [])
  id 696,892,304 · created 2020-09-09T15:01:52 · updated 2020-09-09T16:20:19 · closed 2020-09-09T16:20:18
  https://github.com/huggingface/datasets/issues/595 · api https://api.github.com/repos/huggingface/datasets/issues/595
  body: Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p...
#594 Fix germeval url (pull request, closed, 0 comments, lhoestq, labels: [])
  id 696,816,893 · created 2020-09-09T13:29:35 · updated 2020-09-09T13:34:35 · closed 2020-09-09T13:34:34
  https://github.com/huggingface/datasets/pull/594 · api https://api.github.com/repos/huggingface/datasets/issues/594
  body: Continuation of #593 but without the dummy data hack
#593 GermEval 2014: new download urls (pull request, closed, 5 comments, stefan-it, labels: [])
  id 696,679,182 · created 2020-09-09T10:07:29 · updated 2020-09-09T14:16:54 · closed 2020-09-09T13:35:15
  https://github.com/huggingface/datasets/pull/593 · api https://api.github.com/repos/huggingface/datasets/issues/593
  body: Hi, unfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on a Google Drive. I changed the URLs and bumped the version from 1.0.0 to 2.0.0.
#592 Test in memory and on disk (pull request, closed, 0 comments, lhoestq, labels: [])
  id 696,619,986 · created 2020-09-09T08:59:30 · updated 2020-09-09T13:50:04 · closed 2020-09-09T13:50:03
  https://github.com/huggingface/datasets/pull/592 · api https://api.github.com/repos/huggingface/datasets/issues/592
  body: I added test parameters to do every test both in memory and on disk. I also found a bug in concatenate_dataset thanks to the new tests and fixed it.
#591 fix #589 (backward compat) (pull request, closed, 0 comments, thomwolf, labels: [])
  id 696,530,413 · created 2020-09-09T07:33:13 · updated 2020-09-09T08:57:56 · closed 2020-09-09T08:57:55
  https://github.com/huggingface/datasets/pull/591 · api https://api.github.com/repos/huggingface/datasets/issues/591
  body: Fix #589
#590 The process cannot access the file because it is being used by another process (windows) (issue, closed, 7 comments, saareliad, labels: [])
  id 696,501,827 · created 2020-09-09T07:01:36 · updated 2020-09-25T14:02:28 · closed 2020-09-25T14:02:28
  https://github.com/huggingface/datasets/issues/590 · api https://api.github.com/repos/huggingface/datasets/issues/590
  body: Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
#589 Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' (issue, closed, 0 comments, ksjae, labels: [])
  id 696,488,447 · created 2020-09-09T06:46:53 · updated 2020-09-09T08:57:54 · closed 2020-09-09T08:57:54
  https://github.com/huggingface/datasets/issues/589 · api https://api.github.com/repos/huggingface/datasets/issues/589
  body: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset builder_cls = import_main_class(module_path, dataset=True) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp...
#588 Support pathlike obj in load dataset (pull request, closed, 0 comments, lhoestq, labels: [])
  id 695,249,809 · created 2020-09-07T16:13:21 · updated 2020-09-08T07:45:19 · closed 2020-09-08T07:45:18
  https://github.com/huggingface/datasets/pull/588 · api https://api.github.com/repos/huggingface/datasets/issues/588
  body: Fix #582 (I recreated the PR, I got an issue with git)
#587 Support pathlike obj in load dataset (pull request, closed, 0 comments, lhoestq, labels: [])
  id 695,246,018 · created 2020-09-07T16:09:16 · updated 2020-09-07T16:10:35 · closed 2020-09-07T16:10:35
  https://github.com/huggingface/datasets/pull/587 · api https://api.github.com/repos/huggingface/datasets/issues/587
  body: Fix #582
#586 Better message when data files is empty (pull request, closed, 0 comments, lhoestq, labels: [])
  id 695,237,999 · created 2020-09-07T15:59:57 · updated 2020-09-09T09:00:09 · closed 2020-09-09T09:00:08
  https://github.com/huggingface/datasets/pull/586 · api https://api.github.com/repos/huggingface/datasets/issues/586
  body: Fix #581
#585 Fix select for pyarrow < 1.0.0 (pull request, closed, 0 comments, lhoestq, labels: [])
  id 695,191,209 · created 2020-09-07T15:02:52 · updated 2020-09-08T07:43:17 · closed 2020-09-08T07:43:15
  https://github.com/huggingface/datasets/pull/585 · api https://api.github.com/repos/huggingface/datasets/issues/585
  body: Fix #583
#584 Use github versioning (pull request, closed, 1 comment, lhoestq, labels: [])
  id 695,186,652 · created 2020-09-07T14:58:15 · updated 2020-09-09T13:37:35 · closed 2020-09-09T13:37:34
  https://github.com/huggingface/datasets/pull/584 · api https://api.github.com/repos/huggingface/datasets/issues/584
  body: Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version. To fix that I changed the download url from S3 to github, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certai...
#583 ArrowIndexError on Dataset.select (issue, closed, 0 comments, lhoestq, labels: [])
  id 695,166,265 · created 2020-09-07T14:36:29 · updated 2020-09-08T07:43:15 · closed 2020-09-08T07:43:15
  https://github.com/huggingface/datasets/issues/583 · api https://api.github.com/repos/huggingface/datasets/issues/583
  body: If the indices table consists of several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0 Example: ```python from nlp import load_dataset mnli = load_dataset("glue", "mnli", split="train") shuffled = mnli.shuffle(seed=42) mnli.select(list(range(len(mnli)))) ``` rai...
#582 Allow for PathLike objects (issue, closed, 0 comments, BramVanroy, labels: [])
  id 695,126,456 · created 2020-09-07T13:54:51 · updated 2020-09-08T07:45:17 · closed 2020-09-08T07:45:17
  https://github.com/huggingface/datasets/issues/582 · api https://api.github.com/repos/huggingface/datasets/issues/582
  body: Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error. ```python files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt")) dataset = load_dataset("text", data_files=files) ``` Traceback: ``` Traceback (most recent call last): File "C:/dev/python/dut...
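Until PathLike support landed (#587/#588 above), a simple workaround sketch: cast the `Path` objects to plain strings before loading. The directory name is a placeholder:

```python
from pathlib import Path
from datasets import load_dataset

# Workaround: load_dataset accepts plain string paths
files = [str(p) for p in Path("my_corpus").glob("*.txt")]
dataset = load_dataset("text", data_files=files)
```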
#581 Better error message when input file does not exist (issue, closed, 0 comments, BramVanroy, labels: [])
  id 695,120,517 · created 2020-09-07T13:47:59 · updated 2020-09-09T09:00:07 · closed 2020-09-09T09:00:07
  https://github.com/huggingface/datasets/issues/581 · api https://api.github.com/repos/huggingface/datasets/issues/581
  body: In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y. ```python dataset = load_dataset("text", data_files=[]) ``` Example err...
#580 nlp re-creates already-there caches when using a script, but not within a shell (issue, closed, 2 comments, TevenLeScao, labels: [])
  id 694,954,551 · created 2020-09-07T10:23:50 · updated 2020-09-07T15:19:09 · closed 2020-09-07T14:26:41
  https://github.com/huggingface/datasets/issues/580 · api https://api.github.com/repos/huggingface/datasets/issues/580
  body: `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell. Example: try running ``` import nlp hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0) hans_hard_data = nlp.load_dataset('hans', s...
#579 Doc metrics (pull request, closed, 0 comments, thomwolf, labels: [])
  id 694,947,599 · created 2020-09-07T10:15:24 · updated 2020-09-10T13:06:11 · closed 2020-09-10T13:06:10
  https://github.com/huggingface/datasets/pull/579 · api https://api.github.com/repos/huggingface/datasets/issues/579
  body: Adding documentation on metrics loading/using/sharing
#578 Add CommonGen Dataset (pull request, closed, 0 comments, JetRunner, labels: [])
  id 694,849,940 · created 2020-09-07T08:17:17 · updated 2020-09-07T11:50:29 · closed 2020-09-07T11:49:07
  https://github.com/huggingface/datasets/pull/578 · api https://api.github.com/repos/huggingface/datasets/issues/578
  body: CC Authors: @yuchenlin @MichaelZhouwang
#577 Some languages in wikipedia dataset are not loading (issue, closed, 16 comments, gaguilar, labels: [])
  id 694,607,148 · created 2020-09-07T01:16:29 · updated 2023-04-11T22:50:48 · closed 2022-10-11T11:16:04
  https://github.com/huggingface/datasets/issues/577 · api https://api.github.com/repos/huggingface/datasets/issues/577
  body: Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar', 'af', '...
#576 Fix the code block in doc (pull request, closed, 1 comment, JetRunner, labels: [])
  id 694,348,645 · created 2020-09-06T11:40:55 · updated 2020-09-07T07:37:32 · closed 2020-09-07T07:37:18
  https://github.com/huggingface/datasets/pull/576 · api https://api.github.com/repos/huggingface/datasets/issues/576
  body: (empty)
#575 Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. (issue, closed, 6 comments, sudarshan85, labels: [])
  id 693,691,611 · created 2020-09-04T21:46:25 · updated 2020-09-22T10:41:36 · closed 2020-09-22T10:41:36
  https://github.com/huggingface/datasets/issues/575 · api https://api.github.com/repos/huggingface/datasets/issues/575
  body: Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
#574 Add modules cache (pull request, closed, 2 comments, lhoestq, labels: [])
  id 693,364,853 · created 2020-09-04T16:30:03 · updated 2020-09-22T10:27:08 · closed 2020-09-07T09:01:35
  https://github.com/huggingface/datasets/pull/574 · api https://api.github.com/repos/huggingface/datasets/issues/574
  body: As discussed in #554, we should use a module cache directory outside of the python packages directory since we may not have write permissions. I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`. In this directory, a module `nlp_modules` is created so that datasets can ...
#573 Faster caching for text dataset (pull request, closed, 0 comments, lhoestq, labels: [])
  id 693,091,790 · created 2020-09-04T11:58:34 · updated 2020-09-04T12:53:24 · closed 2020-09-04T12:53:23
  https://github.com/huggingface/datasets/pull/573 · api https://api.github.com/repos/huggingface/datasets/issues/573
  body: As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time. To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each...
#572 Add CLUE Benchmark (11 datasets) (pull request, closed, 3 comments, JetRunner, labels: [])
  id 692,598,231 · created 2020-09-04T01:57:40 · updated 2020-09-07T09:59:11 · closed 2020-09-07T09:59:10
  https://github.com/huggingface/datasets/pull/572 · api https://api.github.com/repos/huggingface/datasets/issues/572
  body: Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE).
#571 Serialization (pull request, closed, 4 comments, lhoestq, labels: [])
  id 692,109,287 · created 2020-09-03T16:21:38 · updated 2020-09-07T07:46:08 · closed 2020-09-07T07:46:07
  https://github.com/huggingface/datasets/pull/571 · api https://api.github.com/repos/huggingface/datasets/issues/571
  body: I added `save` and `load` methods to serialize/deserialize a dataset object in a folder. It moves the arrow files there (or writes them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info, which goes in a separate file `dataset_info.json`. Example: ```python import ...
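A usage sketch, assuming the method names the library eventually shipped (`save_to_disk` / `load_from_disk`) rather than the `save`/`load` names in this PR's description; the output directory is a placeholder:

```python
from datasets import load_dataset, load_from_disk

dataset = load_dataset("imdb", split="test")
dataset.save_to_disk("imdb_test")   # arrow files + state.json + dataset_info.json
reloaded = load_from_disk("imdb_test")
```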
#570 add reuters21578 dataset (pull request, closed, 0 comments, jplu, labels: [])
  id 691,846,397 · created 2020-09-03T10:25:47 · updated 2020-09-03T10:46:52 · closed 2020-09-03T10:46:51
  https://github.com/huggingface/datasets/pull/570 · api https://api.github.com/repos/huggingface/datasets/issues/570
  body: Reopening the PR after the reverted merge.
#569 Revert "add reuters21578 dataset" (pull request, closed, 0 comments, jplu, labels: [])
  id 691,832,720 · created 2020-09-03T10:06:16 · updated 2020-09-03T10:07:13 · closed 2020-09-03T10:07:12
  https://github.com/huggingface/datasets/pull/569 · api https://api.github.com/repos/huggingface/datasets/issues/569
  body: Reverts huggingface/nlp#471
#568 `metric.compute` throws `ArrowInvalid` error (issue, closed, 3 comments, ibeltagy, labels: [])
  id 691,638,656 · created 2020-09-03T04:56:57 · updated 2020-10-05T16:33:53 · closed 2020-10-05T16:33:53
  https://github.com/huggingface/datasets/issues/568 · api https://api.github.com/repos/huggingface/datasets/issues/568
  body: I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.compute(predictions=generated_str, references=gold_st...
#567 Fix BLEURT metrics for backward compatibility (pull request, closed, 0 comments, thomwolf, labels: [])
  id 691,430,245 · created 2020-09-02T21:22:35 · updated 2020-09-03T07:29:52 · closed 2020-09-03T07:29:50
  https://github.com/huggingface/datasets/pull/567 · api https://api.github.com/repos/huggingface/datasets/issues/567
  body: Fix #565
#566 Remove logger pickling to fix gg colab issues (pull request, closed, 0 comments, lhoestq, labels: [])
  id 691,160,208 · created 2020-09-02T16:16:21 · updated 2020-09-03T16:31:53 · closed 2020-09-03T16:31:52
  https://github.com/huggingface/datasets/pull/566 · api https://api.github.com/repos/huggingface/datasets/issues/566
  body: `logger` objects are not picklable in Google Colab, contrary to `logger` objects in Jupyter notebooks or in python shells. It creates some issues in Google Colab right now. Indeed by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in...
#565 No module named 'nlp.logging' (issue, closed, 2 comments, melody-ju, labels: [])
  id 691,039,121 · created 2020-09-02T13:49:50 · updated 2020-09-03T07:29:50 · closed 2020-09-03T07:29:50
  https://github.com/huggingface/datasets/issues/565 · api https://api.github.com/repos/huggingface/datasets/issues/565
  body: Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l...
#564 Wait for writing in distributed metrics (pull request, closed, 7 comments, lhoestq, labels: [])
  id 691,000,020 · created 2020-09-02T12:58:50 · updated 2020-09-09T09:13:23 · closed 2020-09-09T09:13:22
  https://github.com/huggingface/datasets/pull/564 · api https://api.github.com/repos/huggingface/datasets/issues/564
  body: There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes haven't started writing. To fix that I added a custom locking mechanism that waits for the file to exist before trying to read it
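A hypothetical illustration of such a wait-before-read lock; `wait_for_file` is an invented helper for this sketch, not the library's actual API:

```python
import os
import time

def wait_for_file(path: str, timeout: float = 100.0, poll: float = 0.05) -> None:
    """Block until another process has finished creating `path`."""
    start = time.time()
    while not os.path.exists(path):
        if time.time() - start > timeout:
            raise TimeoutError(f"{path} was not written within {timeout}s")
        time.sleep(poll)
```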
#563 [Large datasets] Speed up download and processing (pull request, closed, 2 comments, thomwolf, labels: [])
  id 690,908,674 · created 2020-09-02T10:31:54 · updated 2020-09-09T09:03:33 · closed 2020-09-09T09:03:32
  https://github.com/huggingface/datasets/pull/563 · api https://api.github.com/repos/huggingface/datasets/issues/563
  body: Various improvements to speed-up creation and processing of large scale datasets. Currently: - distributed downloads - remove etag from datafiles hashes to spare a request when restarting a failed download
#562 [Reproductibility] Allow to pin versions of datasets/metrics (pull request, closed, 1 comment, thomwolf, labels: [])
  id 690,907,604 · created 2020-09-02T10:30:13 · updated 2023-09-24T09:49:42 · closed 2020-09-09T13:04:54
  https://github.com/huggingface/datasets/pull/562 · api https://api.github.com/repos/huggingface/datasets/issues/562
  body: Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of datasets and metric scripts: ``` dataset = nlp.load_dataset('squad', version='1.0.0') metric = nlp.load_metric('squad', version='1.0.0') ``` Notes: - version numbers are the release versions of the library - curre...
#561 Made `share_dataset` more readable (pull request, closed, 0 comments, TevenLeScao, labels: [])
  id 690,871,415 · created 2020-09-02T09:34:48 · updated 2020-09-03T09:00:30 · closed 2020-09-03T09:00:29
  https://github.com/huggingface/datasets/pull/561 · api https://api.github.com/repos/huggingface/datasets/issues/561
  body: (empty)
#560 Using custom DownloadConfig results in an error (issue, closed, 6 comments, ynouri, labels: [])
  id 690,488,764 · created 2020-09-01T22:23:02 · updated 2022-10-04T17:23:45 · closed 2022-10-04T17:23:45
  https://github.com/huggingface/datasets/issues/560 · api https://api.github.com/repos/huggingface/datasets/issues/560
  body: ## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
#559 Adding the KILT knowledge source and tasks (pull request, closed, 1 comment, yjernite, labels: [])
  id 690,411,263 · created 2020-09-01T20:05:13 · updated 2020-09-04T18:05:47 · closed 2020-09-04T18:05:47
  https://github.com/huggingface/datasets/pull/559 · api https://api.github.com/repos/huggingface/datasets/issues/559
  body: This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with: ``` import nlp kilt_wikipedia = nlp.load_dataset('kilt_wikipedia') kilt_tasks = nlp.load_dataset('kilt_tasks') triviaqa = nlp.load_dataset('trivia_qa',...
#558 Rerun pip install -e (pull request, closed, 0 comments, lhoestq, labels: [])
  id 690,318,105 · created 2020-09-01T17:24:39 · updated 2020-09-01T17:24:51 · closed 2020-09-01T17:24:50
  https://github.com/huggingface/datasets/pull/558 · api https://api.github.com/repos/huggingface/datasets/issues/558
  body: Hopefully it fixes the github actions
#557 Fix a few typos (pull request, closed, 0 comments, julien-c, labels: [])
  id 690,220,135 · created 2020-09-01T15:03:24 · updated 2020-09-02T07:39:08 · closed 2020-09-02T07:39:07
  https://github.com/huggingface/datasets/pull/557 · api https://api.github.com/repos/huggingface/datasets/issues/557
  body: (empty)
#556 Add DailyDialog (pull request, closed, 0 comments, julien-c, labels: [])
  id 690,218,423 · created 2020-09-01T15:01:15 · updated 2020-09-03T15:42:03 · closed 2020-09-03T15:38:39
  https://github.com/huggingface/datasets/pull/556 · api https://api.github.com/repos/huggingface/datasets/issues/556
  body: http://yanran.li/dailydialog.html https://arxiv.org/pdf/1710.03957.pdf