html_url
stringlengths
48
51
title
stringlengths
5
155
comments
stringlengths
63
15.7k
body
stringlengths
0
17.7k
comment_length
int64
16
949
text
stringlengths
164
23.7k
https://github.com/huggingface/datasets/issues/720
OSError: Cannot find data file when not using the dummy dataset in RAG
Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. ``` 99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock ``` ``` --------...
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour...
387
OSError: Cannot find data file when not using the dummy dataset in RAG ## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set...
https://github.com/huggingface/datasets/issues/720
OSError: Cannot find data file when not using the dummy dataset in RAG
An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors.
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour...
20
OSError: Cannot find data file when not using the dummy dataset in RAG ## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set...
https://github.com/huggingface/datasets/issues/709
How to use similarity settings other then "BM25" in Elasticsearch index ?
Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed...
**QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?** **ES Reference** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:** https://huggingface.co/docs/datasets/faiss_and_ea.html **context :** =...
88
How to use similarity settings other then "BM25" in Elasticsearch index ? **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?** **ES Reference** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:*...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
26
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
19
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
26
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
By default the datasets loaded with `load_dataset` live on disk. It's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`. Small correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice t...
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
51
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
Great! Thanks a lot. I did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data. ```python features = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names) features.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask']) ...
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
170
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%. In disk: ``` book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]') book_corpus = book_corpus.map(encode, batc...
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
247
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
45
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708
Datasets performance slow? - 6.4x slower than in memory dataset
My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
21
Datasets performance slow? - 6.4x slower than in memory dataset I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/707
Requirements should specify pyarrow<1
@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni...
18
Requirements should specify pyarrow<1 I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having insta...
https://github.com/huggingface/datasets/issues/707
Requirements should specify pyarrow<1
Hello @mathcass 1. I did fork the repository and clone the same on my local system. 2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation. 3. Then I Perplexity document link that you shared above. I created a colab link from there keep ...
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni...
103
Requirements should specify pyarrow<1 I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having insta...
https://github.com/huggingface/datasets/issues/707
Requirements should specify pyarrow<1
Thanks for looking at this @punitaojha and thanks for sharing the notebook. I just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid. Thanks again.
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni...
56
Requirements should specify pyarrow<1 I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having insta...
https://github.com/huggingface/datasets/issues/707
Requirements should specify pyarrow<1
I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install "pyarrow<1" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case). Please see the Colab below: https://colab.resear...
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni...
52
Requirements should specify pyarrow<1 I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having insta...
https://github.com/huggingface/datasets/issues/705
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
Hi ! Thanks for reporting :) Indeed this is an issue on the `datasets` side. I'm creating a PR
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) ...
19
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from ma...
https://github.com/huggingface/datasets/issues/699
XNLI dataset is not loading
also i tried below code to solve checksum error `datasets-cli test ./datasets/xnli --save_infos --all_configs` and it shows ``` 2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1 Traceback (most recent call last): ...
`dataset = datasets.load_dataset(path='xnli')` showing below error ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verifi...
170
XNLI dataset is not loading `dataset = datasets.load_dataset(path='xnli')` showing below error ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Check...
https://github.com/huggingface/datasets/issues/699
XNLI dataset is not loading
Hi ! Yes the download url changed. It's updated on the master branch. I'm doing a release today to fix that :)
`dataset = datasets.load_dataset(path='xnli')` showing below error ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verifi...
22
XNLI dataset is not loading `dataset = datasets.load_dataset(path='xnli')` showing below error ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Check...
https://github.com/huggingface/datasets/issues/690
XNLI dataset: NonMatchingChecksumError
Thanks for reporting. The data file must have been updated by the host. I'll update the checksum with the new one.
Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr...
21
XNLI dataset: NonMatchingChecksumError Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = lo...
https://github.com/huggingface/datasets/issues/690
XNLI dataset: NonMatchingChecksumError
I'll do a release in the next few days to make the fix available for everyone. In the meantime you can load `xnli` with ``` xnli = load_dataset('xnli', script_version="master") ``` This will use the latest version of the xnli script (available on master branch), instead of the old one.
Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr...
49
XNLI dataset: NonMatchingChecksumError Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = lo...
https://github.com/huggingface/datasets/issues/687
`ArrowInvalid` occurs while running `Dataset.map()` function
Hi ! This is because `encode` expects one single text as input (str), or one tokenized text (List[str]). I believe that you actually wanted to use `encode_batch` which expects a batch of texts. However this method is only available for our "fast" tokenizers (ex: BertTokenizerFast). BertJapanese is not one of them...
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=Non...
128
`ArrowInvalid` occurs while running `Dataset.map()` function It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='st...
https://github.com/huggingface/datasets/issues/687
`ArrowInvalid` occurs while running `Dataset.map()` function
Thank you very much for the kind and precise suggestion! I'm looking forward to seeing BertJapaneseTokenizer built into the "fast" tokenizers. I tried `map` with multiprocessing as follows, and it worked! ```python # There was a Pickle problem if I use `lambda` for multiprocessing def encode(examples): re...
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=Non...
61
`ArrowInvalid` occurs while running `Dataset.map()` function It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='st...
https://github.com/huggingface/datasets/issues/686
Dataset browser url is still https://huggingface.co/nlp/viewer/
Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)
Might be worth updating to https://huggingface.co/datasets/viewer/
26
Dataset browser url is still https://huggingface.co/nlp/viewer/ Might be worth updating to https://huggingface.co/datasets/viewer/ Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)
https://github.com/huggingface/datasets/issues/678
The download instructions for c4 datasets are not contained in the error message
Also not that C4 is a dataset that needs an Apache Beam runtime to be generated. For example Dataflow, Spark, Flink etc. Usually we generate the dataset on our side once and for all, but we haven't done it for C4 yet. More info about beam datasets [here](https://huggingface.co/docs/datasets/beam_dataset.html) L...
The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff...
56
The download instructions for c4 datasets are not contained in the error message The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830...
https://github.com/huggingface/datasets/issues/676
train_test_split returns empty dataset item
Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config)
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty. The codes: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) pri...
20
train_test_split returns empty dataset item I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty. The codes: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split...
https://github.com/huggingface/datasets/issues/676
train_test_split returns empty dataset item
We'll do a release pretty soon to include the fix :) In the meantime you can install the lib from source if you want to
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty. The codes: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) pri...
25
train_test_split returns empty dataset item I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty. The codes: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split...
https://github.com/huggingface/datasets/issues/674
load_dataset() won't download in Windows
I have the same issue. Tried to download a few of them and not a single one is downloaded successfully. This is the output: ``` >>> dataset = load_dataset('blended_skill_talk', split='train') Using custom data configuration default <-- This step never ends ```
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa...
41
load_dataset() won't download in Windows I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin...
https://github.com/huggingface/datasets/issues/674
load_dataset() won't download in Windows
This was fixed in #644 I'll do a new release soon :) In the meantime you can run it by installing from source
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa...
23
load_dataset() won't download in Windows I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin...
https://github.com/huggingface/datasets/issues/674
load_dataset() won't download in Windows
Closing since version 1.1.0 got released with Windows support :) Let me know if it works for you now
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa...
19
load_dataset() won't download in Windows I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
We should try to regenerate the data using the official script. But iirc that's what we used in the first place, so not sure why it didn't match in the first place. I'll let you know when the dataset is updated
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
41
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
Thanks, looking forward to hearing your update on this thread. This is a blocking issue for us; would appreciate any progress on this front. We can also help with the fix, if you deem it appropriately.
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
36
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
I just started the generation on my side, I'll let you know how it goes :)
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
16
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
Hmm after a first run I'm still missing 136668/226711 urls. I'll relaunch it tomorrow to try to get the remaining ones.
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
21
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
So I managed to download them all but when parsing only 226,181/226,711 worked. Not sure if it's worth digging and debugging parsing at this point :/
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
26
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
Thanks @lhoestq It would be great to improve coverage, but IDs are the really crucial part for us. We'd really appreciate an update to the dataset with IDs either way!
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
30
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
I gave up at an even earlier point. The dataset I use has 204,017 train examples.
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
16
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
@lhoestq @sshleifer like @jbragg said earlier, the main issue for us is that the current XSUM dataset (in your package) does not have IDs suggested by the original dataset ([here is the file](https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json).) Would apprec...
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
63
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
>So I managed to download them all but when parsing only 226,181/226,711 worked. @lhoestq any chance we could update the HF-hosted dataset with the IDs in your new version? Happy to help if there's something I can do.
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
38
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/672
Questions about XSUM
Well I couldn't parse what I downloaded. Unfortunately I think I won't be able to take a look at it this week. I can try to send you what I got if you want to give it a shot @jbragg Otherwise feel free to re-run the xsum download script, maybe you'll be luckier than me
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
55
Questions about XSUM Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype...
https://github.com/huggingface/datasets/issues/669
How to skip a example when running dataset.map
Hi @xixiaoyao, Depending on what you want to do you can: - use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter - or directly detect the invalid examples inside the callable used with `map` and return them unchanged or ...
in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map.
95
How to skip a example when running dataset.map in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. Hi @xixiaoyao, Depending on what you want to do...
https://github.com/huggingface/datasets/issues/667
Loss not decrease with Datasets and Transformers
Hi did you manage to fix your issue ? If so feel free to share your fix and close this thread
HI, The following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data...
21
Loss not decrease with Datasets and Transformers HI, The following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fi...
https://github.com/huggingface/datasets/issues/666
Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
No they are other similar copies but they are not provided by the official Bert models authors.
17
Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? No they are other similar copies but they are not provided by the official Bert models authors.
https://github.com/huggingface/datasets/issues/665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
Hi ! It works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast. Which version of transformers/datasets are you using ?
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
22
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [...
https://github.com/huggingface/datasets/issues/665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Colab reproducing the error for us to be able to debug this error.
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
33
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [...
https://github.com/huggingface/datasets/issues/665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I have the same issue with `transformers/BertJapaneseTokenizer`. ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=None) # }, num_rows: 99999) t = BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking'...
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
861
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [...
https://github.com/huggingface/datasets/issues/665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
> I have the same issue with `transformers/BertJapaneseTokenizer`. It looks like it this tokenizer is not supported unfortunately. This is because `t.word_tokenizer.mecab` is a `fugashi.fugashi.GenericTagger` which is not compatible with pickle nor dill. We need objects passes to `map` to be picklable for our ca...
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
153
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [...
https://github.com/huggingface/datasets/issues/665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
We can also update the `BertJapaneseTokenizer` in `transformers` as you just shown @lhoestq to make it compatible with pickle. It will be faster than asking on fugashi 's repo and good for the other users of `transformers` as well. I'm currently working on `transformers` I'll include it in the https://github.com/hug...
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
57
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [...
https://github.com/huggingface/datasets/issues/665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
Thank you for the rapid and polite response! @lhoestq Thanks for the suggestion! I've passed the pickle phase, but another `ArrowInvalid` problem occored. I created another issue #687 . @thomwolf Wow, really fast work. I'm looking forward to the next release 🤗
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
42
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [...
https://github.com/huggingface/datasets/issues/664
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
Hi ! Thanks for reporting. It looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script. Could you check that there exist at least one dataset builder class ?
version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise errors. ``` train_dataset = datasets.load_dataset('./my_squad.py') ...
34
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise e...
https://github.com/huggingface/datasets/issues/664
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
It happened when try to change the old project which use 'nlp' to new project which use 'datasets'. You should check you old 'my_squad.py' file, change the inherit class from `nlp.xxx` to `datasets.xxx`. Otherwise datasets - load.py - import_main_class() `if inspect.isclass(obj) and issubclass(obj, main_cls_type):` can...
version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise errors. ``` train_dataset = datasets.load_dataset('./my_squad.py') ...
49
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise e...
https://github.com/huggingface/datasets/issues/657
Squad Metric Description & Feature Mismatch
Thanks for reporting ! There indeed a mismatch between the features and the kwargs description I believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[...
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
63
Squad Metric Description & Feature Mismatch The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also...
https://github.com/huggingface/datasets/issues/657
Squad Metric Description & Feature Mismatch
But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https://github.com/huggingface/datasets/pull/658/files).
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
23
Squad Metric Description & Feature Mismatch The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also...
https://github.com/huggingface/datasets/issues/651
Problem with JSON dataset format
Currently the `json` dataset doesn't support this format unfortunately. However you could load it with ```python from datasets import Dataset import pandas as pd df = pd.read_json("path_to_local.json", orient="index") dataset = Dataset.from_pandas(df) ```
I have a local json dataset with the following form. { 'id01234': {'key1': value1, 'key2': value2, 'key3': value3}, 'id01235': {'key1': value1, 'key2': value2, 'key3': value3}, . . . 'id09999': {'key1': value1, 'key2': value2, 'key3': value3} } Note that instead of a list of records i...
32
Problem with JSON dataset format I have a local json dataset with the following form. { 'id01234': {'key1': value1, 'key2': value2, 'key3': value3}, 'id01235': {'key1': value1, 'key2': value2, 'key3': value3}, . . . 'id09999': {'key1': value1, 'key2': value2, 'key3': value3} } Note ...
https://github.com/huggingface/datasets/issues/650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Hi :) In your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files. Let me know if it helps
Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` d...
25
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ s...
https://github.com/huggingface/datasets/issues/650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Thanks for your comment @lhoestq , Just for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but actually kind of circumvent it. But since we will test the real data so it is ok ?
Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` d...
43
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ s...
https://github.com/huggingface/datasets/issues/650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Yes it's fine for now. We plan to add a job for slow tests. And at one point we'll also do another pass on the dummy data handling and consider extracting files.
Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` d...
32
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ s...
https://github.com/huggingface/datasets/issues/649
Inconsistent behavior in map
Thanks for reporting ! This issue must have appeared when we refactored type inference in `nlp` By default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field' consisting of two examples d...
45
Inconsistent behavior in map I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field...
https://github.com/huggingface/datasets/issues/647
Cannot download dataset_info.json
Thanks for reporting ! We should add support for servers without internet connection indeed I'll do that early next week
I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text...
20
Cannot download dataset_info.json I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com...
https://github.com/huggingface/datasets/issues/647
Cannot download dataset_info.json
Right now the recommended way is to create the dataset on a server with internet connection and then to save it and copy the serialized dataset to the server without internet connection.
I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text...
32
Cannot download dataset_info.json I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com...
https://github.com/huggingface/datasets/issues/647
Cannot download dataset_info.json
#652 should allow you to load text/json/csv/pandas datasets without an internet connection **IF** you've the dataset script locally. Example: If you have `datasets/text/text.py` locally, then you can do `load_dataset("./datasets/text", data_files=...)`
I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text...
30
Cannot download dataset_info.json I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
Thanks for reporting ! It uses a temporary file to write the data. However it looks like the temporary file is not placed in the right directory during the processing
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
30
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected. Which version of `datasets` are you using ?
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
29
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
It looks like a pyarrow issue with google colab. For some reason this code increases the disk usage of google colab while it actually writes into google drive: ```python import pyarrow as pa stream = pa.OSFile("/content/drive/My Drive/path/to/file.arrow", "wb") writer = pa.RecordBatchStreamWriter(stream, schem...
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
74
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
Actually I did more tests it doesn't >.< I'll let you know if I find a way to fix that
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
20
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
Actually I also have the issue when writing a regular text file ```python f = open("/content/drive/My Drive/path/to/file", "w") f.write(("a"*511 + "\n") * ((1 << 30) // 512)) # 1GiB f.close() ``` Is that supposed to happen ?
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
37
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
The code you wrote should write a 1GB file in the Google Drive folder. Doesn't it?
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
16
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
I could check it and as you say as I write to te Drive disk the colab disk also increases...
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
20
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
To reproduce it: ```bash !df -h | grep sda1 ``` ```python f = open("/content/drive/My Drive/test_to_remove.txt", "w") f.write(("a"*511 + "\n") * ((1 << 30) // 512)) # 1GiB f.write(("a"*511 + "\n") * ((1 << 30) // 512)) # 1GiB f.close() ``` ```bash !ls -lh /content/drive/My\ Drive/test_to_remove.txt !df...
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
56
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/643
Caching processed dataset at wrong folder
Apparently, Colab uses a local cache of the data files read/written from Google Drive. See: - https://github.com/googlecolab/colabtools/issues/2087#issuecomment-860818457 - https://github.com/googlecolab/colabtools/issues/1915#issuecomment-804234540 - https://github.com/googlecolab/colabtools/issues/2147#issuecommen...
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
21
Caching processed dataset at wrong folder Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncati...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
21
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
22
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq @sgugger Thanks for your comments. I have install from source code as you told, but the problem is still there. To reproduce the issue, just replace [these lines](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L241-L258) with: (load_dataset and Da...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
80
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Same here. Pre-training on wikitext-103 to do some test. At the end of the training it takes 32GB of RAM + ~30GB of SWAP. I installed dataset==1.1.0, not built from source. I will try uninstalling and building from source when it finish.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
42
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
This seems to be on the `transformers` library side. If you have more informations (pip env) or even better, a colab reproducing the error we can investigate.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
27
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
It seems like it's solved with freshed versions of transformers. I have tried to replicate the error doing a fresh pip install transformers & datasets on colab and the error doesn't continue. On colab it keeps stable on 5GB! (Y) Edit: **Thanks for your great work**. Have a good day.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
50
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@gaceladri witch version transformers and datasets are you using now? I want to try again. Thanks.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
16
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
It's happening to me again. After 4 hours of pre-training, my ram memory gets full and the kernel dies. I am using the last transformers version as today. 4.4.0 and the last version of datasets 1.2.1, both installed from master. The memory consumption keeps increasing.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
45
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Thanks for the investigation @gaceladri Apparently this happens when `num_workers>0` and has to do with objects being copied-on-write. Did you try setting num_workers to 0 @gaceladri ? If the issue doesn't happen with `num_workers=0` then this would confirm that it's indeed related to this python/pytorch issue. ...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
114
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Hmmm so this might come from another issue... Since it doesn't seem to be related to multiprocessing it should be easier to investigate though. Do you have some ideas @gaceladri ?
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
31
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq I looked quickly to a previously spoted bug in my env wandb /sdk/interface/interface.py, because sometimes when I load the dataset I got a multiprocessing error at line 510 in wandb...interface.py This bug is reported here https://github.com/huggingface/datasets/issues/847 ``` --------------------------...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
396
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq But despite this, I got lost into the [class Dataset()](https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset) reading the pyarrow files. Edit: but you should be rigth, that it does not have to be related to multiprocessing since it keeps happening when `num_workers=0`
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
37
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Or maybe wandb uses multiprocessing ? One process for wandb logging and one for actual training ? If this is the case then even setting `num_workers=0` would cause the process to be forked for wandb and therefore cause the memory issue.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
41
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq could be, but if we set wandb to false this should not happen. I am going to try.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
19
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq It keeps happening. I have uninstalled wandb from my env, setted `%env WANDB_DISABLED=true` on my notebook, and commented this func: ``` def get_available_reporting_integrations(): integrations = [] if is_azureml_available(): integrations.append("azure_ml") if is_comet_available(): ...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
65
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Thanks for checking @gaceladri . Let's investigate the single process setting then. If you have some sort of colab notebook with a minimal code example that shows this behavior feel free to share it @gaceladri so that we can play around with it to find what causes this. Otherwise I'll probably try to reproduce on my s...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
60
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq sure. Here you have https://colab.research.google.com/drive/1ba09ZOpyHGAOQLcsxiQAHRXl10qnMU5o?usp=sharing let me know if the link works and it reproduces the issue. To me, it reproduces the issue, since if you start the training the ram memory keeps increasing. Let me know. Thanks!
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
39
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Could the bug be comming from tokenizers? I got this warning at the terminal from my jupyter notebook: ``` huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `to...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
63
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
I've never experienced memory issues with tokenizers so I don't know Cc @n1t0 are you aware of any issue that would cause memory to keep increasing when the tokenizer is used in the Data Collator for language modeling ?
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
39
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
@lhoestq Thanks for pointing to n1t0, just to clarify. That warning was doing fine-tuning, without collator: ``` from datasets import load_dataset, load_metric import numpy as np GLUE_TASKS = [ "cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", ...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
468
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Thanks for sharing your results. So you still had the issue for fine-tuning ? And the issue still appears with a bare-bone dataset from an arrow file...
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
27
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/633
Load large text file for LM pre-training resulting in OOM
Yes, on both cases. Fine-tuning a pre-trained model and pre-training from scratch with a local arrow file already pre-processed.
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
19
Load large text file for LM pre-training resulting in OOM I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(Dat...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Basically ~600MB txt files(UTF-8) * 59. contents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\n``` Also, it gets stuck for a loooong time at ```Testing the mapped function outputs```, for more than 12 hours(currently ongoing)
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
36
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
It gets stuck while doing `.map()` ? Are you using multiprocessing ? If you could provide a code snippet it could be very useful
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
24
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
From transformers/examples/language-modeling/run-language-modeling.py : ``` def get_dataset( args: DataTrainingArguments, tokenizer: PreTrainedTokenizer, evaluate: bool = False, cache_dir: Optional[str] = None, ): file_path = args.eval_data_file if evaluate else args.train_data_file if ...
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
71
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
I am not able to reproduce on my side :/ Could you send the version of `datasets` and `pyarrow` you're using ? Could you try to update the lib and try again ? Or do you think you could try to reproduce it on google colab ?
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
47
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Huh, weird. It's fixed on my side too. But now ```Caching processed dataset``` is taking forever - how can I disable it? Any flags?
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
24
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Right after `Caching processed dataset`, your function is applied to the dataset and there's a progress bar that shows how much time is left. How much time does it take for you ? Also caching isn't supposed to slow down your processing. But if you still want to disable it you can do `.map(..., load_from_cache_file=F...
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
55
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
Ah, it’s much faster now(Takes around 15~20min). BTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :(
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
29
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
> Ah, it’s much faster now(Takes around 15~20min). Glad to see that it's faster now. What did you change exactly ? > BTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :( Oh I didn't know about that. Feel free to open an is...
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
92
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...
https://github.com/huggingface/datasets/issues/630
Text dataset not working with large files
>>> Glad to see that it's faster now. What did you change exactly ? I don't know, it just worked...? Sorry I couldn't be more helpful. Setting with numpy array is a great idea! Thanks.
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
35
Text dataset not working with large files ``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir...