| html_url (string, length 48-51) | title (string, length 5-268) | comments (string, length 70-51.8k) | body (string, length 0-29.8k) | comment_length (int64, 16-1.52k) | text (string, length 164-54.1k) | embeddings (list, length 768) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/546 | Very slow data loading on large dataset | Also, @agemagician you have to follow the step I indicated in my previous message [here](https://github.com/huggingface/nlp/issues/546#issuecomment-684648927) to use the new text loading script.
Just doing `pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` like you did w... | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small, about 1 GB, but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_fil... | 46 | Very slow data loading on large dataset
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small, about 1 GB, but it doesn't scale.
It also uses a single thread durin... | [
-0.15381377935409546,
-0.21949464082717896,
-0.09504812210798264,
0.1538352072238922,
-0.023923972621560097,
0.04571409896016121,
0.1359894722700119,
0.4179362952709198,
0.31395360827445984,
-0.1768573522567749,
0.13525426387786865,
0.24177992343902588,
-0.13237060606479645,
0.257071197032... |
https://github.com/huggingface/datasets/issues/546 | Very slow data loading on large dataset | No problem, I will regenerate it. That will let us see whether we have solved both issues and whether both the data generation step and the hashing step are now fast. | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small, about 1 GB, but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_fil... | 31 | Very slow data loading on large dataset
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small, about 1 GB, but it doesn't scale.
It also uses a single thread durin... | [
-0.1519211083650589,
-0.14883294701576233,
-0.11571253091096878,
0.23963642120361328,
0.03762277215719223,
0.04945758357644081,
0.13784225285053253,
0.41570284962654114,
0.22574345767498016,
-0.2202199250459671,
0.18117448687553406,
0.2084626406431198,
-0.14864467084407806,
0.2225391119718... |
https://github.com/huggingface/datasets/issues/546 | Very slow data loading on large dataset | Ok so now the text files won't be hashed.
I also updated #548 to include this change.
Let us know if it helps @agemagician :) | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small, about 1 GB, but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_fil... | 25 | Very slow data loading on large dataset
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small, about 1 GB, but it doesn't scale.
It also uses a single thread durin... | [
-0.1468622237443924,
-0.1677475869655609,
-0.11598917096853256,
0.2507624328136444,
0.011675553396344185,
0.12867335975170135,
0.20286056399345398,
0.43605637550354004,
0.28534242510795593,
-0.21578577160835266,
0.14113685488700867,
0.18515856564044952,
-0.17779096961021423,
0.177263125777... |
https://github.com/huggingface/datasets/issues/539 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | Hi @gaguilar
If you want to take care of this, it is very simple: you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) and runni... | Hi,
There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset.
How can I update the checksum of the library to solve this issue? The error is below and it also appea... | 68 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
Hi,
There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset.
How can I update t... | [
-0.06165247783064842,
0.44370290637016296,
-0.058161377906799316,
0.06641692668199539,
-0.30721062421798706,
0.11532735824584961,
-0.27325019240379333,
0.5395025610923767,
-0.0016359854489564896,
-0.11965851485729218,
0.08576350659132004,
0.22511529922485352,
0.07897767424583435,
-0.122294... |
https://github.com/huggingface/datasets/issues/539 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | Hi @thomwolf
Thanks for the details! I just created a PR with the updated `dataset_infos.json` file (#550). | Hi,
There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset.
How can I update the checksum of the library to solve this issue? The error is below and it also appea... | 17 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
Hi,
There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset.
How can I update t... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | `NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.
Either the file you downloaded was corrupted along the way, or the host updated the file.
Could you try to clear your cache and run `load_dataset` again? If the error is still there, it means that there was an update i... | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 68 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
-0.34843987226486206,
0.3749223053455353,
-0.022354383021593094,
0.24240745604038239,
0.2333742082118988,
0.05917945131659508,
0.2440309375524521,
0.4674772322177887,
0.2553079426288605,
-0.14036880433559418,
-0.06672495603561401,
-0.09044588357210159,
-0.3057556748390198,
0.12286902964115... |
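The checksum logic described in the comment above can be sketched in a few lines. This is an illustrative stand-in, not the `nlp` library's actual implementation: the `verify_checksum` helper, the payload bytes, and the recorded digest are all hypothetical names introduced here.

```python
import hashlib

class NonMatchingChecksumError(Exception):
    """Raised when a file's checksum differs from the recorded one."""

def verify_checksum(data: bytes, expected_sha256: str) -> None:
    # Compare the SHA-256 of the downloaded bytes against the recorded digest.
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise NonMatchingChecksumError(f"expected {expected_sha256}, got {actual}")

payload = b"RACE dataset archive bytes"          # stand-in for the download
recorded = hashlib.sha256(payload).hexdigest()   # digest stored at release time

verify_checksum(payload, recorded)               # unchanged file: passes
try:
    verify_checksum(payload + b"!", recorded)    # simulated upstream update
except NonMatchingChecksumError:
    print("checksum mismatch detected")
```

This is why both a corrupted download and a host-side update of the file raise the same error: in either case the bytes on disk no longer hash to the digest recorded when the dataset script was generated.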
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | I just cleared the cache and ran it again. The error persists :(
```
nlp (master) $ rm -rf /Users/abarbosa/.cache/huggingface/
nlp (master) $ python
Python 3.8.5 (default, Aug 5 2020, 03:39:04)
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
... | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 147 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | I'm dealing with the same issue; please update the checksum on the nlp library's end. The data seems to have changed on their end. | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 22 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | We have a discussion on this dataset here: https://github.com/huggingface/nlp/pull/540
Feel free to participate if you have an opinion on the scope of data that should be included in this dataset. | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 30 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia a... | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 61 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | > At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia... | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 67 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | > > At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikiped... | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | 108 | [Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/534 | `list_datasets()` is broken. | Thanks for reporting!
This has been fixed in #475 and the fix will be available in the next release. | version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/py... | 20 | `list_datasets()` is broken.
version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.... | [
-0.2925504744052887,
0.1287723183631897,
-0.11031749844551086,
0.25301191210746765,
0.15474098920822144,
0.06974969059228897,
0.3828651010990143,
0.4221464991569519,
-0.07663951069116592,
-0.04944173991680145,
-0.17518746852874756,
0.4431261718273163,
-0.24997739493846893,
-0.0311814341694... |
https://github.com/huggingface/datasets/issues/534 | `list_datasets()` is broken. | What you can do instead to get the list of the datasets is call
```python
print([dataset.id for dataset in nlp.list_datasets()])
``` | version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/py... | 21 | `list_datasets()` is broken.
version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Could you try to run `dataset = load_dataset("text", data_files=file_path, split="train")` once before calling the script?
It looks like several processes try to create the dataset in arrow format at the same time. If the dataset is already created, it should be fine. | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 43 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
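The "run the dataset preparation once before spawning the training processes" advice above follows a general build-once, reuse-everywhere pattern. Here is a minimal sketch of that pattern; the `load_or_build` helper and the cache-file layout are illustrative assumptions, not the `nlp` library's actual caching code.

```python
import os
import tempfile

def load_or_build(cache_path: str, build) -> str:
    # First caller builds and publishes the cache; later callers (e.g. the
    # processes forked by xla_spawn.py) only read the existing file.
    if not os.path.exists(cache_path):
        tmp = cache_path + ".tmp"
        with open(tmp, "w") as f:
            f.write(build())
        os.replace(tmp, cache_path)  # atomic rename: no partially written cache
    with open(cache_path) as f:
        return f.read()

cache = os.path.join(tempfile.mkdtemp(), "dataset.arrow")
first = load_or_build(cache, lambda: "tokenized rows")   # builds the file
second = load_or_build(cache, lambda: "never called")    # reuses the file
```

The atomic rename is the key detail: concurrent workers that race past the existence check never observe a half-written cache file, which is the kind of collision that produces "File exists" errors when several TPU processes prepare the same dataset simultaneously.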
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Thanks! I tested on 328MB text data on `n1-standard-8 (8 vCPUs, 30 GB memory)`. The main script ran without any issue, but it seems to require a huge space in the drive.
As suggested, I ran the following script before running the pre-training command with `xla_spawn.py`.
```python
from nlp import load_dataset
... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 336 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Again it looks like every process tries to tokenize the full dataset at the same time.
If you do the tokenization once before calling `xla_spawn.py`, then each process will use the tokenized cache file `cache-f90f341e5308a74698d872bcc88f9c0e.arrow` and not recompute it.
Not sure if there's a better way to do ... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 53 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | I wrote a separate script just for preparing a cached file, including tokenization. Each process did use the tokenized cached file.
Currently I'm testing the pipeline on 24 GB of text data. It took about 1.5 hours to create a cached file on `n1-highmem-16 (16 vCPUs, 104 GB memory)`. I assume loading this cached file in t...
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 127 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Sorry, I thought it was working, but actually the second call doesn't use the cached file that was generated separately, and it will generate another cache-****.arrow file with a different name. If I run the training script again (with `xla_spawn.py`), it will use the second cached file, which was generated by the tra... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 124 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | So if I understand correctly, it means that the cache file generated by your separate script is different from the one used by the training script? | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 27 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Yes.
1. `cache-69633651476e943b93c89ace715f9487.arrow` was generated with a separate script.
2. I ran the entire script with `xla_spawn.py`.
3. `cache-69633651476e943b93c89ace715f9487.arrow` is not used.
4. `cache-0d77dfce704493dbe63f071eed6a5431.arrow` is created.
5. training starts...
Now, if I kill the pr... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 85 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | 1. Here's the log from the first step.
```
Downloading and preparing dataset text/default-e84dd29acc4ad9ef (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/
447f2bcfa2a721a37bc8fdf23800... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 539 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Thanks for all the details.
The two cached files are supposed to be the same. I suspect that the caching has a problem with the tokenizer.
Which tokenizer did you use? | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 32 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | I trained a byte-level BPE tokenizer on my data with the `tokenizers` library following this [example](https://github.com/huggingface/tokenizers/blob/master/bindings/python/examples/train_bytelevel_bpe.py).
And I put these model files in a directory named `"model_name"`. I also put config.json, which is the original RoB... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 73 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | In my separate caching script, I'm using `use_fast=True` when initializing the tokenizer.
```python
tokenizer = AutoTokenizer.from_pretrained(args.config_name, use_fast=True)
```
I wasn't using that option in the main script. That could be the reason... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 33 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
…(same 768-dim embedding as the first row for this issue)… |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Yea it could definitely explain why you have two different cache files.
Let me know if using the same tokenizers on both sides fixes the issue | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 26 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | It still creates a new file even if I remove `use_fast=True`...
Here's the script used to create a cached file.
```python
#!/usr/bin/env python3
import argparse
from transformers import AutoTokenizer
from nlp import load_dataset
def main():
parser = argparse.ArgumentParser(description='descrip... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 207 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | You need this part in the main script or it will use the dataset that is not tokenized
| Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 18 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | I can see that the tokenizer in `run_language_modeling.py` is not instantiated the same way as in your separated script.
Indeed we can see L196:
```python
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
```
Could you try to make it so they are instantiated the e... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 46 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | I updated my separated script, but it's creating a cached file again. If I don't use the `model_args.cache_dir`, both will get `None`, so they should be the same.
```python
#!/usr/bin/env python3
import argparse
from transformers import AutoTokenizer
from nlp import load_dataset
def main():
parser = ar... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 143 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Could you also check that the `args.block_size` used in the lambda function is the same as well ? | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 18 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
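The rows above circle around one mechanism: a new cache file appears whenever any preprocessing option (`use_fast`, `args.block_size`, the tokenizer init args) differs between the two scripts, because the cache name is derived from a fingerprint of those inputs. A minimal pure-Python sketch of that idea — illustrative only, not the actual `nlp` hashing code:

```python
import hashlib

def cache_file_name(dataset_id: str, **preprocessing_params) -> str:
    """Derive a deterministic cache file name from the preprocessing
    parameters, so any change (use_fast, block_size, ...) yields a new file."""
    payload = repr(sorted(preprocessing_params.items())).encode("utf-8")
    fingerprint = hashlib.sha256(payload).hexdigest()[:16]
    return f"cache-{dataset_id}-{fingerprint}.arrow"

# Same parameters -> same cache file name; any difference -> a new one.
a = cache_file_name("wiki", use_fast=True, block_size=512)
b = cache_file_name("wiki", use_fast=True, block_size=512)
c = cache_file_name("wiki", use_fast=False, block_size=512)
```

This is why instantiating the tokenizer the exact same way in both scripts matters: two processes only share a cache file if every fingerprinted input matches.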
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | Here's a minimal working example to reproduce this issue.
Assumption:
- You have access to TPU.
- You have installed `transformers` and `nlp`.
- You have tokenizer files (`config.json`, `merges.txt`, `vocab.json`) under the directory named `model_name`.
- You have `xla_spawn.py` (Download from https://github.com... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 482 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | I ended up specifying the `cache_file_name` argument when I call `map` function.
```python
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size),
batched=True,
cache_file_name=cache_file_name)
```
... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | 59 | File exists error when used with TPU
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_lan... | [
-0.07983724772930145,
-0.14423392713069916,
0.1469174325466156,
0.1047479584813118,
0.2392238974571228,
-0.18302634358406067,
0.5247988104820251,
0.25875264406204224,
-0.2769026458263397,
0.14120501279830933,
0.10993175953626633,
-0.25917619466781616,
-0.03965635970234871,
-0.2304338514804... |
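Passing an explicit `cache_file_name` side-steps the fingerprint mismatch because the reuse check then only asks whether that file already exists. A small stand-alone mimic of such a cache-aware `map` (a hypothetical helper, not the `nlp` implementation):

```python
import os
import pickle
import tempfile

def map_with_cache(records, fn, cache_file_name):
    """Apply fn to each record, reusing cache_file_name if it already exists."""
    if os.path.exists(cache_file_name):
        with open(cache_file_name, "rb") as f:
            return pickle.load(f)          # cache hit: nothing recomputed
    result = [fn(r) for r in records]
    with open(cache_file_name, "wb") as f:
        pickle.dump(result, f)             # cache miss: compute and store
    return result

cache = os.path.join(tempfile.mkdtemp(), "tokenized.pkl")
first = map_with_cache(["a b", "c"], lambda t: t.split(), cache)
# Second call is served from the cache file, regardless of its inputs.
second = map_with_cache(["ignored"], lambda t: t.split(), cache)
```

The second call illustrates the trade-off the workaround accepts: with a fixed cache name, staleness checking is on the user.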
https://github.com/huggingface/datasets/issues/525 | wmt download speed example | Thanks for creating the issue :)
The download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.
Is this mirror official ?
Also it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool
Also cc @patric... | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | 59 | wmt download speed example
Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-e... | [
-0.036691997200250626,
-0.29237815737724304,
0.03856395557522774,
0.20778803527355194,
0.02634264901280403,
-0.12157795578241348,
0.13951259851455688,
0.17278990149497986,
0.12272409349679947,
-0.024006646126508713,
0.10713952034711838,
0.44866684079170227,
-0.0746833011507988,
0.075954340... |
https://github.com/huggingface/datasets/issues/525 | wmt download speed example | Shall we host the files ourselves or it is fine to use this mirror in your opinion ? | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | 18 | wmt download speed example
Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-e... | [
-0.09701886773109436,
-0.2302701622247696,
0.07600017637014389,
0.23279888927936554,
-0.19301632046699524,
-0.1516973376274109,
0.40992647409439087,
0.06606203317642212,
0.25848641991615295,
0.0476582795381546,
-0.05514199286699295,
0.26849862933158875,
-0.168104350566864,
0.03842897340655... |
https://github.com/huggingface/datasets/issues/525 | wmt download speed example | Should we add an argument in `load_dataset` to override some URL with a custom URL (e.g. mirror) or a local path?
This could also be used to provide local files instead of the original files as requested by some users (e.g. when you made a dataset with the same format as SQuAD and want to use it instead of the off... | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | 63 | wmt download speed example
Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-e... | [
-0.14017358422279358,
-0.04306548833847046,
0.06624481081962585,
0.0506720170378685,
0.0602065734565258,
-0.18122991919517517,
0.24890580773353577,
0.1119348332285881,
0.11344056576490402,
-0.013772432692348957,
0.06607144325971603,
0.5972368717193604,
-0.057672999799251556,
0.045457534492... |
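The URL-override idea floated above can be reduced to a mapping consulted before each download; a sketch with hypothetical URLs, not an existing `load_dataset` parameter:

```python
def resolve_url(url, overrides=None):
    """Return a mirror URL or local path for url when an override is registered,
    otherwise pass the original URL through unchanged."""
    overrides = overrides or {}
    return overrides.get(url, url)

mirrors = {
    "http://slow-host.example/wmt16-de-en.tgz":
        "https://fast-mirror.example/wmt16-de-en.tgz",
}
resolved = resolve_url("http://slow-host.example/wmt16-de-en.tgz", mirrors)
untouched = resolve_url("http://other.example/file.zip", mirrors)
```

The same hook would cover the local-files use case mentioned in the thread: mapping an original URL to a path on disk.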
https://github.com/huggingface/datasets/issues/525 | wmt download speed example | @lhoestq I think we should host it ourselves. I'll put the subset of wmt (without preprocessed files) that we need on s3 and post a link over the weekend. | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | 29 | wmt download speed example
Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-e... | [
-0.21556590497493744,
-0.12062264233827591,
-0.020493386313319206,
0.13008467853069305,
0.0658164918422699,
-0.1955837905406952,
0.20022286474704742,
0.1801559031009674,
0.14339831471443176,
0.1119241863489151,
-0.03411022201180458,
0.4751238226890564,
-0.17757052183151245,
0.2044602632522... |
https://github.com/huggingface/datasets/issues/525 | wmt download speed example | Is there a solution yet? The download speed is still too slow. 60-70kbps download for wmt16 and around 100kbps for wmt19. @sshleifer | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | 22 | wmt download speed example
Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-e... | [
-0.1923045516014099,
-0.21065278351306915,
0.03816923499107361,
0.16114920377731323,
0.09751488268375397,
-0.15854744613170624,
0.19040438532829285,
0.15334023535251617,
0.12041546404361725,
0.0007969891303218901,
0.06805356591939926,
0.45227739214897156,
-0.1424294412136078,
0.11495441943... |
https://github.com/huggingface/datasets/issues/519 | [BUG] Metrics throwing new error on master since 0.4.0 | Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
... | 18 | [BUG] Metrics throwing new error on master since 0.4.0
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add... | [
-0.10627347975969315,
-0.007290058769285679,
0.022217581048607826,
0.22181972861289978,
0.3199838399887085,
-0.08042650669813156,
0.17598029971122742,
0.39019712805747986,
0.03430454805493355,
0.17569862306118011,
-0.08454132080078125,
0.2548734247684479,
-0.42888569831848145,
0.0719977542... |
https://github.com/huggingface/datasets/issues/519 | [BUG] Metrics throwing new error on master since 0.4.0 | Closing - seems to be just forgetting to tokenize. And found the helpful discussion in #137 | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
... | 16 | [BUG] Metrics throwing new error on master since 0.4.0
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add... | [
-0.10258305817842484,
0.030752552673220634,
0.003468491602689028,
0.1723008155822754,
0.29735827445983887,
-0.09955496340990067,
0.16471680998802185,
0.3669821321964264,
-0.06181563436985016,
0.11896298080682755,
-0.06702108681201935,
0.31947463750839233,
-0.3788498342037201,
0.03343711048... |
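The fix the reporter landed on — tokenizing before calling the metric — reflects BLEU's expected input shapes: each prediction is a list of tokens, and each prediction's references are a list of token lists. A plain-Python shape helper (illustrative; whitespace `.split()` stands in for a real tokenizer):

```python
def prepare_bleu_inputs(predictions, references):
    """Tokenize raw strings into the nested shapes BLEU expects:
    predictions: List[List[str]]        (tokens per hypothesis)
    references:  List[List[List[str]]]  (tokens per reference per hypothesis)
    """
    preds = [p.split() for p in predictions]
    refs = [[r.split() for r in refs_for_one] for refs_for_one in references]
    return preds, refs

preds, refs = prepare_bleu_inputs(
    ["the cat sat"],
    [["the cat sat", "a cat was sitting"]],
)
```

Passing `List[List[str]]` of raw strings as references — the shape in the report — is one nesting level short once tokenization is accounted for, which matches the error being resolved by tokenizing first.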
https://github.com/huggingface/datasets/issues/517 | add MLDoc dataset | This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies. | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages... | 16 | add MLDoc dataset
Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories i... | [
-0.3504599630832672,
0.047695282846689224,
-0.17982694506645203,
0.1780564785003662,
-0.04010515287518501,
0.09070207923650742,
0.21025606989860535,
0.03258417919278145,
-0.20794521272182465,
-0.05216022580862045,
-0.09145761281251907,
0.28899505734443665,
-0.38603565096855164,
0.239302188... |
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | 25 | dataset.shuffle(keep_in_memory=True) is never allowed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory o... | [
-0.15739257633686066,
0.11012835055589676,
-0.05681515112519264,
0.12800715863704681,
0.17058579623699188,
-0.06848229467868805,
-0.15489445626735687,
0.2861291468143463,
0.03388092294335365,
0.21171802282333374,
0.06588821858167648,
0.5071398019790649,
-0.3157266676425934,
-0.415928840637... |
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | Maybe I'm a bit tired but I fail to see the issue here.
Since `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no? | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | 30 | dataset.shuffle(keep_in_memory=True) is never allowed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory o... | [
-0.03923515975475311,
0.027958175167441368,
-0.04439690336585045,
0.24104958772659302,
0.11972106248140335,
-0.02150893583893776,
-0.24547089636325836,
0.277064710855484,
-0.053434763103723526,
0.21022391319274902,
0.21283233165740967,
0.37444478273391724,
-0.290228009223938,
-0.5664215087... |
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`.
Thus, `select()` is called with `keep_in_memo... | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | 131 | dataset.shuffle(keep_in_memory=True) is never allowed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory o... | [
0.05825241655111313,
0.17153801023960114,
0.023640703409910202,
0.17293517291545868,
0.15414120256900787,
-0.1635662466287613,
-0.3068186044692993,
0.30157020688056946,
-0.12715810537338257,
0.20359981060028076,
0.1368856430053711,
0.5172120332717896,
-0.30258092284202576,
-0.4314013719558... |
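The call chain described in this row reduces to the assert's predicate: `shuffle()` fills in a default `cache_file_name` before delegating to `select()`, so the condition can never hold when `keep_in_memory=True`. A minimal reproduction of that logic (the names mirror the issue, not the library internals):

```python
def select_ok(keep_in_memory, cache_file_name):
    """The assert from arrow_dataset.py: the two options must not be combined."""
    return (not keep_in_memory) or cache_file_name is None

def shuffle(keep_in_memory=False, cache_file_name=None):
    if cache_file_name is None:
        # shuffle() defaults the cache file before calling select(),
        # so select() never sees cache_file_name=None.
        cache_file_name = "cache-shuffled.arrow"
    return select_ok(keep_in_memory, cache_file_name)

direct_select = select_ok(keep_in_memory=True, cache_file_name=None)  # passes
via_shuffle = shuffle(keep_in_memory=True)                            # always fails
```

Calling `select()` directly with the defaults passes the check; going through `shuffle()` cannot, which is exactly the bug being reported.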
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed. | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | 16 | dataset.shuffle(keep_in_memory=True) is never allowed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory o... | [
-0.10787259042263031,
0.031477928161621094,
-0.05921577289700508,
0.17983406782150269,
0.22716084122657776,
-0.05696001276373863,
-0.07815898954868317,
0.3618612587451935,
-0.017298275604844093,
0.310203492641449,
0.06176691874861717,
0.5769489407539368,
-0.2770043909549713,
-0.40121304988... |
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | My bad. This is actually not fixed in #513. Sorry about that...
The new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well.
The buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my ... | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | 76 | dataset.shuffle(keep_in_memory=True) is never allowed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory o... | [
-0.16279062628746033,
0.10089720785617828,
-0.047236260026693344,
0.14130493998527527,
0.19698765873908997,
-0.08948785066604614,
-0.07246893644332886,
0.3746209144592285,
0.02849728614091873,
0.3297639489173889,
0.05408607795834541,
0.5689107179641724,
-0.2949672043323517,
-0.407060801982... |
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | 17 | dataset.shuffle(keep_in_memory=True) is never allowed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory o... | [
-0.16453984379768372,
0.04706374928355217,
-0.06451323628425598,
0.1816314458847046,
0.22622837126255035,
-0.06582216918468475,
-0.0984029769897461,
0.3320511281490326,
-0.007561858277767897,
0.339921236038208,
0.04442260414361954,
0.5816398859024048,
-0.2912329435348511,
-0.42862540483474... |
https://github.com/huggingface/datasets/issues/511 | dataset.shuffle() and select() resets format. Intended? | Hi @vegarab yes feel free to open a discussion here.
This design choice was not very much thought about.
Since `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).
... | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later... | 164 | dataset.shuffle() and select() resets format. Intended?
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the process... | [
-0.138212651014328,
-0.22472034394741058,
0.023964349180459976,
0.24767418205738068,
0.12812045216560364,
-0.13620170950889587,
0.2032124400138855,
0.0824568048119545,
-0.6032927632331848,
-0.00227336841635406,
-0.2842715084552765,
0.32275083661079407,
-0.1951236128807068,
0.17383168637752... |
https://github.com/huggingface/datasets/issues/511 | dataset.shuffle() and select() resets format. Intended? | I think it's ok to keep the format.
If we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed. | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later... | 32 | dataset.shuffle() and select() resets format. Intended?
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the process... | [
-0.138212651014328,
-0.22472034394741058,
0.023964349180459976,
0.24767418205738068,
0.12812045216560364,
-0.13620170950889587,
0.2032124400138855,
0.0824568048119545,
-0.6032927632331848,
-0.00227336841635406,
-0.2842715084552765,
0.32275083661079407,
-0.1951236128807068,
0.17383168637752... |
https://github.com/huggingface/datasets/issues/511 | dataset.shuffle() and select() resets format. Intended? | Since datasets 1.0.0 the format is not reset anymore.
Closing this one, but feel free to re-open if you have other questions | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later... | 22 | dataset.shuffle() and select() resets format. Intended?
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the process... | [
-0.138212651014328,
-0.22472034394741058,
0.023964349180459976,
0.24767418205738068,
0.12812045216560364,
-0.13620170950889587,
0.2032124400138855,
0.0824568048119545,
-0.6032927632331848,
-0.00227336841635406,
-0.2842715084552765,
0.32275083661079407,
-0.1951236128807068,
0.17383168637752... |
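The post-1.0 behaviour described here — derived datasets keeping the parent's format — amounts to copying the format state onto the non-destructively created object. A toy illustration with a hypothetical class, not the real `Dataset`:

```python
import copy

class ToyDataset:
    def __init__(self, rows, fmt=None):
        self.rows = rows
        self.format = fmt  # e.g. {"type": "torch", "columns": ["input_ids"]}

    def set_format(self, type, columns):
        self.format = {"type": type, "columns": list(columns)}

    def select(self, indices):
        # Non-destructive: return a new dataset that inherits the format
        # instead of re-initializing it from scratch.
        return ToyDataset([self.rows[i] for i in indices],
                          fmt=copy.deepcopy(self.format))

ds = ToyDataset([{"input_ids": i} for i in range(5)])
ds.set_format("torch", ["input_ids"])
subset = ds.select([0, 2])
```

Under the pre-1.0 design choice, `select()` would have passed `fmt=None` instead, which is the reset the issue reported.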
https://github.com/huggingface/datasets/issues/509 | Converting TensorFlow dataset example | Do you want to convert a dataset script to the tfds format ?
If so, we currently have a conversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.
I think it shouldn't be too hard to do the changes in reverse (with some manual adjustments).
If you manage to make it work in reve... | Hi,
I want to use TensorFlow datasets with this repo, I noticed you made some conversion script,
can you give a simple example of using it?
Thanks
| 73 | Converting TensorFlow dataset example
Hi,
I want to use TensorFlow datasets with this repo, I noticed you made some conversion script,
can you give a simple example of using it?
Thanks
Do you want to convert a dataset script to the tfds format ?
If so, we currently have a conversion script nlp/commands/conv... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/508 | TypeError: Receiver() takes no arguments | Which version of Apache Beam do you have (can you copy your full environment info here)? | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | 16 | TypeError: Receiver() takes no arguments
I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_p... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/508 | TypeError: Receiver() takes no arguments | apache-beam==2.23.0
nlp==0.4.0
For me this was resolved by running the same python script on Linux (or really WSL). | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | 18 | TypeError: Receiver() takes no arguments
I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_p... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/508 | TypeError: Receiver() takes no arguments | Do you manage to run a dummy beam pipeline with python on windows ?
You can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py)
If you get the same error, it means that the issue comes from apache beam.
Otherwise we'll investigat... | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | 45 | TypeError: Receiver() takes no arguments
I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_p... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/508 | TypeError: Receiver() takes no arguments | Still, same error, so I guess it is on apache beam then.
Thanks for the investigation. | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | 16 | TypeError: Receiver() takes no arguments
I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_p... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/508 | TypeError: Receiver() takes no arguments | Thanks for trying
Let us know if you find clues of what caused this issue, or if you find a fix | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | 21 | TypeError: Receiver() takes no arguments
I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_p... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/507 | Errors when I use | Looks like an issue with 3.0.2 transformers version. Works fine when I use "master" version of transformers. | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2** code.
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoToke... | 17 | Errors when I use
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2** code.
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/501 | Caching doesn't work for map (non-deterministic) | Thanks for reporting !
To store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.
The hash doesn't seem to stay the same over sessions for the tokenizer.
Apparently this is because the regex at `tokenizer.pat` is not well supported by our hashing function.
I'm... | The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers
def main():
ds = nlp.load_dataset("reddit", split="train[:500]")
tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")
def conv... | 59 | Caching doesn't work for map (non-deterministic)
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers
def main():
ds = nlp.load_dataset("reddit", split="train[:500]")
tokenizer = transformers.Au... | [
… 768-dim embeddings vector, values truncated …
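A toy illustration of the fingerprinting idea described in the comment above — this is a hypothetical sketch, not the library's actual hashing code. Hashing a function's bytecode and constants stays stable across sessions for plain functions; objects whose serialization is not deterministic (like the regex at `tokenizer.pat`) are exactly what breaks this kind of scheme:

```python
import hashlib

def code_fingerprint(func):
    """Hypothetical cache key: hash the function's bytecode and constants."""
    code = func.__code__
    payload = code.co_code + repr(code.co_consts).encode("utf-8")
    return hashlib.md5(payload).hexdigest()

f1 = lambda x: x + 1
f2 = lambda x: x + 1  # same source, different object

# Identical definitions produce identical fingerprints, so the cache can hit.
print(code_fingerprint(f1) == code_fingerprint(f2))  # → True
```

If any object captured by the mapped function hashes differently from one session to the next, the computed cache file name changes and the dataset gets re-processed.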
https://github.com/huggingface/datasets/issues/492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.
Could you try to update `nlp` ?
Also, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack. | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dse... | 35 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("titl... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.
Since `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There... | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dse... | 55 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("titl... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | I'm doing a refactor of type inference in #363 . Both text fields should match after that | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dse... | 17 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("titl... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | It should be good now. I was able to run
```python
>>> from nlp import concatenate_datasets, load_dataset
>>>
>>> bookcorpus = load_dataset("bookcorpus", split="train")
>>> wiki = load_dataset("wikipedia", "20200501.en", split="train")
>>> wiki.remove_columns_("title") # only keep the text
>>>
>>> assert boo... | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dse... | 48 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("titl... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/488 | issues with downloading datasets for wmt16 and wmt19 | I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:
```
wget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00
wget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01
wget... | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no... | 37 | issues with downloading datasets for wmt16 and wmt19
I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save ... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/488 | issues with downloading datasets for wmt16 and wmt19 | Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing.
Fixed locally for summarization needs, by running:
```
pip install sacrebleu
sacrebleu -t wmt19 -l ru-en --echo src > test.source
sacrebleu -t wmt19 -l ru-en --echo ref > test.target
```
h/t @sshleifer | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no... | 45 | issues with downloading datasets for wmt16 and wmt19
I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save ... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).
Do you know if there exist some copies without this issue ?
How would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Coul... | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 69 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | I'm afraid that I don't know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue.
Going through the raw text in this version, it's apparent that NLTK's TreebankWordTokenizer was applied on it (I gave some examples in my original post), f... | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 146 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | Ok I get it, that would be very cool indeed
What kinds of patterns the detokenizer can't retrieve ? | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 19 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, while tokenizing the following text:
```
Dwayne "The Rock" Johnson
```
will result in:
```
Dwayne `` The Rock '' Johnson
```
where the left and right quotation marks are turned into distinct symbols. ... | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 244 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
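The kind of partial reversal being discussed can be sketched with a few substitutions — a hypothetical `rough_detokenize` helper, not NLTK's actual `TreebankWordDetokenizer`. Only the unambiguous artifacts are recoverable this way:

```python
import re

# Illustrative rules only; genuinely ambiguous patterns (the cases raised
# in the comments above) cannot be recovered by simple substitution.
RULES = [
    (r"`` ", '"'),            # opening double quote
    (r" ''", '"'),            # closing double quote
    (r" n't", "n't"),         # contractions split by the tokenizer
    (r" 's", "'s"),
    (r" ([,.;:?!])", r"\1"),  # space inserted before punctuation
]

def rough_detokenize(text):
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text

print(rough_detokenize("Dwayne `` The Rock '' Johnson"))  # → Dwayne "The Rock" Johnson
print(rough_detokenize("I did n't say that ."))           # → I didn't say that.
```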
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https://huggingface.co/datasets/bookcorpus
Or does this preprocessing exactly match that of the papers? | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 37 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | I believe these are just artifacts of this particular source. It might be better to crawl it again, or use another preprocessed source, as found here: https://github.com/soskek/bookcorpus | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 27 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | Yes actually the BookCorpus on huggingface is based on [this](https://github.com/soskek/bookcorpus/issues/24#issuecomment-643933352). And I kind of regret naming it as "BookCorpus" instead of something like "BookCorpusLike".
But there is good news! @shawwn has replicated BookCorpus in his way, and also provided a ... | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | 60 | Bookcorpus data contains pretokenized text
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/482 | Bugs : dataset.map() is frozen on ELI5 | This comes from an overflow in pyarrow's array.
It is stuck inside the loop that reduces the batch size to avoid the overflow.
I'll take a look | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta... | 27 | Bugs : dataset.map() is frozen on ELI5
Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.... | [
… 768-dim embeddings vector, values truncated …
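The reduction loop mentioned above can be sketched in pure Python (a hypothetical stand-in, not the actual `nlp`/pyarrow writer): on overflow the batch size is halved and the write retried, so a guard that never terminates — e.g. one that mishandles an empty list — shows up as a frozen `dataset.map()`:

```python
def write_in_batches(examples, write_batch):
    """Halve the batch size on overflow and retry (illustrative only)."""
    batch_size = len(examples)
    while batch_size > 0:
        try:
            for start in range(0, len(examples), batch_size):
                write_batch(examples[start:start + batch_size])
            return batch_size  # the batch size that finally worked
        except OverflowError:
            batch_size //= 2  # shrink and retry, like the overflow guard
    raise OverflowError("even a batch of one example overflows")

def fake_writer(batch):
    # Pretend that batches of more than 2 rows overflow the Arrow array.
    if len(batch) > 2:
        raise OverflowError

print(write_in_batches(list(range(5)), fake_writer))  # → 2
```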
https://github.com/huggingface/datasets/issues/482 | Bugs : dataset.map() is frozen on ELI5 | I created a PR to fix the issue.
It was due to an overflow check that mishandled an empty list.
You can try the changes by using
```
!pip install git+https://github.com/huggingface/nlp.git@fix-bad-type-in-overflow-check
```
Also I noticed that the first 1000 examples have an empty list in the `title_urls`... | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta... | 147 | Bugs : dataset.map() is frozen on ELI5
Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/482 | Bugs : dataset.map() is frozen on ELI5 | @lhoestq mapping the function `make_input_target` now passes with your fix.
However, there is another error in the final step of `valid_dataset.map(convert_to_features, batched=True)`
`ArrowInvalid: Could not convert Thepiratebay.vg with type str: converting to null type`
(The [same colab notebook above with ne... | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta... | 94 | Bugs : dataset.map() is frozen on ELI5
Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/482 | Bugs : dataset.map() is frozen on ELI5 | I got this issue too and fixed it by specifying `writer_batch_size=3_000` in `.map`.
This is because Arrow didn't expect `Thepiratebay.vg` in `title_urls`, as all previous examples have empty lists in `title_urls`
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta... | 33 | Bugs : dataset.map() is frozen on ELI5
Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/478 | Export TFRecord to GCP bucket | Nevermind, I restarted my python session and it worked fine...
---
I had an authentication error, and I authenticated from another terminal. After that, no more error, but it was not working. Restarting the session makes it work :)
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.
`dataset.export('local.tfrecord')` works fine,
but `dataset.... | 39 | Export TFRecord to GCP bucket
Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')`
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.
`dataset.export('local.tfrecord... | [
… 768-dim embeddings vector, values truncated …
https://github.com/huggingface/datasets/issues/477 | Overview.ipynb throws exceptions with nlp 0.4.0 | Thanks for reporting this issue
There was a bug where numpy arrays would get returned instead of tensorflow tensors.
This is fixed on master.
I tried to re-run the colab and encountered this error instead:
```
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'
... | with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: trai... | 83 | Overview.ipynb throws exceptions with nlp 0.4.0
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-4890... | [
-0.0391484797000885,
0.0825120285153389,
-0.0073066516779363155,
-0.053079769015312195,
0.03180349990725517,
-0.048214226961135864,
0.8157610297203064,
0.27924036979675293,
-0.06842253357172012,
0.17899121344089508,
0.03167039155960083,
0.2038189172744751,
-0.29703065752983093,
-0.05307336... |
https://github.com/huggingface/datasets/issues/477 | Overview.ipynb throws exceptions with nlp 0.4.0 | Hi, I got another error (on Colab):
```python
# You can read a few attributes of the datasets before loading them (they are python dataclasses)
from dataclasses import asdict
for key, value in asdict(datasets[6]).items():
print('👉 ' + key + ': ' + str(value))
-------------------------------------------... | with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: trai... | 110 | Overview.ipynb throws exceptions with nlp 0.4.0
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-4890... | [
-0.0391484797000885,
0.0825120285153389,
-0.0073066516779363155,
-0.053079769015312195,
0.03180349990725517,
-0.048214226961135864,
0.8157610297203064,
0.27924036979675293,
-0.06842253357172012,
0.17899121344089508,
0.03167039155960083,
0.2038189172744751,
-0.29703065752983093,
-0.05307336... |
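A recurring cause of these TF conversion errors is variable-length columns: a regular tensor needs a rectangular batch. A minimal sketch of the padding step (what `to_tensor()` on a RaggedTensor, or the tokenizer's `padding=True`, takes care of):

```python
def pad_batch(sequences, pad_id=0):
    """Pad variable-length token-id lists to the longest one so the batch is rectangular."""
    width = max(len(seq) for seq in sequences)
    return [seq + [pad_id] * (width - len(seq)) for seq in sequences]

padded = pad_batch([[101, 7, 102], [101, 102]])
```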
https://github.com/huggingface/datasets/issues/474 | test_load_real_dataset when config has BUILDER_CONFIGS that matter | The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing`
As mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS | If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa... | 43 | test_load_real_dataset when config has BUILDER_CONFIGS that matter
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_datase... | [
-0.2808130383491516,
0.038649044930934906,
0.01634429395198822,
0.17772145569324493,
0.05833282694220543,
0.2016589343547821,
0.09010370075702667,
0.3195054829120636,
-0.16773656010627747,
0.05236176401376724,
0.042072974145412445,
0.35784322023391724,
-0.16560369729995728,
-0.065439611673... |
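The testing scheme proposed above (one fast test on the first config, plus one test covering every config) can be sketched with a dummy builder; all names here are hypothetical:

```python
class DummyBuilder:
    """Stand-in for a dataset builder with several custom configs."""
    BUILDER_CONFIGS = [{"name": "en"}, {"name": "de"}, {"name": "fr"}]

def load_with_config(builder, name: str) -> str:
    """Pretend to load a dataset for the given config name."""
    if not any(cfg["name"] == name for cfg in builder.BUILDER_CONFIGS):
        raise ValueError("Config name is missing")
    return f"dataset[{name}]"

# Fast check: only the first config.
first = load_with_config(DummyBuilder, DummyBuilder.BUILDER_CONFIGS[0]["name"])

# Exhaustive check: every config in BUILDER_CONFIGS.
all_loaded = [load_with_config(DummyBuilder, cfg["name"]) for cfg in DummyBuilder.BUILDER_CONFIGS]
```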
https://github.com/huggingface/datasets/issues/474 | test_load_real_dataset when config has BUILDER_CONFIGS that matter | This was fixed in #527
Closing this one, but feel free to re-open if you have other questions | If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa... | 18 | test_load_real_dataset when config has BUILDER_CONFIGS that matter
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_datase... | [
-0.3385564088821411,
-0.053646985441446304,
0.04242774471640587,
0.11086296290159225,
0.16344083845615387,
0.07833638787269592,
0.12288331240415573,
0.3306729793548584,
-0.0973314717411995,
-0.0040304409340023994,
0.05377328768372536,
0.4402483403682709,
-0.15711860358715057,
0.04146115109... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | Hi ! Did you try to set the output format to pytorch ? (or tensorflow if you're using tensorflow)
It can be done with `dataset.set_format("torch", columns=columns)` (or "tensorflow").
Note that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the list of column... | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 57 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | Hello. Yes, I did set the output format as below for the two columns
`train_dataset.set_format('torch',columns=['Text','Label'])`
| I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 16 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | I think you're having this issue because you try to format strings as pytorch tensors, which is not possible.
Indeed by having "Text" in `columns=['Text','Label']`, you try to convert the text values to pytorch tensors.
Instead, I recommend you first tokenize your dataset using a tokenizer from transformers. For ... | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 133 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
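The advice in this thread, keeping string columns out of `set_format("torch", columns=...)`, can be automated with a small helper. This is a sketch only; the helper is hypothetical and not part of the library:

```python
def tensorable_columns(example: dict) -> list:
    """Return the columns whose values can become tensors: numbers, or lists of numbers.
    Raw string columns like 'Text' must be excluded from torch formatting."""
    def is_tensorable(value) -> bool:
        if isinstance(value, (int, float)):
            return True
        return isinstance(value, list) and all(isinstance(v, (int, float)) for v in value)
    return [name for name, value in example.items() if is_tensorable(value)]

cols = tensorable_columns({"Text": "some text", "input_ids": [101, 102], "Label": 1})
```

The resulting list is what you would pass as `columns=` after tokenizing, so only tensor-compatible fields get formatted.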
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | Hi, actually the thing is I am getting the same error even after tokenizing and passing them through batch_encode_plus.
I don't know what the problem is. I even converted it into 'pt' while passing them through batch_encode_plus, but when I am evaluating my model, I am getting this error
----... | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 115 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | > Hi, actually the thing is I am getting the same error even after tokenizing and passing them through batch_encode_plus.
> I don't know what the problem is. I even converted it into 'pt' while passing them through batch_encode_plus, but when I am evaluating my model, I am getting this error
>
... | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 160 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ? | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 21 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | > I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?
I'm sorry, I met this issue in another place (not in the huggingface repo). | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 36 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | @akhilkapil do you have strings in your dataset ? When you set the dataset format to "pytorch" you should exclude columns with strings, as pytorch can't make tensors out of strings | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | 31 | invalid data type 'str' at _convert_outputs in arrow_dataset.py
I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert... | [
-0.22282367944717407,
0.024249443784356117,
0.10737121850252151,
0.23030731081962585,
0.5109527707099915,
-0.10725964605808258,
0.5323303937911987,
0.07759005576372147,
-0.3410615921020508,
-0.12477600574493408,
0.03665774315595627,
0.3864072263240814,
-0.36969542503356934,
0.0422178655862... |
https://github.com/huggingface/datasets/issues/468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | Indeed. Solution 1 is the simplest.
This is actually a recurring problem.
I think we should scan all the datasets with a regexp to fix the use of `open()` without encodings.
And probably add a test in the CI to forbid using this in the future. | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-inp... | 45 | UnicodeDecodeError while loading PAN-X task of XTREME dataset
Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError ... | [
-0.3326014578342438,
-0.09301040321588516,
-0.1336478590965271,
0.25543028116226196,
0.35217538475990295,
-0.027749450877308846,
0.22498254477977753,
0.34964630007743835,
-0.028698710724711418,
0.03822888806462288,
-0.09041059017181396,
0.1502790004014969,
-0.07617444545030594,
-0.19703699... |
https://github.com/huggingface/datasets/issues/468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | I've created a simple function that seems to do the trick:
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"""
... | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-inp... | 200 | UnicodeDecodeError while loading PAN-X task of XTREME dataset
Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError ... | [
-0.3326014578342438,
-0.09301040321588516,
-0.1336478590965271,
0.25543028116226196,
0.35217538475990295,
-0.027749450877308846,
0.22498254477977753,
0.34964630007743835,
-0.028698710724711418,
0.03822888806462288,
-0.09041059017181396,
0.1502790004014969,
-0.07617444545030594,
-0.19703699... |
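A runnable sketch in the spirit of the (truncated) snippet above: rewrite `open(path)` and `open(path, 'r')` calls to pass `encoding='utf-8'`, leaving binary modes alone. The pattern below is a simplified assumption, not the exact one used in the PR:

```python
import re

# Match open(<arg>) or open(<arg>, 'r'), but not binary modes like 'rb'.
OPEN_NO_ENCODING = re.compile(
    r"""open\(\s*
        ([^,)]+?)                    # first argument (captured)
        (?:\s*,\s*['"]r['"])?        # optional plain 'r' mode
        \s*\)""",
    re.VERBOSE,
)

def add_utf8_encoding(source: str) -> str:
    """Add encoding='utf-8' to text-mode open() calls in a source string."""
    return OPEN_NO_ENCODING.sub(r"open(\1, encoding='utf-8')", source)

fixed = add_utf8_encoding("f = open(path)\ng = open(path, 'r')\nh = open(path, 'rb')")
```

Binary calls such as `open(path, 'rb')` do not match the pattern, so they stay untouched.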
https://github.com/huggingface/datasets/issues/468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | I realised I was overthinking the problem, so decided to just run the regexp over the codebase and make the PR. In other words, we can ignore my comments about using the CLI 😸 | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-inp... | 34 | UnicodeDecodeError while loading PAN-X task of XTREME dataset
Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset of the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError ... | [
-0.3326014578342438,
-0.09301040321588516,
-0.1336478590965271,
0.25543028116226196,
0.35217538475990295,
-0.027749450877308846,
0.22498254477977753,
0.34964630007743835,
-0.028698710724711418,
0.03822888806462288,
-0.09041059017181396,
0.1502790004014969,
-0.07617444545030594,
-0.19703699... |
https://github.com/huggingface/datasets/issues/444 | Keep loading old file even I specify a new file in load_dataset | This is the only fix I could come up with without touching the repo's code.
```python
from nlp.builder import FORCE_REDOWNLOAD
dataset = load_dataset('csv', data_file='./a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1')
```
You'll have to change the version each time you want to load a different csv file.
... | I loaded a file called 'a.csv' with
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset still contains the old 'a.csv' and does not load the new csv file.
Even... | 88 | Keep loading old file even I specify a new file in load_dataset
I loaded a file called 'a.csv' with
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset see... | [
-0.08887548744678497,
0.28237995505332947,
-0.0012340639950707555,
0.16190031170845032,
-0.16254766285419464,
0.0777330994606018,
0.143216073513031,
0.27374643087387085,
0.36268478631973267,
-0.11332878470420837,
0.11638201773166656,
0.1898297518491745,
0.18027657270431519,
-0.082129672169... |
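The bug being worked around here is a cache key that ignores which file was loaded. A sketch of the eventual fix: fingerprint the file content so './a.csv' and './b.csv' can never collide (simplified; not the library's actual implementation):

```python
import hashlib
import tempfile
from pathlib import Path

_CACHE = {}

def load_csv_cached(path: str):
    """Cache parsed files under a hash of their *content*, not a fixed name."""
    data = Path(path).read_bytes()
    key = hashlib.sha256(data).hexdigest()
    if key not in _CACHE:
        _CACHE[key] = data.decode("utf-8").splitlines()  # stand-in for real CSV parsing
    return _CACHE[key]

tmp = Path(tempfile.mkdtemp())
(tmp / "a.csv").write_text("x,y\n1,2\n")
(tmp / "b.csv").write_text("x,y\n3,4\n")
rows_a = load_csv_cached(str(tmp / "a.csv"))
rows_b = load_csv_cached(str(tmp / "b.csv"))  # a different file yields a different cache entry
```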
https://github.com/huggingface/datasets/issues/443 | Cannot unpickle saved .pt dataset with torch.save()/load() | This seems to be fixed in a non-released version.
Installing nlp from source
```
git clone https://github.com/huggingface/nlp
cd nlp
pip install .
```
solves the issue. | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype... | 26 | Cannot unpickle saved .pt dataset with torch.save()/load()
Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text... | [
-0.1386733502149582,
-0.1840369999408722,
0.14232327044010162,
0.28548911213874817,
0.3872588872909546,
0.07398682087659836,
0.42423391342163086,
0.1544944792985916,
-0.18186251819133759,
0.0088986586779356,
-0.25410956144332886,
0.7546939849853516,
-0.34292924404144287,
-0.162720710039138... |
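Why can a saved dataset fail to unpickle after a library change? `torch.save()` pickles the object, and unpickling needs the object's class to be importable with a compatible layout at load time. A stdlib-only illustration; the class is a toy stand-in, not the real `nlp.Dataset`:

```python
import io
import pickle

class FormattedDataset:
    """Toy stand-in for a formatted dataset object."""
    def __init__(self, features: dict, fmt: str):
        self.features = features
        self.format = fmt

ds = FormattedDataset({"input_ids": "int64"}, "torch")

buf = io.BytesIO()
pickle.dump(ds, buf)       # roughly what torch.save() does (plus special tensor handling)
buf.seek(0)
loaded = pickle.load(buf)  # works only because FormattedDataset is still importable, unchanged
```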
https://github.com/huggingface/datasets/issues/439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | `DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.
Right now you can experiment with it by installing `transformers` from the master branch.
You can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html).
Moreove... | It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from github in Colab. Is there any dependency on t...
It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nl...
0.03650211542844772,
-0.14092455804347992,
-0.07937219738960266,
-0.13696248829364777,
-0.15599055588245392,
-0.3242332935333252,
-0.15913234651088715,
0.1821870654821396,
-0.34619370102882385,
0.015834258869290352,
-0.06638766080141068,
0.32067564129829407,
-0.11251838505268097,
-0.044296... |
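What adding a dense index buys you, conceptually: embed passages once, then retrieve by inner product. Real code would use `DPRContextEncoder` for the vectors and `dataset.add_faiss_index()` for the search; this toy version uses hand-written 2-d vectors and a linear scan:

```python
def search(index, query_vec, k=1):
    """Return the ids of the k passages with the highest dot product to the query."""
    scored = [
        (sum(q * x for q, x in zip(query_vec, vec)), passage_id)
        for passage_id, vec in index
    ]
    return [passage_id for _, passage_id in sorted(scored, reverse=True)[:k]]

index = [("p0", [1.0, 0.0]), ("p1", [0.0, 1.0]), ("p2", [0.7, 0.7])]
top = search(index, [0.9, 0.1], k=2)
```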
https://github.com/huggingface/datasets/issues/439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | @lhoestq I tried installing transformers from the master branch. Python imports for DPR again didn't work. Anyway, looking forward to trying it in the next release of nlp | It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from github in Colab. Is there any dependency on t...
It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nl...
0.05924390256404877,
-0.014179443009197712,
-0.12304695695638657,
-0.17468445003032684,
-0.21369974315166473,
-0.30943408608436584,
-0.20244024693965912,
0.1878228336572647,
-0.2731277048587799,
-0.018002504482865334,
0.1258617639541626,
0.4488859474658966,
-0.12000568956136703,
0.01280918... |
https://github.com/huggingface/datasets/issues/438 | New Datasets: IWSLT15+, ITTB | Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically takes 1-2 hours to add one. Do you want to give it a try ?
The tutorial on writing a new dataset loading script is here: https://huggingface.co/nlp/add_dataset.html
And the part on how to share a new ... | **Links:**
[iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html)
Don't know if that link is up to date.
[ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/)
**Motivation**: replicate mbart finetuning results (table below)

(Though it's worth noting that pinning the version of pyarrow to 0.16.0 would fix our problem too. But in this case we'll just wait for you all to update) | With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest... | 43 | Google Colab - load_dataset - PyArrow exception
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running thi... | [
-0.36298102140426636,
0.25222259759902954,
0.029283100739121437,
0.06686647981405258,
-0.06560645252466202,
-0.317952960729599,
0.3687325716018677,
0.14556506276130676,
-0.13778656721115112,
0.15375657379627228,
-0.09344098716974258,
0.42297065258026123,
-0.11837048083543777,
0.02392191439... |
https://github.com/huggingface/datasets/issues/436 | Google Colab - load_dataset - PyArrow exception | Came to raise this issue; great to see others already have, and it's being fixed so soon!
As an aside, since no one wrote this already: it seems like the version check only looks at the second part of the version number, making sure it is >16, but pyarrow's newest version is 1.0.0, so the second part is 0! | With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest... | 59 | Google Colab - load_dataset - PyArrow exception
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running thi... | [
-0.3563595414161682,
0.2692832946777344,
0.04035410284996033,
0.025508802384138107,
-0.1082252785563469,
-0.2124163806438446,
0.16984067857265472,
0.22092726826667786,
-0.14622630178928375,
0.1146729439496994,
0.09565439820289612,
0.5587818026542664,
-0.2984774708747864,
0.0380703285336494... |
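The bug described above (comparing only one component of the version string) is avoided by comparing versions as tuples of integers. A minimal sketch; real code might prefer `packaging.version.parse`, which also handles pre-releases:

```python
def version_at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings component by component, numerically."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

# pyarrow 1.0.0 really is newer than 0.16.0, even though its second component is 0
ok_new = version_at_least("1.0.0", "0.16.0")
ok_old = version_at_least("0.14.1", "0.16.0")
```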
https://github.com/huggingface/datasets/issues/436 | Google Colab - load_dataset - PyArrow exception | > Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq
Yes definitely | With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest... | 17 | Google Colab - load_dataset - PyArrow exception
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running thi... | [
-0.3427741527557373,
0.29933488368988037,
0.027866246178746223,
0.0008247460355050862,
-0.05337360128760338,
-0.23227818310260773,
0.25824329257011414,
0.1790958195924759,
-0.23289674520492554,
0.10980717092752457,
-0.003500430379062891,
0.6347991228103638,
-0.2590232789516449,
0.110694922... |