| html_url (string, 48–51 chars) | title (string, 5–155 chars) | comments (string, 63–15.7k chars) | body (string, 0–17.7k chars) | comment_length (int64, 16–949) | text (string, 164–23.7k chars) |
|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | Indeed, we convert tensors to lists to be able to write in the arrow format. Because of this conversion we lose the dtype information. We should add dtype detection when we do type inference. However, it would require a bit of refactoring since currently the conversion happens before the type inference.
And then for y... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 156 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | If the arrow format is basically lists, why is the intermediate step to numpy necessary? I am a bit confused about that part.
Thanks for your suggestion. As I have currently implemented this, I cast to torch.Tensor in my collate_fn to save disk space (so I do not have to save padded tensors to max_len but can pad up... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 89 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
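A minimal sketch of the collate_fn approach described in the comment above: rebuild tensors at batch time, force float32 instead of float64, and pad per batch rather than storing padded tensors. The column names ("input_ids", "label") are assumptions for illustration, not taken from the issue.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # Rebuild tensors only at batch time, forcing float32 so the model never sees float64.
    input_ids = [torch.tensor(example["input_ids"], dtype=torch.long) for example in batch]
    labels = torch.tensor([example["label"] for example in batch], dtype=torch.float32)
    # Pad to the longest sequence in this batch instead of a global max_len.
    return {"input_ids": pad_sequence(input_ids, batch_first=True), "labels": labels}
```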
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | I'm glad you managed to figure something out :)
Casting from arrow to numpy can be 100x faster than casting from arrow to list.
This is because arrow has an integration with numpy that allows it to instantiate numpy arrays with zero-copy from arrow.
On the other hand to create python lists it is slow since it has ... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 70 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
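A small illustration of the zero-copy point made above (a hedged sketch, not code from the issue): converting an arrow array to numpy can reuse the underlying buffer, while converting to a python list materializes one python object per element.

```python
import numpy as np
import pyarrow as pa

arr = pa.array(np.arange(1_000_000, dtype=np.float32))
as_numpy = arr.to_numpy()   # fast: zero-copy view over the arrow buffer, dtype preserved
as_list = arr.to_pylist()   # slow: builds a python float object for every element
```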
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | I encountered a similar issue: `datasets` converted my float numpy array to `torch.float64` tensors, while many pytorch operations require `torch.float32` inputs and it's very troublesome.
I tried @lhoestq 's solution, but since it's mixed with the preprocess function, it's not very intuitive.
I just want to sh... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 96 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
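A hedged, self-contained sketch of the kind of fix referenced above: declaring the feature as `float32` so that the formatted torch tensors come back as `torch.float32` rather than `torch.float64`. The column names and values are placeholders made up for illustration.

```python
import numpy as np
from datasets import Dataset, Features, Sequence, Value

data = {"input_values": [np.random.rand(4).astype(np.float32) for _ in range(8)], "label": [0] * 8}
features = Features({"input_values": Sequence(Value("float32")), "label": Value("int64")})
ds = Dataset.from_dict(data, features=features)      # declare float32 so it is not upcast to float64
ds.set_format(type="torch", columns=["input_values", "label"])
print(ds[0]["input_values"].dtype)                   # expected: torch.float32
```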
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | Reopening since @bhavitvyamalik started looking into it !
Also I'm posting here a function that could be helpful to support preserving the dtype of tensors.
It's used to build a pyarrow array out of a numpy array and:
- it doesn't convert the numpy array to a python list
- it keeps the precision of the numpy ar... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 206 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
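The function itself is truncated above, so here is only a hedged stand-in for the general idea (not the code actually posted in the issue): build a pyarrow `ListArray` directly from a numpy array, without going through python lists, so the numpy dtype and precision are preserved.

```python
import numpy as np
import pyarrow as pa

def numpy_2d_to_pyarrow(arr: np.ndarray) -> pa.ListArray:
    # Flatten once, keep the numpy dtype, and rebuild the row structure with offsets.
    n_rows, row_len = arr.shape
    values = pa.array(arr.reshape(-1))  # no intermediate python list, dtype preserved
    offsets = pa.array(np.arange(0, (n_rows + 1) * row_len, row_len, dtype=np.int32))
    return pa.ListArray.from_arrays(offsets, values)

print(numpy_2d_to_pyarrow(np.ones((2, 3), dtype=np.float32)).type)  # list<item: float>
```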
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | @lhoestq Have you thought about this further?
We have a use case where we're attempting to load data containing numpy arrays using the `datasets` library.
When using one of the "standard" methods (`[Value(...)]` or `Sequence()`) we see ~200 samples processed per second during the call to `_prepare_split`. This sl... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 239 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
https://github.com/huggingface/datasets/issues/625 | dtype of tensors should be preserved | Hi !
It would be awesome to achieve this speed for numpy arrays !
For now we have to use `encode_nested_example` to convert numpy arrays to python lists since pyarrow doesn't support multidimensional numpy arrays (only 1D).
Maybe let's start a new PR from your PR @bhavitvyamalik (idk why we didn't answer your PR... | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | 185 | dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-... |
https://github.com/huggingface/datasets/issues/623 | Custom feature types in `load_dataset` from CSV | Currently `csv` doesn't support the `features` attribute (unlike `json`).
What you can do for now is cast the features using the in-place transform `cast_`
```python
from datasets import load_dataset
dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'])
dataset.cast... | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | 38 | Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotio... |
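The `cast_` snippet above is cut off, so here is a hedged, self-contained sketch of the kind of in-place cast being suggested; the label names and file path are placeholders, not values from the issue.

```python
from datasets import ClassLabel, Features, Value, load_dataset

file_dict = {'train': 'train.csv'}  # placeholder path, not the file from the issue
dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'])

features = Features({'text': Value('string'), 'label': ClassLabel(names=['negative', 'positive'])})
dataset.cast_(features)  # in-place transform suggested in the comment above
```

As the follow-up comments in this thread show, this kind of cast could still raise a `ValueError: Target schema's field names are not matching the table's field names` error until the linked fix landed.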
https://github.com/huggingface/datasets/issues/623 | Custom feature types in `load_dataset` from CSV | Hi @lhoestq we've tried out your suggestion but are now running into the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-163-81ffd5ac18c9> in <module>
----> 1 dataset.cast_(... | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | 168 | Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotio... |
https://github.com/huggingface/datasets/issues/623 | Custom feature types in `load_dataset` from CSV | In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq?
Should I add it? | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | 26 | Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotio... |
https://github.com/huggingface/datasets/issues/623 | Custom feature types in `load_dataset` from CSV | > In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq?
>
> Should I add it?
Sure, let's add it. Setting the convert options should do the job.
> Hi @lhoestq we've tried out your suggestion but are now running into the following error:
>
> ```
... | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | 136 | Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotio... |
https://github.com/huggingface/datasets/issues/623 | Custom feature types in `load_dataset` from CSV | PR is open for the `ValueError: Target schema's field names are not matching the table's field names` error.
I'm adding the features parameter to csv | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | 25 | Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotio... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | @thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow says it supports >=3.5.
Linux (Ubuntu 18.04) - Python 3.8
======================
Package - Version
---------------------
certifi 2020.6.20
chardet 3.0.4
click 7.1.2
datasets 1.0.1
di... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 194 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Downgrading to 3.7 does not help. Here is a dummy text file:
```text
Verzekering weigert vaker te betalen
Bedrijven van verzekeringen erkennen steeds minder arbeidsongevallen .
In 2012 weigerden de bedrijven te betalen voor 21.055 ongevallen op het werk .
Dat is 11,8 % van alle ongevallen op het werk .
Nog nooi... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 120 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | @banunitte Please do not post screenshots in the future but copy-paste your code and the errors. That allows others to copy-and-paste your code and test it. You may also want to provide the Python version that you are using. | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 39 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | I have the same problem on Linux with the script crashing with a CSV error. This may be caused by 'CRLF' line endings; after changing 'CRLF' to 'LF', the problem was solved. | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 29 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | I pushed a fix for `pyarrow.lib.ArrowInvalid: CSV parse error`. Let me know if you still have this issue.
Not sure about the Windows one yet | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 25 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default):
``... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 107 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
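A short sketch of the first option described above (the same call also appears verbatim in a later comment in this thread): pass `script_version='master'` so the updated `text` processing script is used.

```python
from datasets import load_dataset

# "data.txt" is the placeholder file from the snippet at the top of this issue.
dataset = load_dataset("text", data_files="data.txt", script_version="master")
```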
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | 
win10, py3.6
```
from datasets import Features, Value, ClassLabel, load_dataset
features = Features({'text': Value('string'), 'ctext': Value('string')})
file_dict = {'train': PATH/'summary.csv'}
... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 31 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | ```python
Traceback (most recent call last):
File "main.py", line 281, in <module>
main()
File "main.py", line 190, in main
train_data, test_data = data_factory(
File "main.py", line 129, in data_factory
train_data = load_dataset('text',
File "/home/me/Downloads/datasets/src/datasets/load.... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 135 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | > 
> win10, py3.6
>
> ```
> from datasets import Features, Value, ClassLabel, load_dataset
>
>
> features = Features({'text': Value('string'), 'ctext': Value('string')})
> file_dict = {'train': PATH/... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 184 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | > To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default):
... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 206 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Hi @raruidol
To fix the RAM issue you'll need to shard your text files into smaller files (see https://github.com/huggingface/datasets/issues/610#issuecomment-691672919 for example)
I'm not sure why you're having the csv error on linux.
Do you think you could try to reproduce it on google colab for example ?
Or s... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 59 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
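A hedged sketch of the sharding suggestion above: split one large text file into smaller shard files before calling `load_dataset`. The paths and shard size are assumptions, not taken from the issue.

```python
from pathlib import Path

def shard_text_file(path, out_dir, lines_per_shard=1_000_000):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    shard, buffer = 0, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            buffer.append(line)
            if len(buffer) >= lines_per_shard:
                (out_dir / f"shard_{shard:04d}").write_text("".join(buffer), encoding="utf-8")
                shard, buffer = shard + 1, []
    if buffer:
        (out_dir / f"shard_{shard:04d}").write_text("".join(buffer), encoding="utf-8")

# usage with placeholder paths: shard_text_file("corpus.txt", "corpora/shards")
```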
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | @lhoestq
The crash message shows up when loading the dataset:
```
print('Loading corpus...')
files = glob.glob('corpora/shards/*')
-> dataset = load_dataset('text', script_version='master', data_files=files)
print('Corpus loaded.')
```
And this is the exact message:
```
Traceback (most recent call last)... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 207 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | I tested on google colab which is also linux using this code:
- first download an arbitrary text file
```bash
wget https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt
```
- then run
```python
from datasets import load_dataset
d = load_dataset("text", data_files="all_train.t... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 156 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Update: also tested the above code in a docker container from [jupyter/minimal-notebook](https://hub.docker.com/r/jupyter/minimal-notebook/) (based on ubuntu) and still not able to reproduce | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 21 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | It looks like your text input file works without any problem. I have been doing some experiments this morning with my input files and I'm almost certain that the crash is caused by some unexpected pattern in the files. However, I've not been able to spot the main cause of it. What I find strange is that this same ... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 92 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Under the hood it does
```python
import pyarrow as pa
import pyarrow.csv
# Use csv reader from Pyarrow with one column for text files
# To force the one-column setting, we set an arbitrary character
# that is not in text files as delimiter, such as \b or \v.
# The bell character, \b, was used to make beeps b... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 107 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
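Expanding the truncated snippet above into a hedged, self-contained sketch of the single-column CSV trick; the exact options used by the `text` script are not shown in full above, so these are assumptions.

```python
import pyarrow as pa
import pyarrow.csv

# Read a plain text file as a one-column CSV by picking a delimiter that is
# assumed never to appear in the text (here the backspace character \b).
read_options = pa.csv.ReadOptions(column_names=["text"])
parse_options = pa.csv.ParseOptions(delimiter="\b", quote_char=False, double_quote=False)
table = pa.csv.read_csv("data.txt", read_options=read_options, parse_options=parse_options)
```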
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Could you try with `\a` instead of `\b` ? It looks like the bell character is \a in python and not \b | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 22 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | I was just exploring if the crash was happening in every shard or not, and which shards were generating the error message. With \b I got the following list of shards crashing:
```
Errors on files: ['corpora/shards/shard_0069', 'corpora/shards/shard_0043', 'corpora/shards/shard_0014', 'corpora/shards/shard_0032', '... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 205 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Hmmm I was expecting it to work with \a, not sure why they appear in your text files though | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 19 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Hi @lhoestq, is there any input length restriction that was not there before the update of the nlp library? | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 18 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | No, we never set any input length restriction on our side (maybe arrow does, but I don't think so) | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 18 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | @lhoestq Can you ever be certain that a delimiter character is not present in a plain text file? In other formats (e.g. CSV), rules are set for what is allowed and what isn't so that it actually constitutes a CSV file. In a text file you basically have "anything goes", so I don't think you can ever be entirely sure tha... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 118 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | Okay, I have split the crashing shards into individual sentences, and some examples of the inputs that are causing the crashes are the following ones:
_4. DE L’ORGANITZACIÓ ESTAMENTAL A L’ORGANITZACIÓ EN CLASSES A mesura que es desenvolupava un sistema econòmic capitalista i naixia una classe burgesa cada vegada... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 949 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | So we're using the csv reader to read text files because arrow doesn't have a text reader.
To work around the fact that text files are just csv with one column, we want to set a delimiter that doesn't appear in text files.
Until now I thought that it would do the job but unfortunately it looks like even characters lik... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 289 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | > Okay, I have split the crashing shards into individual sentences, and some examples of the inputs that are causing the crashes are the following ones
Thanks for digging into it !
Characters like \a or \b are not shown when printing the text, so as it is I can't tell if it contains unexpected characters.
Mayb... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 178 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | That's true, it was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have "\b" at the end? | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 28 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/622 | load_dataset for text files not working | > That's true, it was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have "\b" at the end?
I don't think it would work since we only want one column, and "\b" is set to be the delimiter between two columns, so it will raise the same issue again. Pyarrow would think th... | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | 96 | load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loa... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | It seems that I ran into the same problem
```
def tokenize(cols, example):
    for in_col, out_col in cols.items():
        example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))
    return example
cola = datasets.load_dataset('glue', 'cola')
tokenized_cola = cola.map(partial(token... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 121 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | same problem.
`encoded_dataset = core_data.map(lambda examples: tokenizer(examples["query"], examples["document"], padding=True, truncation='longest_first', return_tensors="pt", max_length=384), num_proc=16, keep_in_memory=True)`
it outputs:
```
Set __getitem__(key) output type to python objects for ['document', 'i... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 301 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Thanks for reporting.
Which tokenizers are you using ? What platform are you on ? Can you tell me which version of datasets and pyarrow you're using ? @timothyjlaurent @richarddwang @HuangLianzhe
Also if you're able to reproduce the issue on google colab that would be very helpful.
I tried to run your code ... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 64 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Hi, sorry that I forgot to check what my version was.
After updating datasets to master (editable install) and the latest pyarrow, it works now ~ | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 26 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Sorry, I just noticed this.
I'm running this on macOS; the version of datasets I was using was 1.0.0, but I've also tried it on 1.0.2. `pyarrow==1.0.1`, Python 3.6
Consider this code:
```python
loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py")
ds = load_dataset(
loader_path, name=... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 289 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | #659 should fix the `KeyError` issue. It was due to the formatting not getting updated the right way | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 18 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Also maybe @n1t0 knows why setting `TOKENIZERS_PARALLELISM=true` creates deadlock issues when calling `map` with multiprocessing ? | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 16 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | @lhoestq
Thanks for taking a look. I pulled the master but I still see the key error.
```
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
#0: 100%|█████████████████... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 299 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | The parallelism is automatically disabled on `tokenizers` when the process gets forked if we have already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf https://github.com/huggingface/tokenizers/issue... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 75 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | > Thanks for taking a look. I pulled the master but I still see the key error.
I am no longer able to get the error since #659 was merged. Not sure why you still have it @timothyjlaurent
Maybe it is a cache issue ? Could you try to use `load_from_cache_file=False` in your `.map()` calls ? | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 56 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | > The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf [huggingface/tokenizers#187](https://github.c... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 140 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Hmmm I pulled the latest commit, `b93c5517f70a480533a44e0c42638392fd53d90`, and I'm still seeing both the hanging and the key error. | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 18 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Hi @timothyjlaurent
The hanging fix just got merged, that's why you still had it.
For the key error it's possible that the code you ran reused cached datasets from where the KeyError bug was still there.
Could you try to clear your cache or make sure that it doesn't reuse cached data with `.map(..., load_from_cac... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 63 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Hi @lhoestq ,
Thanks for letting me know about the update.
So I don't think it's the caching - because the hashing mechanism isn't stable for me -- but that's a different issue. In any case I ran `rm -rf ~/.cache/huggingface` to start from a clean slate.
I synced with master and I see the key error has gone away, I tried w... | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 174 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Thanks for reporting.
I'm going to fix that and add a test case so that it doesn't happen again :)
I'll let you know when it's done.
In the meantime, if you could make a google colab that reproduces the issue it would be helpful ! @timothyjlaurent
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 47 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Thanks @timothyjlaurent ! I just merged a fix on master. I also checked your notebook and it looks like it's working now.
I added some tests to make sure it works as expected now :) | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 35 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | Great, @lhoestq. I'm trying to verify in the colab:
I changed
```
!pip install datasets
```
to
```
!pip install git+https://github.com/huggingface/datasets@master
```
But I'm still seeing the error - I wonder why? | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 32 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | It works on my side @timothyjlaurent on google colab.
Did you try to uninstall datasets first, before updating it to master's version ? | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 23 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/620 | map/filter multiprocessing raises errors and corrupts datasets | I didn't -- it was a new session --- buuut - looks like it's working today -- woot! I'll close this issue. Thanks @lhoestq | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | 24 | map/filter multiprocessing raises errors and corrupts datasets
After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_d... |
https://github.com/huggingface/datasets/issues/619 | Mistakes in MLQA features names | Indeed you're right ! Thanks for reporting that
Could you open a PR to fix the features names ? | I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et... | 19 | Mistakes in MLQA features names
I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA dat... |
https://github.com/huggingface/datasets/issues/617 | Compare different Rouge implementations | Updates - the differences between the following three
(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)
(2) https://github.com/google-research/google-research/tree/master/rouge
(3) https://github.com/pltrdy/files2rouge (used in fairseq)
can be explained by two things, stemmi... | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | 145 | Compare different Rouge implementations
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar... |
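The two factors alluded to above (stemming and how sentences are split for rougeLsum) can be seen directly with the google-research scorer. A minimal sketch, assuming the `rouge_score` package is installed; the example sentences are made up:
```python
# Sketch (not from the thread) of the two settings the comment refers to:
# stemming and newline-based sentence splitting for rougeLsum.
from rouge_score import rouge_scorer

# rougeLsum treats each line as one sentence, so sentences are joined with "\n";
# this is the splitting step the next comment enforces.
prediction = "the dog runs across the park.\nit barks at the ducks."
reference = "a dog was running across the park.\nit barked at some ducks."

# Without stemming, "runs"/"running" and "barks"/"barked" do not match.
plain = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=False)
# With stemming enabled, those variants reduce to the same stem and scores rise.
stemmed = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True)

print(plain.score(reference, prediction)["rougeLsum"])
print(stemmed.score(reference, prediction)["rougeLsum"])
```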
https://github.com/huggingface/datasets/issues/617 | Compare different Rouge implementations | This is a real issue, sorry for missing the mention @ibeltagy
We implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\n` so that rougeLsum scores match papers even... | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | 144 | Compare different Rouge implementations
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar... |
https://github.com/huggingface/datasets/issues/617 | Compare different Rouge implementations | > This is a real issue, sorry for missing the mention @ibeltagy
>
> We implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\n` so that rougeLsum scores match paper... | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | 210 | Compare different Rouge implementations
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar... |
https://github.com/huggingface/datasets/issues/617 | Compare different Rouge implementations | Hi, thanks for the solution.
I am not sure if this is a bug, but on line [510](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L510), are pred, tgt supposed to be swapped? | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | 25 | Compare different Rouge implementations
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar... |
https://github.com/huggingface/datasets/issues/617 | Compare different Rouge implementations | Hi, so I took this example from the HF implementation. What I can see is that the precision of `Hello there` being summarized to `general kenobi` is 1. I don't understand how this calculation is correct.
Is the comparison just counting the words?
and if Yes, then how does this translates to summarization evaluation?... | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | 103 | Compare different Rouge implementations
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://ar... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | I think the only way to avoid this warning would be to do a copy of the numpy array before providing it.
This would slow down iteration over the dataset a bit, but maybe it would be safer. We could disable the copy with a flag on the `set_format` command.
In most typical cases of training a NLP model, PyTorch ... | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 106 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
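While waiting for such a flag, the copy can also be done on the user side. A minimal sketch (not the library's own fix; the array here is synthetic) of silencing the warning by copying the read-only array before building the tensor:
```python
import numpy as np
import torch

# Simulate the zero-copy, read-only array that comes out of the arrow table.
embedding = np.arange(12, dtype=np.float32).reshape(3, 4)
embedding.setflags(write=False)

# Copying first gives PyTorch a writeable buffer, so no warning is raised;
# the price is one extra copy of the column per access.
tensor = torch.from_numpy(np.array(embedding, dtype=np.float32, copy=True))
tensor[0, 0] = 1.0  # safe: the tensor owns its own memory
```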
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | @thomwolf Would it be possible to have the array look writeable, but raise an error if it is actually written to?
I would like to keep my code free of warning, but I also wouldn't like to slow down the program because of unnecessary copy operations. | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 46 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | Well because I don't know the internal of numpy as well as you I guess hahahah, do you want to try to open a PR proposing a solution? | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 28 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | @thomwolf @AndreasMadsen I think this is a terrible idea, n/o, and I am very much against it. Modifying internals of an array in such a hacky way is bound to run into other (user) issues down the line. To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) ... | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 155 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | > To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) but then they get a warning that the array is not writeable. That's extremely confusing.
Confusion can be resolved with a helpful error message. In this case, that error message can be controlled b... | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 222 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | > The right argument here is that if code depends on `.flags.writable` being truthful (not just for warnings), then it will cause unavoidable errors. Although, I can't imagine such a use-case.
That's exactly the argument in my first sentence. Too often someone "cannot think of a use-case", but you can not foresee th... | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 198 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | > But this is not a plain use-case (because Pytorch does not support these read-only tensors).
By "plain", I mean the recommended way to use `datasets` with PyTorch according to the `datasets` documentation. | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 33 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | This error is what I see when I run the first lines of the Pytorch Quickstart. It should also say that it should be ignored and/or how to fix it. BTW, this is a Pytorch error message -- not a Huggingface error message. My code runs anyway. | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | 47 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be... |
https://github.com/huggingface/datasets/issues/615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | Related: https://issues.apache.org/jira/browse/ARROW-9773
It's definitely a size thing. I took a smaller dataset with 87000 rows and did:
```
for i in range(10,1000,20):
table = pa.concat_tables([dset._data]*i)
table.take([0])
```
and it broke at around i=300.
Also when `_indices` is not None, this ... | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-38... | 108 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowIn... |
https://github.com/huggingface/datasets/issues/615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | This specific issue has been fixed in https://github.com/huggingface/datasets/pull/645
If you still have this error, could you open a new issue and explain how to reproduce the error ? | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-38... | 28 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowIn... |
https://github.com/huggingface/datasets/issues/615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | Facing the same issue.
Steps to reproduce: (dataset is a few GB big so try in colab maybe)
Datasets version - 2.11.0
```
import datasets
import re
ds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train')
def get_text_caption(example):
regex_pattern = r'\s\d+x\d+|,\sLQ|,\sgrid|\.\w+... | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-38... | 72 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowIn... |
https://github.com/huggingface/datasets/issues/615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | Got this error on a very large data set (900m rows, 35 cols) performing a similar batch map operation. | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-38... | 19 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowIn... |
https://github.com/huggingface/datasets/issues/611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | ```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 17136104 entries, 0 to 17136103
Data columns (total 6 columns):
# Column Dtype
--- ------ -----
0 item_id int64
1 item_titl object
2 start_price float64
3 shipping_fee float64
4 picture_url object
5... | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)... | 47 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most rece... |
https://github.com/huggingface/datasets/issues/611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | Thanks and some more on the `embeddings` and `picture_url` would be nice as well (type and max lengths of the elements) | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)... | 21 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most rece... |
https://github.com/huggingface/datasets/issues/611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | It looks like a Pyarrow limitation.
I was able to reproduce the error with
```python
import pandas as pd
import numpy as np
import pyarrow as pa
n = 1713614
df = pd.DataFrame.from_dict({"a": list(np.zeros((n, 128))), "b": range(n)})
pa.Table.from_pandas(df)
```
I also tried with 50% of the dataframe a... | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)... | 75 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most rece... |
https://github.com/huggingface/datasets/issues/611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | It looks like it's going to be fixed in pyarrow 2.0.0 :)
In the meantime I suggest to chunk big dataframes to create several small datasets, and then concatenate them using [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datas... | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)... | 32 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most rece... |
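A minimal sketch of the chunking workaround suggested above; the dataframe, chunk size and column names are illustrative only:
```python
import numpy as np
import pandas as pd
from datasets import Dataset, concatenate_datasets

# Illustrative dataframe: an id column plus a 128-dim embedding per row.
n = 10_000
df = pd.DataFrame({
    "item_id": range(n),
    "embeddings": list(np.zeros((n, 128), dtype=np.float32)),
})

# Build several small datasets from slices of the dataframe, then merge them.
chunk_size = 2_000
shards = [
    Dataset.from_pandas(df.iloc[start:start + chunk_size].reset_index(drop=True))
    for start in range(0, n, chunk_size)
]
dataset = concatenate_datasets(shards)
print(dataset)
```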
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | Could you try
```python
load_dataset('text', data_files='test.txt',cache_dir="./", split="train")
```
?
`load_dataset` returns a dictionary by default, like {"train": your_dataset} | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 18 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | Hi @lhoestq
Thanks for your suggestion.
I tried
```
dataset = load_dataset('text', data_files='test.txt',cache_dir="./", split="train")
print(dataset)
dataset.set_format(type='torch',columns=["text"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
next(iter(dataloader))
```
But it still ... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 312 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | You need to tokenize the string inputs to convert them into integers before you can feed them to a PyTorch dataloader.
You can read the quicktour of the datasets or the transformers libraries to know more about that:
- transformers: https://huggingface.co/transformers/quicktour.html
- dataset: https://huggingface.co... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 44 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
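A minimal sketch of that pipeline, tokenizing with `.map()` and then setting the torch format before building the DataLoader; the checkpoint name and file path are placeholders:
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint
dataset = load_dataset("text", data_files="test.txt", split="train")

def encode(examples):
    # Convert the raw strings into token ids and attention masks.
    return tokenizer(examples["text"], truncation=True, padding="max_length")

dataset = dataset.map(encode, batched=True)
# Keep only the integer columns for the model; the raw "text" strings stay out.
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])

dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
batch = next(iter(dataloader))
print(batch["input_ids"].shape)
```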
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).
But finally got it working. This is what I did after looking into the documentation.
1. split the whole dataset file into smaller files
```bash
mkdir ./shards
split -a 4 -l 256000 -d full_raw_corpus.txt ./shards/shard_
````
2. Pa... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 125 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | Thanks, @thomwolf and @sipah00 ,
I tried to implement your suggestions in my scripts.
Now I am facing a connection time-out error. I am using my local file, so I have no idea why the module requests the S3 database.
The log is:
```
Traceback (most recent call last):
File "/home/.local/lib/python3.6/site-packa... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 248 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | I noticed this is because I use a cloud server that does not allow connections from our standard compute nodes to outside resources.
For the `datasets` package, it seems that if the loading script is not already cached in the library it will attempt to connect to an AWS resource to download the dataset loadi... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 76 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | I solved the above issue by downloading text.py manually and passing the path to the `load_dataset` function.
Now, I have a new issue with the Read-only file system.
The error is:
```
I0916 22:14:38.453380 140737353971520 filelock.py:274] Lock 140734268996072 acquired on /scratch/chiyuzh/roberta/text.py.lock
... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 214 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
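For reference, a sketch of what passing a locally saved loading script might look like; the paths are placeholders modeled on the ones in the log above:
```python
from datasets import load_dataset

# Point load_dataset at a local copy of the `text` loading script instead of the
# hub name, so nothing has to be downloaded. All paths here are placeholders.
dataset = load_dataset(
    "/scratch/chiyuzh/roberta/text.py",
    data_files="/scratch/chiyuzh/roberta/train.txt",
    cache_dir="/scratch/chiyuzh/roberta/cache",
    split="train",
)
```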
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).
> But finally got it working. This is what I did after looking into the documentation.
>
> 1. split the whole dataset file into smaller files
>
> ```shell
> mkdir ./shards
> split -a 4 -l 256000 -d full_raw_corpus.txt ./shards/... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 254 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > > Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).
> > But finally got it working. This is what I did after looking into the documentation.
> >
> > 1. split the whole dataset file into smaller files
> >
> > ```shell
> > mkdir ./shards
> > split -a 4 -l 256000 -d full_raw_corp... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 331 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > ```python
> def encode(examples):
> return tokenizer(examples['text'], truncation=True, padding='max_length')
> ```
It is the same as suggested:
> def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length') | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 25 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > > ```python
> > def encode(examples):
> > return tokenizer(examples['text'], truncation=True, padding='max_length')
> > ```
>
> It is the same as suggested:
>
> > def encode(examples):
> > return tokenizer(examples['text'], truncation=True, padding='max_length')
Do you use this function in a `class` ob... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 60 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > > > ```python
> > > def encode(examples):
> > > return tokenizer(examples['text'], truncation=True, padding='max_length')
> > > ```
> >
> >
> > It is the same as suggested:
> > > def encode(examples):
> > > return tokenizer(examples['text'], truncation=True, padding='max_length')
>
> Do you use this fu... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 250 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > > > > ```python
> > > > def encode(examples):
> > > > return tokenizer(examples['text'], truncation=True, padding='max_length')
> > > > ```
> > >
> > >
> > > It is the same as suggested:
> > > > def encode(examples):
> > > > return tokenizer(examples['text'], truncation=True, padding='max_length')
> >
... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 357 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | @chiyuzhang94 Thanks for your reply. After some changes, I have now managed to get the data loading process running.
I published it in case you might want to take a look. Thanks for your help!
https://github.com/shizhediao/Transformers_TPU | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 35 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | Hi @shizhediao ,
Thanks! It looks great!
But my problem is still that the cache directory is on a read-only file system.
[As I mentioned](https://github.com/huggingface/datasets/issues/610#issuecomment-693912285), I tried to change the cache directory but it didn't work.
Do you have any suggestions?
| I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 39 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > I installed datasets at /project/chiyuzh/evn_py36/datasets/src where is a writable directory.
> I also tried change the environment variables to the writable directory:
> `export HF_MODULES_PATH=/project/chiyuzh/evn_py36/datasets/cache_dir/`
I think it is `HF_MODULES_CACHE` and not `HF_MODULES_PATH` @chiyuzhang9... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 50 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
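A minimal sketch of pointing the module cache at a writable directory with the variable named above; the path is the one from the quoted comment, and it must be set before `datasets` is imported:
```python
import os

# Redirect the datasets module cache to a writable location.
# This has to happen before `datasets` is imported, otherwise the default
# cache path is already resolved. The directory is a placeholder from the thread.
os.environ["HF_MODULES_CACHE"] = "/project/chiyuzh/evn_py36/datasets/cache_dir/"

from datasets import load_dataset  # imported only after the variable is set
```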
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | We should probably add a section in the doc on the caching system with the env variables in particular. | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 19 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | Hi @thomwolf , @lhoestq ,
Thanks for your suggestions. With the latest version of this package, I can load text data without Internet.
But I found that dataset loading is very slow.
My script is like this:
```
def token_encode(examples):
tokenizer_out = tokenizer(examples['text'], trunca... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 129 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | You can use multiprocessing by specifying `num_proc=` in `.map()`
Also it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total.
Am I right ? | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 32 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |
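A minimal sketch of the `num_proc` suggestion; the tokenizer, batch size and process count are illustrative:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint
dataset = load_dataset("text", data_files="test.txt", split="train")

def token_encode(examples):
    return tokenizer(examples["text"], truncation=True, padding="max_length", max_length=128)

# Run the tokenization map in parallel instead of in a single process.
dataset = dataset.map(
    token_encode,
    batched=True,
    batch_size=10_000,  # bigger batches mean fewer tokenizer calls
    num_proc=8,         # spread the work over 8 worker processes
)
```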
https://github.com/huggingface/datasets/issues/610 | Load text file for RoBERTa pre-training. | > You can use multiprocessing by specifying `num_proc=` in `.map()`
>
> Also it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total.
> Am I right ?
Hi @lhoestq ,
Thanks. I will try it.
You are right. I have 1,123,870,657 lines totally in the path. ... | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | 141 | Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried t... |